Riskware, a portmanteau of risk and software, is a word used to describe software whose installation and execution poses a potential risk to a host computer. Relatively normal programs can often fall into the category of riskware, as some applications can be modified for another purpose and used against the computer user or owner.[1] While a wide variety of software may be considered riskware, one common example is remote desktop software. This type of software has legitimate purposes, such as remote technical support, but can also be used by an unauthorized user for malicious purposes.[2]
https://en.wikipedia.org/wiki/Riskware
A web application (or web app) is application software that is created with web technologies and runs via a web browser.[1][2] Web applications emerged during the late 1990s and allowed the server to dynamically build a response to a request, in contrast to static web pages.[3] Web applications are commonly distributed via a web server. There are several different tier systems that web applications use to communicate between the web browser, the client interface, and server data. Each system has its own uses, as they function in different ways. However, there are many security risks that developers must be aware of during development; proper measures to protect user data are vital. Web applications are often constructed with the use of a web application framework. Single-page applications (SPAs) and progressive web apps (PWAs) are two architectural approaches to creating web applications that provide a user experience similar to native apps, including features such as smooth navigation, offline support, and faster interactions. The concept of a "web application" was first introduced in the Java language in the Servlet Specification version 2.2, which was released in 1999. At that time, both JavaScript and XML had already been developed, but the XMLHttpRequest object had only recently been introduced on Internet Explorer 5 as an ActiveX object.[citation needed] Beginning around the early 2000s, applications such as "Myspace (2003), Gmail (2004), Digg (2004), [and] Google Maps (2005)" started to make their client sides more and more interactive. A web page script is able to contact the server for storing/retrieving data without downloading an entire web page. The practice became known as Ajax in 2005. In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user's personal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity. Additionally, both the client and server components of the application were bound tightly to a particular computer architecture and operating system, which made porting them to other systems prohibitively expensive for all but the largest applications. Later, in 1995, Netscape introduced the client-side scripting language called JavaScript, which allowed programmers to add dynamic elements to the user interface that ran on the client side. Essentially, instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such as input validation or showing/hiding parts of the page. "Progressive web apps", a term coined by designer Frances Berriman and Google Chrome engineer Alex Russell in 2015, refers to apps taking advantage of new features supported by modern browsers, which initially run inside a web browser tab but later can run completely offline and can be launched without entering the app URL in the browser. Traditional PC applications are typically single-tiered, residing solely on the client machine. In contrast, web applications inherently facilitate a multi-tiered architecture. Though many variations are possible, the most common structure is the three-tiered application.
In its most common form, the three tiers are called presentation, application and storage. The first tier, presentation, refers to the web browser itself. The second tier refers to any engine using dynamic web content technology (such as ASP, CGI, ColdFusion, Dart, JSP/Java, Node.js, PHP, Python or Ruby on Rails). The third tier refers to a database that stores data and determines the structure of a user interface. Essentially, when using the three-tiered system, the web browser sends requests to the engine, which then services them by making queries and updates against the database and generating a user interface. The three-tier solution may fall short when dealing with more complex applications and may need to be replaced with the n-tiered approach, the greatest benefit of which is that business logic (which resides on the application tier) is broken down into a more fine-grained model.[4] Another benefit is the option to add an integration tier, which separates the data tier and provides an easy-to-use interface to access the data.[4] For example, client data would be accessed by calling a "list_clients()" function instead of making an SQL query directly against the client table in the database. This allows the underlying database to be replaced without making any change to the other tiers.[4] There are some who view a web application as a two-tier architecture. This can be a "smart" client that performs all the work and queries a "dumb" server, or a "dumb" client that relies on a "smart" server.[4] The client would handle the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would be on one of them or on both.[4] While this increases the scalability of the applications and separates the display and the database, it still does not allow for true specialization of layers, so most applications will outgrow this model.[4] Security breaches in these kinds of applications are a major concern because they can involve both enterprise information and private customer data. Protecting these assets is an important part of any web application, and there are some key operational areas that must be included in the development process.[5] This includes processes for authentication, authorization, asset handling, input, and logging and auditing. Building security into the applications from the beginning is sometimes more effective and less disruptive in the long run. Writing web applications is simplified with the use of web application frameworks. These frameworks facilitate rapid application development by allowing a development team to focus on the parts of their application which are unique to their goals without having to resolve common development issues such as user management.[6] In addition, there is potential for the development of applications on Internet operating systems, although currently there are not many viable platforms that fit this model.[citation needed]
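To make the integration-tier idea above concrete, here is a minimal sketch in Python (one of the server-side languages the article lists). The "list_clients()" name comes from the article's example; the clients table, its columns, and the SQLite backend are assumptions made for illustration only.

```python
# Sketch of an integration-tier function: callers in the application tier use
# list_clients() and never issue SQL themselves, so the storage backend can be
# swapped without touching the other tiers.
import sqlite3

DB_PATH = "app.db"  # assumption: the storage tier is a local SQLite file


def list_clients() -> list[dict]:
    """Return all clients as plain dictionaries, hiding the SQL from callers."""
    with sqlite3.connect(DB_PATH) as conn:
        conn.row_factory = sqlite3.Row
        rows = conn.execute("SELECT id, name, email FROM clients").fetchall()
    return [dict(row) for row in rows]


# Application-tier code depends only on this interface:
# for client in list_clients():
#     print(client["name"])
```

Because the application tier depends only on this function, replacing SQLite with another database would, in this sketch, change only this one module.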
https://en.wikipedia.org/wiki/Web_application#Development
In the context of information security, social engineering is the use of psychological influence to move people into performing actions or divulging confidential information. It differs from psychological manipulation in that it does not need to be controlling, negative, or a one-way transaction: manipulation involves a zero-sum game where one party wins and the other loses, while social engineering can be win-win for both parties. A type of confidence trick for the purpose of information gathering, fraud, or system access, it differs from a traditional "con" in that it is often one of many steps in a more complex fraud scheme.[1] It has also been defined as "any act that influences a person to take an action that may or may not be in their best interests."[2] Research done in 2020 has indicated that social engineering will be one of the most prominent challenges of the upcoming decade. Proficiency in social engineering will be increasingly important for organizations and countries, due to its impact on geopolitics as well. Social engineering raises the question of whether our decisions will be accurately informed if our primary information is engineered and biased.[3] Social engineering attacks have been increasing in intensity and number, cementing the need for novel detection techniques and cyber security educational programs.[4] All social engineering techniques are based on exploitable weaknesses in human decision-making known as cognitive biases.[5][6] One example of social engineering is an individual who walks into a building and posts an official-looking announcement on the company bulletin board saying that the number for the help desk has changed. When employees then call for help, the individual asks them for their passwords and IDs, thereby gaining the ability to access the company's private information. Another example of social engineering would be a hacker who contacts the target on a social networking site and starts a conversation with the target. Gradually the hacker gains the trust of the target and then uses that trust to get access to sensitive information like passwords or bank account details.[7] Pretexting (adjective: pretextual), also known in the UK as blagging,[8] is the act of creating and using an invented scenario (the pretext) to engage a targeted victim in a manner that increases the chance the victim will divulge information or perform actions that would be unlikely in ordinary circumstances.[9] An elaborate lie, it most often involves some prior research or setup and the use of this information for impersonation (e.g., date of birth, Social Security number, last bill amount) to establish legitimacy in the mind of the target.[10] Water holing is a targeted social engineering strategy that capitalizes on the trust users have in websites they regularly visit. The victim feels safe doing things they would not do in a different situation. A wary person might, for example, purposefully avoid clicking a link in an unsolicited email, but the same person would not hesitate to follow a link on a website they often visit. So, the attacker prepares a trap for the unwary prey at a favored watering hole.
This strategy has been successfully used to gain access to some (supposedly) very secure systems.[11] Baiting is like the real-world Trojan horse: it uses physical media and relies on the curiosity or greed of the victim.[12] In this attack, attackers leave malware-infected floppy disks, CD-ROMs, or USB flash drives in locations where people will find them (bathrooms, elevators, sidewalks, parking lots, etc.), give them legitimate and curiosity-piquing labels, and wait for victims. Unless computer controls block infections, insertion compromises PCs that "auto-run" inserted media. Hostile devices can also be used.[13] For instance, a "lucky winner" is sent a free digital audio player that compromises any computer it is plugged into. A "road apple" (the colloquial term for horse manure, suggesting the device's undesirable nature) is any removable media with malicious software left in opportunistic or conspicuous places. It may be a CD, DVD, or USB flash drive, among other media. Curious people take it and plug it into a computer, infecting the host and any attached networks. Again, hackers may give them enticing labels, such as "Employee Salaries" or "Confidential".[14] One study published in 2016 had researchers drop 297 USB drives around the campus of the University of Illinois. The drives contained files that linked to webpages owned by the researchers. The researchers were able to see how many of the drives had files opened on them, but not how many were inserted into a computer without having a file opened. Of the 297 drives that were dropped, 290 (98%) were picked up and 135 (45%) "called home".[15] An attacker may offer to provide sensitive information (e.g. login credentials) or pay some amount of money in exchange for a favor. The attacker may pose as an expert offering free IT help, for which they need login credentials from the user.[16] Alternatively, the victim is bombarded with multiple messages about fake threats and alerts, making them think that the system is infected with malware; the attackers then push them to install remote login software or other malicious software, or directly extort a ransom, such as demanding a certain amount of money in cryptocurrency in exchange for the safety of confidential videos the criminal claims to have.[16] An attacker may also pretend to be a company employee or other person with access rights in order to enter an office or other restricted area, actively using deception and social engineering tools. For example, the intruder pretends to be a courier or loader carrying something in his hands and asks an employee who is walking outside to hold the door, gaining access to the building.[16] In common law, pretexting is an invasion of privacy tort of appropriation.[17] In December 2006, the United States Congress approved a Senate-sponsored bill making the pretexting of telephone records a federal felony with fines of up to $250,000 and ten years in prison for individuals (or fines of up to $500,000 for companies). It was signed by President George W. Bush on 12 January 2007.[18] The 1999 Gramm-Leach-Bliley Act (GLBA) is a U.S. federal law that specifically addresses pretexting of banking records as an illegal act punishable under federal statutes. When a business entity such as a private investigator, SIU insurance investigator, or an adjuster conducts any type of deception, it falls under the authority of the Federal Trade Commission (FTC). This federal agency has the obligation and authority to ensure that consumers are not subjected to any unfair or deceptive business practices.
Section 5 of the US Federal Trade Commission Act (FTCA) states, in part: "Whenever the Commission shall have reason to believe that any such person, partnership, or corporation has been or is using any unfair method of competition or unfair or deceptive act or practice in or affecting commerce, and if it shall appear to the Commission that a proceeding by it in respect thereof would be to the interest of the public, it shall issue and serve upon such person, partnership, or corporation a complaint stating its charges in that respect." The statute states that when someone obtains any personal, non-public information from a financial institution or the consumer, their action is subject to the statute. It relates to the consumer's relationship with the financial institution. For example, a pretexter using false pretenses either to get a consumer's address from the consumer's bank, or to get a consumer to disclose the name of their bank, would be covered. The determining principle is that pretexting only occurs when information is obtained through false pretenses. While the sale of cell telephone records has gained significant media attention, and telecommunications records are the focus of the two bills currently before the United States Senate, many other types of private records are being bought and sold in the public market. Alongside many advertisements for cell phone records, wireline records and the records associated with calling cards are advertised. As individuals shift to VoIP telephones, it is safe to assume that those records will be offered for sale as well. Currently, it is legal to sell telephone records, but illegal to obtain them.[19] U.S. Rep. Fred Upton (R-Kalamazoo, Michigan), chairman of the Energy and Commerce Subcommittee on Telecommunications and the Internet, expressed concern over the easy access to personal mobile phone records on the Internet during a House Energy & Commerce Committee hearing on "Phone Records For Sale: Why Aren't Phone Records Safe From Pretexting?" Illinois became the first state to sue an online records broker when Attorney General Lisa Madigan sued 1st Source Information Specialists, Inc., a spokeswoman for Madigan's office said. The Florida-based company operates several Web sites that sell mobile telephone records, according to a copy of the suit. The attorneys general of Florida and Missouri quickly followed Madigan's lead, filing suits against 1st Source Information Specialists and, in Missouri's case, one other records broker, First Data Solutions, Inc. Several wireless providers, including T-Mobile, Verizon, and Cingular, filed earlier lawsuits against records brokers, with Cingular winning an injunction against First Data Solutions and 1st Source Information Specialists. U.S. Senator Charles Schumer (D-New York) introduced legislation in February 2006 aimed at curbing the practice. The Consumer Telephone Records Protection Act of 2006 would create felony criminal penalties for stealing and selling the records of mobile phone, landline, and Voice over Internet Protocol (VoIP) subscribers. Patricia Dunn, former chairwoman of Hewlett-Packard, reported that the HP board hired a private investigation company to delve into who was responsible for leaks within the board. Dunn acknowledged that the company used the practice of pretexting to solicit the telephone records of board members and journalists.
Chairman Dunn later apologized for this act and offered to step down from the board if it was desired by board members.[20] Unlike federal law, California law specifically forbids such pretexting. The four felony charges brought against Dunn were dismissed.[21] Following the 2017 Equifax data breach linked to China's People's Liberation Army,[22] in which over 150 million private records were leaked (including Social Security numbers, driver's license numbers, birthdates, etc.), warnings were sent out regarding the dangers of impending security risks.[23] In the day after the establishment of a legitimate help website (equifaxsecurity2017.com) dedicated to people potentially victimized by the breach, 194 malicious domains were reserved from small variations on the URL, capitalizing on the likelihood of people mistyping.[24][25] Two tech giants, Google and Facebook, were phished out of $100 million by a Lithuanian fraudster.[26] He impersonated a hardware supplier to falsely invoice both companies over two years.[27] Despite their technological sophistication, the companies lost the money, although they were later able to recuperate the majority of the funds stolen.[28] During the 2016 United States elections, hackers associated with Russian military intelligence (GRU) sent phishing emails, disguised as a Google alert, to members of Hillary Clinton's campaign.[29] Many members, including the chairman of the campaign, John Podesta, entered their passwords thinking the password would be reset, causing their personal information and thousands of private emails and documents to be leaked.[30] With this information, the attackers hacked into other computers in the Democratic Congressional Campaign Committee, implanting malware in them, which caused their computer activities to be monitored and leaked.[30] In 2015, specialized Wi-Fi hardware and software maker Ubiquiti lost nearly $47 million to hackers. Attackers sent Ubiquiti's accounting department a phishing email from a Hong Kong branch with instructions to change payment account details. Upon discovering the theft, the company began cooperating with law enforcement, but was only able to recover $8 million of the stolen funds, although it had hoped for $15 million.[31][32] On 24 November 2014, the hacker group "Guardians of Peace"[33] (probably linked to North Korea)[34] leaked confidential data from the film studio Sony Pictures Entertainment. The data included emails, executive salaries, and employees' personal and family information. The phishers pretended to be high-level employees in order to install malware on workers' computers.[35] In 2013, a U.S. Department of Labor server was hacked and used to host malware and redirect some visitors to a site using a zero-day Internet Explorer exploit to install a remote access trojan called Poison Ivy. Watering hole attacks were used, with the attackers creating pages related to toxic nuclear substances overseen by the Department of Energy. The targets were likely DoL and DOE employees with access to sensitive nuclear data.[31][36] In 2011, hackers broke into the cryptographic corporation RSA and obtained information about SecurID two-factor authentication fobs. Using this data, the hackers later tried to infiltrate the network of defense contractor Lockheed Martin. The hackers gained access to the key fob data by sending emails to four employees of the parent corporation from an alleged recruitment site. The emails contained an Excel attachment titled "2011 Recruitment Plan".
The spreadsheet contained a zero-day Flash exploit that provided backdoor access to the work computers.[31][37] Susan Headley became involved in phreaking with Kevin Mitnick and Lewis de Payne in Los Angeles, but later framed them for erasing the system files at US Leasing after a falling out, leading to Mitnick's first conviction. She retired to professional poker.[38] Mike Ridpath is a security consultant, published author, speaker and previous member of w00w00. He is well known for developing techniques and tactics for social engineering through cold calling. He became well known for live demonstrations as well as playing recorded calls after talks where he explained his thought process on what he was doing to get passwords through the phone.[39][40][41][42][43] As a child, Ridpath was connected with the Badir Brothers and was widely known within the phreaking and hacking community for his articles in popular underground ezines, such as Phrack, B4B0 and 9x, on modifying Oki 900s, blueboxing, satellite hacking and RCMAC.[44][45] Brothers Ramy, Muzher, and Shadde Badir, all of whom were blind from birth, managed to set up an extensive phone and computer fraud scheme in Israel in the 1990s using social engineering, voice impersonation, and Braille-display computers.[46][47] Christopher J. Hadnagy is an American social engineer and information technology security consultant. He is best known as the author of four books on social engineering and cyber security[48][49][50] and founder of the Innocent Lives Foundation, an organization that helps track and identify child trafficking by seeking the assistance of information security specialists, using data from open-source intelligence (OSINT) and collaborating with law enforcement.[51][52]
https://en.wikipedia.org/wiki/Social_engineering_(security)
Targeted threats are a class of malware destined for one specific organization or industry. A type of crimeware, these threats are of particular concern because they are designed to capture sensitive information. Targeted attacks may include threats delivered via SMTP e-mail, port attacks, zero-day vulnerability exploits or phishing messages. Government organisations are the most targeted sector.[1] Financial industries are the second most targeted sector, most likely because cybercriminals desire to profit from the confidential, sensitive information the financial industry's IT infrastructure houses.[2] Similarly, online brokerage accounts have also been targeted by such attacks.[3] The impact of targeted attacks can be far-reaching. In addition to regulatory sanctions imposed by HIPAA, Sarbanes-Oxley, the Gramm-Leach-Bliley Act and other laws, they can lead to the loss of revenue, focus and corporate momentum. They not only expose sensitive customer data, but also damage corporate reputations and incur potential lawsuits.[4] In contrast to widespread spam attacks, which are widely noticed, targeted attacks are sent to only a limited number of organizations, so these crimeware threats tend not to be reported and thus elude malware scanners.[5]
https://en.wikipedia.org/wiki/Targeted_threat
A technical support scam, or tech support scam, is a type of scam in which a scammer claims to offer a legitimate technical support service. Victims contact scammers in a variety of ways, often through fake pop-ups resembling error messages or via fake "help lines" advertised on websites owned by the scammers. Technical support scammers use social engineering and a variety of confidence tricks to persuade their victim of the presence of problems on their computer or mobile device, such as a malware infection, when there are no issues with the victim's device. The scammer will then persuade the victim to pay to fix the fictitious "problems" that they claim to have found. Payment is made to the scammer via gift cards or cryptocurrency, which are hard to trace and have few consumer protections in place. Technical support scams have occurred as early as 2008. A 2017 study of technical support scams found that of the IPs that could be geolocated, 85% could be traced to locations in India, 7% to locations in the United States and 3% to locations in Costa Rica. Research into tech support scams suggests that millennials and those in generation Z have the highest exposure to such scams; however, senior citizens are more likely to fall for these scams and lose money to them. Technical support scams were named by Norton as the top phishing threat to consumers in October 2021; Microsoft found that 60% of consumers who took part in a survey had been exposed to a technical support scam within the previous twelve months. Responses to technical support scams include lawsuits brought against companies responsible for running fraudulent call centres and scam baiting. The first tech support scams were recorded in 2008.[1][2] Technical support scams have been seen in a variety of countries, including the United States,[3] Canada,[4] the United Kingdom,[1] Ireland,[5] the Netherlands, Germany, Australia,[6][7] New Zealand,[8] India, and South Africa.[9][10] A 2017 study of technical support scams published at the NDSS Symposium found that, of the tech support scams in which the IPs involved could be geolocated, 85% could be traced to locations in India, 7% to locations in the United States and 3% to locations in Costa Rica.[11] India has millions of English speakers who are competing for relatively few jobs. One municipality had 114 jobs and received 19,000 applicants.[12] This high level of unemployment serves as an incentive for tech scamming jobs, which are often well paid.[13] Additionally, scammers exploit the levels of unemployment by offering jobs to people desperate to be employed.[12] Many scammers do not realise they are applying and being trained for tech support scam jobs,[14] but many decide to stay after finding out the nature of their job, as they feel it is too late to back out and change careers.[14] Scammers are forced to choose between keeping their job or becoming jobless.[12] Some scammers convince themselves that they are targeting wealthy people who have money to spare, which justifies their theft,[14] whilst others see their job as generating "easy money".[13][14] Some scammers rationalize that the victim needs an anti-virus anyway and that it is therefore acceptable to tell the victim lies and charge them for technical support or for an anti-virus. Technical support scams rely on social engineering to persuade victims that their device is infected with malware.[15][16] Scammers use a variety of confidence tricks to persuade the victim to install remote desktop software, with which the scammer can then take control of the victim's computer.
With this access, the scammer may then launch various Windows components and utilities (such as the Event Viewer), install third-party utilities (such as rogue security software) and perform other tasks in an effort to convince the victim that the computer has critical problems that must be remediated, such as infection with a virus. Scammers target a variety of people, though research by Microsoft suggests that millennials (defined by Microsoft as ages 24-37) and people in generation Z (ages 18-23) have the highest exposure to tech support scams, and the Federal Trade Commission has found that seniors (age 60 and over) are more likely to lose money to tech support scams.[17][18] The scammer will urge the victim to pay so the "issues" can be fixed.[1][19][20] Technical support scams can begin in a variety of ways. Some variants of the scam are initiated using pop-up advertising on infected websites or via cybersquatting of major websites. The victim is shown pop-ups which resemble legitimate error messages such as a Blue Screen of Death[21][22][23] and freeze the victim's web browser.[24][25] The pop-up instructs the victim to call the scammers via a phone number to "fix the error". Technical support scams can also be initiated via cold calls. These are usually robocalls which claim to be associated with a legitimate third party such as Apple Inc.[26][19] Technical support scams can also attract victims by purchasing keyword advertising on major search engines for phrases such as "Microsoft support". Victims who click on these adverts are taken to web pages containing the scammer's phone numbers.[27][28] In some cases, mass emailing is used. The email tends to state that a certain product has been purchased using the victim's Amazon account and to ask them to contact a certain telephone number if this is an error. Once a victim has contacted a scammer, the scammer will usually instruct them to download and install a remote access program such as TeamViewer, AnyDesk, LogMeIn or GoToAssist.[21][29] The scammer convinces the victim to provide them with the credentials required to initiate a remote-control session, giving the scammer complete control of the victim's desktop.[1] The scammer will not tell the victim that they are using remote control software or that the purpose is to gain access to the victim's PC. The scammer will say "this is for connecting you to our secure server" or "I am going to give you a secure code", which in reality is just an ID number used by the remote desktop software package. After gaining access, the scammer attempts to convince the victim that the computer is suffering from problems that must be repaired. They will use several methods to misrepresent the content and significance of common Windows tools and system directories as evidence of malicious activity, such as viruses and other malware.[21] These tricks are meant to target victims who may be unfamiliar with the actual uses of these tools, such as inexperienced users and senior citizens.[1][26][30] The scammer then coaxes the victim into paying for the scammer's services and/or software, which they claim is designed to "repair" or "clean" the computer but is either malicious or simply does nothing at all.[31] The preferred method of payment in a technical support scam is via gift cards.[41] Gift cards are favoured by scammers because they are readily available to buy and have fewer consumer protections in place that could allow the victim to reclaim their money.
Additionally, the usage of gift cards as payment allows the scammers to extract money quickly whilst remaining anonymous.[42][43] Tech support scammers have also been known to ask for payment in the form of cryptocurrency, cheques and direct bank transfers made through automated clearing house (the latter only gives victims 60 days to recover their funds).[44] If a victim refuses to follow the scammer's instructions or to pay them, scammers have been known to resort to insulting[45] and threatening[46][47] their victim to procure payment. Scammers may also resort to bullying, coercion, threats and other forms of intimidation and psychological abuse towards their target in an effort to undermine the victim's ability to think clearly, making them more likely to be forced further into the scam.[48] Crimes threatened to be inflicted on victims or their families by scammers have ranged from theft, fraud and extortion[49] to serious crimes such as rape[50] and murder.[45] Canadian citizen Jakob Dulisse reported to CBC in 2019 that, upon asking a scammer who made contact with him why he had been targeted, the scammer responded with a death threat: 'Anglo people who travel to the country' (India) were 'cut up in little pieces and thrown in the river.'[46][51] Scammers have also been known to lock uncooperative victims out of their computer using the syskey utility (present only in Windows versions prior to Windows 10)[52] or third-party applications which they install on the victim's computer,[49][53][54] and to delete documents and/or programs essential to the operation of the victim's computer if they do not receive payment.[32] On Windows 10 and 11, since Microsoft removed the syskey utility, scammers will instead change the user's account password. The scammer opens the Control Panel, goes into the user settings, clicks on "change password", and asks the user to type their password into the old password field. The scammer then creates a password that only the scammer knows and reboots the computer. The user cannot log into their PC unless they pay the scammer. Microsoft commissioned a survey by YouGov across 16 countries in July 2021 to research tech support scams and their impact on consumers.
The survey found that approximately 60% of consumers who participated had been exposed to a technical support scam within the last 12 months.[16] Victims reported losing an average of 200 USD to the scammers, and many faced repeated interactions from other scammers once they had been successfully scammed.[16] Norton named technical support scams as the top phishing threat to consumers in October 2021, having blocked over 12.3 million tech support scam URLs between July and September 2021.[55] Legal action has been taken against some companies carrying out technical support scams.[56] In December 2014, Microsoft filed a lawsuit against a California-based company operating such scams for "misusing Microsoft's name and trademarks" and "creating security issues for victims by gaining access to their computers and installing malicious software, including a password grabber that could provide access to personal and financial information".[57] In December 2015, the state of Washington sued the firm iYogi for scamming consumers and making false claims in order to scare users into buying iYogi's diagnostic software.[58] iYogi was also accused of falsely claiming that they were affiliated with Microsoft, Hewlett-Packard and Apple.[59] In September 2011, Microsoft dropped gold partner Comantra from its Microsoft Partner Network following accusations of involvement in cold-call technical-support scams.[60] However, the ease with which companies that carry out technical support scams can be launched makes it difficult to prevent tech support scams from taking place.[61] Major search engines such as Bing and Google have taken steps to restrict the promotion of fake technical support websites through keyword advertising.[62][63] The Microsoft-owned advertising network Bing Ads (which services ad sales on the Bing and Yahoo! Search engines)[64] amended its terms of service in May 2016 to prohibit the advertising of third-party technical support services or ads claiming to "provide a service that can only be provided by the actual owner of the products or service advertised".[62][63] Google announced a verification program in 2018 in an attempt to restrict advertising for third-party tech support to legitimate companies.[65] Tech support scammers are regularly targeted by scam baiting,[45] with individuals seeking to raise awareness of these scams by uploading recordings on platforms like YouTube, to cause scammers inconvenience by wasting their time, and to protect potential victims. A good example of this is the YouTube community Scammer Payback.[66][67] Advanced scam baiters may infiltrate the scammer's computer, and potentially disable it, by deploying remote access trojans, distributed denial of service attacks and destructive malware.[68] Scam baiters may also attempt to lure scammers into exposing their unethical practices by leaving dummy files or malware disguised as confidential information,[69] such as credit/debit card information and passwords, on a virtual machine, which the scammer may attempt to steal, only to become infected.[45] Sensitive information important to carrying out further investigations by a law enforcement agency may be retrieved, and additional information on the rogue firm may then be posted or compiled online to warn potential victims.[69] In March 2020, an anonymous YouTuber under the alias Jim Browning successfully infiltrated and gathered drone and CCTV footage of a fraudulent call centre scam operation through the help of fellow YouTube personality Karl Rock.
Through the aid of the British documentary programme Panorama, a police raid was carried out when the documentary was brought to the attention of assistant police commissioner Karan Goel,[70] leading to the arrest of call centre operator Amit Chauhan, who also operated a fraudulent travel agency under the name "Faremart Travels".[71]
https://en.wikipedia.org/wiki/Technical_support_scam
Telemetry is the in situ collection of measurements or other data at remote points and their automatic transmission to receiving equipment (telecommunication) for monitoring.[1] The word is derived from the Greek roots tele, 'far off', and metron, 'measure'. Systems that need external instructions and data to operate require the counterpart of telemetry: telecommand.[2] Although the term commonly refers to wireless data transfer mechanisms (e.g., using radio, ultrasonic, or infrared systems), it also encompasses data transferred over other media such as a telephone or computer network, an optical link, or other wired communications like power line carriers. Many modern telemetry systems take advantage of the low cost and ubiquity of GSM networks by using SMS to receive and transmit telemetry data. A telemeter is a physical device used in telemetry. It consists of a sensor, a transmission path, and a display, recording, or control device. Electronic devices are widely used in telemetry and can be wireless or hard-wired, analog or digital. Other technologies are also possible, such as mechanical, hydraulic and optical.[3] Telemetry may be commutated to allow the transmission of multiple data streams in a fixed frame. The beginning of industrial telemetry lies in the steam age, although the sensor was not called a telemeter at that time.[4] Examples are James Watt's (1736-1819) additions to his steam engines for monitoring from a (near) distance, such as the mercury pressure gauge and the fly-ball governor.[4] Although the original telemeter referred to a ranging device (the rangefinding telemeter), by the late 19th century the same term was in wide use by electrical engineers applying it to electrically operated devices measuring many other quantities besides distance (for instance, in the patent of an "Electric Telemeter Transmitter"[5]). General telemeters included such sensors as the thermocouple (from the work of Thomas Johann Seebeck), the resistance thermometer (by William Siemens, based on the work of Humphry Davy), and the electrical strain gauge (based on Lord Kelvin's discovery that conductors under mechanical strain change their resistance), and output devices such as Samuel Morse's telegraph sounder and the relay. In 1889 this led an author in the Institution of Civil Engineers proceedings to suggest that the term for the rangefinder telemeter might be replaced with tacheometer.[6] In the 1930s the use of electrical telemeters grew rapidly. The electrical strain gauge was widely used in rocket and aviation research, and the radiosonde was invented for meteorological measurements. The advent of World War II gave an impetus to industrial development, and henceforth many of these telemeters became commercially viable.[7] Carrying on from rocket research, radio telemetry was used routinely as space exploration got underway. Spacecraft are in a place where a physical connection is not possible, leaving radio or other electromagnetic waves (such as infrared lasers) as the only viable option for telemetry. During crewed space missions it is used to monitor not only parameters of the vehicle, but also the health and life support of the astronauts.[8] During the Cold War, telemetry found uses in espionage. US intelligence found that they could monitor the telemetry from Soviet missile tests by building a telemeter of their own to intercept the radio signals and hence learn a great deal about Soviet capabilities.[9] Telemeters are the physical devices used in telemetry. A telemeter consists of a sensor, a transmission path, and a display, recording, or control device.
Electronic devices are widely used in telemetry and can be wireless or hard-wired, analog or digital. Other technologies are also possible, such as mechanical, hydraulic and optical.[10] Telemetering information over wire had its origins in the 19th century. One of the first data-transmission circuits was developed in 1845 between the Russian Tsar's Winter Palace and army headquarters. In 1874, French engineers built a system of weather and snow-depth sensors on Mont Blanc that transmitted real-time information to Paris. In 1901 the American inventor C. Michalke patented the selsyn, a circuit for sending synchronized rotation information over a distance. In 1906 a set of seismic stations were built with telemetering to the Pulkovo Observatory in Russia. In 1912, Commonwealth Edison developed a system of telemetry to monitor electrical loads on its power grid. The Panama Canal (completed 1913–1914) used extensive telemetry systems to monitor locks and water levels.[11] Wireless telemetry made early appearances in the radiosonde, developed concurrently in 1930 by Robert Bureau in France and Pavel Molchanov in Russia. Molchanov's system modulated temperature and pressure measurements by converting them to wireless Morse code. The German V-2 rocket used a system of primitive multiplexed radio signals called "Messina" to report four rocket parameters, but it was so unreliable that Wernher von Braun once claimed it was more useful to watch the rocket through binoculars. In the US and the USSR, the Messina system was quickly replaced with better systems, in both cases based on pulse-position modulation (PPM).[12] Early Soviet missile and space telemetry systems, developed in the late 1940s, used either PPM (e.g., the Tral telemetry system developed by OKB-MEI) or pulse-duration modulation (e.g., the RTS-5 system developed by NII-885). In the United States, early work employed similar systems, but these were later replaced by pulse-code modulation (PCM) (for example, in the Mars probe Mariner 4). Later Soviet interplanetary probes used redundant radio systems, transmitting telemetry by PCM on a decimeter band and PPM on a centimeter band.[13] Weather balloons have used telemetry to transmit meteorological data since 1920. Telemetry is used to transmit drilling mechanics and formation evaluation information uphole, in real time, as a well is drilled. These services are known as measurement while drilling and logging while drilling. Information acquired thousands of feet below ground, while drilling, is sent through the drilling hole to the surface sensors and the demodulation software. The pressure wave (sana) is translated into useful information after DSP and noise filtering. This information is used for formation evaluation, drilling optimization, and geosteering. Telemetry is a key factor in modern motor racing, allowing race engineers to interpret data collected during a test or race and use it to properly tune the car for optimum performance. Systems used in series such as Formula One have become advanced to the point where the potential lap time of the car can be calculated, and this time is what the driver is expected to meet. Examples of measurements on a race car include accelerations (G forces) in three axes, temperature readings, wheel speed, and suspension displacement. In Formula One, driver input is also recorded so the team can assess driver performance and (in case of an accident) the FIA can determine or rule out driver error as a possible cause.
Later developments include two-way telemetry, which allows engineers to update calibrations on the car in real time (even while it is out on the track). In Formula One, two-way telemetry surfaced in the early 1990s and consisted of a message display on the dashboard which the team could update. Its development continued until May 2001, when it was first allowed on the cars. By 2002, teams were able to change engine mapping and deactivate engine sensors from the pit while the car was on the track.[citation needed] For the 2003 season, the FIA banned two-way telemetry from Formula One;[14] however, the technology may be used in other types of racing or on road cars. One-way telemetry systems have also been applied in R/C racing cars to get information from the car's sensors, such as engine RPM, voltage, temperatures, and throttle. In the transportation industry, telemetry provides meaningful information about a vehicle or driver's performance by collecting data from sensors within the vehicle. This is undertaken for various reasons, ranging from staff compliance monitoring and insurance rating to predictive maintenance. Telemetry is used to link traffic counter devices to data recorders to measure traffic flows and vehicle lengths and weights.[15] Telemetry is used by the railway industry for measuring the health of trackage. This permits optimized and focused predictive and preventative maintenance. Typically this is done with specialized trains, such as the New Measurement Train used in the United Kingdom by Network Rail, which can check for track defects, such as problems with gauge, and deformations in the rail.[16] Japan uses similar, but quicker, trains nicknamed Doctor Yellow.[17] Such trains, besides checking the tracks, can also verify whether or not there are any problems with the overhead power supply (catenary), where it is installed. Dedicated rail inspection companies, such as Sperry Rail,[18] have their own customized rail cars and rail-wheel equipped trucks that use a variety of methods, including lasers, ultrasound, and induction (measuring the magnetic fields that result from running electricity through the rails), to find any defects.[19] Most activities related to healthy crops and good yields depend on timely availability of weather and soil data. Therefore, wireless weather stations play a major role in disease prevention and precision irrigation. These stations transmit parameters necessary for decision-making to a base station: air temperature and relative humidity, precipitation and leaf wetness (for disease prediction models), solar radiation and wind speed (to calculate evapotranspiration), water deficit stress (WDS) leaf sensors and soil moisture (crucial to irrigation decisions). Because local micro-climates can vary significantly, such data needs to come from within the crop. Monitoring stations usually transmit data back by terrestrial radio, although occasionally satellite systems are used. Solar power is often employed to make the station independent of the power grid. Telemetry is important in water management, including water quality and stream gauging functions. Major applications include AMR (automatic meter reading), groundwater monitoring, leak detection in distribution pipelines and equipment surveillance. Having data available in almost real time allows quick reactions to events in the field. Telemetry control allows engineers to intervene with assets such as pumps, for example by remotely switching pumps on or off depending on the circumstances.
Watershed telemetry is an excellent strategy for implementing a water management system.[20] Telemetry is used in complex systems such as missiles, RPVs, spacecraft, oil rigs, and chemical plants, since it allows the automatic monitoring, alerting, and record-keeping necessary for efficient and safe operation. Space agencies such as NASA, ISRO, the European Space Agency (ESA), and other agencies use telemetry and/or telecommand systems to collect data from spacecraft and satellites. Telemetry is vital in the development of missiles, satellites and aircraft because the system might be destroyed during or after the test. Engineers need critical system parameters to analyze (and improve) the performance of the system. In the absence of telemetry, this data would often be unavailable. Telemetry is used by crewed and uncrewed spacecraft for data transmission. Distances of more than 25.1 billion kilometers (as of May 2025)[21] have been covered, e.g., by Voyager 1. In rocketry, telemetry equipment forms an integral part of the rocket range assets used to monitor the position and health of a launch vehicle in order to determine range safety flight termination criteria (the range's purpose being public safety). Problems include the extreme environment (temperature, acceleration and vibration), the energy supply, antenna alignment and (at long distances, e.g., in spaceflight) signal travel time. Today nearly every type of aircraft, missile, or spacecraft carries a wireless telemetry system as it is tested.[22] Aeronautical mobile telemetry is used for the safety of the pilots and persons on the ground during flight tests. Telemetry from an on-board flight test instrumentation system is the primary source of real-time measurement and status information transmitted during the testing of crewed and uncrewed aircraft.[23] Intercepted telemetry was an important source of intelligence for the United States and the UK when Soviet missiles were tested; for this purpose, the United States operated a listening post in Iran. Eventually, the Russians discovered the United States' intelligence-gathering network and encrypted their missile-test telemetry signals. Telemetry was also a source for the Soviets, who operated listening ships in Cardigan Bay to eavesdrop on UK missile tests performed in the area.[citation needed] In factories, buildings and houses, the energy consumption of systems such as HVAC is monitored at multiple locations; related parameters (e.g., temperature) are sent via wireless telemetry to a central location. The information is collected and processed, enabling the most efficient use of energy. Such systems also facilitate predictive maintenance. Many resources need to be distributed over wide areas. Telemetry is useful in these cases, since it allows the logistics system to channel resources where they are needed, as well as provide security for those assets; principal examples are dry goods, fluids, and granular bulk solids. Dry goods, such as packaged merchandise, may be remotely monitored, tracked and inventoried by RFID sensing systems, barcode readers, optical character recognition (OCR) readers, or other sensing devices coupled to telemetry devices that detect RFID tags, barcode labels or other identifying markers affixed to the item, its package, or (for large items and bulk shipments) affixed to its shipping container or vehicle.
This facilitates knowledge of their location, and can record their status and disposition, as when merchandise with barcode labels is scanned through a checkout reader at point-of-sale systems in a retail store. Stationary or hand-held barcode or RFID scanners, or optical readers with remote communications, can be used to expedite inventory tracking and counting in stores, warehouses, shipping terminals, transportation carriers and factories.[24][25][26] Fluids stored in tanks are a principal object of constant commercial telemetry. This typically includes monitoring of tank farms in gasoline refineries and chemical plants, and of distributed or remote tanks, which must be replenished when empty (as with gas station storage tanks, home heating oil tanks, or ag-chemical tanks at farms), or emptied when full (as with production from oil wells, accumulated waste products, and newly produced fluids).[27] Telemetry is used to communicate the variable measurements of flow and tank level sensors detecting fluid movements and/or volumes by pneumatic, hydrostatic, or differential pressure; tank-confined ultrasonic, radar or Doppler effect echoes; or mechanical or magnetic sensors.[27][28][29] Telemetry of bulk solids is common for tracking and reporting the volume status and condition of grain and livestock feed bins, powdered or granular food, powders and pellets for manufacturing, sand and gravel, and other granular bulk solids. While the technology associated with fluid tank monitoring also applies, in part, to granular bulk solids, reporting of overall container weight, or other gross characteristics and conditions, is sometimes required, owing to bulk solids' more complex and variable physical characteristics.[30][31] Telemetry is used for patients (biotelemetry) who are at risk of abnormal heart activity, generally in a coronary care unit. Telemetry specialists are sometimes used to monitor many patients within a hospital.[32] Such patients are outfitted with measuring, recording and transmitting devices. A data log can be useful in diagnosis of the patient's condition by doctors. An alerting function can alert nurses if the patient is suffering from an acute (or dangerous) condition. Systems are available in medical-surgical nursing for monitoring to rule out a heart condition, or to monitor a response to antiarrhythmic medications such as amiodarone. A new and emerging application for telemetry is in the field of neurophysiology, or neurotelemetry. Neurophysiology is the study of the central and peripheral nervous systems through the recording of bioelectrical activity, whether spontaneous or stimulated. In neurotelemetry (NT) the electroencephalogram (EEG) of a patient is monitored remotely by a registered EEG technologist using advanced communication software. The goal of neurotelemetry is to recognize a decline in a patient's condition before physical signs and symptoms are present. Neurotelemetry is synonymous with real-time continuous video EEG monitoring and has applications in the epilepsy monitoring unit, neuro ICU, pediatric ICU and newborn ICU. Due to the labor-intensive nature of continuous EEG monitoring, NT is typically done in larger academic teaching hospitals using in-house programs that include registered EEG technologists, IT support staff, neurologists, neurophysiologists and monitoring support personnel. Modern microprocessor speeds, software algorithms and video data compression allow hospitals to centrally record and monitor continuous digital EEGs of multiple critically ill patients simultaneously.
Neurotelemetry and continuous EEG monitoring provide dynamic information about brain function that permits early detection of changes in neurologic status, which is especially useful when the clinical examination is limited. Telemetry is used to study wildlife,[33] and has been useful for monitoring threatened species at the individual level. Animals under study can be outfitted with instrumentation tags, which include sensors that measure temperature, diving depth and duration (for marine animals), and speed and location (using GPS or Argos packages). Telemetry tags can give researchers information about animal behavior, functions, and their environment. This information is then either stored (with archival tags) or the tags can send (or transmit) their information to a satellite or handheld receiving device.[34] Capturing and marking wild animals can put them at some risk, so it is important to minimize these impacts.[35] At a 2005 workshop in Las Vegas, a seminar noted the introduction of telemetry equipment which would allow vending machines to communicate sales and inventory data to a route truck or to a headquarters.[citation needed] This data could be used for a variety of purposes, such as eliminating the need for drivers to make a first trip to see which items needed to be restocked before delivering the inventory. Retailers also use RFID tags to track inventory and prevent shoplifting. Most of these tags passively respond to RFID readers (e.g., at the cashier), but active RFID tags are available which periodically transmit location information to a base station. Telemetry hardware is useful for tracking persons and property in law enforcement. An ankle collar worn by convicts on probation can warn authorities if a person violates the terms of his or her parole, such as by straying from authorized boundaries or visiting an unauthorized location. Telemetry has also enabled bait cars, where law enforcement can rig a car with cameras and tracking equipment and leave it somewhere they expect it to be stolen. When stolen, the telemetry equipment reports the location of the vehicle, enabling law enforcement to deactivate the engine and lock the doors once it is stopped by responding officers. In some countries, telemetry is used to measure the amount of electrical energy consumed. The electricity meter communicates with a concentrator, and the latter sends the information through GPRS or GSM to the energy provider's server. Telemetry is also used for the remote monitoring of substations and their equipment. For data transmission, phase line carrier systems operating on frequencies between 30 and 400 kHz are sometimes used. In falconry, "telemetry" means a small radio transmitter carried by a bird of prey that allows the bird's owner to track it when it is out of sight. Telemetry is used in testing hostile environments which are dangerous to humans. Examples include munitions storage facilities, radioactive sites, volcanoes, the deep sea, and outer space. Telemetry is used in many battery-operated wireless systems to inform monitoring personnel when the battery power is reaching a low point and the end item needs fresh batteries.
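As a rough illustration of the meter-to-concentrator flow and the low-battery reporting described above, the following Python sketch models readings being buffered by a concentrator before upload; the field names, threshold, and JSON payload are illustrative assumptions, not any standardized metering protocol.

```python
# Illustrative sketch only: a meter reading carrying a consumption value and a
# battery level, and a concentrator that batches readings into one payload for
# upload over a backhaul such as GPRS/GSM.
import json
import time
from dataclasses import dataclass, asdict

LOW_BATTERY_THRESHOLD = 0.2  # assumption: flag readings below 20% charge


@dataclass
class MeterReading:
    meter_id: str
    kwh_total: float       # cumulative energy consumed
    battery_level: float    # 0.0 to 1.0
    timestamp: float


class Concentrator:
    def __init__(self) -> None:
        self.buffer: list[MeterReading] = []

    def collect(self, reading: MeterReading) -> None:
        """Buffer a reading and note when the sending device needs fresh batteries."""
        self.buffer.append(reading)
        if reading.battery_level < LOW_BATTERY_THRESHOLD:
            print(f"ALERT: meter {reading.meter_id} battery low")

    def build_upload(self) -> str:
        """Serialize buffered readings for transmission to the provider's server."""
        payload = json.dumps([asdict(r) for r in self.buffer])
        self.buffer.clear()
        return payload


concentrator = Concentrator()
concentrator.collect(MeterReading("meter-42", 10543.7, 0.15, time.time()))
print(concentrator.build_upload())
```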
In the mining industry, telemetry serves two main purposes: the measurement of key parameters from mining equipment and the monitoring of safety practices.[36] The information provided by the collection and analysis of key parameters allows for root-cause identification of inefficient operations, unsafe practices and incorrect equipment usage, maximizing productivity and safety.[37] Further applications of the technology allow for sharing knowledge and best practices across the organization.[37] In software, telemetry is used to gather data on the use and performance of applications and application components, e.g. how often certain features are used, measurements of start-up time and processing time, hardware, application crashes, and general usage statistics and/or user behavior. In some cases, very detailed data is reported, such as individual window metrics, counts of used features, and individual function timings. This kind of telemetry can be essential for software developers, allowing them to receive data from a wide variety of endpoints that cannot possibly all be tested in-house, as well as data on the popularity of certain features and whether they should be given priority or be considered for removal. Because software telemetry can easily be used to profile users, raising privacy concerns, telemetry in user software is often left to user choice, commonly presented as an opt-out feature (requiring explicit user action to disable it) or as a choice made during the software installation process. As in other telecommunications fields, international standards exist for telemetry equipment and software. International standards-producing bodies include the Consultative Committee for Space Data Systems (CCSDS) for space agencies, the Inter-Range Instrumentation Group (IRIG) for missile ranges, and the Telemetering Standards Coordination Committee (TSCC), an organisation of the International Foundation for Telemetering.
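As an illustration of how such usage telemetry is typically collected, the following is a minimal sketch in Python; the endpoint URL, event names and fields are hypothetical, and collection is skipped entirely when the user has opted out:

```python
import json
import time
import urllib.request

TELEMETRY_ENDPOINT = "https://telemetry.example.com/v1/events"  # hypothetical
telemetry_enabled = True          # set to False when the user opts out

_events = []

def record_event(name, **fields):
    """Buffer a usage event (feature use, timing, crash report, ...)."""
    if not telemetry_enabled:
        return                    # respect the user's opt-out choice
    _events.append({"name": name, "time": time.time(), **fields})

def flush_events():
    """Send all buffered events to the collection endpoint in one batch."""
    if not telemetry_enabled or not _events:
        return
    body = json.dumps(_events).encode("utf-8")
    request = urllib.request.Request(
        TELEMETRY_ENDPOINT, data=body,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)      # error handling omitted for brevity
    _events.clear()

# Example usage: time a (hypothetical) feature and buffer the measurement.
start = time.monotonic()
# ... the feature runs here ...
record_event("feature_used", feature="export_pdf",
             duration_ms=int((time.monotonic() - start) * 1000))
print(_events)    # flush_events() would POST the batch to the endpoint
```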
https://en.wikipedia.org/wiki/Telemetry#Software
Typosquatting, also calledURL hijacking, asting site, acousin domain, or afake URL, is a form ofcybersquatting, and possiblybrandjackingwhich relies on mistakes such astyposmade by Internet users when inputting awebsite addressinto aweb browser. A user accidentally entering an incorrect website address may be led to any URL, including an alternative website owned by a cybersquatter. The typosquatter'sURLwill usually besimilarto the victim's site address; the typosquatting site could be in the form of: Similar abuses: Once on the typosquatter's site, the user may also be tricked into thinking that they are actually on the real site through the use of copied or similar logos, website layouts, or content. Spam emails sometimes make use of typosquatting URLs to trick users into visiting malicious sites that look like a given bank's site, for instance. There are several different reasons for typosquatters buying a typo domain: Many companies, includingVerizon,Lufthansa, andLego, have gained reputations for aggressively chasing down typosquatted names. Lego, for example, has spent roughlyUS$500,000 on taking 309 cases throughUDRPproceedings.[2] Celebrities have also pursued their domain names. Prominent examples include basketball playerDirk Nowitzki'sUDRP of DirkSwish.com[3]and actressEva Longoria'sUDRP of EvaLongoria.org.[4] Goggle, a typosquatted version ofGoogle, was the subject of a 2006 web safety promotion byMcAfee, a computer security company, which depicted the significant amounts of malware installed throughdrive-by downloadsupon accessing the site at the time. Goggle installedSpySheriff. Later, the URL was redirected to google.com;[5]a 2018 check revealed it to redirect users toadwarepages, and a 2020 attempt to access the site through a privateDNSresolver hosted byAdGuardresulted in the page being identified asmalwareand blocked for the user'ssecurity. By mid-2022, it had been turned into a political blog.[citation needed]As of April 2025 goggle.com is not operational. Another example of corporate typosquatting is yuube.com, targetingYouTubeusers by programming that URL toredirectto a malicious website or page that asks users to add a malware "security check extension".[6]Similarly, www.airfrance.com has been typosquatted by www.arifrance.com, diverting users to a website peddling discount travel (although it now redirects to a warning fromAir Franceabout malware).[7]Other examples are equifacks.com (Equifax.com), experianne.com (Experian.com), and tramsonion.com (TransUnion.com); these three typosquatted sites were registered by comedianJohn Oliverfor his showLast Week Tonight.[8][9]Over 550 typosquats related to the2020 U.S. presidential electionwere detected in 2019.[10] The Magniberransomwareis being distributed in a typosquatting method that exploits typos made when entering domains, targeting mainly Chrome and Edge users.[11] In the United States, the 1999Anticybersquatting Consumer Protection Act(ACPA) contains a clause (Section 3(a), amending 15 USC 1117 to include sub-section (d)(2)(B)(ii)) aimed at combatting typosquatting.[12][13] On April 17, 2006, evangelistJerry Falwellfailed to get theU.S Supreme Courtto review a decision allowing Christopher Lamparello to use www.fallwell.com. Relying on a plausible misspelling of Falwell's name, Lamparello'sgripe sitepresents misdirected visitors with scriptural references that are intended to counter the fundamentalist preacher's scathing rebukes againsthomosexuality. InLamparello v. 
Falwell, the high court let stand a 2005Fourth Circuitopinion that "the use of a mark in a domain name for a gripe site criticizing the markholder does not constitute cybersquatting." Under theUniform Domain-Name Dispute-Resolution Policy(UDRP),trademarkholders can file a case at theWorld Intellectual Property Organization(WIPO) against typosquatters (as with cybersquatters in general).[7]The complainant has to show that the registered domain name is identical orconfusingly similarto their trademark, that the registrant has no legitimate interest in the domain name, and that the domain name is being used inbad faith.[7]
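Defensively registering or monitoring typo domains usually starts from mechanically generated candidates. The following is a minimal sketch in Python of generating a few common single-mistake variants of a domain name; the typo rules shown are illustrative, not an exhaustive model of user error:

```python
def typo_candidates(domain):
    """Generate common single-mistake variants of a domain name.

    Covers character omission, duplication and adjacent transposition, plus
    the fused 'www' prefix; a few of the errors typosquatters rely on.
    """
    name, dot, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])                # omission
        variants.add(name[:i + 1] + name[i] + name[i + 1:])  # duplication
        if i + 1 < len(name):
            variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])  # swap
    variants.discard(name)            # drop variants equal to the original
    candidates = {v + dot + tld for v in variants}
    candidates.add("www" + domain)    # e.g. wwwexample.com
    return candidates

# Example usage:
for candidate in sorted(typo_candidates("example.com"))[:10]:
    print(candidate)
```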
https://en.wikipedia.org/wiki/Typosquatting
Aweb serveriscomputersoftwareand underlyinghardwarethat accepts requests viaHTTP(thenetwork protocolcreated to distributeweb content) or its secure variantHTTPS. A user agent, commonly aweb browserorweb crawler, initiates communication by making a request for aweb pageor otherresourceusing HTTP, and theserverresponds with the content of that resource or anerror message. A web server can also accept and store resources sent from the user agent if configured to do so.[1][2][3][4][5] The hardware used to run a web server can vary according to the volume of requests that it needs to handle. At the low end of the range areembedded systems, such as arouterthat runs a small web server as its configuration interface. A high-trafficInternetwebsitemight handle requests with hundreds of servers that run on racks of high-speed computers.[6] A resource sent from a web server can be a pre-existingfile(static content) available to the web server, or it can be generated at the time of the request (dynamic content) by anotherprogramthat communicates with the server software. The former usually can be served faster and can be more easilycachedfor repeated requests, while the latter supports a broader range of applications. Technologies such asRESTandSOAP, which use HTTP as a basis for general computer-to-computer communication, as well as support forWebDAVextensions, have extended the application of web servers well beyond their original purpose of serving human-readable pages. This is a very brief history ofweb server programs, so some information necessarily overlaps with the histories of theweb browsers, theWorld Wide Weband theInternet; therefore, for the sake of clarity and understandability, some key historical information below reported may be similar to that found also in one or more of the above-mentioned history articles.[7] In March 1989,Sir Tim Berners-Leeproposed a new project to his employerCERN, with the goal of easing the exchange of information between scientists by using ahypertextsystem. The proposal titled"HyperText and CERN", asked for comments and it was read by several people. In October 1990 the proposal was reformulated and enriched (having as co-authorRobert Cailliau), and finally, it was approved.[8][9][10] Between late 1990 and early 1991 the project resulted in Berners-Lee and his developers writing and testing several software libraries along with three programs, which initially ran onNeXTSTEP OSinstalled onNeXTworkstations:[11][12][10] Those early browsers retrieved web pages written in asimple early form of HTML, from web server(s) using a new basic communication protocol that was namedHTTP 0.9. In August 1991 Tim Berners-Lee announced thebirth of WWW technologyand encouraged scientists to adopt and develop it.[13]Soon after, those programs, along with theirsource code, were made available to people interested in their usage.[11]Although the source code was not formally licensed or placed in the public domain, CERN informally allowed users and developers to experiment and further develop on top of them. Berners-Lee started promoting the adoption and the usage of those programs along with theirportingto otheroperating systems.[10] In December 1991, thefirst web server outside Europewas installed at SLAC (U.S.A.).[12]This was a very important event because it started trans-continental web communications between web browsers and web servers. 
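Such an exchange between a web browser (or any HTTP client) and a web server can be reproduced directly at the socket level. The following is a minimal sketch in Python that sends a bare HTTP/1.1 GET request to a host (example.com is used here purely for illustration) and prints the status line and headers of the response:

```python
import socket

HOST = "example.com"        # illustrative host

# Open a TCP connection to the web server's default HTTP port (80).
with socket.create_connection((HOST, 80), timeout=10) as sock:
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        f"Connection: close\r\n"
        f"\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Read until the server closes the connection ('Connection: close').
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("iso-8859-1"))          # status line and response headers
print(f"({len(body)} bytes of body received)")
```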
In 1991–1993, CERN web server program continued to be actively developed by the www group, meanwhile, thanks to the availability of its source code and the public specifications of the HTTP protocol, many other implementations of web servers started to be developed. In April 1993, CERN issued a public official statement stating that the three components of Web software (the basic line-mode client, the web server and the library of common code), along with theirsource code, were put in thepublic domain.[14]This statement freed web server developers from any possible legal issue about the development ofderivative workbased on that source code (a threat that in practice never existed). At the beginning of 1994, the most notable among new web servers wasNCSA httpdwhich ran on a variety ofUnix-based OSs and could servedynamically generated contentby implementing thePOSTHTTP method and theCGIto communicate with external programs. These capabilities, along with the multimedia features of NCSA'sMosaicbrowser (also able to manageHTMLFORMsin order to send data to a web server) highlighted the potential of web technology for publishing anddistributed computingapplications. In the second half of 1994, the development of NCSA httpd stalled to the point that a group of external software developers,webmastersand other professional figures interested in that server, started to write and collectpatchesthanks to the NCSA httpd source code being available to the public domain. At the beginning of 1995 those patches were all applied to the last release of NCSA source code and, after several tests, theApache HTTP serverproject was started.[17][18] At the end of 1994, a new commercial web server, namedNetsite, was released with specific features. It was the first one of many other similar products that were developed first byNetscape, then also bySun Microsystems, and finally byOracle Corporation. In mid-1995, the first version ofIISwas released, forWindows NTOS, byMicrosoft. This marked the entry, in the field of World Wide Web technologies, of a very important commercial developer and vendor that has played and still is playing a key role on both sides (client and server) of the web. In the second half of 1995, CERN and NCSA web servers started to decline (in global percentage usage) because of the widespread adoption of new web servers which had a much faster development cycle along with more features, more fixes applied, and more performances than the previous ones. At the end of 1996, there were already overfiftyknown (different) web server software programs that were available to everybody who wanted to own an Internetdomain nameand/or to host websites.[20]Many of them lived only shortly and were replaced by other web servers. The publication ofRFCsabout protocol versions HTTP/1.0 (1996) and HTTP/1.1 (1997, 1999), forced most web servers to comply (not always completely) with those standards. The use of TCP/IPpersistent connections(HTTP/1.1) required web servers both to increase the maximum number of concurrent connections allowed and to improve their level of scalability. Between 1996 and 1999,Netscape Enterprise Serverand Microsoft's IIS emerged among the leading commercial options whereas among the freely available andopen-sourceprograms Apache HTTP Server held the lead as the preferred server (because of its reliability and its many features). 
In those years there was also another commercial, highly innovative and thus notable web server calledZeus(now discontinued) that was known as one of the fastest and most scalable web servers available on market, at least till the first decade of 2000s, despite its low percentage of usage. Apache resulted in the most used web server from mid-1996 to the end of 2015 when, after a few years of decline, it was surpassed initially by IIS and then by Nginx. Afterward IIS dropped to much lower percentages of usage than Apache (see alsomarket share). From 2005–2006, Apache started to improve its speed and its scalability level by introducing new performance features (e.g. event MPM and new content cache).[21][22]As those new performance improvements initially were marked as experimental, they were not enabled by its users for a long time and so Apache suffered, even more, the competition of commercial servers and, above all, of other open-source servers which meanwhile had already achieved far superior performances (mostly when serving static content) since the beginning of their development and at the time of the Apache decline were able to offer also a long enough list of well tested advanced features. In fact, a few years after 2000 started, not only other commercial and highly competitive web servers, e.g.LiteSpeed, but also many other open-source programs, often of excellent quality and very high performances, among which should be notedHiawatha,Cherokee HTTP server,Lighttpd,Nginxand other derived/related products also available with commercial support, emerged. Around 2007–2008, most popular web browsers increased their previous default limit of 2 persistent connections per host-domain (a limit recommended by RFC-2616)[23]to 4, 6 or 8 persistent connections per host-domain, in order to speed up the retrieval of heavy web pages with lots of images, and to mitigate the problem of the shortage of persistent connections dedicated to dynamic objects used for bi-directional notifications of events in web pages.[24]Within a year, these changes, on average, nearly tripled the maximum number of persistent connections that web servers had to manage. This trend (of increasing the number of persistent connections) definitely gave a strong impetus to the adoption ofreverse proxiesin front of slower web servers and it gave also one more chance to the emerging new web servers that could show all their speed and their capability to handle very high numbers of concurrent connections without requiring too many hardware resources (expensive computers with lots of CPUs, RAM and fast disks).[25] In 2015, RFCs published new protocol version [HTTP/2], and as the implementation of new specifications was not trivial at all, adilemma arose among developers of less popular web servers(e.g. with a percentage of usage lower than 1% .. 2%), about adding or not adding support for that new protocol version.[26][27] In fact supporting HTTP/2 often required radical changes to their internal implementation due to many factors (practically always required encrypted connections, capability to distinguish between HTTP/1.x and HTTP/2 connections on the same TCP port, binary representation of HTTP messages, message priority, compression of HTTP headers, use of streams also known as TCP/IP sub-connections and related flow-control, etc.) 
and so a few developers of those web servers opted fornot supporting new HTTP/2 version(at least in the near future) also because of these main reasons:[26][27] Instead, developers ofmost popular web servers, rushed to offer the availability of new protocol, not only because they had the work force and the time to do so, but also because usually their previous implementation ofSPDYprotocol could be reused as a starting point and because most used web browsers implemented it very quickly for the same reason. Another reason that prompted those developers to act quickly was that webmasters felt the pressure of the ever increasingweb trafficand they really wanted to install and to try – as soon as possible – something that could drastically lower the number of TCP/IP connections and speedup accesses to hosted websites.[28] In 2020–2021 the HTTP/2 dynamics about its implementation (by top web servers and popular web browsers) were partly replicated after the publication of advanced drafts of future RFC aboutHTTP/3protocol. The following technical overview should be considered only as an attempt to give a few verylimited examplesaboutsomefeatures that may beimplementedin a web server andsomeof the tasks that it may perform in order to have a sufficiently wide scenario about the topic. Aweb server programplays the role of a server in aclient–server modelby implementing one or more versions of HTTP protocol, often including the HTTPS secure variant and other features and extensions that are considered useful for its planned usage. The complexity and the efficiency of a web server program may vary a lot depending on (e.g.):[1] Although web server programs differ in how they are implemented, most of them offer the following common features. These arebasic featuresthat most web servers usually have. A few other moreadvancedand popularfeatures(only a very short selection) are the following ones. A web server program, when it is running, usually performs several generaltasks, (e.g.):[1] Web server programs are able:[29][30][31] Once an HTTP request message has been decoded and verified, its values can be used to determine whether that request can be satisfied or not. This requires many other steps, includingsecurity checks. Web server programs usually perform some type ofURL normalization(URLfound in most HTTP request messages) in order to: The termURL normalizationrefers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed, including the conversion of the scheme and host to lowercase. Among the most important normalizations are the removal of "." and ".." path segments and adding trailing slashes to a non-empty path component. "URL mapping is the process by which a URL is analyzed to figure out what resource it is referring to, so that that resource can be returned to the requesting client. This process is performed with every request that is made to a web server, with some of the requests being served with a file, such as an HTML document, or a gif image, others with the results of running a CGI program, and others by some other process, such as a built-in module handler, a PHP document, or a Java servlet."[32][needs update] In practice, web server programs that implement advanced features, beyond the simplestatic content serving(e.g. URL rewrite engine, dynamic content serving), usually have to figure out how that URL has to be handled, e.g. 
as a: One or more configuration files of web server may specify the mapping of parts ofURL path(e.g. initial parts offile path,filename extensionand other path components) to a specific URL handler (file, directory, external program or internal module).[33] When a web server implements one or more of the above-mentioned advanced features then the path part of a valid URL may not always match an existing file system path under website directory tree (a file or a directory infile system) because it can refer to a virtual name of an internal or external module processor for dynamic requests. Web server programs are able to translate an URL path (all or part of it), that refers to a physical file system path, to anabsolute pathunder the target website's root directory.[33] Website's root directory may be specified by a configuration file or by some internal rule of the web server by using the name of the website which is thehostpart of the URL found in HTTP client request.[33] Path translation to file system is done for the following types of web resources: The web server appends the path found in requested URL (HTTP request message) and appends it to the path of the (Host) website root directory. On anApache server, this is commonly/home/www/website(onUnixmachines, usually it is:/var/www/website). See the following examples of how it may result. URL path translation for a static file request Example of astatic requestof an existing file specified by the following URL: The client'suser agentconnects towww.example.comand then sends the followingHTTP/1.1 request: The result is the local file system resource: The web server then reads thefile, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself or an error message will return saying that the file does not exist or its access is forbidden. URL path translation for a directory request (without a static index file) Example of an implicitdynamic requestof an existing directory specified by the following URL: The client'suser agentconnects towww.example.comand then sends the followingHTTP/1.1 request: The result is the local directory path: The web server then verifies the existence of thedirectoryand if it exists and it can be accessed then tries to find out an index file (which in this case does not exist) and so it passes the request to an internal module or a program dedicated to directory listings and finally reads data output and sends a response to the client's web browser. The response will describe the content of the directory (list of contained subdirectories and files) or an error message will return saying that the directory does not exist or its access is forbidden. URL path translation for a dynamic program request For adynamic requestthe URL path specified by the client should refer to an existing external program (usually an executable file with a CGI) used by the web server to generate dynamic content.[34] Example of adynamic requestusing a program file to generate output: The client'suser agentconnects towww.example.comand then sends the followingHTTP/1.1 request: The result is the local file path of the program (in this example, aPHPprogram): The web server executes that program, passing in the path-info and thequery stringaction=view&orderby=thread&date=2021-10-15so that the program has the info it needs to run. (In this case, it will return an HTML document containing a view of forum entries ordered by thread from October 15, 2021). 
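The URL-to-file-system translation illustrated by these examples can be sketched in a few lines. The following is a minimal sketch in Python, assuming the /home/www/website root directory used above; it percent-decodes the request path, normalizes "." and ".." segments, and refuses any path that would escape the website root directory:

```python
import os
from urllib.parse import unquote, urlsplit

DOCUMENT_ROOT = "/home/www/website"   # website root directory used above

def translate_path(request_target):
    """Map the path of a request URL to an absolute file system path.

    Returns None when the normalized path would escape the document root
    (for example via '..' segments), which a server must refuse to serve.
    """
    url_path = urlsplit(request_target).path        # drop any query string
    decoded = unquote(url_path)                     # %xx percent-decoding
    # Join onto the document root, then resolve '.', '..' and doubled slashes.
    candidate = os.path.normpath(
        os.path.join(DOCUMENT_ROOT, decoded.lstrip("/")))
    # Defensive check: the mapped path must remain under the document root.
    if os.path.commonpath([DOCUMENT_ROOT, candidate]) != DOCUMENT_ROOT:
        return None
    return candidate

# GET /path/file.html  ->  /home/www/website/path/file.html
print(translate_path("/path/file.html?action=view"))
print(translate_path("/../../etc/passwd"))   # None: escapes the root
```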
In addition to this, the web server reads data sent from the external program and resends that data to the client that made the request. Once a request has been read, interpreted, and verified, it has to be managed depending on its method, its URL, and its parameters, which may include values of HTTP headers. In practice, the web server has to handle the request by using one of these response paths:[33] If a web server program is capable ofserving static contentand it has been configured to do so, then it is able to send file content whenever a request message has a valid URL path matching (after URL mapping, URL translation and URL redirection) that of an existing file under the root directory of a website and file has attributes which match those required by internal rules of web server program.[33] That kind of content is calledstaticbecause usually it is not changed by the web server when it is sent to clients and because it remains the same until it is modified (file modification) by some program. NOTE: when servingstatic content only, a web server program usuallydoes not change file contentsof served websites (as they are only read and never written) and so it suffices to support only theseHTTP methods: Response of static file content can be sped up by afile cache. If a web server program receives a client request message with an URL whose path matches one of an existingdirectoryand that directory is accessible and serving directory index file(s) is enabled then a web server program may try to serve the first of known (or configured) static index file names (aregular file) found in that directory; if no index file is found or other conditions are not met then an error message is returned. Most used names for static index files are:index.html,index.htmandDefault.htm. If a web server program receives a client request message with an URL whose path matches the file name of an existingfileand that file is accessible by web server program and its attributes match internal rules of web server program, then web server program can send that file to client. Usually, for security reasons, most web server programs are pre-configured to serve onlyregular filesor to avoid to usespecial file typeslikedevice files, along withsymbolic linksorhard linksto them. The aim is to avoid undesirable side effects when serving static web resources.[35] If a web server program is capable ofserving dynamic contentand it has been configured to do so, then it is able to communicate with the proper internal module or external program (associated with the requested URL path) in order to pass to it the parameters of the client request. After that, the web server program reads from it its data response (that it has generated, often on the fly) and then it resends it to the client program who made the request.[citation needed] NOTE: when servingstatic and dynamic content, a web server program usually has to support also the following HTTP method in order to be able to safelyreceive datafrom client(s) and so to be able to host also websites with interactive form(s) that may send large data sets (e.g. lots ofdata entryorfile uploads) to web server / external programs / modules: In order to be able to communicate with its internal modules and/or external programs, a web server program must have implemented one or more of the many availablegateway interface(s)(see alsoWeb Server Gateway Interfaces used for dynamic content). The threestandardand historicalgateway interfacesare the following ones. 
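Among these, CGI is the oldest and most widely known: the server exports request metadata (such as QUERY_STRING and REQUEST_METHOD) to the external program as environment variables and relays whatever the program writes to standard output. A minimal sketch of such a CGI program in Python (the parameter name and HTML output are purely illustrative):

```python
#!/usr/bin/env python3
"""Minimal CGI program. The web server exports request metadata (for example
QUERY_STRING and REQUEST_METHOD) as environment variables, runs the program,
and relays what it writes to standard output: headers, a blank line, a body."""

import html
import os
import sys
from urllib.parse import parse_qs

params = parse_qs(os.environ.get("QUERY_STRING", ""))
name = html.escape(params.get("name", ["world"])[0])   # illustrative parameter

body = f"<html><body><h1>Hello, {name}!</h1></body></html>"

sys.stdout.write("Content-Type: text/html\r\n")
sys.stdout.write(f"Content-Length: {len(body.encode('utf-8'))}\r\n")
sys.stdout.write("\r\n")            # blank line ends the header block
sys.stdout.write(body)
```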
A web server program may be capable to manage the dynamic generation (on the fly) of adirectory index listof files and sub-directories.[36] If a web server program is configured to do so and a requested URL path matches an existing directory and its access is allowed and no static index file is found under that directory then a web page (usually in HTML format), containing the list of files and/or subdirectories of above mentioned directory, isdynamically generated(on the fly). If it cannot be generated an error is returned. Some web server programs allow the customization of directory listings by allowing the usage of a web page template (an HTML document containing placeholders, e.g.$(FILE_NAME), $(FILE_SIZE), etc., that are replaced with the field values of each file entry found in directory by web server), e.g.index.tplor the usage of HTML and embedded source code that is interpreted and executed on the fly, e.g.index.asp, and / or by supporting the usage of dynamic index programs such as CGIs, SCGIs, FCGIs, e.g.index.cgi,index.php,index.fcgi. Usage of dynamically generateddirectory listingsis usually avoided or limited to a few selected directories of a website because that generation takes much more OS resources than sending a static index page. The main usage ofdirectory listingsis to allow the download of files (usually when their names, sizes, modification date-times orfile attributesmay change randomly / frequently)as they are, without requiring to provide further information to requesting user.[37] An external program or an internal module (processing unit) can execute some sort of application function that may be used to get data from or to store data to one or moredata repositories, e.g.:[citation needed] Aprocessing unitcan return any kind of web content, also by using data retrieved from a data repository, e.g.:[citation needed] In practice whenever there is content that may vary, depending on one or more parameters contained in client request or in configuration settings, then, usually, it is generated dynamically. Web server programs are able to send response messages as replies to client request messages.[29] An error response message may be sent because a request message could not be successfully read or decoded or analyzed or executed.[30] NOTE: the following sections are reported only as examples to help to understand what a web server, more or less, does; these sections are by any means neither exhaustive nor complete. A web server program may reply to a client request message with many kinds of error messages, anyway these errors are divided mainly in two categories: When an error response / message is received by a client browser, then if it is related to the main user request (e.g. an URL of a web resource such as a web page) then usually that error message is shown in some browser window / message. 
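The dynamically generated directory listings described above can be sketched briefly. The following is a minimal sketch in Python, using Python string formatting in place of the placeholder syntax mentioned earlier; the directory path in the usage comment is illustrative:

```python
import html
import os

ROW = '<tr><td><a href="{name}">{name}</a></td><td>{size}</td></tr>'

def render_directory_listing(directory):
    """Build an HTML page listing the files and subdirectories of a directory."""
    rows = []
    for entry in sorted(os.scandir(directory), key=lambda e: e.name):
        display = entry.name + ("/" if entry.is_dir() else "")
        size = "-" if entry.is_dir() else str(entry.stat().st_size)
        rows.append(ROW.format(name=html.escape(display), size=size))
    return ("<html><body><h1>Index of " + html.escape(directory) + "</h1>"
            "<table>" + "".join(rows) + "</table></body></html>")

# Example usage (the directory is illustrative):
# print(render_directory_listing("/home/www/website/downloads"))
```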
A web server program may be able to verify whether the requested URL path:[40] If the authorization / access rights feature has been implemented and enabled and access to web resource is not granted, then, depending on the required access rights, a web server program: A web server programmayhave the capability of doing URL redirections to new URLs (new locations) which consists in replying to a client request message with a response message containing a new URL suited to access a valid or an existing web resource (client should redo the request with the new URL).[41] URL redirection of location is used:[41] Example 1: a URL path points to adirectoryname but it does not have a final slash '/' so web server sends a redirect to client in order to instruct it to redo the request with the fixed path name.[36] From:/directory1/directory2To:/directory1/directory2/ Example 2: a whole set of documents has beenmoved inside websitein order to reorganize their file system paths. From:/directory1/directory2/2021-10-08/To:/directory1/directory2/2021/10/08/ Example 3: a whole set of documents has beenmoved to a new websiteand now it is mandatory to use secure HTTPS connections to access them. From:http://www.example.com/directory1/directory2/2021-10-08/To:https://docs.example.com/directory1/2021-10-08/ Above examples are only a few of the possible kind of redirections. A web server program is able to reply to a valid client request message with a successful message, optionally containing requestedweb resource data.[42] If web resource data is sent back to client, then it can bestatic contentordynamic contentdepending on how it has been retrieved (from a file or from the output of some program / module). In order to speed up web server responses by lowering average HTTP response times and hardware resources used, many popular web servers implement one or more contentcaches, each one specialized in a content category.[43][44] Content is usually cached by its origin, e.g.: Historically, static contents found infileswhich had to be accessed frequently, randomly and quickly, have been stored mostly on electro-mechanicaldiskssince mid-late 1960s / 1970s; regrettably reads from and writes to those kind ofdeviceshave always been considered very slow operations when compared toRAMspeed and so, since earlyOSs, first disk caches and then alsoOSfilecachesub-systems were developed to speed upI/Ooperations of frequently accessed data / files. Even with the aid of an OS file cache, the relative / occasional slowness of I/O operations involving directories and files stored on disks became soon abottleneckin the increase ofperformancesexpected from top level web servers, specially since mid-late 1990s, when web Internet traffic started to grow exponentially along with the constant increase of speed of Internet / network lines. 
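A minimal sketch, in Python and with an intentionally crude eviction policy, of the kind of file cache developed to relieve that bottleneck: file contents are kept in RAM keyed by path and revalidated against the file's modification time and size, so repeated requests for the same static resource avoid re-reading the file from disk.

```python
import os

class FileCache:
    """Tiny in-memory cache of file contents, revalidated by mtime and size."""

    def __init__(self, max_entries=1024):
        self._entries = {}               # path -> (mtime_ns, size, content)
        self._max_entries = max_entries

    def read(self, path):
        st = os.stat(path)
        cached = self._entries.get(path)
        if cached and cached[0] == st.st_mtime_ns and cached[1] == st.st_size:
            return cached[2]             # hit: the file content stays in RAM
        with open(path, "rb") as f:      # miss or stale entry: read from disk
            content = f.read()
        if len(self._entries) >= self._max_entries:
            # Crude first-in-first-out eviction; real servers use LRU and
            # per-entry size limits.
            self._entries.pop(next(iter(self._entries)))
        self._entries[path] = (st.st_mtime_ns, st.st_size, content)
        return content

# Example usage (the path is illustrative):
# cache = FileCache()
# body = cache.read("/home/www/website/index.html")
```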
The problem about how to further efficiently speed-up the serving of static files, thus increasing the maximum number of requests/responses per second (RPS), started to be studied / researched since mid 1990s, with the aim to propose useful cache models that could be implemented in web server programs.[45] In practice, nowadays, many popular / high performance web server programs include their ownuserlandfile cache, tailored for a web server usage and using their specific implementation and parameters.[46][47][48] The wide spread adoption ofRAIDand/or fastsolid-state drives(storage hardware with very high I/O speed) has slightly reduced but of course not eliminated the advantage of having a file cache incorporated in a web server. Dynamic content, output by an internal module or an external program, may not always change very frequently (given a unique URL with keys / parameters) and so, maybe for a while (e.g. from 1 second to several hours or more), the resulting output can be cached in RAM or even on a fastdisk.[49] The typical usage of a dynamic cache is when a website hasdynamic web pagesabout news, weather, images, maps, etc. that do not change frequently (e.g. everynminutes) and that are accessed by a huge number of clients per minute / hour; in those cases it is useful to return cached content too (without calling the internal module or the external program) because clients often do not have an updated copy of the requested content in their browser caches.[50] Anyway, in most cases those kind of caches are implemented by external servers (e.g.reverse proxy) or by storing dynamic data output in separate computers, managed by specific applications (e.g.memcached), in order to not compete for hardware resources (CPU, RAM, disks) with web server(s).[51][52] A web server software can be either incorporated into theOSand executed inkernelspace, or it can be executed inuser space(like other regular applications). Web servers that run inkernel mode(usually calledkernel space web servers) can have direct access to kernel resources and so they can be, in theory, faster than those running in user mode; anyway there are disadvantages in running a web server in kernel mode, e.g.: difficulties in developing (debugging) software whereasrun-timecritical errorsmay lead to serious problems in OS kernel. Web servers that run inuser-modehave to ask the system for permission to use more memory or moreCPUresources. Not only do these requests to the kernel take time, but they might not always be satisfied because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean using more buffer/data copies (between user-space and kernel-space) which can lead to a decrease in the performance of a user-mode web server. Nowadays almost all web server software is executed in user mode (because many of the aforementioned small disadvantages have been overcome by faster hardware, new OS versions, much faster OSsystem callsand new optimized web server software). See alsocomparison of web server softwareto discover which of them run in kernel mode or in user mode (also referred as kernel space or user space). Toimprove theuser experience(on client / browser side), a web server shouldreply quickly(as soon as possible) to client requests; unless content response is throttled (by configuration) for some type of files (e.g. 
big or huge files), also returned data content should be sent as fast as possible (high transfer speed). In other words, aweb server should always be veryresponsive, even under high load of web traffic, in order to keeptotal user's wait(sum of browser time + network time +web server response time) for a responseas low as possible. For web server software, main keyperformance metrics(measured under varyoperating conditions) usually are at least the following ones (i.e.):[53] Among the operating conditions, thenumber(1 ..n) ofconcurrent client connectionsused during a test is an important parameter because it allows to correlate theconcurrencylevelsupported by web server with results of the tested performance metrics. Thespecific web serversoftware designand model adopted(e.g.): ... and otherprogramming techniques, such as (e.g.): ... used to implement a web server program,can bias a lot theperformancesand in particular thescalabilitylevelthat can be achieved underheavy loador when using high end hardware (many CPUs, disks and lots of RAM). In practice some web server software models may require more OS resources (specially more CPUs and more RAM) than others to be able to work well and so to achieve target performances. There are manyoperating conditions that can affect the performancesof a web server; performance values may vary depending on (i.e.): Performances of a web server are typicallybenchmarkedby using one or more of the availableautomated load testing tools. A web server (program installation) usually has pre-definedload limitsfor each combination ofoperating conditions, also because it is limited by OS resources and because it can handle only a limited number of concurrent client connections (usually between 2 and several tens of thousands for each active web server process, see also theC10k problemand theC10M problem). When a web server is near to or over its load limits, it getsoverloadedand so it may becomeunresponsive. At any time web servers can be overloaded due to one or more of the following causes (e.g.). The symptoms of an overloaded web server are usually the following ones (e.g.). To partially overcome above average load limits and to prevent overload, most popular websites use common techniques like the following ones (e.g.). Caveats about using HTTP/2 and HTTP/3 protocols Even if newer HTTP (2 and 3) protocols usually generate less network traffic for each request / response data, they may require moreOSresources (i.e. RAM and CPU) used by web server software (because ofencrypted data, lots of stream buffers and other implementation details); besides this, HTTP/2 and maybe HTTP/3 too, depending also on settings of web server and client program, may not be the best options for data upload of big or huge files at very high speed because their data streams are optimized for concurrency of requests and so, in many cases, using HTTP/1.1 TCP/IP connections may lead to better results / higher upload speeds (your mileage may vary).[58][59] Below are the latest statistics of the market share of all sites of the top web servers on the Internet byNetcraft. NOTE: (*) percentage rounded to integer number, because its decimal values are not publicly reported by source page (only its rounded value is reported in graph). Standard Web Server Gateway Interfaces used fordynamic contents: A few other Web Server Interfaces (server orprogramming languagespecific) used for dynamic contents:
https://en.wikipedia.org/wiki/Web_server#Causes_of_overload
Webattacker is a do-it-yourself malware creation kit that includes scripts to simplify the task of infecting computers and spam-sending techniques to lure victims to specially rigged websites. It was allegedly created by a group of Russian programmers. The kit requires minimal technical sophistication to operate, making it usable by unskilled crackers. Sophos has reported that WebAttacker is sold on some hacker websites or through a network of individual resellers and includes technical support.[1] The malware code is delivered in at least seven exploits, including threats aimed at Microsoft's MDAC software, Mozilla's Firefox web browser and Sun Microsystems's Java virtual machine programs. The exploitation process usually consists of several steps. The software appears to be updated regularly to exploit new flaws, such as the flaw discovered in September 2006 in how Internet Explorer handles certain graphics files.[2]
https://en.wikipedia.org/wiki/Webattacker
Incomputing, azombieis a computer connected to the Internet that has beencompromisedby ahackervia acomputer virus,computer worm, ortrojan horseprogram and can be used to perform malicious tasks under the remote direction of the hacker. Zombie computers often coordinate together in abotnetcontrolled by the hacker, and are used for activities such as spreadinge-mail spamand launchingdistributed denial-of-service attacks(DDoS attacks) against web servers. Most victims are unaware that their computers have become zombies. The concept is similar to thezombieofHaitian Voodoofolklore, which refers to a corpse resurrected by asorcerervia magic and enslaved to the sorcerer's commands, having no free will of its own.[1]A coordinatedDDoS attackby multiple botnet machines also resembles a "zombie horde attack", as depicted in fictionalzombie films. Zombie computers have been used extensively to sende-mail spam; as of 2005, an estimated 50–80% of all spam worldwide was sent by zombie computers.[2]This allowsspammersto avoid detection and presumably reduces theirbandwidthcosts, since the owners of zombies pay for their own bandwidth. This spam also greatly increases the spread ofTrojan horses, as Trojans are not self-replicating. They rely on the movement of e-mails or spam to grow, whereas worms can spread by other means.[3]For similar reasons, zombies are also used to commitclick fraudagainst sites displayingpay-per-clickadvertising. Others can hostphishingormoney mulerecruiting websites. Zombies can be used to conductdistributed denial-of-service(DDoS) attacks, a term which refers to the orchestrated flooding of target websites by large numbers of computers at once. The large number of Internet users making simultaneous requests of a website's server is intended to result in crashing and the prevention of legitimate users from accessing the site.[4]A variant of this type of flooding is known as distributed degradation-of-service. Committed by "pulsing" zombies, distributed degradation-of-service is the moderated and periodical flooding of websites intended to slow down rather than crash a victim site. The effectiveness of this tactic springs from the fact that intense flooding can be quickly detected and remedied, but pulsing zombie attacks and the resulting slow-down in website access can go unnoticed for months and even years.[5] The computing facilitated by theInternet of Things(IoT) has been productive for modern-day usage, yet it has played a significant role in the increase in web attacks. The potential of IoT enables every device to communicate efficiently, but this also intensifies the need for policy enforcement regarding security threats. Among these threats, Distributed Denial-of-Service (DDoS) attacks are prevalent. Research has been conducted to study the impact of such attacks on IoT networks and to develop compensating provisions for defense.[6]Consultation services specialized in IoT security, such as those offered byIoT consulting firms[dead link], play a vital role in devising comprehensive strategies to safeguard IoT ecosystems from cyber threats. Notable incidents of distributed denial- and degradation-of-service attacks in the past include the attack upon theSPEWSservice in 2003, and the one againstBlue Frogservice in 2006. In 2000, several prominent Web sites (Yahoo,eBay, etc.) were clogged to a standstill by a distributed denial of service attack mounted by 'MafiaBoy', a Canadian teenager. 
Beginning in July 2009, similar botnet capabilities also emerged for the growing smartphone market. Examples include the July 2009 in-the-wild release of the Sexy Space text message worm, the world's first botnet-capable SMS worm, which targeted the Symbian operating system in Nokia smartphones. Later that month, researcher Charlie Miller revealed a proof-of-concept text message worm for the iPhone at Black Hat Briefings. Also in July, United Arab Emirates consumers were targeted by the Etisalat BlackBerry spyware program. In the 2010s, the security community was divided as to the real-world potential of mobile botnets. In an August 2009 interview with The New York Times, cyber security consultant Michael Gregg summarized the issue this way: "We are about at the point with [smart]phones that we were with desktops in the '80s."[7]
https://en.wikipedia.org/wiki/Zombie_(computer_science)
The Unix and Linux access rights flags setuid and setgid (short for set user identity and set group identity)[1] allow users to run an executable with the file system permissions of the executable's owner or group respectively, and to change behaviour in directories. They are often used to allow users on a computer system to run programs with temporarily elevated privileges to perform a specific task. While the assumed user id or group id privileges provided are not always elevated, at a minimum they are specific. The setuid and setgid flags are needed for tasks that require privileges different from those the user is normally granted, such as the ability to alter system files or databases when changing a login password.[2] Some of the tasks that require additional privileges may not be immediately obvious, though, such as the ping command, which must send and listen for control packets on a network interface. The setuid and setgid bits are normally represented as the values 4 for setuid and 2 for setgid in the high-order octal digit of the file mode. For example, 6711 has both the setuid and setgid bits set (4 + 2 = 6), the file readable/writable/executable by the owner (7), and executable by the group (first 1) and by others (second 1). Most implementations have a symbolic representation of these bits; in the previous example, this could be u=rwx,go=x,ug+s. Typically, chmod does not have a recursive mode restricted to directories, so modifying an existing directory tree must be done manually, with a command such as find /path/to/directory -type d -exec chmod g+s '{}' \;. The setuid and setgid flags have different effects depending on whether they are applied to a file or to a directory and, for files, on whether the file is a binary executable or a non-binary executable (script). The setuid and setgid flags have an effect only on binary executable files and not on scripts (e.g., Bash, Perl, Python).[3] When the setuid or setgid attributes are set on an executable file, any users able to execute the file will automatically execute the file with the privileges of the file's owner (commonly root) and/or the file's group, depending upon the flags set.[2] This allows the system designer to permit trusted programs to be run which a user would otherwise not be allowed to execute. These cases may not always be obvious. For example, the ping command may need access to networking privileges that a normal user cannot access; therefore it may be given the setuid flag to ensure that a user who needs to ping another system can do so, even if their account does not have the required privilege for sending packets. For security purposes, the invoking user is usually prohibited by the system from altering the new process in any way, such as by using ptrace, LD_LIBRARY_PATH or sending signals to it, to exploit the raised privilege, although signals from the terminal will still be accepted. While the setuid feature is very useful in many cases, its improper use can pose a security risk[2] if the setuid attribute is assigned to executable programs that are not carefully designed. Due to potential security issues,[4] many operating systems ignore the setuid attribute when applied to executable shell scripts.[citation needed] The presence of setuid executables explains why the chroot system call is not available to non-root users on Unix. See limitations of chroot for more details. Setting the setgid permission on a directory causes files and subdirectories created within it to inherit its group ownership, rather than the primary group of the file-creating process. Created subdirectories also inherit the setgid bit.
The policy is only applied during creation and, thus, only prospectively. Directories and files existing when thesetgidbit is applied are unaffected, as are directories and files moved into the directory on which the bit is set. Thus is granted a capacity to work with files amongst a group of users without explicitly setting permissions, but limited by the security model expectation that existing files permissions do not implicitly change. Thesetuidpermission set on a directory is ignored on mostUNIXandLinuxsystems.[5][citation needed]HoweverFreeBSDcan be configured to interpretsetuidin a manner similar tosetgid, in which case it forces all files and sub-directories created in a directory to be owned by that directory's owner - a simple form of inheritance.[6]This is generally not needed on most systems derived fromBSD, since by default directories are treated as if theirsetgidbit is always set, regardless of the actual value. As is stated inopen(2), "When a new file is created it is given the group of the directory which contains it."[7] Permissions of a file can be checked in octal form and/or alphabetic form with the command line toolstat 4701on an executable file owned by 'root' and the group 'root' A user named 'thompson' attempts to execute the file. The executable permission for all users is set (the '1') so 'thompson' can execute the file. The file owner is 'root' and the SUID permission is set (the '4') - so the file is executed as 'root'. The reason an executable would be run as 'root' is so that it can modify specific files that the user would not normally be allowed to, without giving the user full root access. A default use of this can be seen with the/usr/bin/passwdbinary file./usr/bin/passwdneeds to modify/etc/passwdand/etc/shadowwhich store account information and password hashes for all users, and these can only be modified by the user 'root'. The owner of the process is not the user running the executable file but the owner of the executable file 2770on a directory named 'music' owned by the user 'root' and the group 'engineers' A user named 'torvalds' who belongs primarily to the group 'torvalds' but secondarily to the group 'engineers' makes a directory named 'electronic' under the directory named 'music'. The group ownership of the new directory named 'electronic' inherits 'engineers.' This is the same when making a newfilenamed 'imagine.txt' Without SGID the group ownership of the new directory/file would have been 'torvalds' as that is the primary group of user 'torvalds'. 1770on a directory named 'videogames' owned by the user 'torvalds' and the group 'engineers'. A user named 'torvalds' creates a file named 'tekken' under the directory named 'videogames'. A user named 'wozniak', who is also part of the group 'engineers', attempts to delete the file named 'tekken' but he cannot, since he is not the owner. Without sticky bit, 'wozniak' could have deleted the file, because the directory named 'videogames' allows read and write by 'engineers'. A default use of this can be seen at the/tmpfolder. 3171on a directory named 'blog' owned by the group 'engineers' and the user 'root' A user named 'torvalds' who belongs primarily to the group 'torvalds' but secondarily to the group 'engineers' creates a file or directory named 'thoughts' inside the directory 'blog'. A user named 'wozniak' who also belongs to the group 'engineers' cannot delete, rename, or move the file or directory named 'thoughts', because he is not the owner and the sticky bit is set. 
However, if 'thoughts' is a file, then 'wozniak' can edit it. Sticky bit has the final decision.If sticky bit and SGID had not been set, the user 'wozniak' could rename, move, or delete the file named 'thoughts' because the directory named 'blog' allows read and write by group, and wozniak belongs to the group, and the default 0002umaskallows new files to be edited by group. Sticky bit and SGID could be combined with something such as a read-only umask or an append only attribute. Developers design and implement programs that use this bit on executables carefully in order to avoid security vulnerabilities includingbuffer overrunsandpath injection. Successful buffer-overrun attacks on vulnerable applications allow the attacker to execute arbitrary code under the rights of the process exploited. In the event that a vulnerable process uses thesetuidbit to run asroot, the code will execute with root privileges, in effect giving the attacker root access to the system on which the vulnerable process is running. Of particular importance in the case of asetuidprocess is theenvironmentof the process. If the environment is not properly sanitized by a privileged process, its behavior can be changed by the unprivileged process that started it.[8]For example,GNU libcwas at one point vulnerable to anexploitusingsetuidand an environment variable that allowed executing code from untrustedshared libraries.[9] Thesetuidbit was invented byDennis Ritchie[10]and included insu.[10]His employer, thenBell Telephone Laboratories, applied for a patent in 1972; the patent was granted in 1979 as patent numberUS 4135240"Protection of data file contents". The patent was later placed in thepublic domain.[11]
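The octal values used in the examples above (such as 4701 and 2770) can be decoded programmatically. The following is a small sketch in Python that reads a file's mode and reports whether the setuid, setgid and sticky bits are set; the default path is illustrative:

```python
import os
import stat
import sys

def describe_mode(path):
    """Report the special permission bits and symbolic mode of a file."""
    mode = os.stat(path).st_mode
    special = []
    if mode & stat.S_ISUID:
        special.append("setuid")   # octal 4000: run with the owner's user id
    if mode & stat.S_ISGID:
        special.append("setgid")   # octal 2000: run with / inherit the group
    if mode & stat.S_ISVTX:
        special.append("sticky")   # octal 1000: restricted deletion on dirs
    return (f"{path}: mode {stat.S_IMODE(mode):04o} ({stat.filemode(mode)}), "
            f"special bits: {', '.join(special) or 'none'}")

# Example usage: /usr/bin/passwd is commonly installed setuid root.
print(describe_mode(sys.argv[1] if len(sys.argv) > 1 else "/usr/bin/passwd"))
```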
https://en.wikipedia.org/wiki/Setuid
Ambient authority is a term used in the study of access control systems. A subject, such as a computer program, is said to be using ambient authority if it only needs to specify the names of the involved object(s) and the operation to be performed on them in order for a permitted action to succeed.[1][2][3] The authority is "ambient" in the sense that it exists in a broadly visible environment (often, but not necessarily, a global environment) where any subject can request it by name. For example, suppose a C program opens a file for read access by calling open() with the file's pathname. The desired file is designated by its name on the filesystem, which does not by itself include authorising information, so the program is exercising ambient authority. When ambient authority is requested, permissions are granted or denied based on one or more global properties of the executing program, such as its identity or its role. In such cases, the management of access control is handled separately from explicit communication to the executing program or process, through means such as access control lists associated with objects or through role-based access control mechanisms. The executing program has no means to reify the permissions that it was granted for a specific purpose as first-class values. So, if the program should be able to access an object when acting on its own behalf but not when acting on behalf of one of its clients (or on behalf of one client but not another), it has no way to express that intention. This inevitably leads to such programs being subject to the confused deputy problem. The term "ambient authority" is used primarily to contrast with capability-based security (including object-capability models), in which executing programs receive permissions as they might receive data, communicated as first-class object references. This allows them to determine where the permissions came from and thus avoid the confused deputy problem. However, since there are additional requirements for a system to be considered a capability system besides avoiding ambient authority, "non-ambient authority system" is not just a synonym for "capability system". Ambient authority is the dominant form of access control in computer systems today. The user model of access control as used in Unix and in Windows systems is an ambient authority model because programs execute with the authorities of the user that started them. This not only means that executing programs are inevitably given more permissions (see principle of least privilege) than they need for their task, but also that they are unable to determine the source or the number and types of permissions that they have.[4] A program executing under an ambient authority access control model has little option but to designate permissions and try to exercise them, hoping for the best. This property requires an excess of permissions to be granted to users or roles in order for programs to execute without error.
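The difference between ambient authority and permissions passed as first-class values can be sketched in a few lines. In the first function below, the file is designated only by a global name, so success depends on whatever permissions the whole process happens to hold; in the second, the caller hands over an already-opened file object, so the callee exercises exactly the authority it was given. Python is used purely for illustration and the file name is hypothetical:

```python
def count_lines_ambient(path):
    """Ambient authority: the function names the file itself, so whether the
    open() succeeds depends on the permissions of the whole process."""
    with open(path) as f:
        return sum(1 for _ in f)

def count_lines_capability(readable_file):
    """Capability style: the caller passes an already-opened file object, so
    the function can exercise only the authority it was explicitly handed."""
    return sum(1 for _ in readable_file)

# Create an illustrative file so the example is self-contained.
with open("report.txt", "w") as out:
    out.write("line one\nline two\n")

print(count_lines_ambient("report.txt"))        # relies on ambient authority
with open("report.txt") as handle:              # the caller delegates access
    print(count_lines_capability(handle))
```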
https://en.wikipedia.org/wiki/Ambient_authority
TheCommon Vulnerabilities and Exposures(CVE) system, originallyCommon Vulnerability Enumeration,[1]provides a reference method for publicly knowninformation-securityvulnerabilitiesand exposures.[2]The United States'Homeland Security Systems Engineering and Development Institute FFRDC, operated byThe MITRE Corporation, maintains the system, with funding from the USNational Cyber Security Divisionof theUS Department of Homeland Security.[3]The system was officially launched for the public in September 1999.[4] TheSecurity Content Automation Protocoluses CVE, and CVE IDs are listed on MITRE's system as well as the basis for the USNational Vulnerability Database.[5] MITRE Corporation's documentation defines CVE Identifiers (also called "CVE names", "CVE numbers", "CVE-IDs", and "CVEs") as unique, common identifiers for publicly known information-security vulnerabilities in publicly released software packages. Historically, CVE identifiers originally had a status of "candidate" ("CAN-") and could then be promoted to entries ("CVE-"), but this practice was ended in 2005[6][7]and all identifiers are now assigned as CVEs. The assignment of a CVE number is not a guarantee that it will become an official CVE entry (e.g., a CVE may be improperly assigned to an issue which is not a security vulnerability, or which duplicates an existing entry). If found not to meet criteria, MITRE or a CVE Numbering Authority (CNA) can summarily place the entry into REJECTED status. CVEs are assigned by a CVE Numbering Authority (CNA).[8]While some vendors acted as a CNA before, the name and designation was not created until 1 February 2005.[9]There are four primary types of CVE number assignments: When investigating a vulnerability or potential vulnerability it helps to acquire a CVE number early on. CVE numbers may not appear in the MITRE or NVD databases for some time (days, weeks, months or potentially years) due to issues that are embargoed (the CVE number has been assigned but the issue has not been made public), or historically in cases where the entry is not researched and written up by MITRE due to resource issues. The benefit of early CVE candidacy is that all future correspondence and coordination can refer to the CVE number to ensure all parties are referring to the same vulnerability. Information on getting CVE identifiers for issues with open source projects is available fromRed Hat[11]andGitHub.[12] CVEs are for software that has been publicly released; this can include betas and other pre-release versions if they are widely used. Commercial software is included in the "publicly released" category, but custom-built software that is not distributed would historically not be given a CVE. For the first two decades of the program, services (e.g., a Web-based email provider) are not assigned CVEs for vulnerabilities found in the service (e.g., an XSS vulnerability) unless the issue exists in an underlying software product that is publicly distributed. Official rules have not been published regarding this change but some CNAs including MITRE have begun assigning CVEs to service-based vulnerabilities as far back as 2000.[13] The CVE database contains several fields: This is a standardized text description of the issue(s). One common entry is: ** RESERVED ** This candidate has been reserved by an organization or individual that will use it when announcing a new security problem. When the candidate has been publicized, the details for this candidate will be provided. 
This means that the entry number has been reserved by Mitre for an issue, or that a CNA has reserved the number. So when a CNA requests a block of CVE numbers in advance (e.g., Red Hat currently requests CVEs in blocks of 500), the CVE number will be marked as reserved even though the CVE itself may not be assigned by the CNA for some time. Entries will show up as "** RESERVED **" until the CVE is assigned, Mitre is made aware of it (i.e., the embargo passes and the issue is made public), and Mitre has researched the issue and written a description of it. The entry date is the date the entry was created. For CVEs assigned directly by Mitre, this is the date Mitre created the CVE entry. For CVEs assigned by CNAs (e.g., Microsoft, Oracle, HP, Red Hat) this is also the date the entry was created by Mitre, not by the CNA. When a CNA requests a block of CVE numbers in advance (e.g., Red Hat currently requests CVEs in blocks of 500), the entry date is the date the CVE was assigned to the CNA. The following fields were previously used in CVE records, but are no longer used. In order to support CVE IDs beyond CVE-YEAR-9999 (an issue known as the 'CVE10k problem'[14]) a change was made to the CVE syntax in 2014 and took effect on 13 January 2015.[15] The new CVE-ID syntax is variable length and consists of: CVE prefix + Year + Arbitrary Digits. The variable-length arbitrary digits begin at four fixed digits and expand with arbitrary digits only when needed in a calendar year; for example, CVE-YYYY-NNNN and, if needed, CVE-YYYY-NNNNN, CVE-YYYY-NNNNNN, and so on. The schema is compatible with previously assigned CVE-IDs, which all include a minimum of four digits. CVE attempts to assign one CVE per security issue; however, in many cases this would lead to an extremely large number of CVEs (e.g., where several dozen cross-site scripting vulnerabilities are found in a PHP application due to lack of use of htmlspecialchars() or the insecure creation of files in /tmp).[16] To deal with this, guidelines (subject to change) cover the splitting and merging of issues into distinct CVE numbers. As a general guideline, one should first consider whether issues are to be merged; then issues should be split by the type of vulnerability (e.g., buffer overflow vs. stack overflow), then by the software version affected (e.g., if one issue affects version 1.3.4 through 2.5.4 and the other affects 1.3.4 through 2.5.8 they would be SPLIT), and then by the reporter of the issue (e.g., if Alice reports one issue and Bob reports another issue, the issues would be SPLIT into separate CVE numbers).[16] Another example: Alice reports a /tmp file creation vulnerability in version 1.2.3 and earlier of the ExampleSoft web browser; in addition to this issue, several other /tmp file creation issues are found. In some cases this may be considered as two reporters (and thus SPLIT into two separate CVEs); alternatively, if Alice works for ExampleSoft and an ExampleSoft internal team finds the rest, it may be MERGEd into a single CVE. Conversely, issues can be merged; for example, if Bob finds 145 XSS vulnerabilities in ExamplePlugin for ExampleFrameWork, regardless of the versions affected and so on, they may be merged into a single CVE.[16] The Mitre CVE database can be searched at the CVE List Search, and the NVD CVE database can be searched at Search CVE and CCE Vulnerability Database.
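The identifier syntax described above lends itself to simple mechanical validation. The following C++ fragment is a minimal illustrative sketch (not an official MITRE tool) that checks whether a string matches the post-2015 format of a "CVE-" prefix, a four-digit year, and a sequence of at least four digits:

#include <cctype>
#include <iostream>
#include <string>

// Returns true if `id` matches the post-2015 CVE-ID syntax:
// "CVE-" + 4-digit year + "-" + 4 or more digits.
bool isValidCveId(const std::string& id) {
    if (id.compare(0, 4, "CVE-") != 0) return false;   // fixed prefix
    std::size_t pos = 4, yearDigits = 0;
    while (pos < id.size() && std::isdigit(static_cast<unsigned char>(id[pos]))) {
        ++yearDigits; ++pos;
    }
    if (yearDigits != 4) return false;                 // the year is always 4 digits
    if (pos >= id.size() || id[pos] != '-') return false;
    ++pos;
    std::size_t seqDigits = 0;
    while (pos < id.size() && std::isdigit(static_cast<unsigned char>(id[pos]))) {
        ++seqDigits; ++pos;
    }
    // The sequence part is variable length but never shorter than 4 digits.
    return pos == id.size() && seqDigits >= 4;
}

int main() {
    // "CVE-2021-3156789" is a made-up extended-length example; "CVE-99-1234" is malformed.
    for (const std::string id : {"CVE-2014-0160", "CVE-2021-3156789", "CVE-99-1234"})
        std::cout << id << " -> " << (isValidCveId(id) ? "valid" : "invalid") << '\n';
}

Real tooling would additionally normalize case and sanity-check the year; the sketch only encodes the structural rule given above.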
CVE identifiers are intended for use with respect to identifying vulnerabilities: Common Vulnerabilities and Exposures (CVE) is a dictionary of common names (i.e., CVE Identifiers) for publicly known information security vulnerabilities. CVE's common identifiers make it easier to share data across separate network security databases and tools, and provide a baseline for evaluating the coverage of an organization's security tools. If a report from one of your security tools incorporates CVE Identifiers, you may then quickly and accurately access fix information in one or more separate CVE-compatible databases to remediate the problem.[17] Users who have been assigned a CVE identifier for a vulnerability are encouraged to ensure that they place the identifier in any related security reports, web pages, emails, and so on. Per section 7 of the CNA Rules, a vendor which has received a report about a security vulnerability has full discretion in regard to it.[18] This can lead to a conflict of interest, as a vendor may attempt to leave flaws unpatched by denying a CVE assignment in the first place – a decision which Mitre cannot reverse. The "!CVE" (not CVE) project, announced in 2023, aims to collect vulnerabilities that are denied by vendors, so long as they are considered valid by a panel of experts from the project.[19] CVE identifiers have been awarded for bogus issues and issues without security consequences.[20] In response, a number of open-source projects have themselves applied to become the CVE Numbering Authority (CNA) for their own project.[21] On 15 April 2025, it was reported that the contract between MITRE and the US government was set to expire the following day.[22] Reports stated that the expiration of the contract would bring an end to the operational arm of the CVE program, including the assignment of new CVEs, while the database would remain accessible via GitHub.[23] Just prior to its expiration, the contract was extended for 11 months, averting the shutdown of the program.[24]
https://en.wikipedia.org/wiki/Common_Vulnerabilities_and_Exposures
Link rot(also calledlink death,link breaking, orreference rot) is the phenomenon ofhyperlinkstending over time to cease to point to their originally targetedfile,web page, orserverdue to that resource being relocated to a new address or becoming permanently unavailable. A link that no longer points to its target may be calledbroken,dead, ororphaned. The rate of link rot is a subject of study and research due to its significance to theinternet's ability to preserve information. Estimates of that rate vary dramatically between studies. Information professionals have warned that link rot could make important archival data disappear, potentially impacting the legal system and scholarship. A number of studies have examined the prevalence of link rot within theWorld Wide Web, in academic literature that usesURLsto cite web content, and withindigital libraries. In a 2023 study of theMillion Dollar Homepageexternal links, it was found that 27% of the links resulted in a site loading with no redirects, 45% of links have been redirected, and 28% returned various error messages.[1] A 2002 study suggested that link rot within digital libraries is considerably slower than on the web. The article found that about 3% of the objects were no longer accessible after one year,[2]equating to ahalf-lifeof nearly 23 years. A 2003 study found that on the Web, about one link out of every 200 broke each week,[3]suggesting ahalf-lifeof 138 weeks. This rate was largely confirmed by a 2016–2017 study of links inYahoo! Directory(which had stopped updating in 2014 after 21 years of development) that found the half-life of the directory's links to be two years.[4] A 2004 study showed that subsets of Web links (such as those targeting specific file types or those hosted by academic institutions) could have dramatically different half-lives.[5]The URLs selected for publication appear to have greater longevity than the average URL. A 2015 study by Weblock analyzed more than 180,000 links from references in the full-text corpora of three major open access publishers and found a half-life of about 14 years,[6]generally confirming a 2005 study that found that half of theURLscited inD-Lib Magazinearticles were active 10 years after publication.[7]Other studies have found higher rates of link rot in academic literature but typically suggest a half-life of four years or greater.[8][9]A 2013 study inBMC Bioinformaticsanalyzed nearly 15,000 links in abstracts from Thomson Reuters'sWeb of Sciencecitation index and found that the median lifespan of web pages was 9.3 years, and just 62% were archived.[10]A 2021 study of external links inNew York Timesarticles published between 1996 and 2019 found a half-life of about 15 years (with significant variance among content topics) but noted that 13% of functional links no longer lead to the original content—a phenomenon calledcontent drift.[11] A 2013 study found that 49% of links in U.S. Supreme court opinions are dead.[12] A 2023 study looking at United StatesCOVID-19dashboards found that 23% of the state dashboards available in February 2021 were no longer available at the previous URLs in April 2023.[13] Pew Researchfound that, in 2023, 38% of pages from 2013 went missing. Also, in 2023, 54% ofEnglish Wikipediaarticles had a dead link in the 'references' section and 23% ofnews articleslinked to a dead URL.[14] Link rot can result for several reasons. A target web page may be removed. 
The server that hosts the target page could fail, be removed from service, or relocate to a new domain name. As far back as 1999, it was noted that with the amount of material that can be stored on a hard drive, "a single disk failure could be like the burning of the library at Alexandria."[15] A domain name's registration may lapse or be transferred to another party. Some causes will result in the link failing to find any target and returning an error such as HTTP 404. Other causes will cause a link to target content other than what was intended by the link's author. Other reasons for broken links include: Strategies for preventing link rot can focus on placing content where its likelihood of persisting is higher, authoring links that are less likely to be broken, taking steps to preserve existing links, or repairing links whose targets have been relocated or removed.[citation needed] The creation of URLs that will not change with time is the fundamental method of preventing link rot. Preventive planning has been championed by Tim Berners-Lee and other web pioneers.[16] Strategies pertaining to the authorship of links include: Strategies pertaining to the protection of existing links include: The detection of broken links may be done manually or automatically. Automated methods include plug-ins for content management systems as well as standalone broken-link checkers such as Xenu's Link Sleuth. Automatic checking may not detect links that return a soft 404 or links that return a 200 OK response but point to content that has changed.[26]
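As a concrete illustration of such automated checking, the sketch below uses libcurl (assumed to be available) to issue an HTTP HEAD request per URL and report the status code; the URLs shown are placeholders. As noted above, a 200 OK response can still hide a soft 404 or content drift, which a status-code check alone cannot detect.

#include <curl/curl.h>
#include <cstdio>

// Issue an HTTP HEAD request and return the status code (0 on transport error).
// A 404 or 410 suggests a dead link; a 200 OK may still be a "soft 404" or
// point to content that has drifted, which this check cannot detect.
long checkLink(const char* url) {
    CURL* curl = curl_easy_init();
    if (!curl) return 0;
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);          // HEAD request only
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);  // follow redirects
    curl_easy_setopt(curl, CURLOPT_TIMEOUT, 15L);
    long status = 0;
    if (curl_easy_perform(curl) == CURLE_OK)
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
    curl_easy_cleanup(curl);
    return status;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    const char* urls[] = {"https://example.com/", "https://example.com/missing-page"};
    for (const char* url : urls)
        std::printf("%ld  %s\n", checkLink(url), url);
    curl_global_cleanup();
}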
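The half-life figures quoted in the studies above follow from modelling link survival as exponential decay at a constant per-period loss rate; the short sketch below reproduces two of them (roughly 23 years for a 3% annual loss, and roughly 138 weeks for one break per 200 links per week). This is an illustrative calculation, not taken from the cited studies.

#include <cmath>
#include <cstdio>

// Half-life implied by a constant per-period loss rate p, assuming independent
// exponential decay: N(t) = N0 * (1 - p)^t, so solve (1 - p)^t = 1/2 for t.
double halfLife(double lossRatePerPeriod) {
    return std::log(0.5) / std::log(1.0 - lossRatePerPeriod);
}

int main() {
    // ~3% of objects unreachable after one year  -> half-life in years (~23)
    std::printf("3%% per year   -> half-life %.1f years\n", halfLife(0.03));
    // one link in 200 breaking each week         -> half-life in weeks (~138)
    std::printf("1/200 per week -> half-life %.0f weeks\n", halfLife(1.0 / 200.0));
}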
https://en.wikipedia.org/wiki/Link_rot
A memory debugger is a debugger for finding software memory problems such as memory leaks and buffer overflows. These are due to bugs related to the allocation and deallocation of dynamic memory. Programs written in languages that have garbage collection, such as managed code, might also need memory debuggers, e.g. for memory leaks due to "living" references in collections. Memory debuggers work by monitoring memory access, allocations, and deallocation of memory. Many memory debuggers require applications to be recompiled with special dynamic memory allocation libraries, whose APIs are mostly compatible with conventional dynamic memory allocation libraries, or else use dynamic linking. Electric Fence is one such debugger, which debugs memory allocation with malloc. Some memory debuggers (e.g. Valgrind) work by running the executable in a virtual machine-like environment, monitoring memory access, allocation and deallocation so that no recompilation with special memory allocation libraries is required. Finding memory issues such as leaks can be extremely time-consuming, as they may not manifest themselves except under certain conditions. Using a tool to detect memory misuse makes the process much faster and easier.[1] As abnormally high memory utilization can be a contributing factor in software aging, memory debuggers can help programmers avoid software anomalies that would exhaust the computer system's memory, thus ensuring high reliability of the software even for long runtimes. Some static analysis tools can also help find memory errors. Memory debuggers operate as part of an application while it is running, whereas static code analysis is performed by analyzing the code without executing it. These different techniques will typically find different instances of problems, and using both together yields the best result.[2] This is a list of tools useful for memory debugging. A profiler can be used in conjunction with a memory debugger.
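As an illustration of the kinds of defects these tools report, the deliberately buggy C++ program below contains a heap buffer overflow and a memory leak; run under a memory debugger such as Valgrind (for example, valgrind ./a.out), both defects would typically be reported, even though the program may appear to behave normally when run directly. The program is a contrived sketch, not an excerpt from any tool's documentation.

#include <cstring>
#include <iostream>

int main() {
    // Heap buffer overflow: 8 bytes are allocated but 13 bytes are written
    // (strcpy copies the terminating '\0' past the end of the block).
    char* name = new char[8];
    std::strcpy(name, "Ada Lovelace");           // invalid write past the allocation

    // Memory leak: the allocation below is never freed before the pointer
    // goes out of scope, so the block becomes unreachable.
    int* counters = new int[100];
    counters[0] = 1;

    std::cout << name << " " << counters[0] << std::endl;
    delete[] name;                               // `counters` is intentionally leaked
    return 0;
}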
https://en.wikipedia.org/wiki/Memory_debugger
In computer programming, a wild branch is a GOTO instruction where the target address is indeterminate, random or otherwise unintended.[1] It is usually the result of a software bug causing the accidental corruption of a pointer or array index. It is "wild" in the sense that it cannot be predicted to behave consistently. In other words, a wild branch is a branch through a function pointer that is wild (dangling). Detection of wild branches is frequently difficult; they are normally identified by erroneous results (where the unintended target address is nevertheless a valid instruction, enabling the program to continue despite the error) or a hardware interrupt, which may change depending upon register contents. Debuggers and monitor programs such as instruction set simulators can sometimes be used to determine the location of the original wild branch.
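A contrived C++ sketch of how such a branch arises is shown below: an out-of-range index into a table of function pointers transfers control through whatever bytes happen to follow the table, i.e. to an indeterminate target address. The behaviour is undefined and platform-dependent; the example exists only to illustrate the mechanism described above.

#include <cstdio>

void update() { std::puts("update table"); }
void report() { std::puts("print report"); }

int main() {
    // A dispatch table of function pointers, indexed by a command code.
    void (*dispatch[2])() = { update, report };

    int command = 7;              // bug: the index is out of range (should be 0 or 1)
    // The call below branches through whatever value happens to lie past the
    // table in memory: an indeterminate, "wild" target address. Depending on
    // that garbage value the program may crash with a hardware fault, or
    // continue running from an unintended but valid instruction and silently
    // produce wrong results, exactly the behaviour described above.
    dispatch[command]();          // wild branch (undefined behaviour)
    return 0;
}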
https://en.wikipedia.org/wiki/Wild_branch
Incomputer programming, the termhookingcovers a range of techniques used to alter or augment the behaviour of anoperating system, ofapplications, or of other software components by interceptingfunction callsormessagesoreventspassed betweensoftware components. Code that handles such intercepted function calls, events or messages is called ahook. Hook methods are of particular importance in thetemplate method patternwhere common code in anabstract classcan be augmented by custom code in a subclass. In this case each hook method is defined in the abstract class with an empty implementation which then allows a different implementation to be supplied in each concrete subclass. Hooking is used for many purposes, includingdebuggingand extending functionality. Examples might include intercepting keyboard or mouse event messages before they reach an application, or intercepting operating system calls in order to monitor behavior or modify the function of an application or other component. It is also widely used in benchmarking programs, for exampleframe ratemeasuring in 3D games, where the output and input is done through hooking. Hooking can also be used by malicious code. For example,rootkits, pieces of software that try to make themselves invisible by faking the output ofAPIcalls that would otherwise reveal their existence, often use hooking techniques. Typically hooks are inserted while software is already running, but hooking is a tactic that can also be employed prior to the application being started. Both these techniques are described in greater detail below. Hooking can be achieved by modifying the source of theexecutableorlibrarybefore an application is running, through techniques ofreverse engineering. This is typically used to intercept function calls to either monitor or replace them entirely. For example, by using adisassembler, theentry pointof afunctionwithin amodulecan be found. It can then be altered to instead dynamically load some other library module and then have it execute desired methods within that loaded library. If applicable, another related approach by which hooking can be achieved is by altering theimport tableof an executable. This table can be modified to load any additional library modules as well as changing what external code is invoked when a function is called by the application. An alternative method for achieving function hooking is by intercepting function calls through awrapper library. A wrapper is a version of a library that an application loads, with all the same functionality of the original library that it will replace. That is, all the functions that are accessible are essentially the same between the original and the replacement. This wrapper library can be designed to call any of the functionality from the original library, or replace it with an entirely new set of logic. Operating systems and software may provide the means to easily insert event hooks atruntime. It is available provided that theprocessinserting the hook is granted enough permission to do so. Microsoft Windows for example, allows users to insert hooks that can be used to process or modify systemeventsand application events fordialogs,scrollbars, andmenusas well as other items. It also allows a hook to insert, remove, process or modifykeyboardandmouseevents. Linux provides another example where hooks can be used in a similar manner to process network events within thekernelthroughNetFilter. 
When such functionality is not provided, a special form of hooking employs intercepting the library function calls made by a process. Function hooking is implemented by changing the very first few code instructions of the target function to jump to an injected code. Alternatively on systems using theshared libraryconcept, theinterrupt vectortable or theimport descriptor tablecan be modified in memory. Essentially these tactics employ the same ideas as those of source modification, but instead altering instructions and structures located in the memory of a process once it is already running. Whenever a class defines/inherits avirtual function(or method), compilers add a hidden member variable to the class which points to avirtual method table(VMT or Vtable). Most compilers place the hidden VMT pointer at the first 4 bytes of every instance of the class. A VMT is basically an array ofpointersto all the virtual functions that instances of the class may call. At runtime these pointers are set to point to the right functions, because atcompile time, it is not yet known if the base function is to be called or if an overridden version of the function from a derived class is to be called (thereby allowing forpolymorphism). Therefore, virtual functions can be hooked by replacing the pointers to them within any VMT that they appear. The code below shows an example of a typical VMT hook in Microsoft Windows, written in C++.[1] All virtual functions must be class member functions, and all (non-static) class member functions are called with the __thiscallcalling convention(unless the member function takes a variable number of arguments, in which case it is called with __cdecl). The __thiscall calling convention passes a pointer to the calling class instance (commonly referred to as a "this" pointer) via the ECX register (on the x86 architecture). Therefore, in order for a hook function to properly intercept the "this" pointer that is passed and take it as an argument, it must look into the ECX register. In the above example, this is done by setting the hook function (hkVirtualFn1) to use the __fastcall calling convention, which causes the hook function to look into the ECX register for one of its arguments. Also note that, in the above example, the hook function (hkVirtualFn1) is not a member function itself so it cannot use the __thiscall calling convention. __fastcall has to be used instead because it is the only other calling convention that looks into the ECX register for an argument. The following example will hook into keyboard events in Microsoft Windows using theMicrosoft .NET Framework. The following source code is an example of an API/function hooking method which hooks by overwriting the first six bytes of a destinationfunctionwith aJMPinstruction to a new function. The code is compiled into aDLLfile then loaded into the target process using any method ofDLL injection. Using a backup of the original function one might then restore the first six bytes again so the call will not be interrupted. In this example thewin32 APIfunction MessageBoxW is hooked.[2] This example shows how to use hooking to alternetworktraffic in the Linux kernel usingNetfilter. The following code demonstrates how to hook functions that are imported from another module. This can be used to hook functions in a different process from the calling process. For this the code must be compiled into aDLLfile then loaded into the target process using any method ofDLL injection. 
The advantage of this method is that it is less detectable by antivirus software and/or anti-cheat software; one might make this into an external hook that does not make use of any malicious calls. The Portable Executable header contains the Import Address Table (IAT), which can be manipulated as shown in the source below. The source below runs under Microsoft Windows.
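The original source listing is not reproduced here; the following is a simplified C++ sketch of the same idea for Windows, locating the calling module's import descriptor for user32.dll and overwriting the IAT slot for MessageBoxW so that calls are redirected through a replacement function. A production implementation would also handle imports by ordinal, forwarded exports, and other modules.

#include <windows.h>
#include <cstdio>
#include <cstring>

// Pointer to the original MessageBoxW, saved so the hook can call through.
using MessageBoxW_t = int (WINAPI*)(HWND, LPCWSTR, LPCWSTR, UINT);
static MessageBoxW_t g_origMessageBoxW = nullptr;

// Replacement that alters the text, then forwards to the real function.
static int WINAPI hkMessageBoxW(HWND hWnd, LPCWSTR, LPCWSTR caption, UINT type) {
    return g_origMessageBoxW(hWnd, L"This call was hooked via the IAT.", caption, type);
}

// Walk the calling module's import descriptors, find the IAT entry that
// currently points at `target`, and overwrite it with `replacement`.
static bool PatchIat(const char* moduleName, void* target, void* replacement) {
    BYTE* base = reinterpret_cast<BYTE*>(GetModuleHandleW(nullptr));
    auto* dos = reinterpret_cast<IMAGE_DOS_HEADER*>(base);
    auto* nt  = reinterpret_cast<IMAGE_NT_HEADERS*>(base + dos->e_lfanew);
    auto& dir = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    auto* imp = reinterpret_cast<IMAGE_IMPORT_DESCRIPTOR*>(base + dir.VirtualAddress);

    for (; imp->Name != 0; ++imp) {
        const char* name = reinterpret_cast<const char*>(base + imp->Name);
        if (_stricmp(name, moduleName) != 0) continue;
        // FirstThunk is the IAT: an array of function pointers filled in by the loader.
        auto* thunk = reinterpret_cast<IMAGE_THUNK_DATA*>(base + imp->FirstThunk);
        for (; thunk->u1.Function != 0; ++thunk) {
            void** slot = reinterpret_cast<void**>(&thunk->u1.Function);
            if (*slot != target) continue;
            DWORD old;
            VirtualProtect(slot, sizeof(void*), PAGE_READWRITE, &old);
            *slot = replacement;                       // redirect the import
            VirtualProtect(slot, sizeof(void*), old, &old);
            return true;
        }
    }
    return false;
}

int main() {
    g_origMessageBoxW = reinterpret_cast<MessageBoxW_t>(reinterpret_cast<void*>(
        GetProcAddress(GetModuleHandleW(L"user32.dll"), "MessageBoxW")));
    MessageBoxW(nullptr, L"Before the hook.", L"IAT demo", MB_OK);
    if (PatchIat("user32.dll", reinterpret_cast<void*>(g_origMessageBoxW),
                 reinterpret_cast<void*>(&hkMessageBoxW)))
        MessageBoxW(nullptr, L"After the hook.", L"IAT demo", MB_OK);
    return 0;
}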
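For comparison, the virtual method table hooking described earlier can be sketched as follows, assuming a 32-bit MSVC build so that the __fastcall trick for capturing the "this" pointer from ECX (explained above) applies. This is an illustrative sketch rather than the article's original listing; the exact object and table layout is compiler-specific, and an optimizing compiler may devirtualize direct calls, which the volatile pointer below is intended to discourage.

#include <windows.h>
#include <cstdio>

// A class with a virtual function; each instance carries a hidden pointer to
// a virtual method table (VMT) whose first slot refers to VirtualFn1.
struct Target {
    virtual void VirtualFn1(int value) { std::printf("original: %d\n", value); }
};

// Both the hook and the saved original use __fastcall so that the "this"
// pointer arriving in ECX becomes the first parameter and EDX is a dummy,
// matching the calling-convention discussion above (32-bit MSVC only).
using Fn1 = void(__fastcall*)(Target* self, void* edx, int value);
static Fn1 g_original = nullptr;

static void __fastcall hkVirtualFn1(Target* self, void* edx, int value) {
    std::printf("hooked: %d\n", value);
    g_original(self, edx, value);         // forward to the original implementation
}

int main() {
    Target obj;
    Target* volatile p = &obj;            // keep the call dispatching through the VMT

    // The VMT pointer is stored in the first pointer-sized field of the object.
    void** vmt = *reinterpret_cast<void***>(&obj);

    DWORD oldProtect;                     // the VMT usually lives in read-only memory
    VirtualProtect(&vmt[0], sizeof(void*), PAGE_READWRITE, &oldProtect);
    g_original = reinterpret_cast<Fn1>(vmt[0]);        // save the original slot
    vmt[0] = reinterpret_cast<void*>(&hkVirtualFn1);   // redirect slot 0 to the hook
    VirtualProtect(&vmt[0], sizeof(void*), oldProtect, &oldProtect);

    p->VirtualFn1(42);                    // now dispatches to hkVirtualFn1 first
    return 0;
}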
https://en.wikipedia.org/wiki/Hooking
Aninstruction set simulator(ISS) is asimulationmodel, usually coded in ahigh-level programming language, which mimics the behavior of a mainframe ormicroprocessorby "reading" instructions and maintaining internal variables which represent the processor'sregisters. Instruction simulationis a methodology employed for one of several possible reasons: Instruction-set simulators can be implemented using three main techniques: An ISS is often provided with (or is itself) adebuggerin order for asoftware engineer/programmerto debug the program prior to obtaining target hardware.GDBis one debugger which has a compiled-in ISS. It is sometimes integrated with simulated peripheral circuits such astimers,interrupts,serial ports, generalI/O ports, etc. to mimic the behavior of amicrocontroller. The basic instruction simulation technique is the same regardless of purpose: first execute the monitoring program passing the name of the target program as an additional input parameter. The target program is then loaded into memory, but control is never passed to the code. Instead, theentry pointwithin the loaded program is calculated, and a pseudoprogram status word(PSW) is set to this location. The Program Status Word (PSW) is composed of astatus registerand aprogram counter, the latter of which signifies the next instruction to be executed.[1]Therefore, it is specifically the program counter that is assigned to this location. A set of pseudoregistersare set to what they would have contained if the program had been given control directly. It may be necessary to amend some of these to point to other pseudo "control blocks" depending on the hardware and operating system. It may also be necessary to reset the original parameter list to 'strip out' the previously added program name parameter. Thereafter, execution proceeds as follows: For test and debugging purposes, the monitoring program can provide facilities to view and alter registers, memory, and restart location or obtain a minicore dumpor print symbolic program names with current data values. It could permit new conditional "pause" locations, remove unwanted pauses and suchlike. Instruction simulation provides the opportunity to detect errors BEFORE execution which means that the conditions are still exactly as they were and not destroyed by the error. A very good example from theIBMS/360world is the following instruction sequence that can cause difficulties debugging without an instruction simulation monitor. The number of instructions to perform the above basic "loop" (Fetch/Execute/calculate new address) depends on hardware but it could be accomplished onIBMS/360/370/390/ES9000 range of machines in around 12 or 13 instructions for many instruction types. Checking for valid memory locations or for conditional "pause"s add considerably to the overhead but optimization techniques can reduce this to acceptable levels. For testing purposes this is normally quite acceptable as powerful debugging capabilities are provided includinginstruction step, trace and deliberate jump to test error routine (when no actual error). In addition, a full instruction trace can be used to test actual (executed)code coverage. Occasionally, monitoring the execution of a target program can help to highlightrandomerrors that appear (or sometimes disappear) while monitoring but not in real execution. This can happen when the target program is loaded at a different location than normal because of the physical presence of the monitoring program in the same address space. 
If the target program picks up a value from a "random" location in memory (one it does not usually 'own'), it may for example be nulls (X"00") in almost every normal situation and the program works OK. If the monitoring program shifts the load point, it may pick up, say, X"FF", and the logic would cause different results during a comparison operation. Alternatively, if the monitoring program is now occupying the space where the value is being "picked up" from, similar results might occur. Re-entrancy bugs: accidental use of static variables instead of "dynamic" thread memory can cause re-entrancy problems in many situations. Use of a monitoring program can detect these even without a storage protect key. Illegal operations: some operating systems (or hardware) require the application program to be in the correct "mode" for certain calls to the operating system. Instruction simulation can detect these conditions before execution. Hot spot analysis & instruction usage: by counting the instructions executed during simulation (which will match the number executed on the actual processor or in unmonitored execution), the simulator can provide both a measure of relative performance between different versions of an algorithm and also be used to detect "hot spots" where optimization can then be targeted by the programmer. In this role it can be considered a form of performance analysis, as it is not easy to obtain these statistics under normal execution; this is especially true for high-level language programs, which effectively 'disguise' the extent of machine code instructions by their nature. Some of these software simulators remain in use as tools for assembly language and instruction set architecture teaching, with some specifically designed using multiple simulation layers and ISA-to-ISA simulation, and with the ability even to design ISAs and simulate them.[2] In the first volume of The Art of Computer Programming, Donald Knuth wrote: "In the author's opinion, entirely too much programmers' time has been spent in writing such [machine language] simulators and entirely too much computer time has been wasted in using them."[3] In the following section, however, the author gives examples of how such simulators are useful as trace or monitor routines for debugging purposes. Typical trace output from simulation by a monitoring program used for test & debugging:
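The basic loop described above (fetch the instruction addressed by the pseudo program counter, decode and execute it against the pseudo registers, then compute the next instruction address) can be sketched in C++ for a toy instruction set as follows. The instruction set here is invented purely for illustration and is not that of any real machine; the running instruction count corresponds to the hot-spot and coverage statistics mentioned above.

#include <cstdint>
#include <cstdio>
#include <vector>

// A toy instruction set used only to illustrate the simulation loop:
// each instruction is {opcode, register, operand}.
enum Op : std::uint8_t { LOAD, ADD, JUMP_IF_ZERO, HALT };
struct Instr { Op op; int reg; int operand; };

int main() {
    std::vector<Instr> program = {
        {LOAD, 0, 3},            // r0 = 3
        {ADD,  0, -1},           // r0 = r0 - 1
        {JUMP_IF_ZERO, 0, 4},    // if r0 == 0 goto 4
        {JUMP_IF_ZERO, 1, 1},    // r1 is always 0, so this acts as "goto 1"
        {HALT, 0, 0},
    };

    int pseudoReg[2] = {0, 0};   // pseudo registers of the simulated machine
    std::size_t pc = 0;          // pseudo program counter (part of the pseudo PSW)
    std::uint64_t executed = 0;  // instruction count, usable for hot-spot analysis

    for (bool running = true; running; ) {
        const Instr& ins = program.at(pc);   // fetch (with bounds checking)
        std::size_t next = pc + 1;           // default next-instruction address
        switch (ins.op) {                    // decode and execute
            case LOAD:         pseudoReg[ins.reg] = ins.operand; break;
            case ADD:          pseudoReg[ins.reg] += ins.operand; break;
            case JUMP_IF_ZERO: if (pseudoReg[ins.reg] == 0) next = ins.operand; break;
            case HALT:         running = false; break;
        }
        ++executed;
        // A monitoring version of this loop could pause here, dump the pseudo
        // registers, or compare `next` against conditional "pause" locations.
        pc = next;
    }
    std::printf("halted after %llu instructions, r0=%d\n",
                (unsigned long long)executed, pseudoReg[0]);
}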
https://en.wikipedia.org/wiki/Instruction_set_simulator
Software analyticsis theanalyticsspecific to the domain ofsoftware systemstaking into accountsource code, static and dynamic characteristics (e.g.,software metrics) as well as related processes of theirdevelopmentandevolution. It aims at describing, monitoring, predicting, and improving the efficiency and effectiveness ofsoftware engineeringthroughout thesoftware lifecycle, in particular duringsoftware developmentandsoftware maintenance. The data collection is typically done by miningsoftware repositories, but can also be achieved by collecting user actions or production data. Software analytics aims at supporting decisions and generating insights, i.e., findings, conclusions, and evaluations about software systems and their implementation, composition, behavior, quality, evolution as well as about the activities of various stakeholders of these processes. Methods, techniques, and tools of software analytics typically rely on gathering, measuring, analyzing, and visualizing information found in the manifold data sources stored in software development environments and ecosystems. Software systems are well suited for applying analytics because, on the one hand, mostly formalized and precise data is available and, on the other hand, software systems are extremely difficult to manage ---in a nutshell: "software projects are highly measurable, but often unpredictable."[2] Core data sources includesource code, "check-ins, work items, bug reports and test executions [...] recorded in software repositories such as CVS, Subversion, GIT, and Bugzilla."[4]Telemetry dataas well as execution traces or logs can also be taken into account. Automated analysis, massive data, and systematic reasoning support decision-making at almost all levels. In general, key technologies employed by software analytics include analytical technologies such asmachine learning,data mining,statistics,pattern recognition,information visualizationas well as large-scale data computing & processing. For example, software analytics tools allow users to map derived analysis results by means ofsoftware maps, which support interactively exploring system artifacts and correlated software metrics. 
There are also software analytics tools using analytical technologies on top ofsoftware qualitymodels inagile software developmentcompanies, which support assessing software qualities (e.g., reliability), and deriving actions for their improvement.[5] In 2009, the term "software analytics" was used in a paper byDongmei Zhang, Shi Han, Yingnong Dang, Jian-Guang Lou, and Haidong Zhang in part by the Software Analytics Group (SA) atMicrosoft ResearchAsia (MSRA).[6] The term has since become well known in thesoftware engineeringresearch community after a series of tutorials and talks on software analytics were given by the Software Analytics Group, in collaboration with Tao Xie fromNorth Carolina State University, at software engineering conferences including a tutorial at the IEEE/ACMInternational Conference on Automated Software Engineering(ASE 2011),[7]a talk at the International Workshop on Machine Learning Technologies in Software Engineering (MALETS 2011),[8]a tutorial and a keynote talk given by Zhang at the IEEE-CS Conference on Software Engineering Education and Training,[9][10]a tutorial at the International Conference on Software Engineering - Software Engineering in Practice Track,[11]and a keynote talk given by Zhang at the Working Conference on Mining Software Repositories.[12] In November 2010, Software Development Analytics (Software Analytics with a focus on Software Development) was proposed by Thomas Zimmermann and his colleagues at the Empirical Software Engineering Group (ESE) at Microsoft Research Redmond in their FoSER 2010 paper.[13]A goldfish bowl panel on software development analytics was organized by Zimmermann andTim Menziesfrom West Virginia University at the International Conference on Software Engineering, Software Engineering in Practice Track.[14]
https://en.wikipedia.org/wiki/Runtime_intelligence
Managed servicesis the practice of outsourcing the responsibility for maintaining, and anticipating need for, a range of processes and functions, ostensibly for the purpose of improved operations and reduced budgetary expenditures through the reduction of directly-employed staff.[1][2][3]It is an alternative to thebreak/fixoron-demand outsourcingmodel where the service provider performs on-demand services and bills the customer only for the work done.[4][5]The external organization is referred to as amanaged service(s) provider(MSP).[6] A managed IT services provider is a third-party service provider that proactively monitors & manages a customer's server/network/system infrastructure,cybersecurityand end-user systems against a clearly definedService Level Agreement(SLA).[7]Small and medium-sized businesses(SMBs), nonprofits and government agencies hire MSPs to perform a defined set of day-to-day management services so they can focus on improving their services without worrying about extendedsystem downtimesor service interruptions. These services may include network and infrastructure management, security and monitoring.[6][8]Most MSPs bill an upfront setup or transition fee and an ongoing flat or near-fixed monthly fee, which benefits clients by providing them with predictable IT support costs. Sometimes, MSPs act as facilitators who manage and procure staffing services on behalf of the client. In such context, they use an online application calledvendor management system(VMS) for transparency and efficiency. A managed service provider is also useful in creating disaster recovery plans, similar to a corporation's.[9] The managed services model has been useful in the private sector, notably amongFortune 500 companies,[10]with potential future applications in government.[11] The evolution of MSP started in the 1990s with the emergence of application service providers (ASPs) who helped pave the way for remote support for IT infrastructure. From the initial focus of remote monitoring and management of servers and networks, the scope of an MSP's services expanded to include mobile device management,managed security, remote firewall administration and security-as-a-service, and managed print services. Around 2005,Karl W. Palachuk, Amy Luby, Founder of Managed Service Provider Services Network acquired by High Street Technology Ventures, andErick Simpson, founder of Managed Services Provider University, were the first advocates and the pioneers of the managed services business model.[12][13] The first books on the topic of managed services:Service Agreements for SMB Consultants: A Quick-Start Guide to Managed Services[14]andThe Guide to a Successful Managed Services Practice[15]were published in 2006 by Palachuk and Simpson, respectively. Since then, the managed servicesbusiness modelhas gained ground among enterprise-level companies. As thevalue-added reseller(VAR) community evolved to a higher level of services, it adapted the managed service model and tailored it to SMB companies. In the new economy, IT manufacturers are currently moving away from a "box-shifting" resale to a more customized, managed service offering. In this transition, the billing and sales processes ofintangiblemanaged services, appear as the main challenges for traditional resellers. 
The global managed services market is expected to grow from an estimated $342.9 Billion in 2020 to $410.2 Billion by 2027, representing aCAGRof 2.6%.[16] Adopting managed services is intended to be an efficient way to stay up-to-date on technology, have access to skills and address issues related to cost, quality of service and risk.[17][18][19]As theIT infrastructurecomponents of manySMBand large corporations are migrating to the cloud,[20]with MSPs (managed services providers) increasingly facing the challenge ofcloud computing, a number of MSPs are providing in-house cloud services or acting as brokers with cloud services providers.[21][22]A recent survey claims that a lack of knowledge and expertise in cloud computing rather than offerors' reluctance, appears to be the main obstacle to this transition.[23][24]For example, in transportation, many companies face a significant increase of fuel and carrier costs, driver shortages, customer service requests and global supply chain complexities. Managing day-to-day transportation processes and reducing related costs come as significant burdens that require the expertise of Transportation Managed Services (or managed transportation services) providers.[25][26] * Integrated marketing / advertising agency services (graphic design,copywriting,PPC,social media,web design,SEO) In the IT industry, the most common managed services revolve around connectivity andbandwidth,network monitoring,security,[31]virtualization, anddisaster recovery.[18]
https://en.wikipedia.org/wiki/Managed_services
In software engineering, profiling (program profiling, software profiling) is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering. Profiling is achieved by instrumenting either the program source code or its binary executable form using a tool called a profiler (or code profiler). Profilers may use a number of different techniques, such as event-based, statistical, instrumented, and simulation methods. Profilers use a wide variety of techniques to collect data, including hardware interrupts, code instrumentation, instruction set simulation, operating system hooks, and performance counters. Program analysis tools are extremely important for understanding program behavior. Computer architects need such tools to evaluate how well programs will perform on new architectures. Software writers need tools to analyze their programs and identify critical sections of code. Compiler writers often use such tools to find out how well their instruction scheduling or branch prediction algorithm is performing... The output of a profiler may be: A profiler can be applied to an individual method or at the scale of a module or program, to identify performance bottlenecks by making long-running code obvious.[1] A profiler can be used to understand code from a timing point of view, with the objective of optimizing it to handle various runtime conditions[2] or various loads.[3] Profiling results can be ingested by a compiler that provides profile-guided optimization.[4] Profiling results can be used to guide the design and optimization of an individual algorithm; the Krauss matching wildcards algorithm is an example.[5] Profilers are built into some application performance management systems that aggregate profiling data to provide insight into transaction workloads in distributed applications.[6] Performance-analysis tools existed on IBM/360 and IBM/370 platforms from the early 1970s, usually based on timer interrupts which recorded the program status word (PSW) at set timer intervals to detect "hot spots" in executing code.[citation needed] This was an early example of sampling (see below). In early 1974 instruction-set simulators permitted full trace and other performance-monitoring features.[citation needed] Profiler-driven program analysis on Unix dates back to 1973,[7] when Unix systems included a basic tool, prof, which listed each function and how much of program execution time it used. In 1982 gprof extended the concept to a complete call graph analysis.[8] In 1994, Amitabh Srivastava and Alan Eustace of Digital Equipment Corporation published a paper describing ATOM[9] (Analysis Tools with OM). The ATOM platform converts a program into its own profiler: at compile time, it inserts code into the program to be analyzed. That inserted code outputs analysis data. This technique - modifying a program to analyze itself - is known as "instrumentation". In 2004 both the gprof and ATOM papers appeared on the list of the 50 most influential PLDI papers for the 20-year period ending in 1999.[10] Flat profilers compute the average call times, from the calls, and do not break down the call times based on the callee or the context. Call graph profilers[8] show the call times, and frequencies of the functions, and also the call-chains involved based on the callee. In some tools full context is not preserved.
Input-sensitive profilers[11][12][13]add a further dimension to flat or call-graph profilers by relating performance measures to features of the input workloads, such as input size or input values. They generate charts that characterize how an application's performance scales as a function of its input. Profilers, which are also programs themselves, analyze target programs by collecting information on the target program's execution. Based on their data granularity, which depends upon how profilers collect information, they are classified asevent-basedorstatisticalprofilers. Profilers interrupt program execution to collect information. Those interrupts can limit time measurement resolution, which implies that timing results should be taken with a grain of salt.Basic blockprofilers report a number of machineclock cyclesdevoted to executing each line of code, or timing based on adding those together; the timings reported per basic block may not reflect a difference betweencachehits and misses.[14][15] Event-based profilers are available for the following programming languages: These profilers operate bysampling. A sampling profiler probes the target program'scall stackat regular intervals usingoperating systeminterrupts. Sampling profiles are typically less numerically accurate and specific, providing only a statistical approximation, but allow the target program to run at near full speed. "The actual amount of error is usually more than one sampling period. In fact, if a value is n times the sampling period, the expected error in it is the square-root of n sampling periods."[16] In practice, sampling profilers can often provide a more accurate picture of the target program's execution than other approaches, as they are not as intrusive to the target program and thus don't have as many side effects (such as on memory caches or instruction decoding pipelines). Also since they don't affect the execution speed as much, they can detect issues that would otherwise be hidden. They are also relatively immune to over-evaluating the cost of small, frequently called routines or 'tight' loops. They can show the relative amount of time spent in user mode versus interruptible kernel mode such assystem callprocessing. Unfortunately, running kernel code to handle the interrupts incurs a minor loss of CPU cycles from the target program, diverts cache usage, and cannot distinguish the various tasks occurring in uninterruptible kernel code (microsecond-range activity) from user code. Dedicated hardware can do better: ARM Cortex-M3 and some recent MIPS processors' JTAG interfaces have a PCSAMPLE register, which samples theprogram counterin a truly undetectable manner, allowing non-intrusive collection of a flat profile. Some commonly used[17]statistical profilers for Java/managed code areSmartBear Software'sAQtime[18]andMicrosoft'sCLR Profiler.[19]Those profilers also support native code profiling, along withApple Inc.'sShark(OSX),[20]OProfile(Linux),[21]IntelVTuneand Parallel Amplifier (part ofIntel Parallel Studio), andOraclePerformance Analyzer,[22]among others. This technique effectively adds instructions to the target program to collect the required information. Note thatinstrumentinga program can cause performance changes, and may in some cases lead to inaccurate results and/orheisenbugs. 
The effect will depend on what information is being collected, on the level of timing details reported, and on whether basic block profiling is used in conjunction with instrumentation.[23]For example, adding code to count every procedure/routine call will probably have less effect than counting how many times each statement is obeyed. A few computers have special hardware to collect information; in this case the impact on the program is minimal. Instrumentation is key to determining the level of control and amount of time resolution available to the profilers.
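A minimal hand-inserted form of instrumentation is sketched below: a call counter and a timer are added around one routine, which is essentially the bookkeeping that instrumenting tools (ATOM, gprof's -pg compiler option, and similar) inject automatically. The bookkeeping itself consumes time, which is why, as noted above, counting every executed statement costs far more than counting procedure calls. The routine and the numbers are invented for illustration.

#include <chrono>
#include <cstdio>

// Hand-inserted instrumentation: a call counter and an accumulated-time counter
// for one routine.
static unsigned long g_calls = 0;
static long long g_elapsedNs = 0;

double routineUnderTest(int n) {
    ++g_calls;                                       // inserted: count the call
    auto start = std::chrono::steady_clock::now();   // inserted: start timing
    double s = 0;
    for (int i = 0; i < n; ++i) s += i * 0.5;        // the "real" work
    auto stop = std::chrono::steady_clock::now();    // inserted: stop timing
    g_elapsedNs += std::chrono::duration_cast<std::chrono::nanoseconds>(stop - start).count();
    return s;
}

int main() {
    double total = 0;
    for (int i = 0; i < 1000; ++i) total += routineUnderTest(10000);
    std::printf("result %.0f\n", total);
    std::printf("%lu calls, %lld ns total, %lld ns per call on average\n",
                g_calls, g_elapsedNs, g_elapsedNs / (long long)g_calls);
}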
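For comparison with the statistical approach described earlier, the sketch below implements a crude sampling profiler on Linux/x86-64 (an assumption of this example, since reading the saved instruction pointer from the signal context is architecture- and libc-specific): a POSIX profiling timer delivers SIGPROF at regular CPU-time intervals, and the handler records the interrupted program counter, which a real profiler would then resolve to functions and source lines via the symbol table.

#include <csignal>
#include <cstdio>
#include <sys/time.h>
#include <ucontext.h>

// Samples of the interrupted program counter, captured on each SIGPROF tick.
// (Assumes glibc on x86-64 Linux; REG_RIP indexes the saved instruction pointer.)
static volatile sig_atomic_t g_count = 0;
static unsigned long g_pc[100000];

static void onProfTick(int, siginfo_t*, void* ctx) {
    ucontext_t* uc = static_cast<ucontext_t*>(ctx);
    if (g_count < 100000)
        g_pc[g_count] = (unsigned long)uc->uc_mcontext.gregs[REG_RIP];
    g_count = g_count + 1;
}

// A deliberately CPU-heavy workload to be profiled.
double work() {
    double s = 0;
    for (long i = 0; i < 200000000L; ++i) s += i % 7;
    return s;
}

int main() {
    struct sigaction sa = {};
    sa.sa_sigaction = onProfTick;
    sa.sa_flags = SA_SIGINFO | SA_RESTART;
    sigaction(SIGPROF, &sa, nullptr);

    itimerval timer = {};                 // fire every 1 ms of CPU time consumed
    timer.it_interval.tv_usec = 1000;
    timer.it_value.tv_usec = 1000;
    setitimer(ITIMER_PROF, &timer, nullptr);

    double r = work();

    timer = {};                           // stop the profiling timer
    setitimer(ITIMER_PROF, &timer, nullptr);

    std::printf("result %.0f, %d samples taken\n", r, (int)g_count);
    // A real sampling profiler would map each address in g_pc back to a function
    // and source line using the symbol table; printing a few raw addresses at
    // least shows where the samples landed.
    for (int i = 0; i < 5 && i < (int)g_count; ++i)
        std::printf("sample %d at pc=0x%lx\n", i, g_pc[i]);
}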
https://en.wikipedia.org/wiki/Software_performance_analysis
In computers, hardware performance counters (HPCs),[1] or hardware counters, are a set of special-purpose registers built into modern microprocessors to store the counts of hardware-related activities. Advanced users often rely on those counters to conduct low-level performance analysis or tuning. The number of available hardware counters in a processor is limited, while each CPU model might have many different events that a developer might like to measure. Each counter can be programmed with the index of an event type to be monitored, such as an L1 cache miss or a branch misprediction. One of the first processors to implement a hardware counter and an associated instruction to access it (the RDPMC instruction) was the Intel Pentium, but the counters were not documented until Terje Mathisen wrote an article about reverse engineering them in Byte in July 1994.[2] The following table shows some examples of CPUs and the number of available hardware counters: Compared to software profilers, hardware counters provide low-overhead access to a wealth of detailed performance information related to the CPU's functional units, caches, main memory, etc. Another benefit of using them is that, in general, no source code modifications are needed. However, the types and meanings of hardware counters vary from one kind of architecture to another due to the variation in hardware organizations. There can be difficulties correlating the low-level performance metrics back to source code. The limited number of registers to store the counters often forces users to conduct multiple measurements to collect all desired performance metrics. Modern superscalar processors schedule and execute multiple instructions out-of-order at one time. These "in-flight" instructions can retire at any time, depending on memory access, hits in cache, stalls in the pipeline and many other factors. This can cause performance counter events to be attributed to the wrong instructions, making precise performance analysis difficult or impossible. AMD introduced methods to mitigate some of these drawbacks. For example, in 2007 the Opteron processors implemented[4] a technique known as Instruction Based Sampling (IBS). AMD's implementation of IBS provides hardware counters for both fetch sampling (the front of the superscalar pipeline) and op sampling (the back of the pipeline). This results in discrete performance data associating retired instructions with the "parent" AMD64 instruction.
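On Linux, the kernel's perf_event interface exposes such counters without programming RDPMC directly. The following is a minimal sketch, closely following the pattern documented for the perf_event_open(2) system call, that counts user-space retired instructions around a small workload; it assumes a Linux system on which unprivileged access to this counter is permitted.

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <linux/perf_event.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

// perf_event_open has no glibc wrapper, so it is invoked via syscall(2).
static int perfEventOpen(perf_event_attr* attr, pid_t pid, int cpu,
                         int groupFd, unsigned long flags) {
    return (int)syscall(__NR_perf_event_open, attr, pid, cpu, groupFd, flags);
}

int main() {
    perf_event_attr attr;
    std::memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = PERF_TYPE_HARDWARE;                 // a generic hardware event...
    attr.config = PERF_COUNT_HW_INSTRUCTIONS;       // ...retired instructions
    attr.disabled = 1;
    attr.exclude_kernel = 1;                        // count user-space only
    attr.exclude_hv = 1;

    int fd = perfEventOpen(&attr, 0 /*this process*/, -1 /*any CPU*/, -1, 0);
    if (fd < 0) { std::perror("perf_event_open"); return 1; }

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    volatile double s = 0;                          // workload to be measured
    for (int i = 0; i < 1000000; ++i) s += i * 0.5;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    long long count = 0;
    read(fd, &count, sizeof(count));                // counter value for this thread
    std::printf("instructions retired: %lld\n", count);
    close(fd);
    return 0;
}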
https://en.wikipedia.org/wiki/Hardware_performance_counter
DTrace is a comprehensive dynamic tracing framework originally created by Sun Microsystems for troubleshooting kernel and application problems on production systems in real time. Originally developed for Solaris, it has since been released under the free Common Development and Distribution License (CDDL) in OpenSolaris and its descendant illumos, and has been ported to several other Unix-like systems. Windows Server systems from Windows Server 2025 onward will have DTrace as part of the system. DTrace can be used to get a global overview of a running system, such as the amount of memory, CPU time, filesystem and network resources used by the active processes. It can also provide much more fine-grained information, such as a log of the arguments with which a specific function is being called, or a list of the processes accessing a specific file. In 2010, Oracle Corporation acquired Sun Microsystems and announced the discontinuation of OpenSolaris. As a community effort of some core Solaris engineers to create a truly open source Solaris, the illumos operating system was announced via webinar on Thursday, 3 August 2010,[3] as a fork of the OpenSolaris OS/Net consolidation, including DTrace technology. In October 2011, Oracle announced the porting of DTrace to Linux,[4] and in 2019 official DTrace for Fedora became available on GitHub. For several years an unofficial DTrace port to Linux was available, with no changes in licensing terms.[5] In August 2017, Oracle released DTrace kernel code under the GPLv2+ license, and user space code under GPLv2 and UPL licensing.[6] In September 2018 Microsoft announced that they had ported DTrace from FreeBSD to Windows.[2] In September 2016 the OpenDTrace effort began on GitHub with both code and comprehensive documentation of the system's internals. The OpenDTrace effort maintains the original CDDL licensing for the code from OpenSolaris, with additional code contributions coming under a BSD 2-Clause license. The goal of OpenDTrace is to provide an OS-agnostic, portable implementation of DTrace that is acceptable to all consumers, including macOS, FreeBSD, OpenBSD, NetBSD, and Linux as well as embedded systems. Sun Microsystems designed DTrace to give operational insights that allow users to tune and troubleshoot applications and the OS itself. Testers write tracing programs (also referred to as scripts) using the D programming language (not to be confused with other programming languages named "D"). The language, inspired by C, includes added functions and variables specific to tracing. D programs resemble AWK programs in structure; they consist of a list of one or more probes (instrumentation points), and each probe is associated with an action. These probes are comparable to a pointcut in aspect-oriented programming. Whenever the condition for the probe is met, the associated action is executed (the probe "fires"). A typical probe might fire when a certain file is opened, or a process is started, or a certain line of code is executed. A probe that fires may analyze the run-time situation by accessing the call stack and context variables and evaluating expressions; it can then print out or log some information, record it in a database, or modify context variables. The reading and writing of context variables allows probes to pass information to each other, allowing them to cooperatively analyze the correlation of different events. Special consideration has been taken to make DTrace safe to use in a production environment.
For example, there is minimalprobe effectwhen tracing is underway, and no performance impact associated with any disabled probe; this is important since there are tens of thousands of DTrace probes that can be enabled. New probes can also be created dynamically. DTrace scripts can be invoked directly from the command line, providing one or more probes and actions as arguments. Some examples: Scripts can also be written which can reach hundreds of lines in length, although typically only tens of lines are needed for advanced troubleshooting and analysis. Over 200 examples of open source DTrace scripts can be found in the DTraceToolkit,[7]created byBrendan Gregg(author of the DTrace book[8]), which also provides documentation and demonstrations of each. DTrace first became available for use in November 2003, and was formally released as part of Sun'sSolaris 10in January 2005. DTrace was the first component of theOpenSolarisproject to have its source code released under theCommon Development and Distribution License(CDDL). DTrace is an integral part ofillumosand related distributions. DTrace is a standard part of FreeBSD[9]andNetBSD.[10] Apple added DTrace support inMac OS X 10.5"Leopard", including a GUI calledInstruments.[11]Over 40 DTrace scripts from the DTraceToolkit are included in /usr/bin,[12]including tools to examine disk I/O (iosnoop) and process execution (execsnoop). Unlike other platforms that DTrace is supported on, Mac OS X has a flag (P_LNOATTACH) that a program may set that disallows tracing of that process by debugging utilities such as DTrace andgdb. In the original Mac OS X DTrace implementation, this could affect tracing of other system information, as unrelated probes that should fire while a program with this flag set was running would fail to do so.[13]The OS X 10.5.3 update addressed this issue a few months later.[14]However, since El Capitan,System Integrity Protectionprevents user from DTracing protected binary by default. TheLinuxport of DTrace has been available since 2008;[15]work continues actively to enhance and fix issues. There is also an activeimplementation on github. Standard core providers are available (fbt, syscall, profile), plus a special "instr" provider (some of the Solaris providers are not yet available as of 2013[update]). The Linux DTrace implementation is a loadablekernel module, which means that the kernel itself requires no modification, and thus allows DTrace to avoid CDDL vs. GPL licensing conflicts (in its source form, at least). However, once DTrace is loaded the kernel instance will be marked astainted. In 2007, a developer at QNX Software Systems announced on his blog that he and a colleague were working on incorporating DTrace into theQNXoperating system.[16] Oracle Corporation added beta DTrace support forOracle Linuxin 2011,[1]as a technology preview in theUnbreakable Enterprise Kernelrelease 2, which is under GPLv2 (the DTrace Linux kernel module was originally released under CDDL).[17]General availability was announced in December 2012.[18][19] On March 11, 2019, Microsoft released a version of DTrace for Windows 10 insider builds.[20]Microsoft included DTrace as a built-in tool inWindows Server 2025.[21][22] With a supportedlanguage provider, DTrace can retrieve context of the code, including function, source file, and line number location. 
Further, dynamic memory allocation and garbage collection can be made available if supported by the language.[23] Supported language providers include assembly language[clarification needed], C, C++, Java, Erlang, JavaScript, Perl, PHP, Python, Ruby, shell script, and Tcl. Application providers allow DTrace to follow the operation of applications through system calls and into the kernel. Applications that offer DTrace application providers include MySQL, PostgreSQL, Oracle Database, Oracle Grid Engine, and Firefox.[23][24][25] DTrace was designed and implemented by Bryan Cantrill, Mike Shapiro, and Adam Leventhal. The authors received recognition in 2005 for the innovations in DTrace from InfoWorld and Technology Review.[26][27] DTrace won the top prize in The Wall Street Journal's 2006 Technology Innovation Awards competition.[28] The authors were recognized by USENIX with the Software Tools User Group (STUG) award in 2008.[29]
https://en.wikipedia.org/wiki/DTrace
Oracle Solarisis aproprietaryUnixoperating systemoffered byOracleforSPARCandx86-64basedworkstationsandservers. Originally developed bySun Microsystemsas Solaris, it superseded the company's earlierSunOSin 1993 and became known for itsscalability, especially on SPARC systems, and for originating many innovative features such asDTrace,ZFSand Time Slider.[3][4]After theSun acquisition by Oraclein 2010, it was renamed Oracle Solaris.[5] Solaris was registered as compliant with theSingle UNIX Specificationuntil April 29, 2019.[6][7][8]Historically, Solaris was developed asproprietary software. In June 2005, Sun Microsystems released most of thecodebaseunder theCDDLlicense, and founded theOpenSolarisopen-sourceproject.[9]Sun aimed to build a developer and user community with OpenSolaris; after the Oracle acquisition in 2010, the OpenSolaris distribution was discontinued[10][11]and later Oracle discontinued providing public updates to the source code of the Solaris kernel, effectively turning Solaris version 11 back into aclosed sourceproprietary operating system.[12]Following that, OpenSolaris was forked asIllumosand is alive through severalIllumos distributions. In September 2017, Oracle laid off most of the Solaris teams.[13] In 1987,AT&T Corporationand Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time:Berkeley Software Distribution,UNIX System V, andXenix. This became UnixSystem V Release 4(SVR4).[14] On September 4, 1991, Sun announced that it would replace its existing BSD-derived Unix,SunOS 4, with one based on SVR4. This was identified internally asSunOS 5, but a new marketing name was introduced at the same time:Solaris 2.[15]The justification for this new overbrand was that it encompassed not only SunOS, but also theOpenWindowsgraphical user interfaceandOpen Network Computing(ONC) functionality. Although SunOS 4.1.xmicro releases wereretroactively namedSolaris 1by Sun, the Solaris name is used almost exclusively to refer only to the releases based on SVR4-derived SunOS 5.0 and later.[16] For releases based on SunOS 5, the SunOS minor version is included in the Solaris release number. For example, Solaris 2.4 incorporates SunOS 5.4. After Solaris 2.6, the2.was dropped from the release name, so Solaris 7 incorporates SunOS 5.7, and the latest release SunOS 5.11 forms the core of Solaris 11.4. Although SunSoft stated in its initial Solaris 2 press release their intent to eventually support both SPARC and x86 systems, the first two Solaris 2 releases, 2.0 and 2.1, were SPARC-only. An x86 version of Solaris 2.1 was released in June 1993, about 6 months after the SPARC version, as adesktopand uniprocessor workgroup server operating system. It included theWabiemulator to support Windows applications.[17]At the time, Sun also offered theInteractive Unixsystem that it had acquired fromInteractive Systems Corporation.[18]In 1994, Sun released Solaris 2.4, supporting both SPARC and x86 systems from a unified source code base. 
In 2011, the Solaris 11 kernelsource codeleaked.[19][20] On September 2, 2017,Simon Phipps, a former Sun Microsystems employee not hired by Oracle in the acquisition, reported onTwitterthat Oracle had laid off the Solaris core development staff, which many interpreted as sign that Oracle no longer intended to support future development of the platform.[21]While Oracle did have a large layoff of Solaris development engineering staff, development continued and Solaris 11.4 was released in 2018.[22][23] Solaris uses a commoncode basefor the platforms it supports: 64-bitSPARCandx86-64.[24] Solaris has a reputation for being well-suited tosymmetric multiprocessing, supporting a large number ofCPUs.[25]It has historically been tightly integrated with Sun's SPARC hardware (including support for64-bitSPARCapplications since Solaris 7), with which it is marketed as a combined package. This has led to more reliable systems, but at a cost premium compared tocommodity PC hardware. However, it has supported x86 systems since Solaris 2.1 and 64-bit x86 applications since Solaris 10, allowing Sun to capitalize on the availability of commodity 64-bit CPUs based on thex86-64architecture. Sun heavily marketed Solaris for use with both its own x86-64-basedSun Java Workstationand the x86-64 models of theSun Ultra seriesworkstations, andserversbased onAMDOpteronandIntelXeonprocessors, as well as x86 systems manufactured by companies such asDell,[26]Hewlett-Packard, andIBM. As of 2009[update], the following vendors support Solaris for their x86 server systems: Solaris 2.5.1 included support for thePowerPCplatform (PowerPC Reference Platform), but the port was canceled before the Solaris 2.6 release.[31]In January 2006, a community of developers at Blastwave began work on a PowerPC port which they namedPolaris.[32]In October 2006, anOpenSolariscommunity project based on the Blastwave efforts and Sun Labs'Project Pulsar,[33]which re-integrated the relevant parts from Solaris 2.5.1 into OpenSolaris,[31]announced its first official source code release.[34] A port of Solaris to the IntelItaniumarchitecture was announced in 1997 but never brought to market.[35] On November 28, 2007,IBM, Sun, and Sine Nomine Associates demonstrated a preview ofOpenSolaris for System zrunning on anIBM System zmainframeunderz/VM,[36]calledSirius(in analogy to the Polaris project, and also due to the primary developer's Australian nationality:HMSSiriusof 1786 was a ship of theFirst FleettoAustralia). On October 17, 2008, a prototype release of Sirius was made available[37]and on November 19 the same year, IBM authorized the use of Sirius on System zIntegrated Facility for Linux(IFL) processors.[38] Solaris also supports theLinuxplatformapplication binary interface(ABI), allowing Solaris to run native Linuxbinarieson x86 systems. This feature is calledSolaris Containers for Linux Applications(SCLA), based on thebranded zonesfunctionality introduced in Solaris 10 8/07.[39] Solaris can be installed from various pre-packaged software groups, ranging from a minimalisticReduced Network Supportto a completeEntire PlusOEM. Installation of Solaris is not necessary for an individual to use the system. The DVD ISO image can be used to load Solaris, running in-memory, rather than initiating the installation. Additional software, like Apache, MySQL, etc. 
Solaris can be installed from various pre-packaged software groups, ranging from a minimalistic Reduced Network Support to a complete Entire Plus OEM. Installation is not required in order to use the system: the DVD ISO image can boot Solaris and run it in memory rather than initiating an installation. Additional software, such as Apache and MySQL, can be installed in packaged form from sunfreeware[40] and OpenCSW.[41] Solaris can thus be installed from physical media or over a network for use on a desktop or server, or run without being installed at all.

There are several types of updates within each major release, including software packages and the Oracle Solaris image. Additional minor updates, called Support Repository Updates (SRUs) and Critical Patch Update Packages (CPUs), require a support credential and thus are not freely available to the public.[42]

Early releases of Solaris used OpenWindows as the standard desktop environment. In Solaris 2.0 to 2.2, OpenWindows supported both NeWS and X applications, and provided backward compatibility for SunView applications from Sun's older desktop environment. NeWS allowed applications to be built in an object-oriented way using PostScript, a common printing language released in 1982. The X Window System originated from MIT's Project Athena in 1984 and allowed the display of an application to be disconnected from the machine where the application was running, separated by a network connection. Sun's original bundled SunView application suite was ported to X. Sun later dropped support for legacy SunView applications and NeWS with OpenWindows 3.3, which shipped with Solaris 2.3, and switched to X11R5 with Display PostScript support. The graphical look and feel remained based upon OPEN LOOK. OpenWindows 3.6.2, shipped with Solaris 8, was the last release. The OPEN LOOK Window Manager (olwm) and other OPEN LOOK-specific applications were dropped in Solaris 9, but support libraries were still bundled, providing long-term binary backwards compatibility with existing applications. The OPEN LOOK Virtual Window Manager (olvwm) can still be downloaded for Solaris from sunfreeware and works on releases as recent as Solaris 10.

Sun and other Unix vendors created an industry alliance to standardize Unix desktops. As a member of the Common Open Software Environment (COSE) initiative, Sun helped co-develop the Common Desktop Environment (CDE), an initiative to create a standard Unix desktop environment. Each vendor contributed different components: Hewlett-Packard contributed the window manager, IBM provided the file manager, and Sun provided the e-mail and calendar facilities as well as drag-and-drop support (ToolTalk). The new desktop environment was based upon the Motif look and feel, and the old OPEN LOOK desktop environment was considered legacy. CDE unified Unix desktops across multiple open system vendors, and was available as an unbundled add-on for Solaris 2.4 and 2.5 before being included in Solaris 2.6 through 10.

In 2001, Sun issued a preview release of the open-source desktop environment GNOME 1.4, based on the GTK+ toolkit, for Solaris 8.[43] Solaris 9 8/03 introduced GNOME 2.0 as an alternative to CDE. Solaris 10 includes Sun's Java Desktop System (JDS), which is based on GNOME and comes with a large set of applications, including StarOffice, Sun's office suite. Sun describes JDS as a "major component" of Solaris 10.[44] The Java Desktop System is not included in Solaris 11, which instead ships with a stock version of GNOME.[45] Likewise, CDE applications are no longer included in Solaris 11, but many libraries remain for binary backwards compatibility. The open source desktop environments KDE and Xfce, along with numerous other window managers, also compile and run on recent versions of Solaris.

Sun had also been investing in a new desktop environment called Project Looking Glass since 2003.
The project has been inactive since late 2006.[46] For versions up to 2005 (Solaris 9), Solaris was licensed under a license that permitted a customer to buy licenses in bulk, and install the software on any machine up to a maximum number. The key license grant was: License to Use. Customer is granted a non-exclusive and non-transferable license ("License") for the use of the accompanying binary software in machine-readable form, together with accompanying documentation ("Software"), by the number of users and the class of computer hardware for which the corresponding fee has been paid. In addition, the license provided a "License to Develop" granting rights to create derivative works, restricted copying to only a single archival copy, disclaimer of warranties, and the like. The license varied only little through 2004. From 2005 to 2010, Sun began to release the source code for development builds of Solaris under theCommon Development and Distribution License(CDDL) via theOpenSolarisproject. This code was based on the work being done for the post-Solaris 10 release (code-named "Nevada"; eventually released as Oracle Solaris 11). As the project progressed, it grew to encompass most of the necessary code to compile an entire release, with a few exceptions.[47] When Sun was acquired byOraclein 2010, the OpenSolaris project was discontinued after the board became unhappy with Oracle's stance on the project.[48]In March 2010, the previously freely available Solaris 10 was placed under a restrictive license that limited the use, modification and redistribution of the operating system.[49]The license allowed the user to download the operating system free of charge, through theOracle Technology Network, and use it for a 90-day trial period. After that trial period had expired the user would then have to purchase a support contract from Oracle to continue using the operating system. With the release of Solaris 11 in 2011, the license terms changed again. The new license allows Solaris 10 and Solaris 11 to be downloaded free of charge from the Oracle Technology Network and used without a support contract indefinitely; however, the license only expressly permits the user to use Solaris as a development platform and expressly forbids commercial and "production" use.[50]Educational use is permitted in some circumstances. From the OTN license: If You are an educational institution vested with the power to confer official high school, associate, bachelor, master and/or doctorate degrees, or local equivalent, ("Degree(s)"), You may also use the Programs as part of Your educational curriculum for students enrolled in Your Degree program(s) solely as required for the conferral of such Degree (collectively "Educational Use"). When Solaris is used without a support contract it can be upgraded to each new "point release"; however, a support contract is required for access to patches and updates that are released monthly.[51] Notable features of Solaris includeDTrace,Doors,Service Management Facility,Solaris Containers,Solaris Multiplexed I/O,Solaris Volume Manager,ZFS, andSolaris Trusted Extensions. Updates to Solaris versions are periodically issued. In the past, these were named after the month and year of their release, such as "Solaris 10 1/13"; as of Solaris 11, sequential update numbers are appended to the release name with a period, such as "Oracle Solaris 11.4". 
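Several of the notable features listed above, such as ZFS and DTrace, are exposed through simple command-line tools. The Python sketch below is only a rough illustration, assuming a Solaris or Illumos host and root privileges; the dataset name rpool/export/home is an example and would need to be replaced with an existing ZFS filesystem.

import subprocess

DATASET = "rpool/export/home"   # example ZFS dataset; substitute a real one

# Take a named ZFS snapshot, then list all snapshots of the dataset.
subprocess.run(["zfs", "snapshot", f"{DATASET}@before-upgrade"], check=True)
subprocess.run(["zfs", "list", "-t", "snapshot", "-r", DATASET], check=True)

# Use DTrace to count system calls by executable name for five seconds.
subprocess.run(
    ["dtrace",
     "-n", "syscall:::entry { @calls[execname] = count(); }",
     "-n", "tick-5s { exit(0); }"],
    check=True,
)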
Solaris has been released in a long series of versions, from the early Solaris 1.x and 2.x releases through Solaris 11.4.[90][91][92] A more comprehensive summary of some Solaris versions is also available,[93] and Solaris releases are described in the Solaris 2 FAQ.[94]

The underlying Solaris codebase has been under continuous development since work began in the late 1980s on what was eventually released as Solaris 2.0. Each version, such as Solaris 10, is based on a snapshot of this development codebase, taken near the time of its release, which is then maintained as a derived project. Updates to that project are built and delivered several times a year until the next official release comes out. The Solaris version under development by Sun since the release of Solaris 10 in 2005 was codenamed Nevada, and is derived from what is now the OpenSolaris codebase.

In 2003, an addition to the Solaris development process was initiated. Under the program name Software Express for Solaris (or just Solaris Express), a binary release based on the current development base was made available for download on a monthly basis, allowing anyone to try out new features and test the quality and stability of the OS as it progressed toward the next official Solaris release.[95] A later change to this program introduced a quarterly release model with support available, renamed Solaris Express Developer Edition (SXDE). In 2007, Sun announced Project Indiana with several goals, including providing an open source binary distribution of the OpenSolaris project, replacing SXDE.[96] The first release of this distribution was OpenSolaris 2008.05.

The Solaris Express Community Edition (SXCE) was intended specifically for OpenSolaris developers.[97] It was updated every two weeks until it was discontinued in January 2010, with a recommendation that users migrate to the OpenSolaris distribution.[98] Although the download license seen when downloading the image files indicates that their use is limited to personal, educational and evaluation purposes, the license acceptance form displayed when the user actually installs from these images lists additional uses, including commercial and production environments. SXCE releases terminated with build 130, and OpenSolaris releases terminated with build 134 a few weeks later. The next release of OpenSolaris, based on build 134, was due in March 2010, but it was never fully released, though the packages were made available in the package repository. Instead, Oracle renamed the binary distribution Solaris 11 Express, changed the license terms and released build 151a as 2010.11 in November 2010.

Contemporary press reviews of Solaris 9 and Solaris 10 were generally positive. One review commented: "All in all, Sun has stayed the course with Solaris 9. While its more user-friendly management is welcome, that probably won't be enough to win over converts. What may is the platform's reliability, flexibility, and power. Be that as it may, since the Solaris 10 download is free, it behooves any IT manager to load it on an extra server and at least give it a try." Another reviewer wrote: "Solaris 10 provides a flexible background for securely dividing system resources, providing performance guarantees and tracking usage for these containers. Creating basic containers and populating them with user applications and resources is simple. But some cases may require quite a bit of fine-tuning." A third review observed: "I think that Sun has put some really nice touches on Solaris 10 that make it a better operating system for both administrators and users. The security enhancements are a long time coming, but are worth the wait. Is Solaris 10 perfect? In a word, no, it is not.
But for most uses, including a desktop OS, I think Solaris 10 is a huge improvement over previous releases." A further review concluded: "We've had fun with Solaris 10. It's got virtues that we definitely admire. What it needs to compete with Linux will be easier to bring about than what it's already got. It could become a Linux killer, or at least a serious competitor on Linux's turf. The only question is whether Sun has the will to see it through."
https://en.wikipedia.org/wiki/Solaris_(operating_system)
macOS, previously OS X and originally Mac OS X, is a Unix-based[6][7] operating system developed and marketed by Apple since 2001. It is the current operating system for Apple's Mac computers. Within the market of desktop and laptop computers, it is the second most widely used desktop OS, after Microsoft Windows and ahead of all Linux distributions, including ChromeOS and SteamOS. As of 2024, the most recent release of macOS is macOS 15 Sequoia, the 21st major version of macOS.[8]

Mac OS X succeeded classic Mac OS, the primary Macintosh operating system from 1984 to 2001. Its underlying architecture came from NeXT's NeXTSTEP, as a result of Apple's acquisition of NeXT, which also brought Steve Jobs back to Apple. The first desktop version, Mac OS X 10.0, was released on March 24, 2001. Mac OS X Leopard and all later versions of macOS,[9] other than OS X Lion,[10] are UNIX 03 certified. The derivatives of macOS are Apple's other operating systems: iOS, iPadOS, watchOS, tvOS, audioOS and visionOS.

macOS has supported three major processor architectures: originally PowerPC-based Macs in 1999, Intel Core-based Macs from 2006, and self-designed 64-bit Arm Apple M series Macs since 2020.[11]

A prominent part of macOS's original brand identity was the use of the Roman numeral X, pronounced "ten", as well as code-naming each release after species of big cats and, later, places within California.[12] Apple shortened the name to "OS X" in 2011 and then changed it to "macOS" in 2016 to align with the branding of Apple's other operating systems.[13] After 16 distinct versions of macOS 10, macOS Big Sur was presented as version 11 in 2020, and every subsequent version has also incremented the major version number, similarly to classic Mac OS and iOS, but is still named after places within California.

The heritage of what would become macOS originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed before being launched in 1989. The kernel of NeXTSTEP is based upon the Mach kernel, which was originally developed at Carnegie Mellon University, with additional kernel layers and low-level user space code derived from parts of FreeBSD[14] and other BSD operating systems.[15] Its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language.

Throughout the 1990s, Apple had tried to create a "next-generation" OS to succeed its classic Mac OS through the Taligent, Copland and Gershwin projects, but all were eventually abandoned.[16] This led Apple to acquire NeXT in 1997, allowing NeXTSTEP, later called OPENSTEP, to serve as the basis for Apple's next-generation operating system.[17] This purchase also led to Steve Jobs returning to Apple as interim, and then permanent, CEO, shepherding the transformation of the programmer-friendly OPENSTEP into a system that would be adopted by Apple's primary market of home users and creative professionals. The project was first codenamed "Rhapsody" before officially being named Mac OS X.[18][19]

The letter "X" in Mac OS X's name refers to the number 10, a Roman numeral, and Apple has stated that it should be pronounced "ten" in this context. However, it is also commonly pronounced like the letter "X".[20][21] The iPhone X, iPhone XR and iPhone XS all later followed this convention.
Previous Macintosh operating systems (versions of theclassic Mac OS) were named usingArabic numerals, as withMac OS 8andMac OS 9.[22][20]UntilmacOS 11 Big Sur, all versions of the operating system were given version numbers of the form 10.x, with this going from 10.0 up until 10.15; starting withmacOS 11 Big Sur, Apple switched to numbering major releases with numbers that increase by 1 with every major release. The first version of Mac OS X,Mac OS X Server 1.0, was a transitional product, featuring an interface resembling theclassic Mac OS, though it was not compatible with software designed for the older system. Consumer releases of Mac OS X included morebackward compatibility. Mac OS applications could be rewritten to run natively via theCarbon API; many could also be run directly through theClassic Environmentwith a reduction in performance. The consumer version of Mac OS X was launched in 2001 withMac OS X 10.0. Reviews were variable, with extensive praise for its sophisticated, glossyAqua interface, but criticizing it for sluggish performance.[23]With Apple's popularity at a low, the maker ofFrameMaker,Adobe Inc., declined to develop new versions of it for Mac OS X.[24]Ars Technicacolumnist John Siracusa, who reviewed every major OS X release up to 10.10, described the early releases in retrospect as "dog-slow, feature poor" and Aqua as "unbearably slow and a huge resource hog".[23][25][26] Apple rapidly developed several new releases of Mac OS X.[27]Siracusa's review of version 10.3,Panther, noted "It's strange to have gone from years of uncertainty andvaporwareto a steady annual supply of major new operating system releases."[28]Version 10.4,Tiger, reportedly shocked executives atMicrosoftby offering a number of features, such as fast file searching and improved graphics processing, that Microsoft had spentseveral years strugglingto add toWindows Vistawith acceptable performance.[29] As the operating system evolved, it moved away from theclassic Mac OS, with applications being added and removed.[30]Considering music to be a key market, Apple developed theiPodmusic player and music software for the Mac, includingiTunesandGarageBand.[31]Targeting the consumer and media markets, Apple emphasized its new "digital lifestyle" applications such as theiLifesuite, integrated home entertainment through theFront Rowmedia center and theSafariweb browser. With the increasing popularity of the internet, Apple offered additional online services, including the .Mac,MobileMeand most recentlyiCloudproducts. It later began selling third-party applications through theMac App Store. Newer versions of Mac OS X also included modifications to the general interface, moving away from the striped gloss and transparency of the initial versions. Some applications began to use abrushed metalappearance, or non-pinstriped title bar appearance in version 10.4.[32]In Leopard, Apple announced a unification of the interface, with a standardized gray-gradient window style.[33][34] In 2006, the firstIntelMacs were released with a specialized version ofMac OS X 10.4 Tiger.[35] A key development for the system was the announcement and release of theiPhonefrom 2007 onwards. While Apple's previousiPodmedia players used aminimaloperating system, the iPhone used an operating system based on Mac OS X, which would later be called "iPhone OS" and theniOS. 
The simultaneous release of two operating systems based on the same frameworks placed tension on Apple, which cited the iPhone as forcing it to delayMac OS X 10.5 Leopard.[36]However, after Apple opened the iPhone to third-party developers its commercial success drew attention to Mac OS X, with many iPhone software developers showing interest in Mac development.[37] In 2007,Mac OS X 10.5 Leopardwas the sole release withuniversal binarycomponents, allowing installation on both Intel Macs and selectPowerPCMacs.[38]It is also the final release with PowerPC Mac support.Mac OS X 10.6 Snow Leopardwas the first version of Mac OS X to be built exclusively for Intel Macs, and the final release with 32-bit Intel Mac support.[39]The name was intended to signal its status as an iteration of Leopard, focusing on technical and performance improvements rather than user-facing features; indeed it was explicitly branded to developers as being a 'no new features' release.[40]Since its release, several OS X or macOS releases (namelyOS X Mountain Lion,OS X El Capitan,macOS High Sierra, andmacOS Monterey) follow this pattern, with a name derived from its predecessor, similar to the 'tick–tock model' used by Intel. In two succeeding versions,LionandMountain Lion, Apple moved some applications to a highlyskeuomorphicstyle of design inspired by contemporary versions of iOS while simplifying some elements by making controls such as scroll bars fade out when not in use.[25]This direction was, like brushed metal interfaces, unpopular with some users, although it continued a trend of greater animation and variety in the interface previously seen in design aspects such as theTime Machinebackuputility, which presented past file versions against a swirling nebula, and the glossy translucentdockofLeopardandSnow Leopard.[41]In addition, withMac OS X 10.7 Lion, Apple ceased to release separateserverversions of Mac OS X, selling server tools as a separate downloadable application through the Mac App Store. A review described the trend in the server products as becoming "cheaper and simpler... shifting its focus from large businesses to small ones."[42] In 2012, with the release ofOS X 10.8 Mountain Lion, the name of the system was officially shortened from Mac OS X to OS X, after theprevious versionshortened the system name in a similar fashion a year prior. That year, Apple removed the head of OS X development,Scott Forstall, and design was changed towards a more minimal direction.[43]Apple's new user interface design, using deep color saturation, text-only buttons and a minimal, 'flat' interface, was debuted withiOS 7in 2013. With OS X engineers reportedly working on iOS 7, the version released in 2013,OS X 10.9 Mavericks, was something of a transitional release, with some of the skeuomorphic design removed, while most of the general interface of Mavericks remained unchanged.[44]The next version,OS X 10.10 Yosemite, adopted a design similar toiOS 7but with greater complexity suitable for an interface controlled with a mouse.[45] From 2012 onwards, the system has shifted to an annual release schedule similar to that ofiOSand Mac OS X releases prior to10.4 Tiger[citation needed]. 
It also steadily cut the cost of updates from Snow Leopard onwards, before removing upgrade fees altogether inOS X Mavericks.[46]Some journalists and third-party software developers have suggested that this decision, while allowing more rapid feature release, meant less opportunity to focus on stability, with no version of OS X recommendable for users requiring stability and performance above new features.[47]Apple's 2015 update,OS X 10.11 El Capitan, was announced to focus specifically on stability and performance improvements.[48] In 2016, with the release ofmacOS 10.12 Sierra, the name was changed from OS X to macOS with the purpose of aligning it with the branding of Apple's other primary operating systems:iOS,watchOS, andtvOS.[49][50]macOS Sierra addedSiri,iCloud Drive, picture-in-picture support, a Night Shift mode that switches the display to warmer colors at night, and two Continuity features: Universal Clipboard, which syncs a user's clipboard across their Apple devices, and Auto Unlock, which can unlock a user's Mac with their Apple Watch. macOS Sierra also adds support for theApple File System(APFS), Apple's successor to the datedHFS+file system.[51][52][53]macOS 10.13 High Sierra, released in 2017, included performance improvements,Metal 2andHEVCsupport, and made APFS the default file system forSSDboot drives.[54] Its successor,macOS 10.14 Mojave, was released in 2018, adding a dark mode option and adynamic wallpaper setting.[55]It was succeeded bymacOS 10.15 Catalinain 2019, which replacesiTuneswith separate apps for different types of media, and introduces the Catalyst system for porting iOS apps.[56] In 2020, Apple announcedmacOS 11 Big Surat that year's WWDC. This was the first increment in the primary version number of macOS since the release ofMac OS X Public Betain 2000; updates to macOS 11 were given 11.x numbers, matching the version numbering scheme used by Apple's other operating systems. Big Sur brought major changes to the user interface and was the first version to run onApple Silicon, based on theARMarchitecture.[57]The numbering system started with Big Sur continued in 2021 withmacOS 12 Monterey, 2022 withmacOS 13 Ventura, 2023 withmacOS 14 Sonoma, and 2024 withmacOS 15 Sequoia. At macOS's core is aPOSIX-compliant operating system built on top of theXNUkernel,[81](which incorporated large parts ofFreeBSDkernel[14]) andFreeBSDuserland[14]for the standard Unix facilities available from thecommand line interface. Apple has released this family of software as afreeandopen sourceoperating system namedDarwin. On top of Darwin, Apple layered a number of components, including theAquainterface and theFinder, to complete theGUI-based operating system which is macOS.[82] With its original introduction as Mac OS X, the system brought a number of new capabilities to provide a more stable and reliable platform than its predecessor, theclassic Mac OS. For example,pre-emptive multitaskingandmemory protectionimproved the system's ability to run multiple applications simultaneously without them interrupting or corrupting each other. Many aspects of macOS's architecture are derived fromOPENSTEP, which was designed to be portable, to ease the transition from one platform to another. For example,NeXTSTEPwas ported from the original68k-based NeXT workstations tox86and other architectures before NeXT was purchased by Apple,[83]and OPENSTEP was later ported to thePowerPCarchitecture as part of theRhapsody project. 
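The Unix underpinnings described above are directly visible from user space. The short Python sketch below is meant only as an illustration and assumes it is run on a Mac; it queries the Darwin kernel identification through the standard platform module and the sysctl utility.

import platform
import subprocess

# platform.system() reports the Darwin kernel name rather than "macOS",
# while platform.mac_ver() reports the macOS product version layered on top.
print("Kernel:", platform.system(), platform.release())
print("macOS product version:", platform.mac_ver()[0])

# The same information is exposed through the BSD-style sysctl interface.
for key in ("kern.ostype", "kern.osrelease", "kern.version"):
    value = subprocess.run(["sysctl", "-n", key],
                           capture_output=True, text=True, check=True).stdout.strip()
    print(f"{key} = {value}")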
Prior to macOS High Sierra, and on drives other thansolid state drives(SSDs), the defaultfile systemisHFS+, which it inherited from the classic Mac OS. Operating system designerLinus Torvaldshad criticized HFS+, saying it is "probably the worst file system ever", whose design is "actively corrupting user data". He criticized thecase insensitivityof file names, a design made worse when Apple extended the file system to supportUnicode.[84][85] TheDarwinsubsystem in macOS manages the file system, which includes the Unixpermissionslayer. In 2003 and 2005, twoMacworldeditors expressed criticism of the permission scheme; Ted Landau called misconfigured permissions "the most common frustration" in macOS, while Rob Griffiths suggested that some users may even have toreset permissionsevery day, a process which can take up to 15 minutes.[86]More recently, another Macworld editor, Dan Frakes, called the procedure of repairing permissions vastly overused.[87]He argues that macOS typically handles permissions properly without user interference, and resetting permissions should only be tried when problems emerge.[88] The architecture of macOS incorporates a layered design:[89]the layered frameworks aid rapid development of applications by providing existing code for common tasks.[90]Apple provides its ownsoftware developmenttools, most prominently anintegrated development environmentcalledXcode. Xcode provides interfaces tocompilersthat support severalprogramming languagesincludingC,C++,Objective-C, andSwift. For theMac transition to Intel processors, it was modified so that developers could build their applications as auniversal binary, which provides compatibility with both the Intel-based and PowerPC-based Macintosh lines.[91]First and third-party applications can be controlled programmatically using theAppleScriptframework,[92]retained from theclassic Mac OS,[93]or using the newerAutomatorapplication that offers pre-written tasks that do not require programming knowledge.[94] Apple offered two mainAPIsto develop software natively for macOS:CocoaandCarbon. Cocoa was a descendant of APIs inherited fromOPENSTEPwith no ancestry from theclassic Mac OS, while Carbon was an adaptation of classic Mac OS APIs, allowing Mac software to be minimally rewritten to run natively on Mac OS X.[19] The Cocoa API was created as the result of a 1993 collaboration betweenNeXTComputer andSun Microsystems. This heritage is highly visible for Cocoa developers, since the "NS" prefix is ubiquitous in the framework, standing variously forNeXTSTEP orNeXT/Sun. The official OPENSTEP API, published in September 1994, was the first to split the API between Foundation and ApplicationKit and the first to use the "NS" prefix.[83]Traditionally, Cocoa programs have been mostly written inObjective-C, with Java as an alternative. However, on July 11, 2005, Apple announced that "features added to Cocoa in Mac OS X versions later than 10.4 will not be added to the Cocoa-Java programming interface."[95]macOS also used to support theJava Platformas a "preferred software package"—in practice this means that applications written in Java fit as neatly into the operating system as possible while still beingcross-platformcompatible, and that graphical user interfaces written inSwinglook almost exactly like native Cocoa interfaces. Since 2014, Apple has promoted its new programming languageSwiftas the preferred language for software development on Apple platforms. 
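As a small illustration of the scripting support mentioned above, the hedged Python sketch below uses the osascript command-line tool to run AppleScript from outside any application; the notification text is arbitrary, and the snippet assumes a recent macOS release where osascript and Notification Center are available.

import subprocess

def run_applescript(script: str) -> str:
    """Run an AppleScript snippet via osascript and return its output."""
    result = subprocess.run(["osascript", "-e", script],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Ask the Finder for its version through its AppleScript interface.
finder_version = run_applescript('tell application "Finder" to get version')
print("Finder version:", finder_version)

# Post a user notification (handled by Notification Center since OS X 10.9).
run_applescript('display notification "Scripted from Python" with title "AppleScript example"')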
Apple's original plan with macOS was to require all developers to rewrite their software into the Cocoa APIs. This caused much outcry among existing Mac developers, who threatened to abandon the platform rather than invest in a costly rewrite, and the idea was shelved.[19][96]To permit a smooth transition from Mac OS 9 to Mac OS X, theCarbonApplication Programming Interface(API) was created.[19]Applications written with Carbon were initially able to run natively on both classic Mac OS and Mac OS X, although this ability was later dropped as Mac OS X developed. Carbon was not included in the first product sold as Mac OS X: the little-used original release ofMac OS X Server 1.0, which also did not include the Aqua interface.[97]Apple limited further development of Carbon from the release of Leopard onwards and announced that Carbon applications would not run at 64-bit.[96][19]A number of macOS applications continued to use Carbon for some time afterwards, especially ones with heritage dating back to the classic Mac OS and for which updates would be difficult, uneconomic or not necessary. This includedMicrosoft Officeup toOffice 2016, and Photoshop up to CS5.[98][96]Early versions of macOS could also run some classic Mac OS applications through theClassic Environmentwith performance limitations; this feature was removed from 10.5 onwards and all Macs using Intel processors. Because macOS isPOSIXcompliant, many software packages written for the otherUnix-likesystems includingLinuxcan be recompiled to run on it, including many scientific and technical programs.[99]Third-party projects such asHomebrew,Fink,MacPortsandpkgsrcprovide pre-compiled or pre-formatted packages. Apple and others have provided versions of theX Window Systemgraphical interface which can allow these applications to run with an approximation of the macOS look-and-feel.[100][101][102]The current Apple-endorsed method is the open-sourceXQuartzproject; earlier versions could use theX11application provided by Apple, or before that theXDarwinproject.[103] Applications can be distributed to Macs and installed by the user from any source and by any method such as downloading (with or withoutcode signing, available via an Apple developer account) or through theMac App Store, a marketplace of software maintained by Apple through a process requiring the company's approval. Apps installed through the Mac App Store run within asandbox, restricting their ability to exchange information with other applications or modify the core operating system and its features. This has been cited as an advantage, by allowing users to install apps with confidence that they should not be able to damage their system, but also as a disadvantage due to blocking the Mac App Store's use for professional applications that require elevated privileges.[104][105]Applications without any code signature cannot be run by default except from a computer's administrator account.[106][107] Apple produces macOS applications. Some are included with macOS and some sold separately. This includesiWork,Final Cut Pro,Logic Pro,iLife, and the database applicationFileMaker. Numerous other developers also offersoftware for macOS. 
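The code-signing checks described above can be inspected with Apple's command-line tools. The following Python sketch is a rough example, assuming a macOS system with the codesign and spctl utilities present; the application path is only an example and any installed app bundle could be substituted.

import subprocess

APP = "/Applications/Safari.app"   # example app bundle path

# Verify the bundle's code signature (exit status 0 means the signature is valid).
signature = subprocess.run(["codesign", "--verify", "--deep", "--strict", APP])
print("Signature valid:", signature.returncode == 0)

# Ask the Gatekeeper assessment service (spctl) whether the bundle would be allowed to run.
assessment = subprocess.run(["spctl", "--assess", "--type", "execute", "--verbose", APP],
                            capture_output=True, text=True)
print(assessment.stderr.strip() or assessment.stdout.strip())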
In 2018, Apple introduced an application layer, codenamed Marzipan, toportiOS apps to macOS.[108][109]macOS Mojave included ports of four first-party iOS apps includingHomeandNews, and it was announced that the API would be available for third-party developers to use from 2019.[110][111][112]WithmacOS Catalinain 2019, the application layer was made available to third-party developers asMac Catalyst.[113] List of macOS versions, the supported systems on which they run, and their RAM requirements Tools such asXPostFactoand patches applied to the installation media have been developed by third parties to enable installation of newer versions of macOS on systems not officially supported by Apple. This includes a number of pre-G3 Power Macintosh systems that can be made to run up to and including Mac OS X 10.2 Jaguar, all G3-based Macs which can run up to and including Tiger, and sub-867 MHz G4 Macs can run Leopard by removing the restriction from the installation DVD or entering a command in the Mac'sOpen Firmwareinterface to tell the Leopard Installer that it has a clock rate of 867 MHz or greater. Except for features requiring specific hardware such as graphics acceleration or DVD writing, the operating system offers the same functionality on all supported hardware. As most Mac hardware components, or components similar to those, since the Intel transition are available for purchase,[118]some technology-capable groups have developed software to install macOS on non-Apple computers. These are referred to asHackintoshes, aportmanteauof the words "hack" and "Macintosh". This violates Apple'sEULA(and is therefore unsupported by Apple technical support, warranties etc.), but communities that cater to personal users, who do not install for resale and profit, have generally been ignored by Apple.[119][120][121]These self-made computers allow more flexibility and customization of hardware, but at a cost of leaving the user more responsible for their own machine, such as on matter of data integrity or security.[122]Psystar, a business that attempted to profit from selling macOS on non-Apple certified hardware, was sued by Apple in 2008.[123] In April 2002, eWeek announced a rumor that Apple had a version of Mac OS X code-namedMarklar, which ran onIntel x86processors. The idea behind Marklar was to keep Mac OS X running on an alternative platform should Apple become dissatisfied with the progress of thePowerPCplatform.[124]These rumors subsided until late in May 2005, when various media outlets, such asThe Wall Street Journal[125]andCNET,[126]announced that Apple would unveil Marklar in the coming months.[127][128][129] On June 6, 2005, Steve Jobs announced in his keynote address at WWDC that Apple would be making the transition from PowerPC toIntelprocessors over the following two years, and that Mac OS X would support both platforms during the transition. Jobs also confirmed rumors that Apple had versions of Mac OS X running on Intel processors for most of its developmental life. Intel-based Macs would run a new recompiled version of OS X along withRosetta, abinary translationlayer which enables software compiled for PowerPC Mac OS X to run on Intel Mac OS X machines.[130]The system was included with Mac OS X versions up to version 10.6.8.[131]Apple dropped support for Classic mode on the new Intel Macs. Third party emulation software such asMini vMac,Basilisk IIandSheepShaverprovided support for some early versions of Mac OS. 
A new version of Xcode and the underlying command-line compilers supported building universal binaries that would run on either architecture.[132]

PowerPC-only software is supported with Apple's official binary translation software, Rosetta, though applications eventually had to be rewritten to run properly on the newer versions released for Intel processors. Apple initially encouraged developers to produce universal binaries with support for both PowerPC and Intel.[133] PowerPC binaries suffer a performance penalty when run on Intel Macs through Rosetta. Moreover, some PowerPC software, such as kernel extensions and System Preferences plug-ins, is not supported on Intel Macs at all. Plug-ins for Safari need to be compiled for the same platform as Safari, so when Safari is running on Intel it requires plug-ins that have been compiled as Intel-only or universal binaries; PowerPC-only plug-ins will not work.[134] While Intel Macs can run PowerPC, Intel, and universal binaries, PowerPC Macs support only universal and PowerPC builds.

Support for the PowerPC platform was dropped following the transition. In 2009, Apple announced at WWDC that Mac OS X 10.6 Snow Leopard would drop support for PowerPC processors and be Intel-only.[135] Rosetta continued to be offered as an optional download or installation choice in Snow Leopard before it was discontinued with Mac OS X 10.7 Lion.[136] In addition, new versions of Mac OS X first- and third-party software increasingly required Intel processors, including new versions of iLife, iWork, Aperture and Logic Pro.

Rumors of Apple shifting Macs from Intel to the in-house ARM processors used by iOS devices began circulating as early as 2011[137] and ebbed and flowed throughout the 2010s.[138] Rumors intensified in 2020, when numerous reports said that the company would announce its shift to its custom processors at WWDC.[139]

Apple officially announced its shift to processors designed in-house on June 22, 2020, at WWDC 2020, with the transition planned to last for approximately two years.[140] The first release of macOS to support ARM was macOS Big Sur. Big Sur and later versions support Universal 2 binaries, which are applications consisting of both Intel (x86-64) and Apple silicon (AArch64) binaries; when launched, only the appropriate binary is run. Additionally, Intel binaries can be run on Apple silicon-based Macs using the Rosetta 2 binary translation software. The transition was completed at WWDC 2023 with the announcement of the Apple silicon Mac Pro, roughly three years after it began and slightly behind schedule. The change in processor architecture allows Macs with ARM processors to run iOS and iPadOS apps natively.[141]
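A hedged Python sketch of how the architecture slices of a binary, and Rosetta 2 translation, can be inspected from the command line is shown below; it assumes macOS with the bundled lipo tool, and the sysctl key used to detect translation is only meaningful on Apple silicon systems.

import subprocess

BINARY = "/bin/ls"   # any Mach-O executable; recent system binaries ship as universal

# lipo reports which architecture slices a (possibly universal) binary contains.
subprocess.run(["lipo", "-info", BINARY], check=True)

# On Apple silicon, sysctl.proc_translated is 1 when the calling process is
# being translated by Rosetta 2 and 0 when running natively; the key is not
# available on Intel Macs, so the call is not checked for success.
translated = subprocess.run(["sysctl", "-n", "sysctl.proc_translated"],
                            capture_output=True, text=True)
if translated.returncode == 0:
    print("Running under Rosetta 2:", translated.stdout.strip() == "1")
else:
    print("Rosetta translation status not reported on this machine")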
The use of soft edges, translucent colors, and pinstripes, similar to the hardware design of the firstiMacs, brought more texture and color to the user interface when compared to whatMac OS 9andMac OS X Server 1.0's "Platinum" appearance had offered. According to Siracusa, the introduction of Aqua and its departure from the then conventional look "hit like a ton of bricks."[144]Bruce Tognazzini(who founded the original Apple Human Interface Group) said that the Aqua interface inMac OS X 10.0represented a step backwards in usability compared with the original Mac OS interface.[145][146]Third-party developers started producingskinsfor customizable applications and other operating systems which mimicked the Aqua appearance. To some extent, Apple has used the successful transition to this new design as leverage, at various times threateninglegal actionagainst people who make or distribute software with an interface the company says is derived from itscopyrighteddesign.[147] Apple has continued to change aspects of the macOS appearance and design, particularly with tweaks to the appearance of windows and the menu bar. Since 2012, Apple has sold almost all of its Mac models with high-resolutionRetina displays, and macOS and itsAPIshave extensive support for resolution-independent development on supporting high-resolution displays. Reviewers have described Apple's support for the technology as superior to that on Windows.[148][149][150] Thehuman interface guidelinespublished by Apple for macOS are followed by many applications, giving them consistent user interface and keyboard shortcuts.[151]In addition, new services for applications are included, which include spelling and grammar checkers, special characters palette, color picker, font chooser and dictionary; these global features are present in every Cocoa application, adding consistency. The graphics systemOpenGLcomposites windows onto the screen to allow hardware-accelerated drawing. This technology, introduced in version 10.2, is calledQuartz Extreme, a component ofQuartz. Quartz's internal imaging model correlates well with thePortable Document Format(PDF) imaging model, making it easy to output PDF to multiple devices.[143]As a side result, PDF viewing and creating PDF documents from any application are built-in features.[152]Reflecting its popularity with design users, macOS also has system support for a variety of professional video and image formats and includes an extensive pre-installed font library, featuring many prominent brand-name designs.[153] TheFinderis a file browser allowing quick access to all areas of the computer, which has been modified throughout subsequent releases of macOS.[154][155]Quick Lookhas been part of the Finder sinceversion 10.5. It allows for dynamic previews of files, including videos and multi-page documents without opening any other applications.Spotlight, a file searching technology which has been integrated into the Finder sinceversion 10.4, allows rapid real-time searches of data files; mail messages; photos; and other information based on item properties (metadata) or content.[156][157]macOS makes use of aDock, which holds file and folder shortcuts as well as minimized windows. Apple added Exposé inversion 10.3(calledMission Controlsinceversion 10.7), a feature which includes three functions to help accessibility between windows and desktop. 
Its functions are to instantly reveal all open windows as thumbnails for easy navigation to different tasks, display all open windows as thumbnails from the current application, and hide all windows to access the desktop.[158]FileVaultis optional encryption of the user's files with the 128-bitAdvanced Encryption Standard(AES-128).[159] Features introduced inversion 10.4includeAutomator, an application designed to create an automatic workflow for different tasks;[160]Dashboard, a full-screen group of small applications calleddesktop widgetsthat can be called up and dismissed in one keystroke;[161]andFront Row, a media viewer interface accessed by theApple Remote.[162]Sync Services allows applications to access a centralized extensible database for various elements of user data, including calendar and contact items. The operating system then managed conflicting edits and data consistency.[163] All system icons are scalable up to 512×512 pixels as ofversion 10.5to accommodate various places where they appear in larger size, including for example theCover Flowview, athree-dimensionalgraphical user interface included withiTunes, the Finder, and other Apple products for visually skimming through files and digital media libraries via cover artwork. That version also introducedSpaces, avirtual desktopimplementation which enables the user to have more than one desktop and display them in an Exposé-like interface;[164]an automatic backup technology calledTime Machine, which allows users to view and restore previous versions of files and application data;[165]and Screen Sharing was built in for the first time.[166] In more recent releases, Apple has developed support foremojicharacters by including the proprietaryApple Color Emojifont.[167][168]Apple has also connected macOS with social networks such asTwitterandFacebookthrough the addition of share buttons for content such as pictures and text.[169]Apple has brought several applications and features that originally debuted iniOS, its mobile operating system, to macOS in recent releases, notably theintelligent personal assistantSiri, which was introduced inversion 10.12of macOS.[170][171] There are 47 system languages available in macOS for the user at the moment of installation; the system language is used throughout the entire operating system environment.[172]Input methods for typing in dozens of scripts can be chosen independently of the system language.[173]Recent updates have added increased support forChinese charactersand interconnections with popular social networks inChina.[174][175][176][177] macOS can be updated using the Software Update settings pane inSystem Settingsor thesoftwareupdatecommand lineutility. UntilOS X 10.8 Mountain Lion, a separateSoftware Updateapplication performed this functionality. In Mountain Lion and later, this was merged into theMac App Storeapplication, although the underlying update mechanism remains unchanged and is fundamentally different from the download mechanism used when purchasing an App Store application. InmacOS 10.14 Mojave, the updating function was moved again to the Software Update settings pane. Most Macs receive six or seven years of macOS updates. 
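As a brief illustration of the softwareupdate utility mentioned above, the Python sketch below lists available updates and, optionally, installs them; it is only an outline and assumes administrative privileges when installing.

import subprocess

# List updates available for the installed macOS release.
listing = subprocess.run(["softwareupdate", "--list"],
                         capture_output=True, text=True)
print(listing.stdout or listing.stderr)

# Download and install everything available (normally requires sudo, and
# operating system updates may trigger a restart).
INSTALL = False   # flip to True to actually install
if INSTALL:
    subprocess.run(["softwareupdate", "--install", "--all"], check=True)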
After a new major release of macOS, the previous two releases still receive occasional updates, but many security vulnerabilities are only patched in the latest macOS release.[178] Mac OS X versions were named afterbig cats, with the exception ofMac OS X Server 1.0and the original public beta, fromMac OS X 10.0untilOS X 10.9 Mavericks, when Apple switched to usingCalifornialocations. Prior to its release, version 10.0 wascode namedinternally at Apple as "Cheetah", andMac OS X 10.1was code named internally as "Puma". After the immense buzz surroundingMac OS X 10.2, codenamed "Jaguar", Apple's product marketing began openly using the code names to promote the operating system.Mac OS X 10.3was marketed as "Panther",Mac OS X 10.4as "Tiger",Mac OS X 10.5as "Leopard",Mac OS X 10.6as "Snow Leopard",Mac OS X 10.7as "Lion",OS X 10.8as "Mountain Lion", andOS X 10.9as "Mavericks". "Panther", "Tiger" and "Leopard" are registered as trademarks of Apple,[179][180][181]but "Cheetah", "Puma" and "Jaguar" have never been registered. Apple has also registered "Lynx" and "Cougar" as trademarks, though these were allowed to lapse.[182][183]Computer retailerTiger Directsued Apple for its use of the name "Tiger". On May 16, 2005, a US federal court in the Southern District of Florida ruled that Apple's use did not infringe on Tiger Direct's trademark.[184] On September 13, 2000, Apple released a US$29.95[185]"preview" version of Mac OS X, internally codenamed Kodiak, to gain feedback from users. The "PB", as it was known, marked the first public availability of the Aqua interface and Apple made many changes to the UI based on customer feedback. Mac OS X Public Beta expired and ceased to function in Spring 2001.[186] On March 24, 2001, Apple released Mac OS X 10.0 (internally codenamed Cheetah).[187]The initial version was slow,[188]incomplete,[189]and had very few applications available at launch, mostly from independent developers.[190]While many critics suggested that the operating system was not ready for mainstream adoption, they recognized the importance of its initial launch as a base on which to improve.[189]Simply releasing Mac OS X was received by the Macintosh community as a great accomplishment,[189]for attempts to overhaul the Mac OS had been underway since 1996, and delayed by countless setbacks. Later that year, on September 25, 2001, Mac OS X 10.1 (internally codenamed Puma) was released. It featured increased performance and provided missing features, such as DVD playback. Apple released 10.1 as a free upgrade CD for 10.0 users, in addition to the $129 boxed version for people runningMac OS 9. It was discovered that the upgrade CDs were full install CDs that could be used with Mac OS 9 systems by removing a specific file; Apple later re-released the CDs in an actual stripped-down format that did not facilitate installation on such systems.[191]On January 7, 2002, Apple announced that Mac OS X was to be the default operating system for all Macintosh products by the end of that month.[192] On August 23, 2002,[193]Apple followed up with Mac OS X 10.2 Jaguar, the first release to use its code name as part of the branding.[194]It brought significant performance improvements, and an updated version of Aqua's visual design. 
Jaguar also included over 150[195]new user-facing features, includingQuartz Extremefor compositing graphics directly on anATIRadeonorNvidiaGeForce2MX AGP-based video card with at least 16 MB of VRAM, a system-wide repository for contact information in the newAddress Book, and theiChatinstant messaging client.[196]TheHappy Macicon — which had appeared during the Mac OS startup sequence since theoriginal Macintosh— was replaced with a grey Apple logo.[197] Mac OS X v10.3Panther was released on October 24, 2003. It significantly improved performance and incorporated the most extensive update yet to the user interface. Panther included as many or more new features as Jaguar had the year before, including an updated Finder, incorporating a brushed-metal interface,Fast user switching,Exposé(Window manager),FileVault,Safari, iChat AV (which addedvideo conferencingfeatures to iChat), improvedPortable Document Format(PDF) rendering and much greaterMicrosoft Windowsinteroperability.[198]Support for some early G3 computers such as "beige" Power Macs and "WallStreet" PowerBooks was discontinued.[199] Mac OS X 10.4 Tiger was released on April 29, 2005. Apple stated that Tiger contained more than 200 new features.[200]As with Panther, certain older machines were no longer supported; Tiger requires a Mac with 256 MB and a built-inFireWireport.[115]Among the new features, Tiger introducedSpotlight,Dashboard,Smart Folders, updated Mail program with Smart Mailboxes,QuickTime7,Safari2,Automator,VoiceOver,Core ImageandCore Video. The initial release of theApple TVused a modified version of Tiger with a different graphical interface and fewer applications and services.[201]On January 10, 2006, Apple released the first Intel-based Macs along with the 10.4.4 update to Tiger. This operating system functioned identically on the PowerPC-based Macs and the new Intel-based machines, with the exception of the Intel release lacking support for the Classic environment.[202] Mac OS X 10.5 Leopard was released on October 26, 2007. It was called by Apple "the largest update of Mac OS X". It brought more than 300 new features.[203]Leopard supports bothPowerPC- andIntel x86-based Macintosh computers; support for the G3 processor was dropped and the G4 processor required a minimum clock rate of 867 MHz, and at least 512 MB ofRAMto be installed. The single DVD works for all supported Macs (including 64-bit machines). New features include a new look, an updated Finder,Time Machine,Spaces,Boot Camppre-installed,[204]full support for64-bitapplications (including graphical applications), new features inMailandiChat, and a number of new security features. Leopard is anOpen Brand UNIX 03registered product on the Intel platform. It was also the firstBSD-basedOS to receive UNIX 03 certification.[205][206]Leopard dropped support for theClassic Environmentand all Classic applications.[207]It was the final version of Mac OS X to support the PowerPC architecture.[208] Mac OS X 10.6 Snow Leopard was released on August 28, 2009. Rather than delivering big changes to the appearance and end user functionality like the previous releases ofMac OS X, Snow Leopard focused on "under the hood" changes, increasing the performance, efficiency, and stability of the operating system. 
For most users, the most noticeable changes were: the disk space that the operating system frees up after a clean install compared to Mac OS X 10.5 Leopard, a more responsiveFinderrewritten inCocoa, fasterTime Machinebackups, more reliable and user-friendly disk ejects, a more powerful version of the Preview application, as well as a fasterSafariweb browser. Snow Leopard only supported machines with Intel CPUs, required at least 1 GB ofRAM, and dropped default support for applications built for thePowerPCarchitecture (Rosettacould be installed as an additional component to retain support for PowerPC-only applications).[209] Snow Leopard also featured new64-bittechnology capable of supporting greater amounts ofRAM, improved support for multi-core processors throughGrand Central Dispatch, and advanced GPU performance withOpenCL.[210] The 10.6.6 update introduced support for theMac App Store, Apple's digital distribution platform for macOS applications.[211] OS X 10.7 Lion was released on July 20, 2011. It brought developments made in Apple's iOS, such as an easily navigable display of installed applications calledLaunchpadand a greater use ofmulti-touchgestures, to the Mac. This release removedRosetta, making it incompatible with PowerPC applications.[136] Changes made to the GUI include auto-hiding scrollbars that only appear when they are used, andMission Controlwhich unifies Exposé, Spaces, Dashboard, and full-screen applications within a single interface.[212]Apple also made changes to applications: they resume in the same state as they were before they were closed, similar to iOS. Documents auto-save by default.[213] OS X 10.8 Mountain Lion was released on July 25, 2012.[74]Following the release of Lion the previous year, it was the first of the annual rather than two-yearly updates to OS X (and later macOS), which also closely aligned with the annual iOS operating system updates. It incorporates some features seen in iOS 5, which includeGame Center, support foriMessagein the newMessagesmessaging application, andRemindersas a to-do list app separate fromiCal(which is renamed as Calendar, like the iOS app). It also includes support for storingiWorkdocuments iniCloud.[214]Notification Center, which makes its debut in Mountain Lion, is a desktop version similar to the one in iOS 5.0 and higher. Application pop-ups are now concentrated on the corner of the screen, and the Center itself is pulled from the right side of the screen. Mountain Lion also includes more Chinese features including support forBaiduas an option forSafarisearch engine,QQ,163.comand 126.com services forMail,ContactsandCalendar,Youku,TudouandSina Weiboare integrated into share sheets.[177] Starting with Mountain Lion, Apple software updates (including the OS) are distributed via theApp Store.[215]This updating mechanism replaced the Apple Software Update utility.[216] OS X 10.9 Mavericks was released on October 22, 2013. It was a free upgrade to all users running Snow Leopard or later with a 64-bit Intel processor.[217]Its changes include the addition of the previously iOS-onlyMapsandiBooksapplications, improvements to the Notification Center, enhancements to several applications, and many under-the-hood improvements.[218] OS X 10.10 Yosemite was released on October 16, 2014. 
It features a redesigned user interface similar to that ofiOS 7, intended to feature a more minimal, text-based 'flat' design, with use of translucency effects and intenselysaturated colors.[219]Apple's showcase new feature in Yosemite is Handoff, which enables users with iPhones running iOS 8.1 or later to answer phone calls, receive and send SMS messages, and complete unfinished iPhone emails on their Mac. As of OS X 10.10.3,PhotosreplacediPhotoandAperture.[220] OS X 10.11 El Capitan was released on September 30, 2015. Similar to Mac OS X 10.6 Snow Leopard, Apple described this release as emphasizing "refinements to the Mac experience" and "improvements to system performance".[221]Refinements include public transport built into theMapsapplication, GUI improvements to theNotesapplication, adoptingSan Franciscoas the system font for clearer legibility, and the introduction ofSystem Integrity Protection. TheMetal API, first introduced iniOS 8, was also included in this operating system for "all Macs since 2012".[222]According to Apple, Metal accelerates system-level rendering by up to 50 percent, resulting in faster graphics performance for everyday apps. Metal also delivers up to 10 times faster draw call performance for more fluid experience in games and pro apps.[223] macOS 10.12 Sierra was released to the public on September 20, 2016. New features include the addition ofSiri, Optimized Storage, and updates to Photos, Messages, and iTunes.[224][225] macOS 10.13 High Sierra was released to the public on September 25, 2017.[226]LikeOS X El CapitanandOS X Mountain Lion, High Sierra is a refinement-based update having very few new features visible to the user, including updates to Safari, Photos, and Mail, among other changes.[227] The major change under the hood is the switch to theApple File System, optimized for the solid-state storage used in most new Mac computers.[228] macOS 10.14 Mojave was released on September 24, 2018.[55]The update introduced a system-widedark modeand several new apps lifted from iOS, such asApple News. It was the first version to require a GPU that supports Metal. Mojave also changed the system software update mechanism from the App Store (where it had been sinceOS X Mountain Lion) to a new panel in System Preferences. App updates remain in the App Store. macOS 10.15 Catalina was released on October 7, 2019.[229]Updates included enhanced voice control, and bundled apps for music, video, and podcasts that together replace the functions of iTunes, and the ability to use an iPad as an external monitor. Catalina officially dropped support for 32-bit applications.[230] macOS Big Surwas announced during the WWDC keynote speech on June 22, 2020,[231]and it was made available to the general public on November 12, 2020. This is the first time the major version number of the operating system has been incremented since theMac OS X Public Betain 2000. 
It brings Arm support,[232] new icons, and aesthetic user interface changes to the system.[233] macOS Monterey was announced during the WWDC keynote speech on June 7, 2021, and released on October 25, 2021, introducing Universal Control (which allows input devices to be used with multiple devices simultaneously), Focus modes (which allow notifications and alerts to be limited selectively depending on user-defined personal/work modes), Shortcuts (a task automation framework previously available only on iOS and iPadOS, expected to replace Automator), a redesigned Safari web browser, and updates and improvements to FaceTime.[234] macOS Ventura was announced during the WWDC keynote speech on June 6, 2022[235] and released on October 24, 2022.[236] It came with System Preferences redesigned in a more iOS-like style and renamed System Settings, and the new Freeform, Weather and Clock apps that run natively on the Mac. Users can use an iPhone as a webcam for video conferencing with Continuity Camera. Siri's appearance was changed to look more like the versions on iOS 14 and iPadOS 14. Mail introduced scheduled send and undo send for emails, and Messages also gained the ability to undo send and edit messages. Stage Manager was introduced as a new way to organize all open windows on a desktop. Maps gained support for multiple-stop routes, Metal 3 was added with support for spatial and temporal image upscaling, Lockdown Mode was added to reduce the risk of a cyberattack, and the ability to play ambient background sounds was added as an accessibility feature in System Settings. macOS Sonoma was announced during the WWDC keynote speech on June 5, 2023, and released on September 26, 2023.[237] macOS Sonoma revamped widgets, which can now be placed anywhere on the desktop. Game Mode optimizes game performance by prioritizing gaming tasks and allocating more GPU and CPU capacity to the game, providing smoother frame rates for gameplay. The Spotlight search bar and all app icons were made even more rounded, smoother animations were implemented for notifications and the lock screen, and new slow-motion screensavers of locations around the world were added; after login, they gradually slow down and become the desktop wallpaper. macOS Sequoia was announced during the WWDC keynote speech on June 10, 2024. It adds support for Apple Intelligence features (for example a redesigned Siri, writing tools, Image Playground, Genmoji, and system-wide integration with GPT-4o), as well as iPhone Mirroring, a new dedicated Passwords app for faster autofilling and better-organized passwords, and window tiling, a feature similar to the Aero Snap window snapping of Microsoft Windows.[238] Apple publishes Apple Platform Security documents to lay out the security protections built into macOS and Mac hardware.[239] macOS supports additional hardware-based security features on Apple silicon Macs.[240] macOS's optional Lockdown Mode enables additional protections: just-in-time compilation is disabled for Safari's JavaScript engine, incoming FaceTime calls are blocked unless the caller has previously been called, location information is excluded when photos are shared, Game Center is disabled, and accessories must be approved while the Mac is unlocked before they can connect. These restrictions prevent some vulnerabilities within macOS from being exploited.[242] Only the latest major release of macOS (currently macOS Sequoia) receives patches for all known security vulnerabilities. The previous two releases receive some security updates, but not for all vulnerabilities known to Apple.
In 2021, Apple fixed a critical privilege escalation vulnerability in macOS Big Sur, but a fix remained unavailable for the previous release, macOS Catalina, for 234 days, until Apple was informed that the vulnerability was being used to infect the computers of Hong Kong citizens and other people who visited Hong Kong pro-democracy websites that may have been blocked in Hong Kong.[243][244] macOS Ventura added support for Rapid Security Response (RSR) updates and Lockdown Mode. Rapid Security Response updates may require a reboot, but take less than a minute to install.[245][246] In an analysis, Hackintosh developer Mykola Grymalyuk noted that RSR updates can only fix userland vulnerabilities and cannot patch the macOS kernel.[247] Lockdown Mode is an optional security feature designed to provide extreme protection for users who may be at risk of targeted cyberattacks, such as journalists, activists, and public figures. This mode significantly alters the functionality of the device to enhance security against sophisticated threats, particularly from spyware and state-sponsored attacks. Apple says most people are never affected by such attacks.[248] In its earlier years, Mac OS X enjoyed a near-absence of the types of malware and spyware that have affected Microsoft Windows users.[249][250][251] macOS has a smaller usage share compared to Windows.[252] Worms, as well as potential vulnerabilities, were noted in 2006, which led some industry analysts and anti-virus companies to issue warnings that Apple's Mac OS X was not immune to malware.[253] Increasing market share coincided with additional reports of a variety of attacks.[254] In early 2011, Mac OS X experienced a large increase in malware attacks,[255] and malware such as Mac Defender, MacProtector, and MacGuard was seen as an increasing problem for Mac users. At first, the malware installer required the user to enter the administrative password, but later versions installed without user input.[256] Initially, Apple support staff were instructed not to assist in the removal of the malware or to admit the existence of the malware issue, but as the malware spread, a support document was issued. Apple announced an OS X update to fix the problem. An estimated 100,000 users were affected.[257][258] Apple releases security updates for macOS regularly,[259] as well as signature files containing malware signatures for XProtect, an anti-malware feature that has been part of File Quarantine since Mac OS X Snow Leopard.[260] As of January 2023[update], macOS is the second-most widely used general-purpose desktop operating system on the World Wide Web after Microsoft Windows, with a 15.33% usage share, according to statistics compiled by StatCounter.[261] As a device company, Apple has mostly promoted macOS in order to sell Macs, with promotion of macOS updates focused on existing users, promotion at Apple Stores and other retail partners, or through events for developers. In larger-scale advertising campaigns, Apple specifically promoted macOS as better for handling media and other home-user applications, and contrasted Mac OS X (especially versions Tiger and Leopard) with the heavy criticism Microsoft received for the long-awaited Windows Vista operating system.[262][263]
https://en.wikipedia.org/wiki/MacOS
FreeBSD is a free-software Unix-like operating system descended from the Berkeley Software Distribution (BSD). The first version was released in 1993, developed from 386BSD,[4] one of the first fully functional and free Unix clones for affordable home-class hardware, and FreeBSD has since continuously been the most commonly used BSD-derived operating system.[5][6][7] FreeBSD maintains a complete system, delivering a kernel, device drivers, userland utilities, and documentation, in contrast to Linux, which delivers only a kernel and drivers and relies on third parties such as GNU for system software.[8] The FreeBSD source code is generally released under a permissive BSD license, as opposed to the copyleft GPL used by Linux. The project includes a security team overseeing all software shipped in the base distribution. Third-party applications may be installed using the pkg package management system or from source via FreeBSD Ports.[9] The project is supported and promoted by the FreeBSD Foundation. Much of FreeBSD's codebase has become an integral part of other operating systems such as Darwin (the basis for macOS, iOS, iPadOS, watchOS, and tvOS), TrueNAS (an open-source NAS/SAN operating system), and the system software for the PlayStation 3,[10][11][12] PlayStation 4,[13] PlayStation 5,[14] and PlayStation Vita[15] game consoles. The other current BSD systems (OpenBSD, NetBSD, and DragonFly BSD) also contain a large amount of FreeBSD code, and vice versa.[citation needed] In 1974, Professor Bob Fabry of the University of California, Berkeley, acquired a Unix source license from AT&T.[16] Supported by funding from DARPA, the Computer Systems Research Group started to modify and improve AT&T Research Unix. The group called this modified version "Berkeley Unix" or "Berkeley Software Distribution" (BSD), implementing features such as TCP/IP, virtual memory, and the Berkeley Fast File System. The BSD project was founded in 1976 by Bill Joy, but since BSD contained code from AT&T Unix, all recipients first had to obtain a license from AT&T in order to use BSD.[17] In June 1989, "Networking Release 1" or simply Net-1 – the first public version of BSD – was released. After releasing Net-1, Keith Bostic, a developer of BSD, suggested replacing all AT&T code with freely redistributable code under the original BSD license. Work on replacing AT&T code began and, after 18 months, much of the AT&T code was replaced. However, six files containing AT&T code remained in the kernel. The BSD developers decided to release "Networking Release 2" (Net-2) without those six files. Net-2 was released in 1991.[17] In 1992, several months after the release of Net-2, William and Lynne Jolitz wrote replacements for the six AT&T files, ported BSD to Intel 80386-based microprocessors, and called their new operating system 386BSD. They released 386BSD via an anonymous FTP server.[17] Development of 386BSD was slow, and after a period of neglect, a group of 386BSD users including Nate Williams, Rod Grimes and Jordan Hubbard[18] decided to branch out on their own so that they could keep the operating system up to date. On 19 June 1993, the name FreeBSD was chosen for the project.[19] The first version of FreeBSD was released in November 1993.[20][17] In the project's early days, a company named Walnut Creek CDROM, upon the suggestion of the two FreeBSD developers, agreed to release the operating system on CD-ROM.
In addition, the company employed Jordan Hubbard and David Greenman, ran FreeBSD on its servers, sponsored FreeBSD conferences and published FreeBSD-related books, including The Complete FreeBSD by Greg Lehey. By 1997, FreeBSD was Walnut Creek's "most successful product". The company later renamed itself The FreeBSD Mall and, later, iXsystems.[21][22][23] Today, FreeBSD is used by many IT companies such as IBM, Nokia, Juniper Networks, and NetApp to build their products.[24][25] Certain parts of Apple's macOS operating system are based on FreeBSD.[26] The PlayStation 3 and Nintendo Switch operating systems also borrow certain components from FreeBSD,[10][11] while the PlayStation 4 operating system is derived from FreeBSD 9.[27] Netflix,[28] WhatsApp,[29] and FlightAware[30] are also examples of large, successful and heavily network-oriented companies that run FreeBSD. 386BSD and FreeBSD were both derived from BSD releases.[24] In January 1992, Berkeley Software Design Inc. (BSDi) started to release BSD/386, later called BSD/OS, an operating system similar to FreeBSD and based on 4.3BSD Net/2. AT&T filed a lawsuit against BSDi, alleging distribution of AT&T source code in violation of license agreements. The lawsuit was settled out of court and the exact terms were not all disclosed. The only term that became public was that BSDi would migrate its source base to the newer 4.4BSD-Lite2 sources. Although the FreeBSD project was not involved in the litigation, it was suggested that it should also move to 4.4BSD-Lite2.[31] FreeBSD 2.0, released in November 1994, was the first version of FreeBSD without any code from AT&T.[32] FreeBSD contains a significant collection of server-related software in the base system and the ports collection, allowing FreeBSD to be configured and used as a mail server, web server, firewall, FTP server, DNS server and a router, among other applications. FreeBSD can be installed on a regular desktop or a laptop. The X Window System is not installed by default, but is available in the FreeBSD ports collection. Though not officially supported,[citation needed] Wayland is also available for FreeBSD.[33] A number of desktop environments such as Lumina, GNOME, KDE, and Xfce, as well as lightweight window managers such as Openbox, Fluxbox, dwm, and bspwm, are also available for FreeBSD. Major web browsers such as Firefox and Chromium are available unofficially on FreeBSD.[34][35] As of FreeBSD 12, support for a modern graphics stack is available via drm-kmod. A large number of wireless adapters are supported. FreeBSD releases installation images for supported platforms. Since FreeBSD 13, the focus has been on the x86-64 and AArch64 platforms, which have Tier 1 support; 32-bit platforms no longer have Tier 1 support.[36] IA-32 is a Tier 2 platform in FreeBSD 13 and 14, but will be dropped in the next version. 32-bit ARM processors using armv6 or armv7 also have Tier 2 support, and ARMv7 is expected to retain support.
64-bit versions of RISC-V and PowerPC are also supported (32-bit PowerPC still has Tier 2 support, but it will be dropped in the next version).[37] Interest in the RISC-V architecture has been growing.[38] The MIPS architecture port was marked for deprecation, and no installation image is available for the current 13.4 release or later.[39] FreeBSD's TCP/IP stack is based on the 4.2BSD implementation of TCP/IP, which greatly contributed to the widespread adoption of these protocols.[40] FreeBSD also supports IPv6,[41] SCTP, IPSec, and wireless networking (Wi-Fi).[42] The IPv6 and IPSec stacks were taken from the KAME project.[43] Prior to version 11.0, FreeBSD supported the IPX and AppleTalk protocols, but they were considered obsolescent and have been dropped.[44] As of FreeBSD 5.4, support for the Common Address Redundancy Protocol (CARP) was imported from the OpenBSD project. CARP allows multiple nodes to share a set of IP addresses, so that if one of the nodes goes down, other nodes can still serve requests.[45] FreeBSD has several unique features related to storage. Soft updates can protect the consistency of the UFS filesystem (widely used on the BSDs) in the event of a system crash.[46] Filesystem snapshots allow an image of a UFS filesystem at an instant in time to be created efficiently.[47] Snapshots allow reliable backup of a live filesystem. GEOM is a modular framework that provides RAID (currently levels 0, 1 and 3), full disk encryption, journaling, concatenation, caching, and access to network-backed storage. GEOM allows complex storage solutions to be built by combining ("chaining") these mechanisms.[48] FreeBSD provides two frameworks for data encryption: GBDE and Geli. Both GBDE and Geli operate at the disk level. GBDE was written by Poul-Henning Kamp and is distributed under the two-clause BSD license. Geli is an alternative to GBDE that was written by Pawel Jakub Dawidek and first appeared in FreeBSD 6.0.[49][50] From 7.0 onward, FreeBSD supports the ZFS filesystem. ZFS was previously an open-source filesystem first developed by Sun Microsystems, but when Oracle acquired Sun, ZFS became a proprietary product. However, the FreeBSD project is still developing and improving its ZFS implementation via the OpenZFS project.[51] The currently supported version of OpenZFS is 2.2.2, which contains an important fix for a data corruption bug. This version is compatible with releases starting from 12.2-RELEASE.[52] FreeBSD ships with three different firewall packages: IPFW, pf and IPFilter. IPFW is FreeBSD's native firewall; pf was taken from OpenBSD, and IPFilter was ported to FreeBSD by Darren Reed.[53] Taken from OpenBSD, the OpenSSH program is included in the default install. OpenSSH is a free implementation of the SSH protocol and a replacement for telnet. Unlike telnet, OpenSSH encrypts all information (including usernames and passwords).[54] In November 2012, the FreeBSD Security Team announced that attackers had gained unauthorized access to two of the project's servers. These servers were turned off immediately. Further investigation showed that the first unauthorized access occurred on 19 September. The attackers apparently gained access to the servers by stealing SSH keys from one of the developers, not by exploiting a bug in the operating system itself. The two compromised servers were part of the infrastructure used to build third-party software packages.
The FreeBSD Security Team checked the integrity of the binary packages and determined that no unauthorized changes had been made to them, but stated that it could not guarantee the integrity of packages downloaded between 19 September and 11 November.[55][56][57] FreeBSD provides several security-related features including access-control lists (ACLs),[58] security event auditing, extended file system attributes, mandatory access controls (MAC)[59] and fine-grained capabilities.[60] These security enhancements were developed by the TrustedBSD[61] project. The project was founded by Robert Watson with the goal of implementing concepts from the Common Criteria for Information Technology Security Evaluation and the Orange Book. This project is ongoing[timeframe?] and many of its extensions have been integrated into FreeBSD.[62] The project is supported by a variety of organizations, including DARPA, the NSA, Network Associates Laboratories, Safeport Network Services, the University of Pennsylvania, Yahoo!, McAfee Research, SPARTA, Apple Computer, nCircle Network Security, Google, the University of Cambridge Computer Laboratory, and others.[63] The project has also ported the NSA's FLASK/TE implementation from SELinux to FreeBSD. Other work includes the development of OpenBSM, an open-source implementation of Sun's Basic Security Module (BSM) API and audit log file format, which supports an extensive security audit system. This was shipped as part of FreeBSD 6.2. Other infrastructure work in FreeBSD performed as part of the TrustedBSD Project has included GEOM and OpenPAM.[60] Most components of the TrustedBSD project are eventually folded into the main sources for FreeBSD. In addition, many features, once fully matured, find their way into other operating systems. For example, OpenPAM has been adopted by NetBSD.[64] Moreover, the TrustedBSD MAC Framework has been adopted by Apple for macOS.[65] FreeBSD has been ported to a variety of instruction set architectures, though most of them are no longer supported, at least not at Tier 1. The FreeBSD project organizes architectures into tiers that characterize the level of support provided. Tier 1 architectures are mature and fully supported; it is the only tier "supported by the security officer". Tier 2 architectures are under active development but are not fully supported. Tier 3 architectures are experimental or are no longer under active development.[66] As of April 2025[update], FreeBSD has been ported to the following architectures:[36] The 32-bit ARM (including OTG) and MIPS support is mostly aimed at embedded systems (ARM64 also targets servers[68]); however, FreeBSD/ARM runs on a number of single-board computers, including the BeagleBone Black, Raspberry Pi[69][70] and Wandboard.[71] Supported devices are listed in the FreeBSD 14.3 Hardware Notes.[72] The document describes the devices currently known to be supported by FreeBSD. Other configurations may also work, but simply have not been tested yet. Rough, automatically extracted lists of supported device IDs are available in a third-party repository.[73] In 2020, a new project was introduced to automatically collect information about tested hardware configurations.[74] FreeBSD has a software repository of over 30,000[75] applications that are developed by third parties. Examples include windowing systems, web browsers, email clients, office suites and so forth.
In general, the project itself does not develop this software, only the framework to allow these programs to be installed, which is known as the Ports collection. Applications may either be compiled from source ("ports"), provided their licensing terms allow this, or downloaded as precompiled binaries ("packages").[76]The Ports collection supports the current and stable branches of FreeBSD. Older releases are not supported and may or may not work correctly with an up-to-date Ports collection.[77] Ports useMakefilesto automatically fetch the desired application'ssource code, either from a local or remoterepository, unpack it on the system, apply patches to it and compile it.[8][78]Depending on the size of the source code, compiling can take a long time, but it gives the user more control over the process and its result. Most ports also have package counterparts (i.e. precompiled binaries), giving the user a choice. Although this method is faster, the user has fewer customization options.[76] FreeBSD version 10.0 introduced thepackage managerpkg as a replacement for the previously used package tools.[79]It is functionally similar toaptandyuminLinux distributions. It allows for installation, upgrading and removal of both ports and packages. In addition to pkg,PackageKitcan also be used to access the Ports collection. First introduced in FreeBSD version 4,[80]jails are a security mechanism and an implementation ofoperating-system-level virtualizationthat enables the user to run multiple instances of a guest operating system on top of a FreeBSD host. It is an enhanced version of the traditionalchrootmechanism. A process that runs within such a jail is unable to access the resources outside of it. Every jail has its ownhostnameandIP address. It is possible to run multiple jails at the same time, but the kernel is shared among all of them. Hence only software supported by the FreeBSD kernel can be run within a jail.[81] bhyve, a new virtualization solution, was introduced in FreeBSD 10.0. bhyve allows a user to run a number of guest operating systems (FreeBSD,OpenBSD,Linux, andMicrosoft Windows[82]) simultaneously. Other operating systems such asIllumosare planned. bhyve was written by Neel Natu and Peter Grehan and was announced in the 2011 BSDCan conference for the first time. The main difference between bhyve andFreeBSD jailsis that jails are anoperating system-level virtualizationand therefore limited to only FreeBSD guests; but bhyve is a type 2hypervisorand is not limited to only FreeBSD guests.[83][84][85]For comparison, bhyve is a similar technology toKVMwhereas jails are closer toLXC containersorSolaris Zones.Amazon EC2AMI instances are also supported viaamazon-ssm-agent Since FreeBSD 11.0, there has been support for running as the Dom0 privileged domain for theXentype 1 hypervisor.[86]Support for running as DomU (guest) has been available since FreeBSD 8.0. VirtualBox(without the closed-sourceExtension Pack) andQEMUare available on FreeBSD. Most software that runs onLinuxcan run on FreeBSD using an optional built-incompatibility layer. Hence, most Linux binaries can be run on FreeBSD, including some proprietary applications distributed only in binary form. 
This compatibility layer is not an emulation; Linux's system call interface is implemented in the FreeBSD kernel, and hence Linux executable images and shared libraries are treated the same as FreeBSD's native executable images and shared libraries.[87] Additionally, FreeBSD provides compatibility layers for several other Unix-like operating systems besides Linux, such as BSD/OS and SVR4;[87] however, it is more common for users to compile those programs directly on FreeBSD.[88] No noticeable performance penalty over native FreeBSD programs has been noted when running Linux binaries, and, in some cases, these may even perform more smoothly than on Linux.[89][90] However, the layer is not altogether seamless, and some Linux binaries are unusable or only partially usable on FreeBSD. The layer supports Linux system calls up to kernel version 4.4.0,[91] available since FreeBSD 14.0. As of release 10.3, FreeBSD can run 64-bit Linux binaries.[92] FreeBSD has implemented a number of Microsoft Windows native NDIS kernel interfaces to allow FreeBSD to run (otherwise) Windows-only network drivers.[93][94] The Wine compatibility layer, which allows many Windows applications, especially games, to run without a (licensed) copy of Microsoft Windows, is available for FreeBSD. FreeBSD's kernel provides support for essential tasks such as managing processes, communication, booting and filesystems. FreeBSD has a monolithic kernel[95] with a modular design: different parts of the kernel, such as drivers, are designed as modules, and the user can load and unload these modules at any time.[96] ULE has been the default scheduler in FreeBSD since version 7.1; it supports SMP and SMT.[97] The FreeBSD kernel also has a scalable event notification interface, named kqueue, which has been ported to other BSD derivatives such as OpenBSD and NetBSD.[98] Kernel threading was introduced in FreeBSD 5.0, using an M:N threading model. This model works well in theory,[99][100] but it is hard to implement and few operating systems support it. Although FreeBSD's implementation of this model worked, it did not perform well, so from version 7.0 onward, FreeBSD started using a 1:1 threading model, called libthr.[100] FreeBSD's documentation consists of its handbooks, manual pages, mailing list archives, FAQs and a variety of articles, mainly maintained by The FreeBSD Documentation Project. FreeBSD's documentation is translated into several languages.[101] All official documentation is released under the FreeBSD Documentation License, "a permissive non-copyleft free documentation license that is compatible with the GNU FDL".[102] FreeBSD's documentation is described as "high-quality".[103][104] The FreeBSD project maintains a variety of mailing lists.[105] Among the most popular are FreeBSD-questions (general questions) and FreeBSD-hackers (a place for asking more technical questions).[106] Since 2004, the New York City BSD Users Group database has provided dmesg information from a collection of computers (laptops, workstations, single-board computers, embedded systems, virtual machines, etc.) running FreeBSD.[107] From version 2.0 to 8.4, FreeBSD used the sysinstall program as its main installer. It was written in C by Jordan Hubbard. It uses a text user interface, and is divided into a number of menus and screens that can be used to configure and control the installation process.
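Returning to the kqueue event notification interface mentioned above, the sketch below registers interest in a single file descriptor and waits for it to become readable. It is a minimal illustration only: error handling is reduced to the bare minimum, and watching standard input was chosen purely for the example.

/* Minimal kqueue sketch: wait for standard input to become readable. */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int kq = kqueue();                       /* create the kernel event queue */
    if (kq == -1) {
        perror("kqueue");
        exit(EXIT_FAILURE);
    }

    struct kevent change;
    /* Register interest: notify when fd 0 (stdin) has data to read. */
    EV_SET(&change, STDIN_FILENO, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);

    struct kevent event;
    /* Submit the registration and block until one event is returned. */
    int n = kevent(kq, &change, 1, &event, 1, NULL);
    if (n > 0)
        printf("%ld bytes ready on stdin\n", (long)event.data);

    close(kq);
    return 0;
}

The same pattern scales to many descriptors and other filter types (sockets, signals, timers, file changes), which is what makes kqueue attractive for high-concurrency servers.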
Sysinstall can also be used to install Ports and Packages as an alternative to the command-line interface.[108] The sysinstall utility is now considered deprecated in favor of bsdinstall, a new installer introduced in FreeBSD 9.0. bsdinstall is "a lightweight replacement for sysinstall" that was written in sh. According to OSNews, "It has lost some features while gaining others, but it is a much more flexible design, and will ultimately be significant improvement".[81][109] Prior to 14.0, the default login shell was tcsh for root[110] and the Almquist shell (sh) for regular users.[111] Starting with 14.0, the default shell is sh for both root and regular users.[110] The default scripting shell is the Almquist shell.[112] FreeBSD is developed by a volunteer team located around the world. The developers use the Internet for all communication and many have not met each other in person. In addition to local user groups sponsored and attended by users, an annual conference, called BSDcon, is held by USENIX. BSDcon is not FreeBSD-specific, so it deals with the technical aspects of all BSD-derived operating systems, including OpenBSD and NetBSD.[113] In addition to BSDcon, three other annual conferences, EuroBSDCon, AsiaBSDCon and BSDCan, take place in Europe, Japan and Canada respectively.[114][115][116] The FreeBSD Project is run by around 500 committers, or developers, who have commit access to the master source code repositories and can develop, debug or enhance any part of the system. Most of the developers are volunteers, and a few are paid by companies.[24] There are several kinds of committers, including source committers (base operating system), doc committers (documentation and website authors) and ports committers (third-party application porting and infrastructure). Every two years the FreeBSD committers select a 9-member FreeBSD Core Team, which is responsible for overall project direction, setting and enforcing project rules and approving new committers, that is, the granting of commit access to the source code repositories. A number of responsibilities are officially assigned to other development teams by the FreeBSD Core Team; for example, responsibility for managing the ports collection is delegated to the Ports Management Team.[117] In addition to developers, FreeBSD has thousands of "contributors". Contributors are volunteers outside of the FreeBSD project who submit patches for consideration by committers, as they do not have commit access to FreeBSD's source code repository. Committers then evaluate contributors' submissions and decide what to accept and what to reject. A contributor who submits high-quality patches is often asked to become a committer.[117] FreeBSD developers maintain at least two branches of simultaneous development. The -CURRENT branch always represents the "bleeding edge" of FreeBSD development. A -STABLE branch of FreeBSD is created for each major version number, from which a -RELEASE is cut about once every 4–6 months. If a feature is sufficiently stable and mature, it will likely be backported (MFC, or Merge from CURRENT, in FreeBSD developer slang) to the -STABLE branch.[118][8] FreeBSD development is supported in part by the FreeBSD Foundation. The foundation is a non-profit organization that accepts donations to fund FreeBSD development.
Such funding has been used to sponsor developers for specific activities, purchase hardware and network infrastructure, provide travel grants to developer summits, and provide legal support to the FreeBSD project.[119] In November 2014, the FreeBSD Foundation received a US$1 million donation from Jan Koum, co-founder and CEO of WhatsApp – the largest single donation to the Foundation since its inception. In December 2016, Jan Koum donated another $500,000.[120] Koum has himself been a FreeBSD user since the late 1990s, and WhatsApp uses FreeBSD on its servers.[121] FreeBSD is released under a variety of open-source licenses. The kernel code and most newly created code are released under the two-clause BSD license, which allows everyone to use and redistribute FreeBSD as they wish. This license was approved by the Free Software Foundation[122] and the Open Source Initiative[123] as a free software and open source license, respectively. The Free Software Foundation described this license as "a lax, permissive non-copyleft free software license, compatible with the GNU GPL". There are parts released under three- and four-clause BSD licenses, as well as the Beerware license. Some device drivers include a binary blob,[124] such as the Atheros HAL of FreeBSD versions before 7.2.[125][failed verification] Some of the code contributed by other projects is licensed under the GPL, LGPL, CDDL[126] and ISC licenses. All the code licensed under the GPL and CDDL is clearly separated from the code under liberal licenses, to make it easy for users such as embedded device manufacturers to use only permissive free software licenses. ClangBSD aims to remove some GPL dependencies from the FreeBSD base system by replacing the GNU Compiler Collection with the BSD-licensed LLVM/Clang compiler. ClangBSD became self-hosting on 16 April 2010.[127] For many years FreeBSD's logo was the generic BSD Daemon, also called Beastie, a distorted pronunciation of BSD. However, Beastie was not unique to FreeBSD. Beastie first appeared in 1976 on Unix T-shirts, in art by comic artist Phil Foglio[128] drawn for Mike O'Brien,[129][130][131][132] with some shirts purchased by Bell Labs.[133] More popular versions of the BSD daemon were drawn by animation director John Lasseter beginning in 1984.[134][135] Several FreeBSD-specific versions were later drawn by Tatsumi Hosokawa.[136] In lithographic terms, the Lasseter graphic is not line art and often requires a screened, four-color photo offset printing process for faithful reproduction on physical surfaces such as paper. Also, the BSD daemon was thought to be too graphically detailed for smooth size scaling and aesthetically over-dependent on multiple color gradations, making it hard to reliably reproduce as a simple, standardized logo in only two or three colors, much less in monochrome. Because of these concerns, a competition was held, and a new logo designed by Anton K. Gural, still echoing the BSD daemon, was released on 8 October 2005.[137][138][139] However, Robert Watson announced that the FreeBSD project was "seeking a new logo, but not a new mascot" and that it would continue to use Beastie as its mascot.[137] The name "FreeBSD" was coined by David Greenman on 19 June 1993; other suggested names were "BSDFree86" and "Free86BSD".[140] FreeBSD's slogan, "The Power to Serve", is a trademark of The FreeBSD Foundation.[141] There are a number of software distributions based on FreeBSD. All these distributions have no or only minor changes compared with the original FreeBSD base system.
The main difference to the original FreeBSD is that they come with pre-installed and pre-configured software for specific use cases. This can be compared withLinuxdistributions, which are all binary compatible because they use the same kernel and also use the same basic tools, compilers, and libraries while coming with different applications, configurations, and branding. Besides these distributions, there are some independent operating systems based on FreeBSD.DragonFly BSDis a fork from FreeBSD 4.8 aiming for a different multiprocessor synchronization strategy than the one chosen for FreeBSD 5 and development of somemicrokernelfeatures.[151]It does not aim to stay compatible with FreeBSD and has huge differences in the kernel and basicuserland.MidnightBSDis a fork of FreeBSD 6.1 borrowing heavily fromNeXTSTEP, particularly in the user interface department. Darwin, the core ofApple'smacOS, includes avirtual file systemand network stack derived from those of FreeBSD, and components of itsuserspaceare also FreeBSD-derived.[26][152]
https://en.wikipedia.org/wiki/FreeBSD
Java Management Extensions(JMX) is aJavatechnology that supplies tools for managing and monitoringapplications, system objects, devices (such asprinters) and service-oriented networks. Those resources are represented by objects called MBeans (forManaged Bean). In the API,classescan be dynamically loaded and instantiated. Managing and monitoring applications can be designed and developed using the Java Dynamic Management Kit.[1] JSR 003[2]of theJava Community Processdefined JMX 1.0, 1.1 and 1.2. JMX 2.0 was being developed under JSR 255, but this JSR was subsequently withdrawn.[3]The JMX Remote API 1.0 for remote management and monitoring is specified by JSR 160.[4]An extension of the JMX Remote API for Web Services was being developed under JSR 262.[5] Adopted early on by theJ2EEcommunity, JMX has been a part ofJ2SEsince version 5.0. "JMX" is a trademark ofOracle Corporation. JMX uses a three-level architecture: Applications can be generic consoles (such asJConsole[6]andMC4J[7]) or domain-specific (monitoring) applications. External applications can interact with the MBeans through the use of JMX connectors and protocol adapters. Connectors serve to connect an agent with a remote JMX-enabled management application. This form of communication involves a connector in the JMX agent and a connector client in the management application. TheJava Platform, Standard Editionships with one connector, theRMI connector, which uses the Java Remote Method Protocol that is part of theJava remote method invocationAPI. This is the connector which most management applications use. Protocol adapters provide a management view of the JMX agent through a given protocol. Management applications that connect to a protocol adapter are usually specific to the given protocol. Amanaged bean– sometimes simply referred to as anMBean– is a type ofJavaBean, created withdependency injection. Managed Beans are particularly used in the Java Management Extensions technology – but with Java EE 6 the specification provides for a more detailed meaning of a managed bean. The MBean represents a resource running in theJava virtual machine, such as an application or a Java EE technical service (transactional monitor, JDBC driver, etc.). They can be used for collecting statistics on concerns like performance, resources usage, or problems (pull); for getting and setting application configurations or properties (push/pull); and notifying events like faults or state changes (push). Java EE 6 provides that a managed bean is a bean that is implemented by a Java class, which is called its bean class. A top-level Java class is a managed bean if it is defined to be a managed bean by any other Java EE technology specification (for example, theJavaServer Facestechnology specification), or if it meets all of the following conditions: No special declaration, such as an annotation, is required to define a managed bean. A MBean can notify the MBeanServer of its internal changes (for the attributes) by implementing thejavax.management.NotificationEmitter. The application interested in the MBean's changes registers a listener (javax.management.NotificationListener) to the MBeanServer. Note that JMX does not guarantee that the listeners will receive all notifications.[8] There are two basic types of MBean: Additional types areOpen MBeans,Model MBeansandMonitor MBeans.Open MBeansare dynamic MBeans that rely on the basic data types. They are self-explanatory and more user-friendly.Model MBeansare dynamic MBeans that can be configured during runtime. 
A generic MBean class is also provided for dynamically configuring resources at program runtime. An MXBean (Platform MBean) is a special type of MBean that reifies Java virtual machine subsystems such as garbage collection, JIT compilation, memory pools, multi-threading, etc. An MLet (Management applet) is a utility MBean used to load, instantiate and register MBeans in an MBeanServer from an XML description. The format of the XML descriptor is:[9] JMX is supported at various levels by different vendors:
https://en.wikipedia.org/wiki/Java_Management_Extensions
Application Response Measurement (ARM) is an open standard published by the Open Group for monitoring and diagnosing performance bottlenecks within complex enterprise applications that use loosely-coupled designs or service-oriented architectures. It includes an API for C and Java that allows timing information associated with each step in processing a transaction to be logged to a remote server for later analysis. Version 1 of ARM was developed jointly by Tivoli Software and Hewlett-Packard in 1996. Version 2 was developed by an industry partnership (the ARM Working Group) and became available in December 1997 as an open standard approved by the Open Group. ARM 4.0 was released in 2003 and revised in 2004. As of 2007[update], ARM 4.1 version 1 is the latest version of the ARM standard. Current application designs tend to be more complex and distributed over networks. This poses new challenges for development and monitoring tools, which must provide application developers and system and application administrators with the information they need. Within a distributed application it is not easy to judge whether the application is performing well. The following issues help in the evaluation of distributed applications: ARM helps answer these questions. Note that the ARM benefits described here are now just a subset of the broader Application Performance Management space. The main approach of using ARM is: ARM defines the following concepts to provide the described functionality. Complex distributed applications usually consist of many different single applications (processes). In order to understand the relationships between these single applications, the concept of an ARM application was introduced with version 4.0 of the ARM standard. Each ARM transaction is executed within exactly one ARM application. Transactions are the central concept of the ARM standard; a transaction represents a single performance measurement. A transaction definition defines the type (name) and additional attributes of an ARM transaction. A transaction can be executed (started and stopped) several times, which results in multiple measurements. Each measurement has basic attributes such as its completion status (good, failed, aborted), start and stop timestamps, the resulting duration, and the system address (host) it was executed on. Additionally, special metrics or context properties can be associated with a transaction measurement. A system address uniquely identifies a host by its name, IP address or other unique information. ARM correlators are used to express a correlation between two ARM transactions. This is a synchronous relationship, also known as a parent-child relationship. Commonly, a parent transaction triggers a child transaction and only continues its execution when the child transaction has finished. Using correlators, it is possible to split a complex transaction into several nested child transactions, where each child transaction can have child transactions of its own. This results in a tree of transactions, with the topmost parent transaction being the root of the tree. ARM 4.1 defines asynchronous relationships to support data-flow-driven architectures. ARM metrics can be used to get more information about the execution of a transaction. ARM defines a set of metric types for different purposes, such as a counter, a gauge or just a numeric value.
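To make the transaction and correlator concepts concrete, the following sketch times a parent unit of work that triggers a nested child measurement. It is an illustration only: the helper names (txn_t, txn_begin, txn_end) are hypothetical stand-ins invented for this example, not the real ARM 4.x C binding, and a real agent would report each measurement to a management server rather than print it.

/* Hypothetical sketch of ARM-style transaction timing; not the ARM API. */
#include <stdio.h>
#include <time.h>

typedef struct {
    const char *name;            /* transaction definition (type name)  */
    struct timespec start;       /* start timestamp of this measurement */
} txn_t;

static txn_t txn_begin(const char *name) {
    txn_t t = { name, { 0, 0 } };
    clock_gettime(CLOCK_MONOTONIC, &t.start);
    return t;
}

static void txn_end(const txn_t *t, const char *status) {
    struct timespec stop;
    clock_gettime(CLOCK_MONOTONIC, &stop);
    double ms = (stop.tv_sec - t->start.tv_sec) * 1e3 +
                (stop.tv_nsec - t->start.tv_nsec) / 1e6;
    /* A real ARM agent would log status and duration to a remote server. */
    printf("transaction %-10s status=%-7s duration=%.3f ms\n",
           t->name, status, ms);
}

int main(void) {
    txn_t parent = txn_begin("checkout");   /* parent: one user request    */
    txn_t child  = txn_begin("db-query");   /* child, correlated to parent */
    txn_end(&child, "good");
    txn_end(&parent, "good");
    return 0;
}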
Properties are a set of so-called name–value pair strings which qualify an ARM transaction or an ARM application beyond the basic definition of these entities and allow additional context information to be associated with each transaction measurement. A user name identifies the user on whose behalf a transaction measurement was executed. The following applications are already instrumented with ARM calls:
https://en.wikipedia.org/wiki/Application_Response_Measurement
Anapplication programming interface(API) is a connection betweencomputersor betweencomputer programs. It is a type of softwareinterface, offering a service to other pieces ofsoftware.[1]A document or standard that describes how to build such a connection or interface is called anAPI specification. A computer system that meets this standard is said toimplementorexposean API. The term API may refer either to the specification or to the implementation. In contrast to auser interface, which connects a computer to a person, an application programming interface connects computers or pieces of software to each other. It is not intended to be used directly by a person (theend user) other than acomputer programmer[1]who is incorporating it into software. An API is often made up of different parts which act as tools or services that are available to the programmer. A program or a programmer that uses one of these parts is said tocallthat portion of the API. The calls that make up the API are also known assubroutines, methods, requests, orendpoints. An API specificationdefinesthese calls, meaning that it explains how to use or implement them. One purpose of APIs is tohide the internal detailsof how a system works, exposing only those parts a programmer will find useful and keeping them consistent even if the internal details later change. An API may be custom-built for a particular pair of systems, or it may be a shared standard allowinginteroperabilityamong many systems. The term API is often used to refer toweb APIs,[2]which allow communication between computers that are joined by theinternet. There are also APIs forprogramming languages,software libraries, computeroperating systems, andcomputer hardware. APIs originated in the 1940s, though the term did not emerge until the 1960s and 70s. An API opens a software system to interactions from the outside. It allows two software systems to communicate across a boundary — an interface — using mutually agreed-upon signals.[3]In other words, an API connects software entities together. Unlike auser interface, an API is typically not visible to users. It is an "under the hood" portion of a software system, used for machine-to-machine communication.[4] A well-designed API exposes only objects or actions needed by software or software developers. It hides details that have no use. Thisabstractionsimplifies programming.[5] Building software using APIs has been compared to using building-block toys, such asLegobricks. Software services or software libraries are analogous to the bricks; they may be joined together via their APIs, composing a new software product.[6]The process of joining is calledintegration.[3] As an example, consider a weather sensor that offers an API. When a certain message is transmitted to the sensor, it will detect the current weather conditions and reply with a weather report. The message that activates the sensor is an APIcall, and the weather report is an APIresponse.[7]A weather forecasting app might integrate with a number of weather sensor APIs, gathering weather data from throughout a geographical area. An API is often compared to acontract. It represents an agreement between parties: a service provider who offers the API and the software developers who rely upon it. If the API remains stable, or if it changes only in predictable ways, developers' confidence in the API will increase. 
This may increase their use of the API.[8] The termAPIinitially described an interface only for end-user-facing programs, known asapplication programs. This origin is still reflected in the name "application programming interface." Today, the term is broader, including alsoutility softwareand evenhardware interfaces.[10] The idea of the API is much older than the term itself. British computer scientistsMaurice WilkesandDavid Wheelerworked on a modularsoftware libraryin the 1940s forEDSAC, an early computer. Thesubroutinesin this library were stored onpunched paper tapeorganized in afiling cabinet. This cabinet also contained what Wilkes and Wheeler called a "library catalog" of notes about each subroutine and how to incorporate it into a program. Today, such a catalog would be called an API (or an API specification or API documentation) because it instructs a programmer on how to use (or "call") each subroutine that the programmer needs.[10] Wilkes and Wheeler's bookThe Preparation of Programs for an Electronic Digital Computercontains the first published API specification.Joshua Blochconsiders that Wilkes and Wheeler "latently invented" the API, because it is more of a concept that is discovered than invented.[10] The term "application program interface" (without an-ingsuffix) is first recorded in a paper calledData structures and techniques for remotecomputer graphicspresented at anAFIPSconference in 1968.[12][10]The authors of this paper use the term to describe the interaction of anapplication—a graphics program in this case—with the rest of the computer system. A consistent application interface (consisting ofFortransubroutine calls) was intended to free the programmer from dealing with idiosyncrasies of the graphics display device, and to providehardware independenceif the computer or the display were replaced.[11] The term was introduced to the field ofdatabasesbyC. J. Date[13]in a 1974 paper calledTheRelationalandNetworkApproaches: Comparison of the Application Programming Interface.[14]An API became a part of theANSI/SPARC frameworkfordatabase management systems. This framework treated the application programming interface separately from other interfaces, such as the query interface. Database professionals in the 1970s observed these different interfaces could be combined; a sufficiently rich application interface could support the other interfaces as well.[9] This observation led to APIs that supported all types of programming, not just application programming. By 1990, the API was defined simply as "a set of services available to a programmer for performing certain tasks" by technologistCarl Malamud.[15] The idea of the API was expanded again with the dawn ofremote procedure callsandweb APIs. Ascomputer networksbecame common in the 1970s and 80s, programmers wanted to call libraries located not only on their local computers, but on computers located elsewhere. These remote procedure calls were well supported by theJavalanguage in particular. 
In the 1990s, with the spread of theinternet, standards likeCORBA,COM, andDCOMcompeted to become the most common way to expose API services.[16] Roy Fielding's dissertationArchitectural Styles and the Design of Network-based Software ArchitecturesatUC Irvinein 2000 outlinedRepresentational state transfer(REST) and described the idea of a "network-based Application Programming Interface" that Fielding contrasted with traditional "library-based" APIs.[17]XMLandJSONweb APIs saw widespread commercial adoption beginning in 2000 and continuing as of 2021. The web API is now the most common meaning of the term API.[2] TheSemantic Webproposed byTim Berners-Leein 2001 included "semantic APIs" that recast the API as anopen, distributed data interface rather than a software behavior interface.[18]Proprietaryinterfaces and agents became more widespread than open ones, but the idea of the API as a data interface took hold. Because web APIs are widely used to exchange data of all kinds online, API has become a broad term describing much of the communication on the internet.[16]When used in this way, the term API has overlap in meaning with the termcommunication protocol. The interface to asoftware libraryis one type of API. The API describes and prescribes the "expected behavior" (a specification) while the library is an "actual implementation" of this set of rules. A single API can have multiple implementations (or none, being abstract) in the form of different libraries that share the same programming interface. The separation of the API from its implementation can allow programs written in one language to use a library written in another. For example, becauseScalaandJavacompile to compatiblebytecode, Scala developers can take advantage of any Java API.[19] API use can vary depending on the type of programming language involved. An API for aprocedural languagesuch asLuacould consist primarily of basic routines to execute code, manipulate data or handle errors while an API for anobject-oriented language, such as Java, would provide a specification of classes and itsclass methods.[20][21]Hyrum's law states that "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody."[22]Meanwhile, several studies show that most applications that use an API tend to use a small part of the API.[23] Language bindingsare also APIs. By mapping the features and capabilities of one language to an interface implemented in another language, a language binding allows a library or service written in one language to be used when developing in another language.[24]Tools such asSWIGand F2PY, aFortran-to-Pythoninterface generator, facilitate the creation of such interfaces.[25] An API can also be related to asoftware framework: a framework can be based on several libraries implementing several APIs, but unlike the normal use of an API, the access to the behavior built into the framework is mediated by extending its content with new classes plugged into the framework itself. Moreover, the overall program flow of control can be out of the control of the caller and in the framework's hands byinversion of controlor a similar mechanism.[26][27] An API can specify the interface between an application and theoperating system.[28]POSIX, for example, specifies a set of common APIs that aim to enable an application written for a POSIX conformant operating system to becompiledfor another POSIX conformant operating system. 
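As a small illustration of that portability, the sketch below uses only POSIX calls (open, read, write, close) and should compile unchanged on any POSIX-conformant system; the file path is arbitrary and chosen only for the example.

/* Copy a file to standard output using only POSIX calls. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    int fd = open("/etc/hosts", O_RDONLY);    /* illustrative path only */
    if (fd == -1)
        return 1;

    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n); /* write what was read */

    close(fd);
    return 0;
}

Because the program calls only interfaces specified by POSIX, porting it between, say, FreeBSD, Linux, and macOS requires no source changes, which is exactly the goal described above.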
LinuxandBerkeley Software Distributionare examples of operating systems that implement the POSIX APIs.[29] Microsofthas shown a strong commitment to a backward-compatible API, particularly within itsWindows API(Win32) library, so older applications may run on newer versions of Windows using an executable-specific setting called "Compatibility Mode".[30] An API differs from anapplication binary interface(ABI) in that an API is source code based while an ABI isbinarybased. For instance,POSIXprovides APIs while theLinux Standard Baseprovides an ABI.[31][32] Remote APIs allow developers to manipulate remote resources throughprotocols, specific standards for communication that allow different technologies to work together, regardless of language or platform. For example, the Java Database Connectivity API allows developers to query many different types ofdatabaseswith the same set of functions, while theJava remote method invocationAPI uses the Java Remote Method Protocol to allowinvocationof functions that operate remotely, but appear local to the developer.[33][34] Therefore, remote APIs are useful in maintaining the object abstraction inobject-oriented programming; amethod call, executed locally on aproxyobject, invokes the corresponding method on the remote object, using the remoting protocol, and acquires the result to be used locally as a return value. A modification of the proxy object will also result in a corresponding modification of the remote object.[35] Web APIs are the defined interfaces through which interactions happen between an enterprise and applications that use its assets, which also is aService Level Agreement(SLA) to specify the functional provider and expose the service path or URL for its API users. An API approach is an architectural approach that revolves around providing a program interface to a set of services to different applications serving different types of consumers.[36] When used in the context ofweb development, an API is typically defined as a set of specifications, such asHypertext Transfer Protocol(HTTP) request messages, along with a definition of the structure of response messages, usually in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. An example might be a shipping company API that can be added to an eCommerce-focused website to facilitate ordering shipping services and automatically include current shipping rates, without the site developer having to enter the shipper's rate table into a web database. While "web API" historically has been virtually synonymous withweb service, the recent trend (so-calledWeb 2.0) has been moving away from Simple Object Access Protocol (SOAP) based web services andservice-oriented architecture(SOA) towards more directrepresentational state transfer(REST) styleweb resourcesandresource-oriented architecture(ROA).[37]Part of this trend is related to theSemantic Webmovement towardResource Description Framework(RDF), a concept to promote web-basedontology engineeringtechnologies. Web APIs allow the combination of multiple APIs into new applications known asmashups.[38]In the social media space, web APIs have allowed web communities to facilitate sharing content and data between communities and applications. 
In this way, content that is created in one place dynamically can be posted and updated to multiple locations on the web.[39]For example, Twitter's REST API allows developers to access core Twitter data and the Search API provides methods for developers to interact with Twitter Search and trends data.[40] The design of an API has significant impact on its usage.[5]The principle ofinformation hidingdescribes the role of programming interfaces as enablingmodular programmingby hiding the implementation details of the modules so that users of modules need not understand the complexities inside the modules.[41]Thus, the design of an API attempts to provide only the tools a user would expect.[5]The design of programming interfaces represents an important part ofsoftware architecture, the organization of a complex piece of software.[42] APIs are one of the more common ways technology companies integrate. Those that provide and use APIs are considered as being members of a business ecosystem.[43] The main policies for releasing an API are:[44] An important factor when an API becomes public is its "interface stability". Changes to the API—for example adding new parameters to a function call—could break compatibility with the clients that depend on that API.[48] When parts of a publicly presented API are subject to change and thus not stable, such parts of a particular API should be documented explicitly as "unstable". For example, in theGoogle Guavalibrary, the parts that are considered unstable, and that might change soon, are marked with theJava annotation@Beta.[49] A public API can sometimes declare parts of itself asdeprecatedor rescinded. This usually means that part of the API should be considered a candidate for being removed, or modified in a backward incompatible way. Therefore, these changes allow developers to transition away from parts of the API that will be removed or not supported in the future.[50] Client code may contain innovative or opportunistic usages that were not intended by the API designers. In other words, for a library with a significant user base, when an element becomes part of the public API, it may be used in diverse ways.[51]On February 19, 2020,Akamaipublished their annual “State of the Internet” report, showcasing the growing trend of cybercriminals targeting public API platforms at financial services worldwide. From December 2017 through November 2019, Akamai witnessed 85.42 billion credential violation attacks. About 20%, or 16.55 billion, were against hostnames defined as API endpoints. Of these, 473.5 million have targeted financial services sector organizations.[52] API documentation describes what services an API offers and how to use those services, aiming to cover everything a client would need to know for practical purposes. Documentation is crucial for the development and maintenance of applications using the API.[53]API documentation is traditionally found in documentation files but can also be found in social media such as blogs, forums, and Q&A websites.[54] Traditional documentation files are often presented via a documentation system, such as Javadoc or Pydoc, that has a consistent appearance and structure. 
However, the types of content included in the documentation differs from API to API.[55] In the interest of clarity, API documentation may include a description of classes and methods in the API as well as "typical usage scenarios, code snippets, design rationales, performance discussions, and contracts", but implementation details of the API services themselves are usually omitted. It can take a number of forms, including instructional documents, tutorials, and reference works. It'll also include a variety of information types, including guides and functionalities. Restrictions and limitations on how the API can be used are also covered by the documentation. For instance, documentation for an API function could note that its parameters cannot be null, that the function itself is notthread safe.[56]Because API documentation tends to be comprehensive, it is a challenge for writers to keep the documentation updated and for users to read it carefully, potentially yielding bugs.[48] API documentation can be enriched with metadata information likeJava annotations. This metadata can be used by the compiler, tools, and by therun-timeenvironment to implement custom behaviors or custom handling.[57] It is possible to generate API documentation in a data-driven manner. By observing many programs that use a given API, it is possible to infer the typical usages, as well the required contracts and directives.[58]Then, templates can be used to generate natural language from the mined data. In 2010, Oracle Corporation sued Google for having distributed a new implementation of Java embedded in the Android operating system.[59]Google had not acquired any permission to reproduce the Java API, although permission had been given to the similar OpenJDK project. JudgeWilliam Alsupruled in theOracle v. Googlecase that APIs cannot becopyrightedin the U.S. and that a victory for Oracle would have widely expanded copyright protection to a "functional set of symbols" and allowed the copyrighting of simple software commands: To accept Oracle's claim would be to allow anyone to copyright one version of code to carry out a system of commands and thereby bar all others from writing its different versions to carry out all or part of the same commands.[60][61] Alsup's ruling was overturned in 2014 on appeal to theCourt of Appeals for the Federal Circuit, though the question of whether such use of APIs constitutesfair usewas left unresolved.[62][63] In 2016, following a two-week trial, a jury determined that Google's reimplementation of the Java API constitutedfair use, but Oracle vowed to appeal the decision.[64]Oracle won on its appeal, with the Court of Appeals for the Federal Circuit ruling that Google's use of the APIs did not qualify for fair use.[65]In 2019, Google appealed to theSupreme Court of the United Statesover both the copyrightability and fair use rulings, and the Supreme Court granted review.[66]Due to theCOVID-19 pandemic, the oral hearings in the case were delayed until October 2020.[67] The case was decided by the Supreme Court in Google's favor.[68]
https://en.wikipedia.org/wiki/Application_programming_interface
C(pronounced/ˈsiː/– like the letterc)[6]is ageneral-purpose programming language. It was created in the 1970s byDennis Ritchieand remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targetedCPUs. It has found lasting use inoperating systemscode (especially inkernels[7]),device drivers, andprotocol stacks, but its use inapplication softwarehas been decreasing.[8]C is commonly used on computer architectures that range from the largestsupercomputersto the smallestmicrocontrollersandembedded systems. A successor to the programming languageB, C was originally developed atBell Labsby Ritchie between 1972 and 1973 to construct utilities running onUnix. It was applied to re-implementing the kernel of the Unix operating system.[9]During the 1980s, C gradually gained popularity. It has become one of the most widely usedprogramming languages,[10][11]with Ccompilersavailable for practically all moderncomputer architecturesandoperating systems. The bookThe C Programming Language, co-authored by the original language designer, served for many years as thede factostandard for the language.[12][1]C has been standardized since 1989 by theAmerican National Standards Institute(ANSI) and, subsequently, jointly by theInternational Organization for Standardization(ISO) and theInternational Electrotechnical Commission(IEC). C is animperativeprocedurallanguage, supportingstructured programming,lexical variable scope, andrecursion, with astatic type system. It was designed to becompiledto providelow-levelaccess tomemoryand language constructs that map efficiently tomachine instructions, all with minimalruntime support. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. Astandards-compliant C program written withportabilityin mind can be compiled for a wide variety of computer platforms and operating systems with few changes to its source code. Since 2000, C has consistently ranked among the top four languages in theTIOBE index, a measure of the popularity of programming languages.[13] C is animperative, procedural language in theALGOLtradition. It has a statictype system. In C, allexecutable codeis contained withinsubroutines(also called "functions", though not in the sense offunctional programming).Function parametersare passed by value, althougharraysare passed aspointers, i.e. the address of the first item in the array.Pass-by-referenceis simulated in C by explicitly passing pointers to the thing being referenced. C program source text isfree-formcode.Semicolonsterminatestatements, whilecurly bracesare used to group statements intoblocks. The C language also exhibits the following characteristics: While C does not include certain features found in other languages (such asobject orientationandgarbage collection), these can be implemented or emulated, often through the use of external libraries (e.g., theGLib Object Systemor theBoehm garbage collector). Many later languages have borrowed directly or indirectly from C, includingC++,C#, Unix'sC shell,D,Go,Java,JavaScript(includingtranspilers),Julia,Limbo,LPC,Objective-C,Perl,PHP,Python,Ruby,Rust,Swift,VerilogandSystemVerilog(hardware description languages).[5]These languages have drawn many of theircontrol structuresand other basic features from C. 
Most of them also express highly similarsyntaxto C, and they tend to combine the recognizable expression and statementsyntax of Cwith underlying type systems,data models, and semantics that can be radically different. The origin of C is closely tied to the development of theUnixoperating system, originally implemented inassembly languageon aPDP-7byDennis RitchieandKen Thompson, incorporating several ideas from colleagues. Eventually, they decided to port the operating system to aPDP-11. The original PDP-11 version of Unix was also developed in assembly language.[9] Thompson wanted a programming language for developing utilities for the new platform. He first tried writing aFortrancompiler, but he soon gave up the idea and instead created a cut-down version of the recently developedsystems programming languagecalledBCPL. The official description of BCPL was not available at the time,[14]and Thompson modified the syntax to be less 'wordy' and similar to a simplifiedALGOLknown as SMALGOL.[15]He called the resultB,[9]describing it as "BCPL semantics with a lot of SMALGOL syntax".[15]Like BCPL, B had abootstrappingcompiler to facilitate porting to new machines.[15]Ultimately, few utilities were written in B because it was too slow and could not take advantage of PDP-11 features such asbyteaddressability. In 1971 Ritchie started to improve B, to use the features of the more-powerful PDP-11. A significant addition was a character data type. He called thisNew B(NB).[15]Thompson started to use NB to write theUnixkernel, and his requirements shaped the direction of the language development.[15][16]Through to 1972, richer types were added to the NB language: NB had arrays ofintandchar. Pointers, the ability to generate pointers to other types, arrays of all types, and types to be returned from functions were all also added. Arrays within expressions became pointers. A new compiler was written, and the language was renamed C.[9] The C compiler and some utilities made with it were included inVersion 2 Unix, which is also known asResearch Unix.[17] AtVersion 4 Unix, released in November 1973, theUnixkernelwas extensively re-implemented in C.[9]By this time, the C language had acquired some powerful features such asstructtypes. Thepreprocessorwas introduced around 1973 at the urging ofAlan Snyderand also in recognition of the usefulness of the file-inclusion mechanisms available in BCPL andPL/I. Its original version provided only included files and simple string replacements:#includeand#defineof parameterless macros. Soon after that, it was extended, mostly byMike Leskand then by John Reiser, to incorporate macros with arguments andconditional compilation.[9] Unix was one of the first operating system kernels implemented in a language other thanassembly. Earlier instances include theMulticssystem (which was written inPL/I) andMaster Control Program(MCP) for theBurroughs B5000(which was written inALGOL) in 1961. In around 1977, Ritchie andStephen C. Johnsonmade further changes to the language to facilitateportabilityof the Unix operating system. Johnson'sPortable C Compilerserved as the basis for several implementations of C on new platforms.[16] In 1978Brian KernighanandDennis Ritchiepublished the first edition ofThe C Programming Language.[18]Known asK&Rfrom the initials of its authors, the book served for many years as an informalspecificationof the language. The version of C that it describes is commonly referred to as "K&R C". 
As this was released in 1978, it is now also referred to asC78.[19]The second edition of the book[20]covers the laterANSI Cstandard, described below. K&Rintroduced several language features: Even after the publication of the 1989 ANSI standard, for many years K&R C was still considered the "lowest common denominator" to which C programmers restricted themselves when maximum portability was desired, since many older compilers were still in use, and because carefully written K&R C code can be legal Standard C as well. In early versions of C, only functions that return types other thanintmust be declared if used before the function definition; functions used without prior declaration were presumed to return typeint. For example: Theinttype specifiers which are commented out could be omitted in K&R C, but are required in later standards. Since K&R function declarations did not include any information about function arguments, function parametertype checkswere not performed, although some compilers would issue a warning message if a local function was called with the wrong number of arguments, or if different calls to an external function used different numbers or types of arguments. Separate tools such as Unix'slintutility were developed that (among other things) could check for consistency of function use across multiple source files. In the years following the publication of K&R C, several features were added to the language, supported by compilers from AT&T (in particularPCC[21]) and some other vendors. These included: The large number of extensions and lack of agreement on astandard library, together with the language popularity and the fact that not even the Unix compilers precisely implemented the K&R specification, led to the necessity of standardization.[22] During the late 1970s and 1980s, versions of C were implemented for a wide variety ofmainframe computers,minicomputers, andmicrocomputers, including theIBM PC, as its popularity began to increase significantly. In 1983 theAmerican National Standards Institute(ANSI) formed a committee, X3J11, to establish a standard specification of C. X3J11 based the C standard on the Unix implementation; however, the non-portable portion of the Unix C library was handed off to theIEEEworking group1003 to become the basis for the 1988POSIXstandard. In 1989, the C standard was ratified as ANSI X3.159-1989 "Programming Language C". This version of the language is often referred to asANSI C, Standard C, or sometimesC89. In 1990 the ANSI C standard (with formatting changes) was adopted by theInternational Organization for Standardization(ISO) as ISO/IEC 9899:1990, which is sometimes calledC90. Therefore, the terms "C89" and "C90" refer to the same programming language. ANSI, like other national standards bodies, no longer develops the C standard independently, but defers to the international C standard, maintained by the working groupISO/IEC JTC1/SC22/WG14. National adoption of an update to the international standard typically occurs within a year of ISO publication. One of the aims of the C standardization process was to produce asupersetof K&R C, incorporating many of the subsequently introduced unofficial features. The standards committee also included several additional features such asfunction prototypes(borrowed from C++),voidpointers, support for internationalcharacter setsandlocales, and preprocessor enhancements. 
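The listing referred to by "For example" above might be sketched as follows (the function names are illustrative). Old-style declarations carry no parameter information and, under K&R C, could omit the int specifiers noted in the comments, whereas the prototype style adopted during standardization declares parameter types so that calls can be checked:

#include <stdio.h>

/* Old-style (K&R) declarations: no parameter information is given.
   Under K&R C the "int" mentioned in the comments could be omitted,
   because untyped declarations defaulted to int; later standards
   require the type specifier to be written out. */
long some_function();          /* returns long, arguments unchecked */
int  other_function();         /* the "int" could be dropped in K&R C */

/* ANSI-style prototype: the parameter types are declared as well, so
   the number and types of arguments in calls can be checked. */
int calling_function(void);

int main(void)
{
    printf("calling_function() = %d\n", calling_function());
    return 0;
}

long some_function() { return 2; }
int  other_function() { return 7; }

int calling_function(void)
{
    long test1;
    register int test2;        /* "int" likewise optional in K&R C */

    test1 = some_function();
    if (test1 > 1)
        test2 = 0;
    else
        test2 = other_function();
    return test2;
}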
Although thesyntaxfor parameter declarations was augmented to include the style used in C++, the K&R interface continued to be permitted, for compatibility with existing source code. C89 is supported by current C compilers, and most modern C code is based on it. Any program written only in Standard C and without any hardware-dependent assumptions will run correctly on anyplatformwith a conforming C implementation, within its resource limits. Without such precautions, programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such asGUIlibraries, or to a reliance on compiler- or platform-specific attributes such as the exact size of data types and byteendianness. In cases where code must be compilable by either standard-conforming or K&R C-based compilers, the__STDC__macro can be used to split the code into Standard and K&R sections to prevent the use on a K&R C-based compiler of features available only in Standard C. After the ANSI/ISO standardization process, the C language specification remained relatively static for several years. In 1995, Normative Amendment 1 to the 1990 C standard (ISO/IEC 9899/AMD1:1995, known informally as C95) was published, to correct some details and to add more extensive support for international character sets.[23] The C standard was further revised in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999, which is commonly referred to as "C99". It has since been amended three times by Technical Corrigenda.[24] C99 introduced several new features, includinginline functions, several newdata types(includinglong long intand acomplextype to representcomplex numbers),variable-length arraysandflexible array members, improved support forIEEE 754floating point, support forvariadic macros(macros of variablearity), and support for one-line comments beginning with//, as in BCPL or C++. Many of these had already been implemented as extensions in several C compilers. C99 is for the most part backward compatible with C90, but is stricter in some ways; in particular, a declaration that lacks a type specifier no longer hasintimplicitly assumed. A standard macro__STDC_VERSION__is defined with value199901Lto indicate that C99 support is available.GCC,Solaris Studio, and other C compilers now[when?]support many or all of the new features of C99. The C compiler inMicrosoft Visual C++, however, implements the C89 standard and those parts of C99 that are required for compatibility withC++11.[25][needs update] In addition, the C99 standard requires support foridentifiersusingUnicodein the form of escaped characters (e.g.\u0040or\U0001f431) and suggests support for raw Unicode names. Work began in 2007 on another revision of the C standard, informally called "C1X" until its official publication of ISO/IEC 9899:2011 on December 8, 2011. The C standards committee adopted guidelines to limit the adoption of new features that had not been tested by existing implementations. The C11 standard adds numerous new features to C and the library, including type generic macros, anonymous structures, improved Unicode support, atomic operations, multi-threading, and bounds-checked functions. It also makes some portions of the existing C99 library optional, and improves compatibility with C++. The standard macro__STDC_VERSION__is defined as201112Lto indicate that C11 support is available. C17 is an informal name for ISO/IEC 9899:2018, a standard for the C programming language published in June 2018. 
It introduces no new language features, only technical corrections, and clarifications to defects in C11. The standard macro__STDC_VERSION__is defined as201710Lto indicate that C17 support is available. C23 is an informal name for the current major C language standard revision. It was informally known as "C2X" through most of its development. C23 was published in October 2024 as ISO/IEC 9899:2024.[26]The standard macro__STDC_VERSION__is defined as202311Lto indicate that C23 support is available. C2Y is an informal name for the next major C language standard revision, after C23 (C2X), that is hoped to be released later in the 2020s, hence the '2' in "C2Y". An early working draft of C2Y was released in February 2024 as N3220 by the working groupISO/IEC JTC1/SC22/WG14.[27] Historically, embedded C programming requires non-standard extensions to the C language to support exotic features such asfixed-point arithmetic, multiple distinctmemory banks, and basic I/O operations. In 2008, the C Standards Committee published atechnical reportextending the C language[28]to address these issues by providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces, and basic I/O hardware addressing. C has aformal grammarspecified by the C standard.[29]Line endings are generally not significant in C; however, line boundaries do have significance during the preprocessing phase. Comments may appear either between the delimiters/*and*/, or (since C99) following//until the end of the line. Comments delimited by/*and*/do not nest, and these sequences of characters are not interpreted as comment delimiters if they appear insidestringor character literals.[30] C source files contain declarations and function definitions. Function definitions, in turn, contain declarations andstatements. Declarations either define new types using keywords such asstruct,union, andenum, or assign types to and perhaps reserve storage for new variables, usually by writing the type followed by the variable name. Keywords such ascharandintspecify built-in types. Sections of code are enclosed in braces ({and}, sometimes called "curly brackets") to limit the scope of declarations and to act as a single statement for control structures. As an imperative language, C usesstatementsto specify actions. The most common statement is anexpression statement, consisting of an expression to be evaluated, followed by a semicolon; as aside effectof the evaluation,functions may be calledandvariables assignednew values. To modify the normal sequential execution of statements, C provides several control-flow statements identified by reserved keywords.Structured programmingis supported byif... [else] conditional execution and bydo...while,while, andforiterative execution (looping). Theforstatement has separate initialization, testing, and reinitialization expressions, any or all of which can be omitted.breakandcontinuecan be used within the loop. Break is used to leave the innermost enclosing loop statement and continue is used to skip to its reinitialisation. There is also a non-structuredgotostatement which branches directly to the designatedlabelwithin the function.switchselects acaseto be executed based on the value of an integer expression. Different from many other languages, control-flow willfall throughto the nextcaseunless terminated by abreak. Expressions can use a variety of built-in operators and may contain function calls. 
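A short sketch of these control-flow statements (C99 or later, because of the declaration inside the for header):

#include <stdio.h>

int main(void)
{
    /* for: separate initialization, test and reinitialization parts */
    for (int i = 0; i < 10; i++) {
        if (i == 2)
            continue;          /* skip to the next iteration */
        if (i == 5)
            break;             /* leave the innermost loop   */
        printf("i = %d\n", i);
    }

    /* switch: control falls through to the next case unless a break
       (or other jump) terminates it. */
    int grade = 2;
    switch (grade) {
    case 1:                    /* case 1 falls through to case 2 */
    case 2:
        puts("low");
        break;
    case 3:
        puts("high");
        break;
    default:
        puts("unknown");
        break;
    }
    return 0;
}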
The order in which arguments to functions and operands to most operators are evaluated is unspecified. The evaluations may even be interleaved. However, all side effects (including storage to variables) will occur before the next "sequence point"; sequence points include the end of each expression statement, and the entry to and return from each function call. Sequence points also occur during evaluation of expressions containing certain operators (&&,||,?:and thecomma operator). This permits a high degree of object code optimization by the compiler, but requires C programmers to take more care to obtain reliable results than is needed for other programming languages. Kernighan and Ritchie say in the Introduction ofThe C Programming Language: "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better."[31]The C standard did not attempt to correct many of these blemishes, because of the impact of such changes on already existing software. The basic C source character set includes the following characters: Thenewlinecharacter indicates the end of a text line; it need not correspond to an actual single character, although for convenience C treats it as such. Additional multi-byte encoded characters may be used instring literals, but they are not entirelyportable. SinceC99multi-national Unicode characters can be embedded portably within C source text by using\uXXXXor\UXXXXXXXXencoding (whereXdenotes a hexadecimal character). The basic C execution character set contains the same characters, along with representations foralert,backspace, andcarriage return.Run-timesupport for extended character sets has increased with each revision of the C standard. The following reserved words arecase sensitive. C89 has 32 reserved words, also known as 'keywords', which cannot be used for any purposes other than those for which they are predefined: C99 added five more reserved words: (‡ indicates an alternative spelling alias for a C23 keyword) C11 added seven more reserved words:[32](‡ indicates an alternative spelling alias for a C23 keyword) C23 reserved fifteen more words: Most of the recently reserved words begin with an underscore followed by a capital letter, because identifiers of that form were previously reserved by the C standard for use only by implementations. Since existing program source code should not have been using these identifiers, it would not be affected when C implementations started supporting these extensions to the programming language. Some standard headers do define more convenient synonyms for underscored identifiers. Some of those words were added as keywords with their conventional spelling in C23 and the corresponding macros were removed. Prior to C89,entrywas reserved as a keyword. In the second edition of their bookThe C Programming Language, which describes what became known as C89, Kernighan and Ritchie wrote, "The ... [keyword]entry, formerly reserved but never used, is no longer reserved." and "The stillbornentrykeyword is withdrawn."[33] C supports a rich set ofoperators, which are symbols used within anexpressionto specify the manipulations to be performed while evaluating that expression. C has operators for: C uses the operator=(used in mathematics to express equality) to indicate assignment, following the precedent ofFortranandPL/I, but unlikeALGOLand its derivatives. C uses the operator==to test for equality. 
The similarity between the operators for assignment and equality may result in the accidental use of one in place of the other, and in many cases the mistake does not produce an error message (although some compilers produce warnings). For example, the conditional expressionif (a == b + 1)might mistakenly be written asif (a = b + 1), which will be evaluated astrueunless the value ofais0after the assignment.[34] The Coperator precedenceis not always intuitive. For example, the operator==binds more tightly than (is executed prior to) the operators&(bitwise AND) and|(bitwise OR) in expressions such asx & 1 == 0, which must be written as(x & 1) == 0if that is the coder's intent.[35] The "hello, world" example that appeared in the first edition ofK&Rhas become the model for an introductory program in most programming textbooks. The program prints "hello, world" to thestandard output, which is usually a terminal or screen display. The original version was:[36] A standard-conforming "hello, world" program is:[a] The first line of the program contains apreprocessing directive, indicated by#include. This causes the compiler to replace that line of code with the entire text of thestdio.hheader file, which contains declarations for standard input and output functions such asprintfandscanf. The angle brackets surroundingstdio.hindicate that the header file can be located using a search strategy that prefers headers provided with the compiler to other headers having the same name (as opposed to double quotes which typically include local or project-specific header files). The second line indicates that a function namedmainis being defined. Themainfunction serves a special purpose in C programs; therun-time environmentcalls themainfunction to begin program execution. The type specifierintindicates that the value returned to the invoker (in this case the run-time environment) as a result of evaluating themainfunction, is an integer. The keywordvoidas a parameter list indicates that themainfunction takes no arguments.[b] The opening curly brace indicates the beginning of the code that defines themainfunction. The next line of the program is a statement thatcalls(i.e. diverts execution to) a function namedprintf, which in this case is supplied from a systemlibrary. In this call, theprintffunction ispassed(i.e. provided with) a single argument, which is theaddressof the first character in thestring literal"hello, world\n". The string literal is an unnamedarrayset up automatically by the compiler, with elements of typecharand a finalNULL character(ASCII value 0) marking the end of the array (to allowprintfto determine the length of the string). The NULL character can also be written as theescape sequence\0. The\nis a standard escape sequence that C translates to anewlinecharacter, which, on output, signifies the end of the current line. The return value of theprintffunction is of typeint, but it is silently discarded since it is not used. (A more careful program might test the return value to check that theprintffunction succeeded.) The semicolon;terminates the statement. The closing curly brace indicates the end of the code for themainfunction. 
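Assembled from the description above, a standard-conforming version of the program reads:

#include <stdio.h>            /* preprocessing directive: declares printf    */

int main(void)                /* main takes no arguments and returns an int  */
{
    printf("hello, world\n"); /* write the string literal to standard output */
}                             /* reaching this } returns 0 (C99 and newer)   */

The original version shown earlier in K&R amounted to the same program without the #include line or the int and void specifiers, relying on the implicit-int rules of the time.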
According to the C99 specification and newer, themainfunction (unlike any other function) will implicitly return a value of0upon reaching the}that terminates the function.[c]The return value of0is interpreted by the run-time system as an exit code indicating successful execution of the function.[37] Thetype systemin C isstaticandweakly typed, which makes it similar to the type system ofALGOLdescendants such asPascal.[38]There are built-in types for integers of various sizes, both signed and unsigned,floating-point numbers, and enumerated types (enum). Integer typecharis often used for single-byte characters. C99 added aBoolean data type. There are also derived types includingarrays,pointers,records(struct), andunions(union). C is often used in low-level systems programming where escapes from the type system may be necessary. The compiler attempts to ensure type correctness of most expressions, but the programmer can override the checks in various ways, either by using atype castto explicitly convert a value from one type to another, or by using pointers or unions to reinterpret the underlying bits of a data object in some other way. Some find C's declaration syntax unintuitive, particularly forfunction pointers. (Ritchie's idea was to declare identifiers in contexts resembling their use: "declaration reflects use".)[39] C'susual arithmetic conversionsallow for efficient code to be generated, but can sometimes produce unexpected results. For example, a comparison of signed and unsigned integers of equal width requires a conversion of the signed value to unsigned. This can generate unexpected results if the signed value is negative. C supports the use ofpointers, a type ofreferencethat records the address or location of an object or function in memory. Pointers can bedereferencedto access data stored at the address pointed to, or to invoke a pointed-to function. Pointers can be manipulated using assignment orpointer arithmetic. The run-time representation of a pointer value is typically a raw memory address (perhaps augmented by an offset-within-word field), but since a pointer's type includes the type of the thing pointed to, expressions including pointers can be type-checked at compile time. Pointer arithmetic is automatically scaled by the size of the pointed-to data type. Pointers are used for many purposes in C.Text stringsare commonly manipulated using pointers into arrays of characters.Dynamic memory allocationis performed using pointers; the result of amallocis usuallycastto the data type of the data to be stored. Many data types, such astrees, are commonly implemented as dynamically allocatedstructobjects linked together using pointers. Pointers to other pointers are often used in multi-dimensional arrays and arrays ofstructobjects. Pointers to functions (function pointers) are useful for passing functions as arguments tohigher-order functions(such asqsortorbsearch), indispatch tables, or ascallbackstoevent handlers.[37] Anull pointervalueexplicitly points to no valid location. Dereferencing a null pointer value is undefined, often resulting in asegmentation fault. Null pointer values are useful for indicating special cases such as no "next" pointer in the final node of alinked list, or as an error indication from functions returning pointers. 
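For instance, a minimal sketch of that linked-list convention, with a null pointer marking the final node and malloc's null return checked as an error indication:

#include <stdio.h>
#include <stdlib.h>

/* A singly linked list node; a null "next" pointer marks the final node. */
struct node {
    int value;
    struct node *next;
};

int main(void)
{
    /* Build a short list on the heap: 1 -> 2 -> (end). */
    struct node *second = malloc(sizeof *second);
    struct node *first  = malloc(sizeof *first);
    if (first == NULL || second == NULL)
        return 1;                      /* malloc signals failure with NULL */

    second->value = 2;
    second->next  = NULL;              /* no "next" node: end of the list  */
    first->value  = 1;
    first->next   = second;

    /* Traverse until the null pointer is reached. */
    for (struct node *p = first; p != NULL; p = p->next)
        printf("%d\n", p->value);

    free(first);
    free(second);
    return 0;
}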
In appropriate contexts in source code, such as for assigning to a pointer variable, anull pointer constantcan be written as0, with or without explicit casting to a pointer type, as theNULLmacro defined by several standard headers or, since C23 with the constantnullptr. In conditional contexts, null pointer values evaluate tofalse, while all other pointer values evaluate totrue. Void pointers (void *) point to objects of unspecified type, and can therefore be used as "generic" data pointers. Since the size and type of the pointed-to object is not known, void pointers cannot be dereferenced, nor is pointer arithmetic on them allowed, although they can easily be (and in many contexts implicitly are) converted to and from any other object pointer type.[37] Careless use of pointers is potentially dangerous. Because they are typically unchecked, a pointer variable can be made to point to any arbitrary location, which can cause undesirable effects. Although properly used pointers point to safe places, they can be made to point to unsafe places by using invalidpointer arithmetic; the objects they point to may continue to be used after deallocation (dangling pointers); they may be used without having been initialized (wild pointers); or they may be directly assigned an unsafe value using a cast, union, or through another corrupt pointer. In general, C is permissive in allowing manipulation of and conversion between pointer types, although compilers typically provide options for various levels of checking. Some other programming languages address these problems by using more restrictivereferencetypes. Arraytypes in C are traditionally of a fixed, static size specified at compile time. The more recent C99 standard also allows a form of variable-length arrays. However, it is also possible to allocate a block of memory (of arbitrary size) at run-time, using the standard library'smallocfunction, and treat it as an array. Since arrays are always accessed (in effect) via pointers, array accesses are typicallynotchecked against the underlying array size, although some compilers may providebounds checkingas an option.[40][41]Array bounds violations are therefore possible and can lead to various repercussions, including illegal memory accesses, corruption of data,buffer overruns, and run-time exceptions. C does not have a special provision for declaringmulti-dimensional arrays, but rather relies onrecursionwithin the type system to declare arrays of arrays, which effectively accomplishes the same thing. The index values of the resulting "multi-dimensional array" can be thought of as increasing inrow-major order. Multi-dimensional arrays are commonly used in numerical algorithms (mainly from appliedlinear algebra) to store matrices. The structure of the C array is well suited to this particular task. However, in early versions of C the bounds of the array must be known fixed values or else explicitly passed to any subroutine that requires them, and dynamically sized arrays of arrays cannot be accessed using double indexing. (A workaround for this was to allocate the array with an additional "row vector" of pointers to the columns.) C99 introduced "variable-length arrays" which address this issue. 
The following example using modern C (C99 or later) shows allocation of a two-dimensional array on the heap and the use of multi-dimensional array indexing for accesses (which can use bounds-checking on many C compilers): And here is a similar implementation using C99'sAutoVLAfeature:[d] The subscript notationx[i](wherexdesignates a pointer) issyntactic sugarfor*(x+i).[42]Taking advantage of the compiler's knowledge of the pointer type, the address thatx + ipoints to is not the base address (pointed to byx) incremented byibytes, but rather is defined to be the base address incremented byimultiplied by the size of an element thatxpoints to. Thus,x[i]designates thei+1th element of the array. Furthermore, in most expression contexts (a notable exception is as operand ofsizeof), an expression of array type is automatically converted to a pointer to the array's first element. This implies that an array is never copied as a whole when named as an argument to a function, but rather only the address of its first element is passed. Therefore, although function calls in C usepass-by-valuesemantics, arrays are in effect passed byreference. The total size of an arrayxcan be determined by applyingsizeofto an expression of array type. The size of an element can be determined by applying the operatorsizeofto any dereferenced element of an arrayA, as inn = sizeof A[0]. Thus, the number of elements in a declared arrayAcan be determined assizeof A / sizeof A[0]. Note, that if only a pointer to the first element is available as it is often the case in C code because of the automatic conversion described above, the information about the full type of the array and its length are lost. One of the most important functions of a programming language is to provide facilities for managingmemoryand the objects that are stored in memory. C provides three principal ways to allocate memory for objects:[37] These three approaches are appropriate in different situations and have various trade-offs. For example, static memory allocation has little allocation overhead, automatic allocation may involve slightly more overhead, and dynamic memory allocation can potentially have a great deal of overhead for both allocation and deallocation. The persistent nature of static objects is useful for maintaining state information across function calls, automatic allocation is easy to use but stack space is typically much more limited and transient than either static memory or heap space, and dynamic memory allocation allows convenient allocation of objects whose size is known only at run-time. Most C programs make extensive use of all three. Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile-time, there are many situations in which dynamic allocation is necessary.[37]Prior to the C99 standard, variable-sized arrays were a common example of this. (See the article onC dynamic memory allocationfor an example of dynamically allocated arrays.) Unlike automatic allocation, which can fail at run time with uncontrolled consequences, the dynamic allocation functions return an indication (in the form of a null pointer value) when the required storage cannot be allocated. 
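The heap-allocated two-dimensional array example referred to above might look like the following sketch (C99 or later, since it relies on a pointer to a variable-length array type); it also checks the null pointer that the dynamic allocation function returns on failure:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n = 3, m = 4;

    /* One contiguous heap block, addressed through a pointer to a
       variable-length array type (C99) so that ordinary two-subscript
       indexing can be used. */
    double (*a)[m] = malloc(sizeof(double[n][m]));
    if (a == NULL) {                   /* dynamic allocation signals failure
                                          with a null pointer value */
        fprintf(stderr, "out of memory\n");
        return EXIT_FAILURE;
    }

    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            a[i][j] = 10.0 * i + j;

    printf("a[2][3] = %g\n", a[2][3]);

    free(a);                           /* release the heap block */
    return 0;
}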
(Static allocation that is too large is usually detected by thelinkerorloader, before the program can even begin execution.) Unless otherwise specified, static objects contain zero or null pointer values upon program startup. Automatically and dynamically allocated objects are initialized only if an initial value is explicitly specified; otherwise they initially have indeterminate values (typically, whateverbit patternhappens to be present in thestorage, which might not even represent a valid value for that type). If the program attempts to access an uninitialized value, the results are undefined. Many modern compilers try to detect and warn about this problem, but bothfalse positives and false negativescan occur. Heap memory allocation has to be synchronized with its actual usage in any program to be reused as much as possible. For example, if the only pointer to a heap memory allocation goes out of scope or has its value overwritten before it is deallocated explicitly, then that memory cannot be recovered for later reuse and is essentially lost to the program, a phenomenon known as amemory leak.Conversely, it is possible for memory to be freed, but is referenced subsequently, leading to unpredictable results. Typically, the failure symptoms appear in a portion of the program unrelated to the code that causes the error, making it difficult to diagnose the failure. Such issues are ameliorated in languages withautomatic garbage collection. The C programming language useslibrariesas its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has aheader file, which contains the prototypes of the functions contained within the library that may be used by a program, and declarations of special data types and macro symbols used with these functions. For a program to use a library, it must include the library's header file, and the library must be linked with the program, which in many cases requirescompiler flags(e.g.,-lm, shorthand for "link the math library").[37] The most common C library is theC standard library, which is specified by theISOandANSI Cstandards and comes with every C implementation (implementations which target limited environments such asembedded systemsmay provide only a subset of the standard library). This library supports stream input and output, memory allocation, mathematics, character strings, and time values. Several separate standard headers (for example,stdio.h) specify the interfaces for these and other standard library facilities. Another common set of C library functions are those used by applications specifically targeted forUnixandUnix-likesystems, especially functions which provide an interface to thekernel. These functions are detailed in various standards such asPOSIXand theSingle UNIX Specification. Since many programs have been written in C, there are a wide variety of other libraries available. Libraries are often written in C because C compilers generate efficientobject code; programmers then create interfaces to the library so that the routines can be used from higher-level languages likeJava,Perl, andPython.[37] File input and output (I/O) is not part of the C language itself but instead is handled by libraries (such as the C standard library) and their associated header files (e.g.stdio.h). File handling is generally implemented through high-level I/O which works throughstreams. 
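A minimal sketch of such stream-based file handling with the standard stdio functions (the file name is illustrative):

#include <stdio.h>

int main(void)
{
    /* Open a stream associated with a file, write a line, and close it. */
    FILE *out = fopen("example.txt", "w");
    if (out == NULL) {
        perror("fopen");
        return 1;
    }
    fprintf(out, "stream-based output: %d\n", 42);
    fclose(out);          /* flushes the stream's buffer to the file */

    /* Re-open the same file for reading and echo it to standard output. */
    FILE *in = fopen("example.txt", "r");
    if (in == NULL) {
        perror("fopen");
        return 1;
    }
    char line[128];
    while (fgets(line, sizeof line, in) != NULL)
        fputs(line, stdout);
    fclose(in);
    return 0;
}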
A stream is from this perspective a data flow that is independent of devices, while a file is a concrete device. The high-level I/O is done through the association of a stream to a file. In the C standard library, abuffer(a memory area or queue) is temporarily used to store data before it is sent to the final destination. This reduces the time spent waiting for slower devices, for example ahard driveorsolid-state drive. Low-level I/O functions are not part of the standard C library[clarification needed]but are generally part of "bare metal" programming (programming that is independent of anyoperating systemsuch as mostembedded programming). With few exceptions, implementations include low-level I/O. A number of tools have been developed to help C programmers find and fix statements with undefined behavior or possibly erroneous expressions, with greater rigor than that provided by the compiler. Automated source code checking and auditing tools exist, such asLint. A common practice is to use Lint to detect questionable code when a program is first written. Once a program passes Lint, it is then compiled using the C compiler. Also, many compilers can optionally warn about syntactically valid constructs that are likely to actually be errors.MISRA Cis a proprietary set of guidelines to avoid such questionable code, developed for embedded systems.[43] There are also compilers, libraries, and operating system level mechanisms for performing actions that are not a standard part of C, such asbounds checkingfor arrays, detection ofbuffer overflow,serialization,dynamic memorytracking, andautomatic garbage collection. Memory management checking tools likePurifyorValgrindand linking with libraries containing special versions of thememory allocation functionscan help uncover runtime errors in memory usage.[44][45] C is widely used forsystems programmingin implementingoperating systemsandembedded systemapplications.[46]This is for several reasons: C enables programmers to create efficient implementations of algorithms and data structures, because the layer of abstraction from hardware is thin, and its overhead is low, an important criterion for computationally intensive programs. For example, theGNU Multiple Precision Arithmetic Library, theGNU Scientific Library,Mathematica, andMATLABare completely or partially written in C. Many languages support calling library functions in C, for example, thePython-based frameworkNumPyuses C for the high-performance and hardware-interacting aspects. Computer games are often built from a combination of languages. C has featured significantly, especially for those games attempting to obtain best performance from computer platforms. Examples include Doom from 1993.[47] C is sometimes used as anintermediate languageby implementations of other languages. This approach may be used for portability or convenience; by using C as an intermediate language, additional machine-specific code generators are not necessary. C has some features, such as line-number preprocessor directives and optional superfluous commas at the end of initializer lists, that support compilation of generated code. However, some of C's shortcomings have prompted the development of otherC-based languagesspecifically designed for use as intermediate languages, such asC--. Also, contemporary major compilersGCCandLLVMboth feature anintermediate representationthat is not C, and those compilers support front ends for many languages including C. 
A consequence of C's wide availability and efficiency is thatcompilers, libraries andinterpretersof other programming languages are often implemented in C.[48]For example, thereference implementationsofPython,[49]Perl,[50]Ruby,[51]andPHP[52]are written in C. Historically, C was sometimes used forweb developmentusing theCommon Gateway Interface(CGI) as a "gateway" for information between the web application, the server, and the browser.[53]C may have been chosen overinterpreted languagesbecause of its speed, stability, and near-universal availability.[54]It is no longer common practice for web development to be done in C,[55]and many otherweb development languagesare popular. Applications where C-based web development continues include theHTTPconfiguration pages onrouters,IoTdevices and similar, although even here some projects have parts in higher-level languages e.g. the use ofLuawithinOpenWRT. The two most popularweb servers,Apache HTTP ServerandNginx, are both written in C. These web servers interact with the operating system, listen on TCP ports for HTTP requests, and then serve up static web content, or cause the execution of other languages handling to 'render' content such asPHP, which is itself primarily written in C. C's close-to-the-metal approach allows for the construction of these high-performance software systems. C has also been widely used to implementend-userapplications.[56]However, such applications can also be written in newer, higher-level languages. the power of assembly language and the convenience of ... assembly language While C has been popular, influential and hugely successful, it has drawbacks, including: For some purposes, restricted styles of C have been adopted, e.g.MISRA CorCERT C, in an attempt to reduce the opportunity for bugs. Databases such asCWEattempt to count the ways C etc. has vulnerabilities, along with recommendations for mitigation. There aretoolsthat can mitigate against some of the drawbacks. Contemporary C compilers include checks which may generate warnings to help identify many potential bugs. C has both directly and indirectly influenced many later languages such asC++andJava.[65]The most pervasive influence has been syntactical; all of the languages mentioned combine the statement and (more or less recognizably) expressionsyntax of Cwith type systems, data models or large-scale program structures that differ from those of C, sometimes radically. Several C or near-C interpreters exist, includingChandCINT, which can also be used for scripting. Whenobject-oriented programminglanguages became popular,C++andObjective-Cwere two different extensions of C that provided object-oriented capabilities. Both languages were originally implemented assource-to-source compilers; source code was translated into C, and then compiled with a C compiler.[66] TheC++programming language (originally named "C withClasses") was devised byBjarne Stroustrupas an approach to providingobject-orientedfunctionality with a C-like syntax.[67]C++ adds greater typing strength, scoping, and other tools useful in object-oriented programming, and permitsgeneric programmingvia templates. Nearly a superset of C, C++ now[when?]supports most of C, witha few exceptions. Objective-Cwas originally a very "thin" layer on top of C, and remains a strictsupersetof C that permits object-oriented programming using a hybrid dynamic/static typing paradigm. 
Objective-C derives its syntax from both C and Smalltalk: syntax that involves preprocessing, expressions, function declarations, and function calls is inherited from C, while the syntax for object-oriented features was originally taken from Smalltalk. In addition to C++ and Objective-C, Ch, Cilk, and Unified Parallel C are nearly supersets of C.
https://en.wikipedia.org/wiki/C_(programming_language)
Javais ahigh-level,general-purpose,memory-safe,object-orientedprogramming language. It is intended to letprogrammerswrite once, run anywhere(WORA),[18]meaning thatcompiledJava code can run on all platforms that support Java without the need to recompile.[19]Java applications are typically compiled tobytecodethat can run on anyJava virtual machine(JVM) regardless of the underlyingcomputer architecture. Thesyntaxof Java is similar toCandC++, but has fewerlow-levelfacilities than either of them. The Java runtime provides dynamic capabilities (such asreflectionand runtime code modification) that are typically not available in traditional compiled languages. Java gained popularity shortly after its release, and has been a popular programming language since then.[20]Java was the third most popular programming language in 2022[update]according toGitHub.[21]Although still widely popular, there has been a gradual decline in use of Java in recent years withother languages using JVMgaining popularity.[22] Java was designed byJames GoslingatSun Microsystems. It was released in May 1995 as a core component of Sun'sJava platform. The original andreference implementationJavacompilers, virtual machines, andclass librarieswere released by Sun underproprietary licenses. As of May 2007, in compliance with the specifications of theJava Community Process, Sun hadrelicensedmost of its Java technologies under theGPL-2.0-onlylicense.Oracle, which bought Sun in 2010, offers its ownHotSpotJava Virtual Machine. However, the officialreference implementationis theOpenJDKJVM, which is open-source software used by most developers and is the default JVM for almost all Linux distributions. Java 24is the version current as of March 2025[update]. Java 8, 11, 17, and 21 arelong-term supportversions still under maintenance. James Gosling, Mike Sheridan, andPatrick Naughtoninitiated the Java language project in June 1991.[23]Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time.[24]The language was initially calledOakafter anoaktree that stood outside Gosling's office. Later the project went by the nameGreenand was finally renamedJava, fromJava coffee, a type of coffee fromIndonesia.[25]Gosling designed Java with aC/C++-style syntax that system and application programmers would find familiar.[26] Sun Microsystems released the first public implementation as Java 1.0 in 1996.[27]It promisedwrite once, run anywhere(WORA) functionality, providing no-cost run-times on popularplatforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions. Majorweb browserssoon incorporated the ability to runJava appletswithin web pages, and Java quickly became popular. The Java 1.0 compiler was re-writtenin JavabyArthur van Hoffto comply strictly with the Java 1.0 language specification.[28]With the advent of Java 2 (released initially as J2SE 1.2 in December 1998 – 1999), new versions had multiple configurations built for different types of platforms.J2EEincluded technologies and APIs for enterprise applications typically run in server environments, while J2ME featured APIs optimized for mobile applications. The desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions asJava EE,Java ME, andJava SE, respectively. 
In 1997, Sun Microsystems approached theISO/IEC JTC 1standards body and later theEcma Internationalto formalize Java, but it soon withdrew from the process.[29][30][31]Java remains ade factostandard, controlled through theJava Community Process.[32]At one time, Sun made most of its Java implementations available without charge, despite theirproprietary softwarestatus. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System. On November 13, 2006, Sun released much of its Java virtual machine (JVM) asfree and open-source software(FOSS), under the terms of theGPL-2.0-onlylicense. On May 8, 2007, Sun finished the process, making all of its JVM's core code available underfree software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright.[33] Sun's vice-president Rich Green said that Sun's ideal role with regard to Java was as anevangelist.[34]FollowingOracle Corporation's acquisition of Sun Microsystems in 2009–10, Oracle has described itself as the steward of Java technology with a relentless commitment to fostering a community of participation and transparency.[35]This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside theAndroid SDK(see theAndroidsection). On April 2, 2010, James Gosling resigned fromOracle.[36] In January 2016, Oracle announced that Java run-time environments based on JDK 9 will discontinue the browser plugin.[37] Java software runs on most devices from laptops todata centers,game consolesto scientificsupercomputers.[38] Oracle(and others) highly recommend uninstalling outdated and unsupported versions of Java, due to unresolved security issues in older versions.[39] There were five primary goals in creating the Java language:[19] As of November 2024[update], Java 8, 11, 17, and 21 are supported aslong-term support(LTS) versions, with Java 25, releasing in September 2025, as the next scheduled LTS version.[40] Oracle released the last zero-cost public update for thelegacyversionJava 8LTS in January 2019 for commercial use, although it will otherwise still support Java 8 with public updates for personal use indefinitely. Other vendors such asAdoptiumcontinue to offer free builds of OpenJDK's long-term support (LTS) versions. These builds may include additional security patches and bug fixes.[41] Major release versions of Java, along with their release dates: Sun has defined and supports four editions of Java targeting different application environments and segmented many of itsAPIsso that they belong to one of the platforms. The platforms are: Theclassesin the Java APIs are organized into separate groups calledpackages. Each package contains a set of relatedinterfaces, classes, subpackages andexceptions. Sun also provided an edition calledPersonal Javathat has been superseded by later, standards-based Java ME configuration-profile pairings. One design goal of Java isportability, which means that programs written for the Java platform must run similarly on any combination of hardware and operating system with adequate run time support. This is achieved by compiling the Java language code to an intermediate representation calledJava bytecode, instead of directly to architecture-specificmachine code. 
Java bytecode instructions are analogous to machine code, but they are intended to be executed by avirtual machine(VM) written specifically for the host hardware.End-userscommonly use aJava Runtime Environment(JRE) installed on their device for standalone Java applications or a web browser forJava applets. Standard libraries provide a generic way to access host-specific features such as graphics,threading, andnetworking. The use of universal bytecode makes porting simple. However, the overhead ofinterpretingbytecode into machine instructions made interpreted programs almost always run more slowly than nativeexecutables.Just-in-time(JIT) compilers that compile byte-codes to machine code during runtime were introduced from an early stage. Java's Hotspot compiler is actually two compilers in one; and withGraalVM(included in e.g. Java 11, but removed as of Java 16) allowingtiered compilation.[51]Java itself is platform-independent and is adapted to the particular platform it is to run on by aJava virtual machine(JVM), which translates theJava bytecodeinto the platform's machine language.[52] Programs written in Java have a reputation for being slower and requiring more memory than those written inC++.[53][54]However, Java programs' execution speed improved significantly with the introduction ofjust-in-time compilationin 1997/1998 forJava 1.1,[55]the addition of language features supporting better code analysis (such as inner classes, the StringBuilder class, optional assertions, etc.), and optimizations in the Java virtual machine, such asHotSpotbecoming Sun's default JVM in 2000. With Java 1.5, the performance was improved with the addition of thejava.util.concurrentpackage, includinglock-freeimplementations of theConcurrentMapsand other multi-core collections, and it was improved further with Java 1.6. Some platforms offer direct hardware support for Java; there are micro controllers that can run Java bytecode in hardware instead of a software Java virtual machine,[56]and someARM-based processors could have hardware support for executing Java bytecode through theirJazelleoption, though support has mostly been dropped in current implementations of ARM. Java uses anautomatic garbage collectorto manage memory in theobject lifecycle. The programmer determines when objects are created, and the Java runtime is responsible for recovering the memory once objects are no longer in use. Once no references to an object remain, theunreachable memorybecomes eligible to be freed automatically by the garbage collector. Something similar to amemory leakmay still occur if a programmer's code holds a reference to an object that is no longer needed, typically when objects that are no longer needed are stored in containers that are still in use.[57]If methods for a non-existent object are called, anull pointerexception is thrown.[58][59] One of the ideas behind Java's automatic memory management model is that programmers can be spared the burden of having to perform manual memory management. In some languages, memory for the creation of objects is implicitly allocated on thestackor explicitly allocated and deallocated from theheap. In the latter case, the responsibility of managing memory resides with the programmer. If the program does not deallocate an object, amemory leakoccurs.[57]If the program attempts to access or deallocate memory that has already been deallocated, the result is undefined and difficult to predict, and the program is likely to become unstable or crash. 
This can be partially remedied by the use ofsmart pointers, but these add overhead and complexity. Garbage collection does not preventlogical memoryleaks, i.e. those where the memory is still referenced but never used.[57] Garbage collection may happen at any time. Ideally, it will occur when a program is idle. It is guaranteed to be triggered if there is insufficient free memory on the heap to allocate a new object; this can cause a program to stall momentarily. Explicit memory management is not possible in Java. Java does not support C/C++ stylepointer arithmetic, where object addresses can be arithmetically manipulated (e.g. by adding or subtracting an offset). This allows the garbage collector to relocate referenced objects and ensures type safety and security. As in C++ and some other object-oriented languages, variables of Java'sprimitive data typesare either stored directly in fields (for objects) or on thestack(for methods) rather than on the heap, as is commonly true for non-primitive data types (but seeescape analysis). This was a conscious decision by Java's designers for performance reasons. Java contains multiple types of garbage collectors. Since Java 9, HotSpot uses theGarbage First Garbage Collector(G1GC) as the default.[60]However, there are also several other garbage collectors that can be used to manage the heap, such as the Z Garbage Collector (ZGC) introduced in Java 11, and Shenandoah GC, introduced in Java 12 but unavailable in Oracle-produced OpenJDK builds. Shenandoah is instead available in third-party builds of OpenJDK, such asEclipse Temurin. For most applications in Java, G1GC is sufficient. In prior versions of Java, such as Java 8, theParallel Garbage Collectorwas used as the default garbage collector. Having solved the memory management problem does not relieve the programmer of the burden of handling properly other kinds of resources, like network or database connections, file handles, etc., especially in the presence of exceptions. The syntax of Java is largely influenced byC++andC. Unlike C++, which combines the syntax for structured, generic, and object-oriented programming, Java was built almost exclusively as an object-oriented language.[19]All code is written inside classes, and every data item is an object, with the exception of the primitive data types, (i.e. integers, floating-point numbers,boolean values, and characters), which are not objects for performance reasons. Java reuses some popular aspects of C++ (such as theprintfmethod). Unlike C++, Java does not supportoperator overloading[61]ormultiple inheritancefor classes, though multiple inheritance is supported forinterfaces.[62] Java usescommentssimilar to those of C++. There are three different styles of comments: a single line style marked with two slashes (//), a multiple line style opened with/*and closed with*/, and theJavadoccommenting style opened with/**and closed with*/. The Javadoc style of commenting allows the user to run the Javadoc executable to create documentation for the program and can be read by someintegrated development environments(IDEs) such asEclipseto allow developers to access documentation within the IDE. The following is a simple example of a"Hello, World!" programthat writes a message to thestandard output: Java applets were programs embedded in other applications, mainly in web pages displayed in web browsers. 
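A minimal version of the "Hello, World!" program referred to above is:

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!"); // writes the message to standard output
    }
}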
The Java applet API was deprecated with the release of Java 9 in 2017.[63][64] Java servlettechnology provides Web developers with a simple, consistent mechanism for extending the functionality of a Web server and for accessing existing business systems. Servlets areserver-sideJava EE components that generate responses to requests fromclients. Most of the time, this means generatingHTMLpages in response toHTTPrequests, although there are a number of other standard servlet classes available, for example forWebSocketcommunication. The Java servlet API has to some extent been superseded (but still used under the hood) by two standard Java technologies for web services: Typical implementations of these APIs on Application Servers or Servlet Containers use a standard servlet for handling all interactions with theHTTPrequests and responses that delegate to the web service methods for the actual business logic. JavaServer Pages (JSP) areserver-sideJava EE components that generate responses, typicallyHTMLpages, toHTTPrequests fromclients. JSPs embed Java code in an HTML page by using the specialdelimiters<%and%>. A JSP is compiled to a Javaservlet, a Java application in its own right, the first time it is accessed. After that, the generated servlet creates the response.[65] Swingis a graphical user interfacelibraryfor the Java SE platform. It is possible to specify a different look and feel through thepluggable look and feelsystem of Swing. Clones ofWindows,GTK+, andMotifare supplied by Sun.Applealso provides anAqualook and feel formacOS. Where prior implementations of these looks and feels may have been considered lacking, Swing in Java SE 6 addresses this problem by using more nativeGUI widgetdrawing routines of the underlying platforms.[66] JavaFXis asoftware platformfor creating and deliveringdesktop applications, as well asrich web applicationsthat can run across a wide variety of devices. JavaFX is intended to replaceSwingas the standardgraphical user interface(GUI) library forJava SE, but since JDK 11 JavaFX has not been in the core JDK and instead in a separate module.[67]JavaFX has support fordesktop computersandweb browsersonMicrosoft Windows,Linux, andmacOS. JavaFX does not have support for native OS look and feels.[68] In 2004,genericswere added to the Java language, as part of J2SE 5.0. Prior to the introduction of generics, each variable declaration had to be of a specific type. For container classes, for example, this is a problem because there is no easy way to create a container that accepts only specific types of objects. Either the container operates on all subtypes of a class or interface, usuallyObject, or a different container class has to be created for each contained class. Generics allow compile-time type checking without having to create many container classes, each containing almost identical code. In addition to enabling more efficient code, certain runtime exceptions are prevented from occurring, by issuing compile-time errors. If Java prevented all runtime type errors (ClassCastExceptions) from occurring, it would betype safe. In 2016, the type system of Java was provenunsoundin that it is possible to use generics to construct classes and methods that allow assignment of an instance of one class to a variable of another unrelated class. 
Such code is accepted by the compiler, but fails at run time with a class cast exception.[69] Criticisms directed at Java include the implementation of generics,[70]speed,[53]the handling of unsigned numbers,[71]the implementation of floating-point arithmetic,[72]and a history of security vulnerabilities in the primary Java VM implementationHotSpot.[73]Developers have criticized the complexity and verbosity of the Java Persistence API (JPA), a standard part of Java EE. This has led to increased adoption of higher-level abstractions like Spring Data JPA, which aims to simplify database operations and reduce boilerplate code. The growing popularity of such frameworks suggests limitations in the standard JPA implementation's ease-of-use for modern Java development.[74] TheJava Class Libraryis thestandard library, developed to support application development in Java. It is controlled byOraclein cooperation with others through theJava Community Processprogram.[75]Companies or individuals participating in this process can influence the design and development of the APIs. This process has been a subject of controversy during the 2010s.[76]The class library contains features such as: Javadoc is a comprehensive documentation system, created bySun Microsystems. It provides developers with an organized system for documenting their code. Javadoc comments have an extra asterisk at the beginning, i.e. the delimiters are/**and*/, whereas the normal multi-line comments in Java are delimited by/*and*/, and single-line comments start with//.[84] Oracle Corporationowns the official implementation of the Java SE platform, due to its acquisition ofSun Microsystemson January 27, 2010. This implementation is based on the original implementation of Java by Sun. The Oracle implementation is available forWindows,macOS,Linux, andSolaris. Because Java lacks any formal standardization recognized byEcma International, ISO/IEC, ANSI, or other third-party standards organizations, the Oracle implementation is thede facto standard. The Oracle implementation is packaged into two different distributions: The Java Runtime Environment (JRE) which contains the parts of the Java SE platform required to run Java programs and is intended for end users, and theJava Development Kit(JDK), which is intended for software developers and includes development tools such as theJava compiler,Javadoc,Jar, and adebugger. Oracle has also releasedGraalVM, a high performance Java dynamic compiler and interpreter. OpenJDKis another Java SE implementation that is licensed under the GNU GPL. The implementation started when Sun began releasing the Java source code under the GPL. As of Java SE 7, OpenJDK is the official Java reference implementation. The goal of Java is to make all implementations of Java compatible. Historically, Sun's trademark license for usage of the Java brand insists that all implementations becompatible. This resulted in a legal dispute withMicrosoftafter Sun claimed that the Microsoft implementation did not supportJava remote method invocation(RMI) orJava Native Interface(JNI) and had added platform-specific features of their own. Sun sued in 1997, and, in 2001, won a settlement of US$20 million, as well as a court order enforcing the terms of the license from Sun.[85]As a result, Microsoft no longer ships Java withWindows. Platform-independent Java is essential toJava EE, and an even more rigorous validation is required to certify an implementation. This environment enables portable server-side applications. 
The Java programming language requires the presence of a software platform in order for compiled programs to be executed. Oracle supplies theJava platformfor use with Java. TheAndroid SDKis an alternative software platform, used primarily for developingAndroid applicationswith its own GUI system. The Java language is a key pillar inAndroid, anopen sourcemobile operating system. Although Android, built on theLinux kernel, is written largely in C, theAndroid SDKuses the Java language as the basis for Android applications but does not use any of its standard GUI, SE, ME or other established Java standards.[86]The bytecode language supported by the Android SDK is incompatible with Java bytecode and runs on its own virtual machine, optimized for low-memory devices such assmartphonesandtablet computers. Depending on the Android version, the bytecode is either interpreted by theDalvik virtual machineor compiled into native code by theAndroid Runtime. Android does not provide the full Java SE standard library, although the Android SDK does include an independent implementation of a large subset of it. It supports Java 6 and some Java 7 features, offering an implementation compatible with the standard library (Apache Harmony). The use of Java-related technology in Android led to a legal dispute between Oracle and Google. On May 7, 2012, a San Francisco jury found that if APIs could be copyrighted, then Google had infringed Oracle's copyrights by the use of Java in Android devices.[87]District JudgeWilliam Alsupruled on May 31, 2012, that APIs cannot be copyrighted,[88]but this was reversed by the United States Court of Appeals for the Federal Circuit in May 2014.[89]On May 26, 2016, the district court decided in favor of Google, ruling the copyright infringement of the Java API in Android constitutes fair use.[90]In March 2018, this ruling was overturned by the Appeals Court, which sent down the case of determining the damages to federal court in San Francisco.[91]Google filed a petition forwrit of certiorariwith theSupreme Court of the United Statesin January 2019 to challenge the two rulings that were made by the Appeals Court in Oracle's favor.[92]On April 5, 2021, the Court ruled 6–2 in Google's favor, that its use of Java APIs should be consideredfair use. However, the court refused to rule on the copyrightability of APIs, choosing instead to determine their ruling by considering Java's API copyrightable "purely for argument's sake."[93]
https://en.wikipedia.org/wiki/Java_(programming_language)
Incomputer science,dynamic recompilationis a feature of someemulatorsandvirtual machines, where the system mayrecompilesome part of aprogramduring execution. By compiling during execution, the system can tailor the generated code to reflect the program's run-time environment, and potentially produce more efficientcodeby exploiting information that is not available to a traditional staticcompiler. Most dynamic recompilers are used to convert machine code between architectures at runtime. This is a task often needed in the emulation of legacy gaming platforms. In other cases, a system may employ dynamic recompilation as part of anadaptive optimizationstrategy to execute a portable program representation such asJavaor .NETCommon Language Runtimebytecodes. Full-speed debuggers also utilize dynamic recompilation to reduce the space overhead incurred in mostdeoptimizationtechniques, and other features such as dynamicthread migration. The main tasks a dynamic recompiler has to perform are: A dynamic recompiler may also perform some auxiliary tasks:
https://en.wikipedia.org/wiki/Dynamic_recompilation
Incomputing,data validationorinput validationis the process of ensuringdatahas undergonedata cleansingto confirm it hasdata quality, that is, that it is both correct and useful. It uses routines, often called "validation rules", "validation constraints", or "check routines", that check for correctness, meaningfulness, and security of data that are input to the system. The rules may be implemented through the automated facilities of adata dictionary, or by the inclusion of explicitapplication programvalidation logic of the computer and its application. This is distinct fromformal verification, which attempts to prove or disprove the correctness of algorithms for implementing a specification or property. Data validation is intended to provide certain well-defined guarantees for fitness andconsistency of datain an application or automated system. Data validation rules can be defined and designed using various methodologies, and be deployed in various contexts.[1]Their implementation can usedeclarativedata integrityrules, orprocedure-basedbusiness rules.[2] The guarantees of data validation do not necessarily include accuracy, and it is possible for data entry errors such as misspellings to be accepted as valid. Other clerical and/or computer controls may be applied to reduce inaccuracy within a system. In evaluating the basics of data validation, generalizations can be made regarding the different kinds of validation according to their scope, complexity, and purpose. For example: Data type validation is customarily carried out on one or more simple data fields. The simplest kind of data type validation verifies that the individual characters provided through user input are consistent with the expected characters of one or more known primitive data types as defined in a programming language or data storage and retrieval mechanism. For example, an integer field may require input to use only characters 0 through 9. Simple range and constraint validation may examine input for consistency with a minimum/maximum range, or consistency with a test for evaluating a sequence of characters, such as one or more tests against regular expressions. For example, a counter value may be required to be a non-negative integer, and a password may be required to meet a minimum length and contain characters from multiple categories. Code and cross-reference validation includes operations to verify that data is consistent with one or more possibly-external rules, requirements, or collections relevant to a particular organization, context or set of underlying assumptions. These additional validity constraints may involve cross-referencing supplied data with a known look-up table or directory information service such asLDAP. For example, a user-provided country code might be required to identify a current geopolitical region. Structured validation allows for the combination of other kinds of validation, along with more complex processing. Such complex processing may include the testing of conditional constraints for an entire complex data object or set of process operations within a system. Consistency validation ensures that data is logical. For example, the delivery date of an order can be prohibited from preceding its shipment date. Multiple kinds of data validation are relevant to 10-digit pre-2007ISBNs(the 2005 edition of ISO 2108 required ISBNs to have 13 digits from 2007 onwards[3]). 
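As a sketch of such checks, a 10-digit ISBN can be validated by verifying its length, its character set, and its check digit, which requires the weighted sum of all ten digits to be divisible by 11. The class below is illustrative and not taken from any particular library:

// Illustrative ISBN-10 validation: weights 10..1, last character may be 'X' (value 10);
// the weighted sum of all ten digits must be divisible by 11.
public final class Isbn10Validator {
    public static boolean isValid(String isbn) {
        if (isbn == null || isbn.length() != 10) {
            return false;                      // data type / length validation
        }
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            char c = isbn.charAt(i);
            int digit;
            if (c >= '0' && c <= '9') {
                digit = c - '0';
            } else if (c == 'X' && i == 9) {
                digit = 10;                    // 'X' is only legal as the check digit
            } else {
                return false;                  // character set validation
            }
            sum += (10 - i) * digit;           // positional weights 10, 9, ..., 1
        }
        return sum % 11 == 0;                  // check digit (consistency) validation
    }
}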
Failures or omissions in data validation can lead todata corruptionor asecurity vulnerability.[4]Data validation checks that data are fit for purpose,[5]valid, sensible, reasonable and secure before they are processed.
https://en.wikipedia.org/wiki/Data_validation
Cross-browser testingis a type of non-functionalsoftware testingwhereweb applicationsare checked for support across different browsers and devices. Cross-browser testing can also provide an objective, independent view of the status of the web application to allow the business to appreciate and understand the risks of releasing it or implementing new feature(s). Test techniques include the process of executing a web application with the intent of finding failures in different browsers and devices and verifying that the website is fit for use in all of them. In other words, Cross-browser testing is verification that web application behaves in variousweb browsersidentically[1] The term "cross-browser testing" originated in the early 2000s with the advent of various web browsers that rendered web pages in different ways and supported different web technologies.[2]As a result, this led to inconsistencies in the behavior of web applications across browsers. In the early 2010s, smartphones entered the device market, and their number began to grow significantly. According to the data from Statcounter,[3]in November 2016 the number of sessions on mobile devices equaled the number of sessions on desktop devices. As of July 2021, the number of sessions on mobile devices is already 55.4%. The widespread use of mobile devices has led to the emergence of the term "cross-device testing" Cross-browser testing involves the execution of a web application to evaluate one or more properties of interest on different browsers and devices. In general, these properties indicate the extent to which the web application under test: Cross-browser testing is usually performed by QA engineers. After the development team builds a web application or site, QA engineers evaluate the completed project. The QA engineer tests the consistency of the content and layout, such as how fonts and images display, and whether theresponsive web designworks, if applicable. Next, they check the web application or site's usability,[4]such as features, integrations with third-party services, forms, and touch input for mobile or tablets. They also test accessibility,[5]such as the presence of alt text for images or closed captioning for video. Cross-browser testing can be conducted even if the web application is partially complete. With such an approach, also called "Full-stack web development", cross-browser tests are performed by web developers as they develop elements of theuser interfaceand functionalities.
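In practice, such checks are often automated by running the same verification against several browser drivers. The sketch below assumes Selenium WebDriver and the corresponding browser drivers are installed; the page URL is hypothetical:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

// Runs the same functional check in two browsers so the results can be compared.
public class CrossBrowserCheck {
    public static void main(String[] args) {
        WebDriver[] drivers = { new ChromeDriver(), new FirefoxDriver() };
        for (WebDriver driver : drivers) {
            try {
                driver.get("https://example.com/login");              // hypothetical page
                String heading = driver.findElement(By.tagName("h1")).getText();
                System.out.println(driver.getClass().getSimpleName()
                        + " rendered heading: " + heading);
            } finally {
                driver.quit();                                        // always release the browser
            }
        }
    }
}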
https://en.wikipedia.org/wiki/Cross-browser_testing
Database testingusually consists of a layered process, including theuser interface(UI) layer, the business layer, the data access layer and the database itself. The UI layer deals with the interface design of the database, while the business layer includes databases supportingbusiness strategies. Databases, the collection of interconnected files on a server, storing information, may not deal with the sametypeof data, i.e. databases may beheterogeneous. As a result, many kinds of implementation and integrationerrorsmay occur in large database systems, which negatively affect the system's performance, reliability, consistency and security. Thus, it is important totestin order to obtain a database system which satisfies theACIDproperties (Atomicity, Consistency, Isolation, and Durability) of adatabase management system.[1] One of the most critical layers is the data access layer, which deals with databases directly during the communication process. Database testing mainly takes place at this layer and involves testing strategies such as quality control and quality assurance of the product databases.[2]Testing at these different layers is frequently used to maintain the consistency of database systems, most commonly seen in the following examples: The figure indicates the areas of testing involved during different database testing methods, such asblack-box testingandwhite-box testing. Black-box testing involves testing interfaces and the integration of the database, which includes: With the help of these techniques, the functionality of the database can be tested thoroughly. Pros and Cons of black box testing include: Test case generation in black box testing is fairly simple. Their generation is completely independent of software development and can be done in an early stage of development. As a consequence, the programmer has better knowledge of how to design the database application and uses less time for debugging. Cost for development of black box test cases is lower than development of white box test cases. The major drawback of black box testing is that it is unknown how much of the program is being tested. Also, certain errors cannot be detected.[3] White-box testing mainly deals with the internal structure of the database. The specification details are hidden from the user. The main advantage of white box testing in database testing is that coding errors are detected, so internal bugs in the database can be eliminated. The limitation of white box testing is that SQL statements are not covered. While generating test cases for database testing, the semantics of SQL statement need to be reflected in the test cases. For that purpose, a technique called WHite bOx Database Application Technique "(WHODATE)" is used. As shown in the figure, SQL statements are independently converted into GPL statements, followed by traditional white box testing to generate test cases which include SQL semantics.[4] A set fixture describes the initial state of the database before entering the testing. After setting fixtures, database behavior is tested for defined test cases. Depending on the outcome, test cases are either modified or kept as is. The "tear down" stage either results in terminating testing or continuing with other test cases.[5] For successful database testing the following workflow executed by each single test is commonly executed:
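In outline, each test prepares a known fixture, exercises the database, verifies the result, and then tears the fixture down. A minimal sketch of that workflow, assuming JUnit 5, JDBC, and an in-memory H2 database are available (the table and data are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Fixture-based database test: set up known data, exercise it, verify, tear down.
class CustomerTableTest {
    private Connection connection;

    @BeforeEach
    void setUpFixture() throws Exception {
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb");   // in-memory database
        try (Statement s = connection.createStatement()) {
            s.executeUpdate("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            s.executeUpdate("INSERT INTO customer VALUES (1, 'Alice')");
        }
    }

    @Test
    void insertedRowCanBeRead() throws Exception {
        try (Statement s = connection.createStatement();
             ResultSet rs = s.executeQuery("SELECT name FROM customer WHERE id = 1")) {
            Assertions.assertTrue(rs.next());
            Assertions.assertEquals("Alice", rs.getString("name"));
        }
    }

    @AfterEach
    void tearDownFixture() throws Exception {
        connection.close();    // with default settings, closing the last connection discards the in-memory database
    }
}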
https://en.wikipedia.org/wiki/Database_testing
Domain testingis asoftware testingtechnique that involves selecting a small number of test cases from a nearly infinite group of candidate test cases. It is one of the most widely practiced software testing techniques.Domain knowledgeplays a very critical role while testing domain-specific work.[1][2][3][4][5][6][7]
https://en.wikipedia.org/wiki/Domain_testing
Dynamic program analysisis the act ofanalyzing softwarethat involves executing aprogram– as opposed tostatic program analysis, which does not execute it. Analysis can focus on different aspects of the software including but not limited to:behavior,test coverage,performanceandsecurity. To be effective, the target program must be executed with sufficient test inputs[1]to address the ranges of possible inputs and outputs.Software testingmeasures, such ascode coverage, and tools such asmutation testing, are used to identify where testing is inadequate. Functional testing includes relatively commonprogrammingtechniques such asunit testing,integration testingandsystem testing.[2] Computing thecode coverageof a test identifies code that is not tested; not covered by a test. Although this analysis identifies code that is not tested it does not determine whether tested coded isadequatelytested. Code can be executed even if the tests do not actually verify correct behavior. Dynamic testing involves executing a program on a set of test cases. Fuzzing is a testing technique that involves executing a program on a wide variety of inputs; often these inputs are randomly generated (at least in part).Gray-box fuzzersuse code coverage to guide input generation. Dynamic symbolic execution (also known asDSEor concolic execution) involves executing a test program on a concrete input, collecting the path constraints associated with the execution, and using aconstraint solver(generally, anSMT solver) to generate new inputs that would cause the program to take a different control-flow path, thus increasing code coverage of the test suite.[3]DSE can be considered a type offuzzing("white-box" fuzzing). Dynamic data-flow analysis tracks the flow of information fromsourcestosinks. Forms of dynamic data-flow analysis include dynamic taint analysis and evendynamic symbolic execution.[4][5] Daikonis an implementation of dynamic invariant detection. Daikon runs a program, observes the values that the program computes, and then reports properties that were true over the observed executions, and thus likely true over all executions. Dynamic analysis can be used to detect security problems. For a given subset of a program’s behavior, program slicing consists of reducing the program to the minimum form that still produces the selected behavior. The reduced program is called a “slice” and is a faithful representation of the original program within the domain of the specified behavior subset. Generally, finding a slice is an unsolvable problem, but by specifying the target behavior subset by the values of a set of variables, it is possible to obtain approximate slices using a data-flow algorithm. These slices are usually used by developers during debugging to locate the source of errors. Mostperformance analysis toolsuse dynamic program analysis techniques.[citation needed] Most dynamic analysis involvesinstrumentationor transformation. Since instrumentation can affect runtime performance, interpretation of test results must account for this to avoid misidentifying a performance problem. DynInst is a runtime code-patching library that is useful in developing dynamic program analysis probes and applying them to compiled executables. Dyninst does not requiresource codeor recompilation in general, however, non-stripped executables and executables with debugging symbols are easier to instrument. Iroh.jsis a runtime code analysis library forJavaScript. 
It keeps track of the code execution path, provides runtime listeners to listen for specific executed code patterns and allows the interception and manipulation of the program's execution behavior.
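As a toy illustration of the fuzzing approach described above, a harness can feed a target routine many randomly generated inputs and check that an invariant holds; the encode/decode pair below is a hypothetical stand-in for real code under test:

import java.util.Arrays;
import java.util.Random;

// Naive random-input fuzzing: run the target on many generated inputs and
// check that an invariant holds (here: encode/decode round-trips losslessly).
public class RoundTripFuzzer {
    public static void main(String[] args) {
        Random random = new Random(42);                 // fixed seed for reproducibility
        for (int run = 0; run < 10_000; run++) {
            byte[] input = new byte[random.nextInt(64)];
            random.nextBytes(input);
            byte[] output = decode(encode(input));      // hypothetical functions under test
            if (!Arrays.equals(input, output)) {
                throw new AssertionError("Round-trip failed on run " + run);
            }
        }
        System.out.println("No invariant violations found.");
    }

    // Stand-ins for the real code under test.
    private static byte[] encode(byte[] data) { return data.clone(); }
    private static byte[] decode(byte[] data) { return data.clone(); }
}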
https://en.wikipedia.org/wiki/Dynamic_program_analysis
Insoftware engineering,graphical user interface testingis the process oftestinga product'sgraphical user interface(GUI) to ensure it meets its specifications. This is normally done through the use of a variety oftest cases. To generate a set oftest cases,test designersattempt to cover all the functionality of the system and fully exercise theGUIitself. The difficulty in accomplishing this task is twofold: to deal with domain size and sequences. In addition, the tester faces more difficulty when they have to doregression testing. Unlike aCLI(command-line interface) system, a GUI may have additional operations that need to be tested. A relatively small program such asMicrosoftWordPadhas 325 possible GUI operations.[1]In a large program, the number of operations can easily be anorder of magnitudelarger. The second problem is the sequencing problem. Some functionality of the system may only be accomplished with a sequence of GUI events. For example, to open a file a user may first have to click on the File Menu, then select the Open operation, use a dialog box to specify the file name, and focus the application on the newly opened window. Increasing the number of possible operations increases the sequencing problem exponentially. This can become a serious issue when the tester is creating test cases manually. Regression testingis often a challenge with GUIs as well. A GUI may change significantly, even though the underlying application does not. A test designed to follow a certain path through the GUI may then fail since a button, menu item, or dialog may have changed location or appearance. These issues have driven the GUI testing problem domain towards automation. Many different techniques have been proposed to automatically generatetest suitesthat are complete and that simulate user behavior. Most of the testing techniques attempt to build on those previously used to test CLI programs, but these can have scaling problems when applied to GUIs. For example,finite-state-machine-based modeling[2][3]– where a system is modeled as a finite-state machine and a program is used to generate test cases that exercise all states – can work well on a system that has a limited number of states but may become overly complex and unwieldy for a GUI (see alsomodel-based testing). A novel approach to test suite generation, adapted from a CLI technique[4]involves using a planning system.[5]Planning is a well-studied technique from theartificial intelligence(AI) domain that attempts to solve problems that involve four parameters: Planning systemsdetermine a path from the initial state to the goal state by using the operators. As a simple example of a planning problem, given two words and a single operation which replaces a single letter in a word with another, the goal might be to change one word into another. In[1]the authors used the planner IPP[6]to demonstrate this technique. The system's UI is first analyzed to determine the possible operations. These become the operators used in the planning problem. Next an initial system state is determined, and a goal state is specified that the tester feels would allow exercising of the system. The planning system determines a path from the initial state to the goal state, which becomes the test plan. Using a planner to generate the test cases has some specific advantages over manual generation. 
A planning system, by its very nature, generates solutions to planning problems in a way that is very beneficial to the tester: When manually creating a test suite, the tester is more focused on how to test a function (i. e. the specific path through the GUI). By using a planning system, the path is taken care of and the tester can focus on what function to test. An additional benefit of this is that a planning system is not restricted in any way when generating the path and may often find a path that was never anticipated by the tester. This problem is a very important one to combat.[7] Another method of generating GUI test cases simulates a novice user. An expert user of a system tends to follow a direct and predictable path through a GUI, whereas a novice user would follow a more random path. A novice user is then likely to explore more possible states of the GUI than an expert. The difficulty lies in generating test suites that simulate 'novice' system usage. Usinggenetic algorithmshave been proposed to solve this problem.[7]Novice paths through the system are not random paths. First, a novice user will learn over time and generally would not make the same mistakes repeatedly, and, secondly, a novice user is following a plan and probably has some domain or system knowledge. Genetic algorithms work as follows: a set of 'genes' are created randomly and then are subjected to some task. The genes that complete the task best are kept and the ones that do not are discarded. The process is again repeated with the surviving genes being replicated and the rest of the set filled in with more random genes. Eventually one gene (or a small set of genes if there is some threshold set) will be the only gene in the set and is naturally the best fit for the given problem. In the case of GUI testing, the method works as follows. Each gene is essentially a list of random integer values of some fixed length. Each of these genes represents a path through the GUI. For example, for a given tree of widgets, the first value in the gene (each value is called an allele) would select the widget to operate on, the following alleles would then fill in input to the widget depending on the number of possible inputs to the widget (for example a pull down list box would have one input...the selected list value). The success of the genes are scored by a criterion that rewards the best 'novice' behavior. A system to perform GUI testing for the X window system, extensible to any windowing system, was introduced by Kasik and George.[7]TheX Windowsystem provides functionality (viaXServerand its protocol) to dynamically send GUI input to and get GUI output from the program without directly using the GUI. For example, one can call XSendEvent() to simulate a click on a pull-down menu, and so forth. This system allows researchers to automate the test case generation and testing for any given application under test, in such a way that a set of novice user test cases can be created. At first the strategies were migrated and adapted from the CLI testing strategies. A popular method used in the CLI environment is capture/playback. Capture playback is a system where the system screen is "captured" as a bitmapped graphic at various times during system testing. This capturing allowed the tester to "play back" the testing process and compare the screens at the output phase of the test with expected screens. This validation could be automated since the screens would be identical if the case passed and different if the case failed. 
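A naive version of that automated comparison, assuming the captured screens are stored as image files, simply compares the expected and actual bitmaps pixel by pixel:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Compares a captured screen against the expected ("golden") screen, pixel by pixel.
public class ScreenComparator {
    public static boolean matches(File expectedFile, File actualFile) throws IOException {
        BufferedImage expected = ImageIO.read(expectedFile);
        BufferedImage actual = ImageIO.read(actualFile);
        if (expected.getWidth() != actual.getWidth()
                || expected.getHeight() != actual.getHeight()) {
            return false;                                  // different sizes cannot match
        }
        for (int y = 0; y < expected.getHeight(); y++) {
            for (int x = 0; x < expected.getWidth(); x++) {
                if (expected.getRGB(x, y) != actual.getRGB(x, y)) {
                    return false;                          // first differing pixel fails the test
                }
            }
        }
        return true;
    }
}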
Using capture/playback worked quite well in the CLI world but there are significant problems when one tries to implement it on a GUI-based system.[8]The most obvious problem one finds is that the screen in a GUI system may look different while the state of the underlying system is the same, making automated validation extremely difficult. This is because a GUI allows graphical objects to vary in appearance and placement on the screen. Fonts may be different, window colors or sizes may vary but the system output is basically the same. This would be obvious to a user, but not obvious to an automated validation system. To combat this and other problems, testers have gone 'under the hood' and collected GUI interaction data from the underlying windowing system.[9]By capturing the window 'events' into logs the interactions with the system are now in a format that is decoupled from the appearance of the GUI. Now, only the event streams are captured. There is some filtering of the event streams necessary since the streams of events are usually very detailed and most events are not directly relevant to the problem. This approach can be made easier by using anMVCarchitecture for example and making the view (i. e. the GUI here) as simple as possible while the model and the controller hold all the logic. Another approach is to use the software'sbuilt-inassistive technology, to use anHTML interfaceor athree-tier architecturethat makes it also possible to better separate the user interface from the rest of the application. Another way to run tests on a GUI is to build a driver into the GUI so that commands or events can be sent to the software from another program.[7]This method of directly sending events to and receiving events from a system is highly desirable when testing, since the input and output testing can be fully automated and user error is eliminated.
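On the Java platform, for example, such a driver can be approximated with the standard java.awt.Robot class, which injects synthetic mouse and keyboard events into the windowing system (the coordinates and keystrokes below are hypothetical, and the extended button masks assume a reasonably recent JDK):

import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

// Drives a GUI by injecting synthetic mouse and keyboard events.
public class GuiDriver {
    public static void main(String[] args) throws Exception {
        Robot robot = new Robot();
        robot.setAutoDelay(50);                         // small pause between events

        robot.mouseMove(200, 150);                      // hypothetical position of a text field
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

        robot.keyPress(KeyEvent.VK_H);                  // type "hi" into the focused widget
        robot.keyRelease(KeyEvent.VK_H);
        robot.keyPress(KeyEvent.VK_I);
        robot.keyRelease(KeyEvent.VK_I);
    }
}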
https://en.wikipedia.org/wiki/Graphical_user_interface_testing
Anindependent test organizationis an organization, person, or company that tests products, materials, software, etc. according to agreed requirements. The test organization can be affiliated with the government or universities or can be anindependent testing laboratory. They are independent because they are not affiliated with the producer nor the user of the item being tested: no commercial bias is present. These "contract testing" facilities are sometimes called "third party" testing or evaluation facilities. Many suppliers or vendors offer somechemical testing,physical testing, andsoftware testingas a free service to customers. It is common for businesses to partner with reputable suppliers: Many suppliers have certified quality management systems such asISO 9000or allow customers to conduct technical and quality audits. Data from testing is commonly shared. There is sometimes a risk that supplier testing may tend to be self-serving and not completely impartial. Large companies often have their own specialized staff and testing facilities laboratory. Corporate engineers know their products, manufacturing capabilities, logistics system, and their customers best. Cost reduction of existing products and cost avoidance for new products have been documented. Another option is to use paidconsultants,Independent contractors, and third-party test laboratories. They are commonly chosen for specialized expertise, for access to certain test equipment, for surge projects, or where independent testing is otherwise required. Many have certifications andaccreditations: ISO 9000,ISO/IEC 17025, and various governing agencies. Independent third party laboratories should not be affiliated with any supplier as such affiliation creates bias. Independent testing might have a variety of purposes, such as: There are varioustechnical standardsavailable which organizations can use to evaluate products and services.Test methodsare published by regulators or can be included inspecificationsorcontracts. International standards organizations also publish test methods: For example in software usage, the Capability Maturity Model Integration (CMMI) is a process improvement approach that “provides organizations with the essential elements of effective processes.” There are various levels attainable within CMMI, the highest of which is Level 5. Attaining this level of certification verifies that the practices of the organization are exemplary. The Testing Maturity Model (TMM) has been designed to complement CMMI and is based on best industry practices. The TMM has 2 components; firstly, a set of 5 levels that define testing capability covering maturity goals, subgoals and activities, tasks and responsibilities and secondly, an assessment model consisting of a maturity questionnaire and an assessment procedure. There is also the Test Process Improvement model from Sogeti. This supports the improvement of test processes by looking at 20 key areas and has different levels therein to enable insight into the state of the key areas. In order to satisfy the criteria stipulated in the best practice guidelines, organizations must be committed and must invest time and money to implement and adhere to the processes as defined by such guidelines. Typically, companies have a small test team which coordinates the entire testing activity. During the testing cycle, the test team is supplemented with the readily available developers.
https://en.wikipedia.org/wiki/Independent_test_organization
Manual testingis the process of manuallytesting softwarefor defects. It requires a tester to play the role of an end user where by they use most of the application's features to ensure correct behaviour. To guarantee completeness of testing, the tester often follows a writtentest planthat leads them through a set of importanttest cases. A key step in the process is testing the software for correct behavior prior to release to end users. For small scale engineering efforts (including prototypes),ad hoc testingmay be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure and simply performs testing without planning or documentation. Conversely,exploratory testing, which involves simultaneous learning, test design and test execution, explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is to gain an intuitive insight to how it feels to use the application. Large scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps.[1] A rigorous test case based approach is often traditional for large software engineering projects that follow aWaterfall model.[2]However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test case based testing.[3] Testing can be throughblack-,white-orgrey-box testing. In white-box testing the tester is concerned with the execution of the statements through the source code. In black-box testing the software is run to check for the defects and is less concerned with how the processing of the input is done. Black-box testers do not have access to the source code. Grey-box testing is concerned with running the software while having an understanding of the source code and algorithms.[4] Staticanddynamic testingapproach may also be used. Dynamic testing involves running the software. Static testing includes verifying requirements, syntax of code and any other activities that do not include actually running the code of the program. Testing can be further divided intofunctionalandnon-functional testing. In functional testing the tester would check the calculations, any link on the page, or any other field which on given input, output may be expected. Non-functional testing includes testing performance, compatibility and fitness of the system under test, its security and usability among other things. There are several stages. They are: Test automationmay be able to reduce or eliminate the cost of actual testing.[5]A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning. However, the labor that is saved in actual testing must be spent instead authoring the test program. Depending on the type of application to be tested, and the automation tools that are chosen, this may require more labor than a manual approach. In addition, some testing tools present a very large amount of data, potentially creating a time-consuming task of interpreting the results. 
Things such asdevice driversandsoftware librariesmust be tested using test programs. In addition, testing of large numbers of users (performance testingandload testing) is typically simulated in software rather than performed in practice. Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces. They rely on recording of sequences of keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same way every time. Unfortunately, these recordings may not work properly when a button is moved or relabeled in a subsequent release. An automatic regression test may also be fooled if the program output varies significantly.
https://en.wikipedia.org/wiki/Manual_testing
Orthogonal array testing is a systematic, statistically driven black-box testing technique employed in the field of software testing.[1][2] The method is particularly valuable when the number of inputs to a system is large enough to make exhaustive testing impractical. Orthogonal array testing selects a subset of test cases from the large pool of potential inputs, using statistical methods to ensure that the chosen subset is representative of the whole input space. As a result, serious bugs can still be identified while the number of tests needed to find them is greatly reduced. The technique is built on orthogonal arrays:[3] structured tables of factor levels arranged so that the effect of each factor can be observed independently of the others. Each test therefore contributes new, non-redundant information, and the input combinations are organized so that the same coverage is obtained with the smallest number of experiments. The concept of orthogonal vectors in orthogonal arrays is fundamental to understanding orthogonal array testing. Orthogonal vectors possess key properties:
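To make the idea concrete, consider three two-level factors (the factor names below are hypothetical). The standard L4 orthogonal array exercises every pair of factor levels while using only four of the eight possible combinations:

// The L4(2^3) orthogonal array: 4 runs over 3 two-level factors.
// Every pair of columns contains each of the four level combinations exactly once,
// so all pairwise interactions are covered with half of the 2*2*2 = 8 exhaustive tests.
public class L4Array {
    static final int[][] RUNS = {
        {0, 0, 0},
        {0, 1, 1},
        {1, 0, 1},
        {1, 1, 0},
    };

    public static void main(String[] args) {
        String[][] levels = {
            {"Chrome", "Firefox"},        // hypothetical factor 1: browser
            {"Windows", "Linux"},         // hypothetical factor 2: operating system
            {"Wi-Fi", "Ethernet"},        // hypothetical factor 3: network
        };
        for (int[] run : RUNS) {
            System.out.printf("test: %s / %s / %s%n",
                    levels[0][run[0]], levels[1][run[1]], levels[2][run[2]]);
        }
    }
}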
https://en.wikipedia.org/wiki/Orthogonal_array_testing
Pair testing is a software development technique in which two team members work together at one keyboard to test the software application: one drives the testing while the other analyzes or reviews it. The pairing can be between a tester and a developer or business analyst, or between two testers, with both participants taking turns at the keyboard.[1] The practice is closely related to pair programming and to exploratory testing in agile software development, where two team members sit together to test the application. Working in a pair helps both members learn more about the application and makes it easier to narrow down the root cause of a problem during testing; for example, the developer in the pair can identify which portion of the source code is affected by a bug. Keeping track of this information helps in writing solid test cases and in narrowing down similar problems in the future. Pair testing is most applicable when the requirements and specifications are not very clear, or when the team is new and needs to learn the application's behavior quickly. It follows the same principles as pair programming: the two team members should be at the same level.
https://en.wikipedia.org/wiki/Pair_testing
Reverse semantic traceability(RST) is aquality controlmethod for verification improvement that helps to insure high quality ofartifactsby backward translation at each stage of thesoftware development process. Each stage of development process can be treated as a series of “translations” from one language to another. At the very beginning aprojectteam deals with customer’s requirements and expectations expressed in natural language. These customer requirements sometimes might be incomplete, vague or even contradictory to each other. The first step is specification and formalization of customer expectations, transition (“translation”) of them into a formal requirement document for the future system. Then requirements are translated intosystem architectureand step by step the project team generatescodewritten in a very formal programming language. There is always a threat of inserting mistakes, misinterpreting or losing something during the translation. Even a small defect in requirement ordesignspecificationscan cause huge amounts of defects at the late stages of the project. Sometimes such misunderstandings can lead to project failure or complete customer dissatisfaction. The highest usage scenarios of Reverse Semantic Traceability method can be: Main roles involved in RST session are: Reverse Semantic Traceability as a validation method can be applied to any project artifact, to any part of project artifact or even to a small piece of document or code. However, it is obvious that performing RST for all artifacts can createoverheadand should be well justified (for example, for medical software where possible information loss is very critical). It is a responsibility of company andproject managerto decide what amount of project artifacts will be “reverse engineered”. This amount depends on project specific details:trade-offmatrix, project and companyquality assurancepolicies. Also it depends on importance of particular artifact for project success and level of quality control applied to this artifact. Amount of RST sessions for project is defined at the project planning stage. First of all project manager should create a list of all artifacts project team will have during the project. They could be presented as a tree with dependencies and relationships. Artifacts can be present in one occurrence (likeVision document) or in several occurrences (like risks or bugs). This list can be changed later during the project but the idea behind the decisions about RST activities will be the same. The second step is to analyzedeliverableimportance for project and level of quality control for each project artifact. Importance of document is the degree of artifact impact to project success and quality of final product. It’s measured by the scale: Level of quality control is a measure that defines amount of verification and validation activities applied to artifact, and probability of miscommunication during artifact creation. Success of RST session strongly depends on correct assignment of responsible people. Reverse Semantic Traceability starts when decision that RST should be performed is made and resources for it are available. Project manager defines what documents will be an input for RST session. For example, it can be not only an artifact to restore but some background project information. It is recommended to give to reverse engineers number of words in original text so that they have an idea about amount of text they should get as a result: it can be one sentence or several pages of text. 
Though, the restored text may not contain the same number of words as original text nevertheless the values should be comparable. After that reverse engineers take the artifact and restore the original text from it. RST itself takes about 1 hour for one page text (750 words). To complete RST session, restored and original texts of artifact should be compared and quality of artifact should be assessed. Decision about artifacts rework and its amount is made based on this assessment. For assessment a group of experts is formed. Experts should be aware of project domain and be an experienced enough to assess quality level of compared artifacts. For example, business analysts will be experts for comparison of vision statement and restored vision statement from scenario. RST assessment criteria: Each of experts gives his assessment, and then the average value is calculated. Depending on this value Project Manager makes a decision should both artifacts be corrected or one of them or rework is not required. If the average RST quality level is in range from 1 to 2 the quality of artifact is poor and it is recommended not only rework of validated artifact to eliminate defects but corrections of original artifact to clear misunderstandings. In this case one more RST session after rework of artifacts is required. For artifacts that have more than 2 but less than 3 corrections of validated artifact to fix defects and eliminate information loss is required, however review of original artifact to find out if there any vague piece of information that cause misunderstandings is recommended. No additional RST sessions is needed. If the average mark is more than 3 but less than 4 then corrections of validated artifact to remove defects and insignificant information loss is supposed. If the mark is greater than 4 it means that artifact is of good quality and no special corrections or rework is required. Obviously the final decision about rework of artifacts is made by project manager and should be based on analysis of reasons of differences in texts.
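A small sketch of the assessment step, using the score ranges described above (how the exact boundary values 2, 3, and 4 are treated is an assumption here):

// Averages the experts' RST quality marks and maps the result to a rework decision,
// following the 1-2 / 2-3 / 3-4 / 4+ ranges described above (boundary handling assumed).
public class RstAssessment {
    public static String decide(double[] expertMarks) {
        double sum = 0;
        for (double mark : expertMarks) {
            sum += mark;
        }
        double average = sum / expertMarks.length;

        if (average <= 2) {
            return "Rework both artifacts and repeat the RST session";
        } else if (average < 3) {
            return "Fix the validated artifact; review the original for vague passages";
        } else if (average < 4) {
            return "Fix defects and minor information loss in the validated artifact";
        } else {
            return "Good quality; no rework required";
        }
    }
}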
https://en.wikipedia.org/wiki/Reverse_semantic_traceability
This article discusses a set of tactics useful insoftware testing. It is intended as a comprehensive list of tactical approaches tosoftware quality assurance(more widely colloquially known asquality assurance(traditionally called by the acronym "QA")) and general application of thetest method(usually just called "testing" or sometimes "developer testing"). An installation test assures that the system is installed correctly and working at actual customer's hardware. Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases. White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing, by seeing the source code) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g.in-circuit testing(ICT). While white-box testing can be applied at theunit,integrationandsystemlevels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements. Techniques used in white-box testing include: Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most importantfunction pointshave been tested.[1][unreliable source?]Code coverage as asoftware metriccan be reported as a percentage for: 100% statement coverage ensures that all code paths or branches (in terms ofcontrol flow) are executed at least once. This is helpful in ensuring correct functionality, but not sufficient since the same code may process different inputs correctly or incorrectly. Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it.[2]Black-box testing methods include:equivalence partitioning,boundary value analysis,all-pairs testing,state transition tables,decision tabletesting,fuzz testing,model-based testing,use casetesting,exploratory testingand specification-based testing. Specification-based testing aims to test the functionality of software according to the applicable requirements.[3]This level of testing usually requires thoroughtest casesto be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can befunctionalornon-functional, though usually functional. 
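A minimal sketch of such a specification-based test case, assuming JUnit is available; the price calculator is a hypothetical system under test whose internals the tester never inspects:

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;

// Black-box test: only the specified input/output behaviour is checked;
// nothing about the implementation of the (hypothetical) PriceCalculator is assumed.
class PriceCalculatorSpecTest {
    @Test
    void ordersOfOneHundredOrMoreGetTenPercentDiscount() {
        // Specification: "orders of 100.00 or more receive a 10% discount".
        Assertions.assertEquals(90.00, PriceCalculator.totalDue(100.00), 0.001);
    }

    @Test
    void smallerOrdersArePricedUnchanged() {
        Assertions.assertEquals(99.99, PriceCalculator.totalDue(99.99), 0.001);
    }

    // Stand-in for the system under test (in a real project this lives elsewhere
    // and its source would not be consulted when designing the test cases).
    static class PriceCalculator {
        static double totalDue(double orderTotal) {
            return orderTotal >= 100.00 ? orderTotal * 0.9 : orderTotal;
        }
    }
}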
Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[4] One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[5]Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested. This method of test can be applied to all levels of software testing:unit,integration,systemandacceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing as well. The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information she or he requires, and the information is expressed clearly.[6][7] At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones. Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed. Visual testing is particularly well-suited for environments that deployagile methodsin their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.[citation needed] Ad hoc testingandexploratory testingare important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important in order to document the steps taken to uncover the bug.[clarification needed][citation needed] Visual testing is gathering recognition incustomer acceptanceandusability testing, because the test can be used by many individuals involved in the development process.[citation needed]For the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developers. Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box level. 
The tester is not required to have full access to the software's source code.[2] Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, tests that require modifying a back-end data repository, such as a database or a log file, do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations.[citation needed] Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages. By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions, such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.[8] Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks in which to write tests, and continuous integration software will run tests automatically every time code is checked into a version control system. While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed test suite of testing scripts in order to be truly useful. Program testing and fault detection can be aided significantly by testing tools and debuggers. Testing/debug tools offer features such as program monitors, formatted dumps and symbolic debugging, automated GUI testing support, benchmarks, and performance analysis. Some of these features may be incorporated into a single composite tool or an Integrated Development Environment (IDE). There are generally four recognized levels of tests: unit testing, integration testing, component interface testing, and system testing. Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process as defined by the SWEBOK guide are unit, integration, and system testing, which are distinguished by the test target without implying a specific process model.[9] Other test levels are classified by the testing objective.[9] There are two different levels of tests from the perspective of customers: low-level testing (LLT) and high-level testing (HLT). LLT is a group of tests for different level components of software application or product. HLT is a group of tests for the whole software application or product.[citation needed] Unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level.
In anobject-orientedenvironment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[10] These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catchcorner casesor other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently from each other. Unit testing is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replace traditional QA focuses, it augments it. Unit testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process. Depending on the organization's expectations for software development, unit testing might includestatic code analysis,data-flow analysis, metrics analysis, peer code reviews,code coverageanalysis and othersoftware verificationpractices. Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed. Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[11] The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[12][13]The data being passed can be considered as "message packets" and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[12]Unusual data values in an interface can help explain unexpected performance in the next unit. Component interface testing is a variation ofblack-box testing,[13]with the focus on the data values beyond just the related actions of a subsystem component. System testing tests a completely integrated system to verify that the system meets its requirements.[14]For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff. Operational acceptance is used to conduct operational readiness (pre-release) of a product, service or system as part of aquality management system. 
OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or Operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests which are required to verify the non-functional aspects of the system. In addition, software testing should ensure that the system is portable and that, as well as working as expected, it does not damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[15] A common cause of software failure (real or perceived) is a lack of its compatibility with other application software, operating systems (or operating system versions, old or new), or target environments that differ greatly from the original (such as a terminal or GUI application intended to be run on the desktop now being required to become a web application, which must render in a web browser). For example, in the case of a lack of backward compatibility, this can occur because the programmers develop and test software only on the latest version of the target environment, which not all users may be running. This results in the unintended consequence that the latest work may not function on earlier versions of the target environment, or on older hardware that earlier versions of the target environment were capable of using. Sometimes such issues can be fixed by proactively abstracting operating system functionality into a separate program module or library. Sanity testing determines whether it is reasonable to proceed with further testing. Smoke testing consists of minimal attempts to operate the software, designed to determine whether there are any basic problems that will prevent it from working at all. Such tests can be used as a build verification test. Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. Regression tests can either be complete, for changes added late in the release or deemed to be risky, or be very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk. Regression testing is typically the largest test effort in commercial software development,[16] due to checking numerous details in prior software features, and even new software can be developed while using some old test cases to test parts of the new design to ensure prior functionality is still supported. Acceptance testing can mean one of two things: a smoke test used as a build acceptance test prior to further testing, or acceptance testing performed by the customer, often in their own lab environment and on their own hardware, which is known as user acceptance testing (UAT). Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site.
Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.[17] Beta testing comes after alpha testing and can be considered a form of externaluser acceptance testing. Versions of the software, known asbeta versions, are released to a limited audience outside of the programming team known as beta testers. The software is released to groups of people so that further testing can ensure the product has few faults orbugs. Beta versions can be made available to the open public to increase thefeedbackfield to a maximal number of future users and to deliver value earlier, for an extended or even indefinite period of time (perpetual beta).[citation needed] Functional testing refers to activities that verify a specific action or function of the code. These are usually found in the code requirements documentation, although some development methodologies work from use cases or user stories. Functional tests tend to answer the question of "can the user do this" or "does this particular feature work." Non-functional testing refers to aspects of the software that may not be related to a specific function or user action, such asscalabilityor otherperformance, behavior under certainconstraints, orsecurity. Testing will determine the breaking point, the point at which extremes of scalability or performance leads to unstable execution. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. Continuous testing is the process of executingautomated testsas part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[18][19]Continuous testing includes the validation of bothfunctional requirementsandnon-functional requirements; the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[20][21][22] Destructive testing attempts to cause the software or a sub-system to fail. It verifies that the software functions properly even when it receives invalid or unexpected inputs, thereby establishing therobustnessof input validation and error-management routines.[citation needed]Software fault injection, in the form offuzzing, is an example of failure testing. Various commercial non-functional testing tools are linked from thesoftware fault injectionpage; there are also numerous open-source and free software tools available that perform destructive testing. Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Load testingis primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number ofusers. This is generally referred to as softwarescalability. 
The related load testing activity, when performed as a non-functional activity, is often referred to as endurance testing. Volume testing is a way to test software functions even when certain components (for example a file or database) increase radically in size. Stress testing is a way to test reliability under unexpected or rare workloads. Stability testing (often referred to as load or endurance testing) checks whether the software can continue to function well over an acceptable period of time. There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing are often used interchangeably. Real-time software systems have strict timing constraints. To test whether timing constraints are met, real-time testing is used. Usability testing checks whether the user interface is easy to use and understand. It is concerned mainly with the use of the application. Accessibility testing may include compliance with standards such as the Americans with Disabilities Act of 1990, Section 508 of the Rehabilitation Act, and the Web Accessibility Initiative (WAI) guidelines of the World Wide Web Consortium (W3C). Security testing is essential for software that processes confidential data to prevent system intrusion by hackers. The International Organization for Standardization (ISO) defines this as a "type of testing conducted to evaluate the degree to which a test item, and associated data and information, are protected so that unauthorised persons or systems cannot use, read or modify them, and authorized persons or systems are not denied access to them."[23] The general ability of software to be internationalized and localized can be automatically tested without actual translation, by using pseudolocalization. It will verify that the application still works, even after it has been translated into a new language or adapted for a new culture (such as different currencies or time zones).[24] Actual translation to human languages must be tested, too. Possible localization failures include untranslated messages and translated text that no longer fits the space allotted to it in the user interface, among other issues. "Development testing" is a software development process that involves synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replace traditional QA focuses, it augments them. Development testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process. Depending on the organization's expectations for software development, development testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, unit testing, code coverage analysis, traceability, and other software verification practices. A/B testing is basically a comparison of two outputs, generally when only one variable has changed: run a test, change one thing, run the test again, compare the results. This is most useful in small-scale situations, but very useful in fine-tuning any program. With more complex projects, multivariate testing can be done. In concurrent testing, the focus is on the performance while continuously running with normal input and under normal operational conditions, as opposed to stress testing or fuzz testing. Memory leaks, as well as basic faults, are easier to find with this method. In software testing, conformance testing verifies that a product performs according to its specified standards.
Compilers, for instance, are extensively tested to determine whether they meet the recognized standard for that language.
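To make the A/B comparison idea described above concrete, here is a toy sketch in which the same inputs are run against two variants of a program that differ in a single changed variable, and the outputs are compared; the functions, inputs, and the compared "variable" are purely illustrative assumptions, not taken from the article.

```python
# Toy A/B comparison: run identical inputs through two variants that differ
# in exactly one variable (here, the greeting string) and report differences.
def variant(greeting):
    def run(name):
        return f"{greeting}, {name}!"   # hypothetical program output
    return run

def ab_test(run_a, run_b, inputs):
    differences = []
    for item in inputs:
        out_a, out_b = run_a(item), run_b(item)
        if out_a != out_b:
            differences.append((item, out_a, out_b))
    return differences

users = ["Ada", "Grace", "Alan"]
for item, a, b in ab_test(variant("Hello"), variant("Hi"), users):
    print(f"{item}: A={a!r}  B={b!r}")
```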
https://en.wikipedia.org/wiki/Software_testing_tactics
Test management toolsare used to store information on how testing is to be done, plan testing activities and report the status of quality assurance activities. The tools have different approaches to testing and thus have different sets of features. Generally they are used to maintain and plan manual testing, run or gather execution data from automated tests, manage multiple environments and to enter information about found defects. Test management tools offer the prospect of streamlining the testing process and allow quick access to data analysis, collaborative tools and easy communication across multiple project teams. Many test management tools incorporaterequirements managementcapabilities to streamline test case design from the requirements. Tracking of defects and project tasks are done within one application to further simplify the testing. Test management tools give teams the ability to consolidate and structure the test process using one test management tool, instead of installing multiple applications that are designed to manage only one step of the process. Test management tools allow teams to manage test case environments, automated tests, defects and project tasks. Some applications include advanced dashboards and detailed tracking of key metrics, allowing for easy tracking of progress and bug management. A test management tool that includes everything needed to manage the test process can save testers the problems of installing separate applications that are necessary for the testing process, which can also be time consuming. They can be implemented with minimal programming ability, allowing for easy installation and monitoring of the test process across multiple project groups. Once installed, teams have instant access to a user interface and can immediately start running and recording test cases. These types of applications are designed to simplify the test management process with high levels of automation and tracking built in, yet don't require advanced programming skills or knowledge to implement. They are useful for teams who manage a variety of test cases and for larger teams who need an all-inclusive application for project management. Once a project has kicked off, a test management tool tracks bug status, defects and projects tasks, and allows for collaboration across the team. When administering test cases, users can access a variety of dashboards to gain access to data instantly, making the test process efficient and accurate. The type of dashboard used is determined by the scope of the project and the information and data that needs to be extracted during the testing process. Data can be shared and accessed across multiple project teams, allowing for effective communication and collaboration throughout the testing process.
https://en.wikipedia.org/wiki/Test_management_tool
A trace table is a technique used to test algorithms in order to make sure that no logical errors occur while the calculations are being processed. The table usually takes the form of a multi-column, multi-row table, with each column showing a variable and each row showing each number input into the algorithm and the subsequent values of the variables. Trace tables are typically used in schools and colleges when teaching students how to program. They can be an essential tool in teaching students how certain calculations work and the systematic process that is occurring when an algorithm is executed. They can also be useful for debugging applications, helping the programmer to easily detect what error is occurring, and why it may be occurring. As an example of the systematic process that takes place while an algorithm is traced, consider a simple counting loop (an analogous sketch is given below). The initial value of x is zero, but i, although defined, has not been assigned a value. Thus, its initial value is unknown. As we execute the program, line by line, the values of i and x change, reflecting each statement of the source code in execution. Their new values are recorded in the trace table. When i reaches the value of 11 because of the i++ statement in the for definition, the comparison i <= 10 evaluates to false, thus halting the loop. As we also reach the end of the program, the trace table also ends. This algorithms- or data structures-related article is a stub. You can help Wikipedia by expanding it.
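The original loop is not reproduced in this text, so the following is an analogous loop, written in Python for consistency with the other sketches in this document, together with its trace table shown as comments.

```python
# An analogous counting loop and its trace table.
x = 0
i = 1                 # i receives a value here; in the description above its
                      # value before the loop begins is unknown
while i <= 10:        # corresponds to a loop header such as "for (i = 1; i <= 10; i++)"
    x = x + i
    i += 1            # the "i++" step

# Trace of i (during the pass) and x (after the pass):
#   i |  x
#   1 |  1
#   2 |  3
#   3 |  6
#  ...| ...
#  10 | 55
# After the pass with i = 10, i becomes 11, the test i <= 10 is false,
# and the loop (and the trace table) ends.
```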
https://en.wikipedia.org/wiki/Trace_table
Web testing is software testing that focuses on web applications. Complete testing of a web-based system before going live can help address issues before the system is revealed to the public. Issues may include the security of the web application, the basic functionality of the site, its accessibility to disabled and fully able users, its ability to adapt to the multitude of desktops, devices, and operating systems, as well as readiness for expected traffic and number of users and the ability to survive a massive spike in user traffic, both of which are related to load testing. A web application performance tool (WAPT) is used to test web applications and web-related interfaces. These tools are used for performance, load and stress testing of web applications, web sites, web APIs, web servers and other web interfaces. A WAPT simulates virtual users that repeat either recorded URLs or a specified URL, and allows the user to specify the number of times, or iterations, that the virtual users will repeat them. By doing so, the tool is useful for checking for bottlenecks and performance leakage in the website or web application being tested. A WAPT faces various challenges during testing and should be able to conduct tests for browser compatibility, operating system compatibility, and, where required, compatibility with windows applications. A WAPT allows a user to specify how virtual users are involved in the testing environment, i.e., an increasing, constant, or periodic user load. Increasing the user load step by step is called RAMP, where the number of virtual users is increased from 0 to hundreds. A constant user load maintains a specified user load at all times. A periodic user load increases and decreases the user load from time to time. Web security testing tells us whether web-based applications' requirements are met when they are subjected to malicious input data.[1] There is a web application security testing plug-in collection for Firefox.[2] An application programming interface (API) exposes services to other software components, which can query the API. The API implementation is in charge of computing the service and returning the result to the component that sent the query. A part of web testing focuses on testing these web API implementations. GraphQL is a specific query and API language. It is the focus of tailored testing techniques. Search-based test generation yields good results for generating test cases for GraphQL APIs.[3]
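The following is a rough sketch, not a real WAPT, of the "virtual user" and RAMP ideas described above: each virtual user repeatedly requests the same URL and the number of concurrent users is increased step by step; the target URL and the iteration counts are placeholders.

```python
# Minimal load-test sketch: ramp up concurrent virtual users against a URL.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8000/"   # placeholder system under test

def virtual_user(iterations=10):
    """One virtual user: request the URL repeatedly and record response times."""
    timings = []
    for _ in range(iterations):
        start = time.monotonic()
        with urlopen(TARGET_URL) as response:
            response.read()
        timings.append(time.monotonic() - start)
    return timings

def ramp(max_users=50, step=10):
    """RAMP-style load: raise the number of concurrent virtual users step by step."""
    for users in range(step, max_users + 1, step):
        with ThreadPoolExecutor(max_workers=users) as pool:
            per_user = list(pool.map(lambda _: virtual_user(), range(users)))
        times = [t for user_times in per_user for t in user_times]
        print(f"{users:3d} users: average response {sum(times) / len(times):.3f} s")

if __name__ == "__main__":
    ramp()
```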
https://en.wikipedia.org/wiki/Web_testing
In computer science, abstract interpretation is a theory of sound approximation of the semantics of computer programs, based on monotonic functions over ordered sets, especially lattices. It can be viewed as a partial execution of a computer program which gains information about its semantics (e.g., control-flow, data-flow) without performing all the calculations. Its main concrete application is formal static analysis, the automatic extraction of information about the possible executions of computer programs; such analyses have two main usages: inside compilers, to decide whether certain optimizations or transformations are applicable, and in debugging or certification tools, to prove the absence of certain classes of bugs. Abstract interpretation was formalized by the French computer scientists Patrick Cousot and Radhia Cousot in the late 1970s.[1][2] This section illustrates abstract interpretation by means of real-world, non-computing examples. Consider the people in a conference room. Assume a unique identifier for each person in the room, like a social security number in the United States. To prove that someone is not present, all one needs to do is see if their social security number is not on the list. Since two different people cannot have the same number, it is possible to prove or disprove the presence of a participant simply by looking up their number. However, it is possible that only the names of attendees were registered. If the name of a person is not found in the list, we may safely conclude that that person was not present; but if it is, we cannot conclude definitely without further inquiries, due to the possibility of homonyms (for example, two people named John Smith). Note that this imprecise information will still be adequate for most purposes, because homonyms are rare in practice. However, in all rigor, we cannot say for sure that somebody was present in the room; all we can say is that they were possibly here. If the person we are looking up is a criminal, we will issue an alarm; but there is of course the possibility of issuing a false alarm. Similar phenomena will occur in the analysis of programs. If we are only interested in some specific information, say, "was there a person of age n in the room?", keeping a list of all names and dates of birth is unnecessary. We may safely and without loss of precision restrict ourselves to keeping a list of the participants' ages. If this is already too much to handle, we might keep only the age of the youngest person, m, and of the oldest person, M. If the question is about an age strictly lower than m or strictly higher than M, then we may safely respond that no such participant was present. Otherwise, we may only be able to say that we do not know. In the case of computing, concrete, precise information is in general not computable within finite time and memory (see Rice's theorem and the halting problem). Abstraction is used to allow for generalized answers to questions (for example, answering "maybe" to a yes/no question, meaning "yes or no", when we (an algorithm of abstract interpretation) cannot compute the precise answer with certainty); this simplifies the problems, making them amenable to automatic solutions. One crucial requirement is to add enough vagueness so as to make problems manageable while still retaining enough precision for answering the important questions (such as "might the program crash?"). Given a programming or specification language, abstract interpretation consists of giving several semantics linked by relations of abstraction. A semantics is a mathematical characterization of a possible behavior of the program.
The most precise semantics, describing very closely the actual execution of the program, are called the concrete semantics. For instance, the concrete semantics of an imperative programming language may associate to each program the set of execution traces it may produce – an execution trace being a sequence of possible consecutive states of the execution of the program; a state typically consists of the value of the program counter and the memory locations (globals, stack and heap). More abstract semantics are then derived; for instance, one may consider only the set of reachable states in the executions (which amounts to considering the last states in finite traces). The goal of static analysis is to derive a computable semantic interpretation at some point. For instance, one may choose to represent the state of a program manipulating integer variables by forgetting the actual values of the variables and only keeping their signs (+, − or 0). For some elementary operations, such as multiplication, such an abstraction does not lose any precision: to get the sign of a product, it is sufficient to know the sign of the operands. For some other operations, the abstraction may lose precision: for instance, it is impossible to know the sign of a sum whose operands are respectively positive and negative. Sometimes a loss of precision is necessary to make the semantics decidable (see Rice's theorem and the halting problem). In general, there is a compromise to be made between the precision of the analysis and its decidability (computability), or tractability (computational cost). In practice the abstractions that are defined are tailored to both the program properties one desires to analyze, and to the set of target programs. The first large scale automated analysis of computer programs with abstract interpretation was motivated by the accident that resulted in the destruction of the first flight of the Ariane 5 rocket in 1996.[3] Let L be an ordered set, called the concrete set, and let L′ be another ordered set, called the abstract set. These two sets are related to each other by defining total functions that map elements from one to the other. A function α is called an abstraction function if it maps an element x in the concrete set L to an element α(x) in the abstract set L′. That is, the element α(x) in L′ is the abstraction of x in L. A function γ is called a concretization function if it maps an element x′ in the abstract set L′ to an element γ(x′) in the concrete set L. That is, the element γ(x′) in L is a concretization of x′ in L′. Let L1, L2, L′1, and L′2 be ordered sets. The concrete semantics f is a monotonic function from L1 to L2. A function f′ from L′1 to L′2 is said to be a valid abstraction of f if, for all x′ in L′1, we have (f ∘ γ)(x′) ≤ (γ ∘ f′)(x′). Program semantics are generally described using fixed points in the presence of loops or recursive procedures.
Suppose that L is a complete lattice and let f be a monotonic function from L into L. Then, any x′ such that f(x′) ≤ x′ is an abstraction of the least fixed point of f, which exists according to the Knaster–Tarski theorem. The difficulty is now to obtain such an x′. If L′ is of finite height, or at least verifies the ascending chain condition (all ascending sequences are ultimately stationary), then such an x′ may be obtained as the stationary limit of the ascending sequence x′n defined by induction as follows: x′0 = ⊥ (the least element of L′) and x′n+1 = f′(x′n). In other cases, it is still possible to obtain such an x′ through a (pair-)widening operator,[4] defined as a binary operator ∇ : L × L → L that over-approximates both of its arguments (x ≤ x ∇ y and y ≤ x ∇ y) and guarantees that any iteration sequence built with it becomes stationary after finitely many steps. In some cases, it is possible to define abstractions using Galois connections (α, γ), where α is from L to L′ and γ is from L′ to L. This supposes the existence of best abstractions, which is not necessarily the case. For instance, if we abstract sets of couples (x, y) of real numbers by enclosing convex polyhedra, there is no optimal abstraction to the disc defined by x² + y² ≤ 1. One can assign to each variable x available at a given program point an interval [Lx, Hx]. A state assigning the value v(x) to variable x will be a concretization of these intervals if, for all x, we have v(x) ∈ [Lx, Hx]. From the intervals [Lx, Hx] and [Ly, Hy] for variables x and y, respectively, one can easily obtain intervals for x + y (namely, [Lx + Ly, Hx + Hy]) and for x − y (namely, [Lx − Hy, Hx − Ly]); note that these are exact abstractions, since the set of possible outcomes for, say, x + y, is precisely the interval [Lx + Ly, Hx + Hy]. More complex formulas can be derived for multiplication, division, etc., yielding so-called interval arithmetics.[5] Let us now consider a very simple program in which y is assigned the value of x and z is then computed as x − y (a sketch is given below). With reasonable arithmetic types, the result for z should be zero. But if we do interval arithmetic starting from x in [0, 1], one gets z in [−1, +1]. While each of the operations taken individually was exactly abstracted, their composition isn't. The problem is evident: we did not keep track of the equality relationship between x and y; actually, this domain of intervals does not take into account any relationships between variables, and is thus a non-relational domain. Non-relational domains tend to be fast and simple to implement, but imprecise. Some examples of relational numerical abstract domains are convex polyhedra, difference-bound matrices, and octagons, as well as combinations thereof (such as the reduced product[2]). When one chooses an abstract domain, one typically has to strike a balance between keeping fine-grained relationships and high computational costs.
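A minimal sketch of the non-relational interval domain discussed above, assuming only the abstract addition and subtraction rules given in the text; it reproduces the loss of precision on the simple program in which y is set to x and z is computed as x − y.

```python
# Interval domain sketch: an abstract value is a pair (lo, hi).
def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

x = (0, 1)             # x is only known to lie in [0, 1]
y = x                  # y = x, but the domain cannot record that y equals x
z = interval_sub(x, y)
print(z)               # (-1, 1): sound, but far less precise than the true value 0
```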
While high-level languages such as Python or Haskell use unbounded integers by default, lower-level programming languages such as C or assembly language typically operate on finitely-sized machine words, which are more suitably modeled using the integers modulo 2^n (where n is the bit width of a machine word). There are several abstract domains suitable for various analyses of such variables. The bitfield domain treats each bit in a machine word separately, i.e., a word of width n is treated as an array of n abstract values. The abstract values are taken from the set {0, 1, ⊥}, and the abstraction and concretization functions are given by:[14][15] γ(0) = {0}, γ(1) = {1}, γ(⊥) = {0, 1}, α({0}) = 0, α({1}) = 1, α({0, 1}) = ⊥, α({}) = ⊥. Bitwise operations on these abstract values are identical with the corresponding logical operations in some three-valued logics (a small sketch is given below).[16] Further domains include the signed interval domain and the unsigned interval domain. All three of these domains support forwards and backwards abstract operators for common operations such as addition, shifts, xor, and multiplication. These domains can be combined using the reduced product.[17]
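A small sketch of the bitfield domain, with the "either 0 or 1" abstract value ⊥ written as None and abstract AND following the usual three-valued truth table; the encoding of a machine word as a list of abstract bits is an illustrative choice, not prescribed by the sources above.

```python
# Bitfield domain sketch: each bit is abstracted to 0, 1, or "unknown" (None).
UNKNOWN = None

def abstract_and(a, b):
    if a == 0 or b == 0:
        return 0                  # 0 AND anything is 0
    if a == 1 and b == 1:
        return 1                  # 1 AND 1 is 1
    return UNKNOWN                # otherwise the result bit is unknown

def abstract_word_and(word_a, word_b):
    # A machine word is modelled as a list of abstract bits.
    return [abstract_and(a, b) for a, b in zip(word_a, word_b)]

# 10?1 AND 0?11 -> 00?1  (? marks an unknown bit)
print(abstract_word_and([1, 0, UNKNOWN, 1], [0, UNKNOWN, 1, 1]))
# [0, 0, None, 1]
```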
https://en.wikipedia.org/wiki/Abstract_interpretation
In computer science, asimulationis a computation of the execution of some appropriately modelledstate-transition system. Typically this process models the complete state of the system at individual points in a discrete linear time frame, computing each state sequentially from its predecessor. Models for computer programs or VLSI logic designs can be very easily simulated, as they often have anoperational semanticswhich can be used directly for simulation. Symbolic simulationis a form of simulation where many possible executions of a system are considered simultaneously. This is typically achieved by augmenting the domain over which the simulation takes place. A symbolicvariablecan be used in the simulation state representation in order to index multiple executions of the system.[1]For each possible valuation of these variables, there is a concrete system state that is being indirectly simulated. Because symbolic simulation can cover many system executions in a single simulation, it can greatly reduce the size of verification problems. Techniques such assymbolic trajectory evaluation(STE) andgeneralized symbolic trajectory evaluation(GSTE) are based on this idea of symbolic simulation. Thiscomputer sciencearticle is astub. You can help Wikipedia byexpanding it.
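As a minimal illustration of the idea, the following sketch simulates a multiplexer whose select input is a symbolic variable, so that a single simulation step indexes both concrete executions at once; the representation of symbolic values is an ad hoc assumption made for this example.

```python
# Symbolic simulation sketch: a symbolic select covers two concrete executions.
class Sym:
    """A named symbolic Boolean variable."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

def mux(select, a, b):
    # With a concrete select we can pick a branch immediately; with a symbolic
    # select the result is an if-then-else expression indexed by the variable.
    if select in (0, 1):
        return a if select else b
    return ("ITE", select, a, b)

s = Sym("s")
print(mux(1, "high", "low"))   # concrete simulation: 'high'
print(mux(s, "high", "low"))   # symbolic simulation: ('ITE', s, 'high', 'low')
```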
https://en.wikipedia.org/wiki/Symbolic_simulation
Inmathematicsandcomputer science,[1]computer algebra, also calledsymbolic computationoralgebraic computation, is a scientific area that refers to the study and development ofalgorithmsandsoftwarefor manipulatingmathematical expressionsand othermathematical objects. Although computer algebra could be considered a subfield ofscientific computing, they are generally considered as distinct fields because scientific computing is usually based onnumerical computationwith approximatefloating point numbers, while symbolic computation emphasizesexactcomputation with expressions containingvariablesthat have no given value and are manipulated as symbols. Softwareapplications that perform symbolic calculations are calledcomputer algebra systems, with the termsystemalluding to the complexity of the main applications that include, at least, a method to represent mathematical data in a computer, a userprogramming language(usually different from the language used for the implementation), a dedicated memory manager, auser interfacefor the input/output of mathematical expressions, and a large set ofroutinesto perform usual operations, like simplification of expressions,differentiationusing thechain rule,polynomial factorization,indefinite integration, etc. Computer algebra is widely used to experiment in mathematics and to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, as inpublic key cryptography, or for somenon-linearproblems. Some authors distinguishcomputer algebrafromsymbolic computation, using the latter name to refer to kinds of symbolic computation other than the computation with mathematicalformulas. Some authors usesymbolic computationfor the computer-science aspect of the subject andcomputer algebrafor the mathematical aspect.[2]In some languages, the name of the field is not a direct translation of its English name. Typically, it is calledcalcul formelin French, which means "formal computation". This name reflects the ties this field has withformal methods. Symbolic computation has also been referred to, in the past, assymbolic manipulation,algebraic manipulation,symbolic processing,symbolic mathematics, orsymbolic algebra, but these terms, which also refer to non-computational manipulation, are no longer used in reference to computer algebra. There is nolearned societythat is specific to computer algebra, but this function is assumed by thespecial interest groupof theAssociation for Computing MachinerynamedSIGSAM(Special Interest Group on Symbolic and Algebraic Manipulation).[3] There are several annual conferences on computer algebra, the premier beingISSAC(International Symposium on Symbolic and Algebraic Computation), which is regularly sponsored by SIGSAM.[4] There are several journals specializing in computer algebra, the top one beingJournal of Symbolic Computationfounded in 1985 byBruno Buchberger.[5]There are also several other journals that regularly publish articles in computer algebra.[6] Asnumerical softwareis highly efficient for approximatenumerical computation, it is common, in computer algebra, to emphasizeexactcomputation with exactly represented data. Such an exact representation implies that, even when the size of the output is small, the intermediate data generated during a computation may grow in an unpredictable way. 
This behavior is called expression swell.[7] To alleviate this problem, various methods are used in the representation of the data, as well as in the algorithms that manipulate them.[8] The usual number systems used in numerical computation are floating point numbers and integers of a fixed, bounded size. Neither of these is convenient for computer algebra, due to expression swell.[9] Therefore, the basic numbers used in computer algebra are the integers of the mathematicians, commonly represented by an unbounded signed sequence of digits in some base of numeration, usually the largest base allowed by the machine word. These integers allow one to define the rational numbers, which are irreducible fractions of two integers. Programming an efficient implementation of the arithmetic operations is a hard task. Therefore, most free computer algebra systems, and some commercial ones such as Mathematica and Maple,[10][11] use the GMP library, which is thus a de facto standard. Except for numbers and variables, every mathematical expression may be viewed as the symbol of an operator followed by a sequence of operands. In computer-algebra software, the expressions are usually represented in this way. This representation is very flexible, and many things that seem not to be mathematical expressions at first glance may be represented and manipulated as such. For example, an equation is an expression with "=" as an operator, and a matrix may be represented as an expression with "matrix" as an operator and its rows as operands. Even programs may be considered and represented as expressions with operator "procedure" and, at least, two operands, the list of parameters and the body, which is itself an expression with "body" as an operator and a sequence of instructions as operands. Conversely, any mathematical expression may be viewed as a program. For example, the expression a + b may be viewed as a program for the addition, with a and b as parameters. Executing this program consists of evaluating the expression for given values of a and b; if they are not given any values, then the result of the evaluation is simply its input. This process of delayed evaluation is fundamental in computer algebra. For example, the operator "=" of the equations is also, in most computer algebra systems, the name of the program of the equality test: normally, the evaluation of an equation results in an equation, but, when an equality test is needed, either explicitly asked by the user through an "evaluation to a Boolean" command, or automatically started by the system in the case of a test inside a program, then the evaluation to a Boolean result is executed. As the size of the operands of an expression is unpredictable and may change during a working session, the sequence of the operands is usually represented as a sequence of either pointers (like in Macsyma)[13] or entries in a hash table (like in Maple). The raw application of the basic rules of differentiation with respect to x on the expression a^x gives the unsimplified result x·a^(x−1)·0 + a^x·(1·log a). A simpler expression than this is generally desired, and simplification is needed when working with general expressions. This simplification is normally done through rewriting rules.[14] There are several classes of rewriting rules to be considered. The simplest are rules that always reduce the size of the expression, like E − E → 0 or sin(0) → 0. They are systematically applied in computer algebra systems. A difficulty occurs with associative operations like addition and multiplication.
The standard way to deal with associativity is to consider that addition and multiplication have an arbitrary number of operands; that is, that a + b + c is represented as "+"(a, b, c). Thus a + (b + c) and (a + b) + c are both simplified to "+"(a, b, c), which is displayed a + b + c. In the case of expressions such as a − b + c, the simplest way is to systematically rewrite −E, E − F, E/F as, respectively, (−1)⋅E, E + (−1)⋅F, E⋅F^(−1). In other words, in the internal representation of the expressions, there is no subtraction nor division nor unary minus, outside the representation of the numbers. Another difficulty occurs with the commutativity of addition and multiplication. The problem is to quickly recognize the like terms in order to combine or cancel them. Testing every pair of terms is costly with very long sums and products. To address this, Macsyma sorts the operands of sums and products into an order that places like terms in consecutive places, allowing easy detection. In Maple, a hash function is designed for generating collisions when like terms are entered, allowing them to be combined as soon as they are introduced. This allows subexpressions that appear several times in a computation to be immediately recognized and stored only once. This saves memory and speeds up computation by avoiding repetition of the same operations on identical expressions. Some rewriting rules sometimes increase and sometimes decrease the size of the expressions to which they are applied. This is the case for the distributive law or trigonometric identities. For example, the distributive law allows rewriting (x + 1)^4 → x^4 + 4x^3 + 6x^2 + 4x + 1 and (x − 1)(x^4 + x^3 + x^2 + x + 1) → x^5 − 1. As there is no way to make a good general choice of applying or not such a rewriting rule, such rewriting is done only when explicitly invoked by the user. For the distributive law, the computer function that applies this rewriting rule is typically called "expand". The reverse rewriting rule, called "factor", requires a non-trivial algorithm, which is thus a key function in computer algebra systems (see Polynomial factorization). Some fundamental mathematical questions arise when one wants to manipulate mathematical expressions in a computer. We consider mainly the case of the multivariate rational fractions. This is not a real restriction, because, as soon as the irrational functions appearing in an expression are simplified, they are usually considered as new indeterminates. For example, an expression involving sin(x + y) and log(z^2 − 5) is viewed as a polynomial in sin(x + y) and log(z^2 − 5). There are two notions of equality for mathematical expressions. Syntactic equality is the equality of their representation in a computer. This is easy to test in a program. Semantic equality is when two expressions represent the same mathematical object, as in the case of (x + 1)^2 and x^2 + 2x + 1, which are written differently but denote the same polynomial function. It is known from Richardson's theorem that there may not exist an algorithm that decides whether two expressions representing numbers are semantically equal if exponentials and logarithms are allowed in the expressions. Accordingly, (semantic) equality may be tested only on some classes of expressions such as the polynomials and rational fractions. To test the equality of two expressions, instead of designing specific algorithms, it is usual to put expressions in some canonical form or to put their difference in a normal form, and to test the syntactic equality of the result.
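The "expand" and "factor" rewriting rules, and equality testing through a normal form, can be tried out in any computer algebra system; the following sketch uses the SymPy library purely as a convenient, freely available example, since no particular system is assumed by the text above.

```python
# Rewriting and equality testing with SymPy.
from sympy import symbols, expand, factor, simplify

x = symbols("x")

print(expand((x + 1)**4))       # x**4 + 4*x**3 + 6*x**2 + 4*x + 1
print(factor(x**5 - 1))         # (x - 1)*(x**4 + x**3 + x**2 + x + 1)

# Syntactic equality of the raw expressions fails, but putting their
# difference into a normal form shows they are semantically equal.
a = (x + 1)**2
b = x**2 + 2*x + 1
print(a == b)                   # False: different representations
print(simplify(a - b) == 0)     # True: the difference normalizes to zero
```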
In computer algebra, "canonical form" and "normal form" are not synonymous.[15] A canonical form is such that two expressions in canonical form are semantically equal if and only if they are syntactically equal, while a normal form is such that an expression in normal form is semantically zero only if it is syntactically zero. In other words, zero has a unique representation as an expression in normal form. Normal forms are usually preferred in computer algebra for several reasons. Firstly, canonical forms may be more costly to compute than normal forms. For example, to put a polynomial in canonical form, one has to expand every product through the distributive law, while it is not necessary with a normal form (see below). Secondly, it may be the case, like for expressions involving radicals, that a canonical form, if it exists, depends on some arbitrary choices and that these choices may be different for two expressions that have been computed independently. This may make the use of a canonical form impractical. Early computers, such as the ENIAC at the University of Pennsylvania, relied on human computers or programmers to reprogram them between calculations, manipulate their many physical modules (or panels), and feed their IBM card reader.[16] Female mathematicians handled the majority of the ENIAC's programming and human-guided computation; Jean Jennings, Marlyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, and Kay McNulty led these efforts.[17] In 1960, John McCarthy explored an extension of primitive recursive functions for computing symbolic expressions through the Lisp programming language while at the Massachusetts Institute of Technology.[18] Though his series on "Recursive functions of symbolic expressions and their computation by machine" remained incomplete,[19] McCarthy and his contributions to artificial intelligence programming and computer algebra via Lisp helped establish Project MAC at the Massachusetts Institute of Technology and the organization that later became the Stanford AI Laboratory (SAIL) at Stanford University, whose competition facilitated significant development in computer algebra throughout the late 20th century. Early efforts at symbolic computation, in the 1960s and 1970s, faced challenges surrounding the inefficiency of long-known algorithms when ported to computer algebra systems.[20] Predecessors to Project MAC, such as ALTRAN, sought to overcome algorithmic limitations through advancements in hardware and interpreters, while later efforts turned towards software optimization.[21] A large part of the work of researchers in the field consisted of revisiting classical algebra to increase its effectiveness while developing efficient algorithms for use in computer algebra. An example of this type of work is the computation of polynomial greatest common divisors, a task required to simplify fractions and an essential component of computer algebra. Classical algorithms for this computation, such as Euclid's algorithm, proved inefficient over infinite fields; algorithms from linear algebra faced similar struggles.[22] Thus, researchers turned to discovering methods of reducing polynomials (such as those over a ring of integers or a unique factorization domain) to a variant efficiently computable via a Euclidean algorithm.
https://en.wikipedia.org/wiki/Symbolic_computation
In computer science, a control-flow graph (CFG) is a representation, using graph notation, of all paths that might be traversed through a program during its execution. The control-flow graph was conceived by Frances E. Allen,[1] who noted that Reese T. Prosser used boolean connectivity matrices for flow analysis before.[2] The CFG is essential to many compiler optimizations and static-analysis tools. In a control-flow graph each node in the graph represents a basic block, i.e. a straight-line sequence of code with a single entry point and a single exit point, where no branches or jumps occur within the block. Basic blocks start with jump targets and end with jumps or branch instructions. Directed edges are used to represent jumps in the control flow. There are, in most presentations, two specially designated blocks: the entry block, through which control enters into the flow graph, and the exit block, through which all control flow leaves.[3] Because of its construction procedure, in a CFG, every edge A→B has the property that its source A has more than one successor, or its destination B has more than one predecessor, or both. The CFG can thus be obtained, at least conceptually, by starting from the program's (full) flow graph—i.e. the graph in which every node represents an individual instruction—and performing an edge contraction for every edge that falsifies the predicate above, i.e. contracting every edge whose source has a single exit and whose destination has a single entry. This contraction-based algorithm is of no practical importance, except as a visualization aid for understanding the CFG construction, because the CFG can be more efficiently constructed directly from the program by scanning it for basic blocks.[4] Consider a fragment of code on lines 0 to 5 in which line 1 conditionally branches to line 4 and line 3 unconditionally jumps to line 5 (a sketch of such a fragment, and of the basic-block scan, is given at the end of this article). In this fragment we have 4 basic blocks: A from 0 to 1, B from 2 to 3, C at 4 and D at 5. In particular, A is the "entry block", D the "exit block" and lines 4 and 5 are jump targets. A graph for this fragment has edges from A to B, A to C, B to D and C to D. Reachability is a graph property useful in optimization. If a subgraph is not connected from the subgraph containing the entry block, that subgraph is unreachable during any execution, and so is unreachable code; under normal conditions it can be safely removed. If the exit block is unreachable from the entry block, an infinite loop may exist. Not all infinite loops are detectable, see Halting problem. A halting order may also exist there. Unreachable code and infinite loops are possible even if the programmer does not explicitly code them: optimizations like constant propagation and constant folding followed by jump threading can collapse multiple basic blocks into one, cause edges to be removed from a CFG, etc., thus possibly disconnecting parts of the graph. A block M dominates a block N if every path from the entry that reaches block N has to pass through block M. The entry block dominates all blocks. In the reverse direction, block M postdominates block N if every path from N to the exit has to pass through block M. The exit block postdominates all blocks. It is said that a block M immediately dominates block N if M dominates N, and there is no intervening block P such that M dominates P and P dominates N. In other words, M is the last dominator on all paths from entry to N. Each block has a unique immediate dominator. Similarly, there is a notion of immediate postdominator, analogous to immediate dominator. The dominator tree is an ancillary data structure depicting the dominator relationships. There is an arc from Block M to Block N if M is an immediate dominator of N.
This graph is a tree, since each block has a unique immediate dominator. This tree is rooted at the entry block. The dominator tree can be calculated efficiently using Lengauer–Tarjan's algorithm. A postdominator tree is analogous to the dominator tree. This tree is rooted at the exit block. A back edge is an edge that points to a block that has already been met during a depth-first (DFS) traversal of the graph. Back edges are typical of loops. A critical edge is an edge which is neither the only edge leaving its source block, nor the only edge entering its destination block. These edges must be split: a new block must be created in the middle of the edge, in order to insert computations on the edge without affecting any other edges. An abnormal edge is an edge whose destination is unknown. Exception handling constructs can produce them. These edges tend to inhibit optimization. An impossible edge (also known as a fake edge) is an edge which has been added to the graph solely to preserve the property that the exit block postdominates all blocks. It cannot ever be traversed. A loop header (sometimes called the entry point of the loop) is a dominator that is the target of a loop-forming back edge. The loop header dominates all blocks in the loop body. A block may be a loop header for more than one loop. A loop may have multiple entry points, in which case it has no "loop header". Suppose block M is a dominator with several incoming edges, some of them being back edges (so M is a loop header). It is advantageous to several optimization passes to break M up into two blocks Mpre and Mloop. The contents of M and the back edges are moved to Mloop, the rest of the edges are moved to point into Mpre, and a new edge from Mpre to Mloop is inserted (so that Mpre is the immediate dominator of Mloop). In the beginning, Mpre would be empty, but passes like loop-invariant code motion could populate it. Mpre is called the loop pre-header, and Mloop would be the loop header. A reducible CFG is one with edges that can be partitioned into two disjoint sets, forward edges and back edges, such that the forward edges form a directed acyclic graph in which every node is reachable from the entry node, and, for each back edge (A, B), the target B dominates the source A.[5] Structured programming languages are often designed such that all CFGs they produce are reducible, and common structured programming statements such as IF, FOR, WHILE, BREAK, and CONTINUE produce reducible graphs. To produce irreducible graphs, statements such as GOTO are needed. Irreducible graphs may also be produced by some compiler optimizations. The loop connectedness of a CFG is defined with respect to a given depth-first search tree (DFST) of the CFG. This DFST should be rooted at the start node and cover every node of the CFG. Edges in the CFG which run from a node to one of its DFST ancestors (including itself) are called back edges. The loop connectedness is the largest number of back edges found in any cycle-free path of the CFG. In a reducible CFG, the loop connectedness is independent of the DFST chosen.[6][7] Loop connectedness has been used to reason about the time complexity of data-flow analysis.[6] While control-flow graphs represent the control flow of a single procedure, inter-procedural control-flow graphs represent the control flow of whole programs.[8]
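The following sketch illustrates the "scanning for basic blocks" construction mentioned above, applied to a hypothetical instruction encoding of a fragment shaped like the earlier example (blocks A = lines 0–1, B = 2–3, C = 4, D = 5); the encoding of instructions as (kind, target) pairs is an assumption made only for this illustration.

```python
# Build basic blocks by finding "leaders": the first instruction, every jump
# target, and every instruction that follows a branch or jump.
program = [
    ("op",  None),  # 0  A
    ("br",  4),     # 1  A: conditional branch to 4, falls through to 2
    ("op",  None),  # 2  B
    ("jmp", 5),     # 3  B: unconditional jump to 5
    ("op",  None),  # 4  C (jump target)
    ("op",  None),  # 5  D (jump target), exit
]

leaders = {0}
for pc, (kind, target) in enumerate(program):
    if kind in ("br", "jmp"):
        if target is not None:
            leaders.add(target)
        if pc + 1 < len(program):
            leaders.add(pc + 1)

starts = sorted(leaders)
ends = starts[1:] + [len(program)]
blocks = [(s, e - 1) for s, e in zip(starts, ends)]
print(blocks)   # [(0, 1), (2, 3), (4, 4), (5, 5)] -> blocks A, B, C, D
# Edges: A->B (fall-through), A->C (taken branch), B->D (jump), C->D (fall-through).
```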
https://en.wikipedia.org/wiki/Control-flow_graph
American Fuzzy Lop(AFL), stylized inall lowercaseasamerican fuzzy lop, is afree softwarefuzzerthat employsgenetic algorithmsin order to efficiently increasecode coverageof thetest cases. So far it has detected hundreds of significantsoftware bugsin major free software projects, includingX.Org Server,[2]PHP,[3]OpenSSL,[4][5]pngcrush,bash,[6]Firefox,[7]BIND,[8][9]Qt,[10]andSQLite.[11] Initially released in November 2013, AFL[12]quickly became one of the most widely used fuzzers in security research. For many years after its release, AFL has been considered a "state of the art" fuzzer.[13]AFL is considered "a de-facto standard for fuzzing",[14]and the release of AFL contributed significantly to the development of fuzzing as a research area.[15]AFL is widely used in academia; academic fuzzers are oftenforksof AFL, and AFL is commonly used as a baseline to evaluate new techniques.[16][17] Thesource codeof American fuzzy lop is published onGitHub. Its name is a reference to a breed of rabbit, theAmerican Fuzzy Lop. AFL requires the user to provide a sample command that runs the tested application and at least one small example input. The input can be fed to the tested program either via standard input or as an input file specified in the process command line. Fuzzing networked programs is currently not directly supported, although in some cases there are feasible solutions to this problem.[18]For example, in case of an audio player, American fuzzy lop can be instructed to open a short sound file with it. Then, the fuzzer attempts to actually execute the specified command and if that succeeds, it tries to reduce the input file to the smallest one that triggers the same behavior. After this initial phase, AFL begins the actual process of fuzzing by applying various modifications to the input file. When the tested programcrashesorhangs, this usually implies the discovery of a new bug, possibly asecurity vulnerability. In this case, the modified input file is saved for further user inspection. In order to maximize the fuzzing performance, American fuzzy lop expects the tested program to becompiledwith the aid of autility programthatinstrumentsthe code with helper functions which trackcontrol flow. This allows the fuzzer to detect when the target's behavior changes in response to the input. In cases when this is not possible,black-box testingis supported as well. Fuzzers attempt to find unexpected behaviors (i.e.,bugs) in a target program by repeatedly executing the program on various inputs. As described above, AFL is agray-boxfuzzer, meaning it expects instrumentation to measurecode coverageto have been injected into the target program at compile time and uses the coverage metric to direct the generation of new inputs. AFL's fuzzing algorithm has influenced many subsequent gray-box fuzzers.[20][21] The inputs to AFL are an instrumentedtarget program(thesystem under test) andcorpus, that is, a collection of inputs to the target. Inputs are also known astest cases. The algorithm maintains aqueueof inputs, which is initialized to the input corpus. The overall algorithm works as follows:[22] To generate new inputs, AFL applies variousmutationsto existing inputs.[23]These mutations are mostly agnostic to the input format of the target program; they generally treat the input as simple blob ofbinarydata. At first, AFL applies adeterministicsequence of mutations to each input. These are applied at various offsets in the input. 
They include:[24][25] After applying all available deterministic mutations, AFL moves on tohavoc, a stage where between 2 and 128 mutations are applied in a row. These mutations are any of:[23] If AFL cycles through the entire queue without generating any input that achieves new code coverage, it beginssplicing. Splicing takes two inputs from the queue,truncatesthem at arbitrary positions,concatenatesthem together, and applies the havoc stage to the result. AFL pioneered the use ofbinned hitcountsfor measuring code coverage.[28]The author claims that this technique mitigatespath explosion.[29][30] Conceptually, AFL counts the number of times a given execution of the target traverses each edge in the target'scontrol-flow graph; the documentation refers to these edges astuplesand the counts ashitcounts. At the end of the execution, the hitcounts arebinnedorbucketedinto the following eight buckets: 1, 2, 3, 4–7, 8–15, 16–31, 32–127, and 128 and greater. AFL maintains a global set of (tuple, binned count) pairs that have been produced by any execution thus far. An input is considered "interesting" and is added to the queue if it produces a (tuple, binned count) pair that is not yet in the global set. In practice, the hitcounts are collected and processed using an efficient butlossyscheme. The compile-time instrumentation injects code that is conceptually similar to the following at each branch in the control-flow graph of the target program:[31] where<COMPILE_TIME_RANDOM>is a random integer andshared_memis a 64kilobyteregion of memorysharedbetween the fuzzer and the target. This representation is more fine-grained (distinguishes between more executions) than simple block or statement coverage, but still allows for a linear-time "interestingness" test. On the assumption that smaller inputs take less time to execute, AFL attempts to minimize ortrimthe test cases in the queue.[23][32]Trimming works by removing blocks from the input; if the trimmed input still results in the same coverage (see#Measuring coverage), then the original input is discarded and the trimmed input is saved in the queue. AFL selects a subset offavoredinputs from the queue, non-favored inputs are skipped with some probability.[33][28] One of the challenges American fuzzy lop had to solve involved an efficientspawningof hundreds of processes per second. Apart from the original engine that spawned every process from scratch, American fuzzy lop offers the default engine that relies heavily on theforksystem call.[34][28]This can further be sped up by leveragingLLVMdeferred fork server mode or the similar persistent mode, but this comes at the cost of having to modify the tested program.[35]Also, American fuzzy lop supports fuzzing the same program over the network. American fuzzy lop features a colorfulcommand line interfacethat displays real-time statistics about the fuzzing process. Various settings may be triggered by either command line options orenvironment variables. Apart from that, programs may read runtime statistics from files in a machine-readable format. In addition toafl-fuzzand tools that can be used for binary instrumentation, American fuzzy lop features utility programs meant for monitoring of the fuzzing process. Apart from that, there isafl-cminandafl-tmin, which can be used for test case and test corpus minimization. This can be useful when the test cases generated byafl-fuzzwould be used by other fuzzers. 
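The per-branch instrumentation code referred to in the coverage discussion above was lost from this copy. The sketch below follows the description given there (a random per-branch constant, a 64-kilobyte shared map, and a shifted previous location); the function wrapper and the static arrays standing in for the real shared memory are illustrative only.

```cpp
#include <cstdint>

// Stand-ins for the real mechanism: in AFL, shared_mem is a 64 KiB region of
// memory shared with the fuzzer, and prev_location is per-execution state.
static uint8_t  shared_mem[64 * 1024];
static uint32_t prev_location;

// Conceptually injected at every branch of the target's control-flow graph.
// compile_time_random stands for <COMPILE_TIME_RANDOM>, a per-branch random
// constant (kept below the map size) chosen when the target is compiled.
static inline void instrument_branch(uint32_t compile_time_random) {
    uint32_t cur_location = compile_time_random;
    shared_mem[cur_location ^ prev_location]++;  // hitcount for this edge (tuple)
    prev_location = cur_location >> 1;           // shift so A->B and B->A map to different tuples
}
```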
AFL has beenforkedmany times in order to examine new fuzzing techniques, or to apply fuzzing to different kinds of programs. A few notable forks include: AFL++(AFLplusplus)[43]is a community-maintainedforkof AFL created due to the relative inactivity ofGoogle's upstream AFL development since September 2017. It includes new features and speedups.[44] Google's OSS-Fuzz initiative, which provides free fuzzing services to open source software, replaced its AFL option with AFL++ in January 2021.[45][46]
https://en.wikipedia.org/wiki/American_fuzzy_lop_(fuzzer)
Aglitchis a short-livedtechnical fault, such as a transient one that corrects itself, making it difficult to troubleshoot. The term is particularly common in thecomputingandelectronicsindustries, incircuit bending, as well as among players ofvideo games. More generally, all types of systems including humanorganizationsand nature experience glitches. A glitch, which is slight and often temporary, differs from a more seriousbugwhich is a genuine functionality-breaking problem. Alex Pieschel, writing forArcade Review, said:"'bug' is often cast as the weightier and more blameworthy pejorative, while 'glitch' suggests something more mysterious and unknowable inflicted by surprise inputs or stuff outside the realm of code".[1]The word itself is sometimes humorously described as being short for "gremlins lurking in the computer hardware".[2] Some reference books, includingRandom House's American Slang, state that the term comes from theGermanwordglitschen'to slip'[citation needed]as well as theYiddishwordglitshn'to slide, to skid'andglitsh, meaning "slippery place". Glitch was used from the 1940s by radio announcers to refer to an on-air mistake. During the following decade, the term became used by television engineers to indicate technical problems.[3] According to aWall Street Journalarticle written by Ben Zimmer,[4]theYale Universitylaw librarianFred Shapirocame up with the new earliest use of the word yet found: May 19, 1940. That was when the novelistKatharine Brushwrote aboutglitchin her column "Out of My Mind" (syndicated inThe Washington Post,The Boston Globe, and other papers). Brush corroborated Tony Randall's radio recollection: When the radio talkers make a little mistake in diction they call it a "fluff," and when they make a bad one they call it a "glitch," and I love it. Other examples from the world of radio can be found in the 1940s. The April 11, 1943, issue ofThe Washington Postcarried a review ofHelen Sioussat's book about radio broadcasting,Mikes Don't Bite. The reviewer noted an error and wrote, "In the lingo of radio, has Miss Sioussat pulled a 'muff,' 'fluff,' 'bust,' or 'glitch'?" And in a 1948 book calledThe Advertising and Business Side of Radio, Ned Midgley explained how a radio station's "traffic department" was responsible for properly scheduling items in a broadcast. "Usually most 'glitches,' as on-the-air mistakes are called, can be traced to a mistake on the part of the traffic department", Midgley wrote. In the 1950s,glitchmade the transition from radio to television. In a 1953 ad inBroadcastingmagazine, RCA boasted that their TV camera has "no more a-c power line 'glitches' (horizontal-bar interference)". And Bell Telephone ran an ad in a 1955 issue ofBillboardshowing two technicians monitoring the TV signals that were broadcast on Bell System lines: "When he talks of 'glitch' with a fellow technician, he means a low frequency interference which appears as a narrow horizontal bar moving vertically through the picture". A 1959 article inSponsor, a trade magazine for television and radio advertisers, gave another technical usage in an article about editing TV commercials by splicing tape."'Glitch' is slang for the 'momentary jiggle' that occurs at the editing point if the sync pulses don't match exactly in the splice". 
It also provided one of the earliest etymologies of the word, noting that,"'Glitch' probably comes from a German or Yiddish word meaning a slide, a glide or a slip".[citation needed] It was first widely defined for the American people byBennett Cerfon the June 20, 1965, episode ofWhat's My Lineas "a kink ... when anything goes wrong down there [Cape Kennedy], they say there's been a slight glitch". The astronautJohn Glennexplained the term in his section of the bookInto Orbit, writing that Another term we adopted to describe some of our problems was "glitch". Literally, a glitch is a spike or change in voltage in an electrical circuit which takes place when the circuit suddenly has a new load put on it. You have probably noticed a dimming of lights in your home when you turn a switch or start the dryer or the television set. Normally, these changes in voltage are protected by fuses. A glitch, however, is such a minute change in voltage that no fuse could protect against it.[5] John Daly further defined the word on the July 4, 1965, episode ofWhat's My Line, saying that it's a term used by the United States Air Force at Cape Kennedy, in the process of launching rockets, "it means something's gone wrong and you can't figure out what it is so you call it a 'glitch'". Later, on July 23, 1965,Timemagazine felt it necessary to define it in an article: "Glitches—a spaceman's word for irritating disturbances". In relation to the reference byTime, the term has been believed to enter common usage during the AmericanSpace Raceof the 1950s, where it was used to describe minor faults in the rocket hardware that were difficult to pinpoint.[6][7] Anelectronicsglitch orlogic hazardis a transition that occurs on a signal before the signal settles to its intended value, particularly in adigital circuit. Generally, this implies an electrical pulse of short duration, often due to arace conditionbetween two signals derived from a common source but with different delays. In some cases, such as a well-timedsynchronous circuit, this could be a harmless and well-tolerated effect that occurs normally in a design. In other contexts, a glitch can represent an undesirable result of a fault or design error that can produce a malfunction. Some electronic components, such asflip-flops, are triggered by a pulse that must not be shorter than a specified minimum duration in order to function correctly; a pulse shorter than the specified minimum may be called a glitch. A related concept is therunt pulse, a pulse whose amplitude is smaller than the minimum level specified for correct operation, and aspike, a short pulse similar to a glitch but often caused byringingorcrosstalk. A computer glitch is the failure of a system, usually containing a computing device, to complete its functions or to perform them properly. It frequently refers to an error which is not detected at the time it occurs but shows up later in data errors or incorrect human decisions. Situations which are frequently called computer glitches are incorrectly written software (software bugs), incorrect instructions given by the operator (operator errors, and a failure to account for this possibility might also be considered a software bug), undetected invalid input data (this might also be considered a software bug), undetected communications errors,computer viruses,Trojan attacksand computerexploiting(sometimes called "hacking"). 
Such glitches could produce problems such as keyboard malfunction, number key failures, screen abnormalities (turned left, right or upside-down), random program malfunctions, and abnormal program registering. Examples of computer glitches causing disruption include an unexpected shutdown of awater filtrationplant in New Canaan, 2010,[8]failures in theComputer Aided Dispatchsystem used by the police in Austin, resulting in unresponded 911 calls,[9]and an unexpectedbit flipcausing theCassinispacecraft to enter "safe mode" in November 2010.[10]Glitches can also be costly: in 2015, a bank was unable to raise interest rates for weeks resulting in losses of more than a million dollars per day.[11] Glitches invideo gamesmay include graphical and sound errors,collision detectionproblems, game crashes, and other issues.Quality assurance(QA) testers are commonly employed throughout the development process to find and report glitches to the programmers to be fixed, then potentially start over with a newbuildof the game.[12]If insufficient bug fixes are performed, numerous glitches and bugs can make their way to the final product.Bethesda Softworks, for example, is notorious for the amount of glitches in their games, though some players even prefer them to a glitch-free experience.[13] Some players may seek to induce glitches in a game for fun, using methods such ascartridgetilting to disrupt the data flow.[14] "Glitch hunters" are fans of a game who search for beneficial glitches that will allow them tospeedrunthe game faster, usually by skipping portions of a level, or quickly defeating enemies. One example of a speedrunning scene with large amounts of glitch hunters is theSoulsseries.[15]The use of glitches during speedruns is a controversial topic, with some frowning upon their use as subverting the intent of the developers. Those in favor of glitch use believe that using the glitches can in itself take a great deal of skill. Multiple categories of speedruns exist, with "any%" allowing the use of any type of glitch, while "glitchless" indicates the speedrun was performed without them.[16] Some games purposely include effects that look like glitches as a means tobreak the fourth walland either scare the player or put the player at unease, or otherwise as part of the game's narrative.[17]Games likeEternal DarknessandBatman: Arkham Asyluminclude segments with intentional glitches where it appears that the player's game system has failed.[18]The Animus interface in theAssassin's Creedseries, which allows the player-character to experience the memories of an ancestor though their genetic heritage, includes occasional glitches as to enforce the idea that the game is what the player-character is witnessing through a computer-aided system.[17] In broadcasting, a corrupted signal may glitch in the form of jagged lines on the screen, misplaced squares, static looking effects, freezing problems, or inverted colors. The glitches may affect the video and/or audio (usually audio dropout) or the transmission. These glitches may be caused by a variety of issues, interference from portable electronics or microwaves, damaged cables at the broadcasting center, or weather.[19] Multiple works of popular culture deal with glitches; those with the word "glitch" or derivations thereof are detailed inGlitch (disambiguation).
https://en.wikipedia.org/wiki/Glitch
Insoftware testing,monkey testingis a technique where the user tests the application or system by providingrandominputs and checking the behavior, or seeing whether the application or system will crash. Monkey testing is usually implemented as random, automatedunit tests. While the source of the name "monkey" is uncertain, it is believed by some that the name has to do with theinfinite monkey theorem,[1]which states that a monkey hitting keys atrandomon atypewriter keyboardfor an infinite amount of time willalmost surelytype a given text, such as the complete works ofWilliam Shakespeare. Some others believe that the name comes from theclassic Mac OSapplication "The Monkey" developed bySteve Cappsprior to 1983. It used journaling hooks to feed random events into Mac programs, and was used to test for bugs inMacPaint.[2] Monkey Testing is also included inAndroid Studioas part of the standard testing tools forstress testing.[3] Monkey testing can be categorized intosmart monkey testsordumb monkey tests. Smart monkeys are usually identified by the following characteristics:[4] Some smart monkeys are also referred to asbrilliant monkeys,[citation needed]which perform testing as per user's behavior and can estimate the probability of certain bugs. Dumb monkeys, also known as "ignorant monkeys", are usually identified by the following characteristics:[citation needed] Monkey testing is an effective way to identify some out-of-the-box errors. Since the scenarios tested are usuallyad-hoc, monkey testing can also be a good way to perform load and stress testing. The intrinsic randomness of monkey testing also makes it a good way to find major bugs that can break the entire system. The setup of monkey testing is easy, therefore good for any application. Smart monkeys, if properly set up with an accurate state model, can be really good at finding various kinds of bugs. The randomness of monkey testing often makes the bugs found difficult or impossible to reproduce. Unexpected bugs found by monkey testing can also be challenging and time consuming to analyze. In some systems, monkey testing can go on for a long time before finding a bug. For smart monkeys, the ability highly depends on the state model provided, and developing a good state model can be expensive.[1] While monkey testing is sometimes treated the same asfuzz testing[5]and the two terms are usually used together,[6]some believe they are different by arguing that monkey testing is more about random actions while fuzz testing is more about random data input.[7]Monkey testing is also different fromad-hoc testingin that ad-hoc testing is performed without planning and documentation and the objective of ad-hoc testing is to divide the system randomly into subparts and check their functionality, which is not the case in monkey testing.
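As a concrete illustration of the "dumb monkey" idea described above, the sketch below feeds random printable strings to a command handler and only checks that it does not crash or throw. The handle_command function and its deliberately planted fault are hypothetical.

```cpp
#include <iostream>
#include <random>
#include <stdexcept>
#include <string>

// Hypothetical component under test, with a planted fault for demonstration:
// it mishandles inputs that begin with '#' and are longer than 10 characters.
void handle_command(const std::string& input) {
    if (!input.empty() && input[0] == '#' && input.size() > 10) {
        throw std::runtime_error("unhandled long directive");
    }
    // ... normal processing would go here ...
}

int main() {
    std::mt19937 rng(12345);  // fixed seed so a failing run can be reproduced
    std::uniform_int_distribution<int> length(0, 40);
    std::uniform_int_distribution<int> printable(32, 126);  // printable ASCII

    for (int trial = 0; trial < 100000; ++trial) {
        std::string input;
        int n = length(rng);
        for (int i = 0; i < n; ++i) {
            input.push_back(static_cast<char>(printable(rng)));
        }
        try {
            handle_command(input);
        } catch (const std::exception& e) {
            std::cerr << "trial " << trial << " failed on \"" << input
                      << "\": " << e.what() << '\n';
            return 1;
        }
    }
    std::cout << "no crashes observed\n";
    return 0;
}
```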
https://en.wikipedia.org/wiki/Monkey_testing
Random testing is a black-box software testing technique where programs are tested by generating random, independent inputs. Outputs are compared against the software's specification to determine whether each test passes or fails.[1] When no specification is available, the exceptions of the language are used instead: if an exception arises during test execution, the program is considered to contain a fault. Random testing is also used as a way to avoid biased test selection. Random testing for hardware was first examined by Melvin Breuer in 1971, and an initial effort to evaluate its effectiveness was made by Pratima and Vishwani Agrawal in 1975.[2] In software, Duran and Ntafos examined random testing in 1984.[3] The use of hypothesis testing as a theoretical basis for random testing was described by Howden in Functional Testing and Analysis. The book also contained the development of a simple formula for estimating the number of tests n that are needed to have confidence at least 1 − 1/n in a failure rate of no larger than 1/n. The formula is the lower bound n log n, which indicates the large number of failure-free tests needed to have even modest confidence in a modest failure rate bound.[4] Consider a simple C++ function containing a bug (the code example has been lost from this copy; a reconstruction is sketched at the end of this article). The random tests for this function could be {123, 36, -35, 48, 0}. Only the value '-35' triggers the bug. If there is no reference implementation to check the result, the bug still could go unnoticed. However, an assertion could be added to check the results, as in the sketch at the end of this article. The reference implementation is sometimes available, e.g. when implementing a simple algorithm in a much more complex way for better performance. For example, to test an implementation of the Schönhage–Strassen algorithm, the standard "*" operation on integers can be used as the reference. While this example is limited to simple types (for which a simple random generator can be used), tools targeting object-oriented languages typically explore the program to test and find generators (constructors or methods returning objects of that type) and call them using random inputs (either themselves generated the same way or generated using a pseudo-random generator if possible). Such approaches then maintain a pool of randomly generated objects and use a probability for either reusing a generated object or creating a new one.[5] According to the seminal paper on random testing by D. Hamlet [..] the technical, mathematical meaning of "random testing" refers to an explicit lack of "system" in the choice of test data, so that there is no correlation among different tests.[1] Random testing is praised for several strengths, and a number of weaknesses have also been described. Several tools implement random testing. Random testing has only a specialized niche in practice, mostly because an effective oracle is seldom available, but also because of difficulties with the operational profile and with generation of pseudorandom input values.[1] A test oracle is an instrument for verifying whether the outcomes match the program specification or not. An operational profile is knowledge about usage patterns of the program and thus which parts are more important. For programming languages and platforms which have contracts (e.g. Eiffel, .NET or various extensions of Java like JML, CoFoJa...), contracts act as natural oracles and the approach has been applied successfully.[5] In particular, random testing finds more bugs than manual inspections or user reports (albeit different ones).[9]
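The C++ function and assertion referred to above are reconstructed in the sketch below, under the assumption, consistent with the stated test values, that the function is a buggy absolute-value routine in which only negative inputs expose the fault. Names such as myAbs and getRandomInput are illustrative.

```cpp
#include <cassert>
#include <random>

// Buggy absolute-value function: the non-positive branch forgets to flip the
// sign, so of the sample inputs {123, 36, -35, 48, 0} only -35 exposes the fault.
int myAbs(int x) {
    if (x > 0) {
        return x;
    } else {
        return x;  // BUG: should be -x
    }
}

int getRandomInput(std::mt19937& rng) {
    std::uniform_int_distribution<int> dist(-1000, 1000);
    return dist(rng);
}

// Random test with an assertion as a partial oracle: even without a reference
// implementation, we can still require the result to be non-negative.
int main() {
    std::mt19937 rng(std::random_device{}());
    for (int i = 0; i < 1000; ++i) {
        int x = getRandomInput(rng);
        assert(myAbs(x) >= 0);  // fails as soon as a negative input is drawn
    }
    return 0;
}
```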
https://en.wikipedia.org/wiki/Random_testing
Incomputer security,coordinated vulnerability disclosure(CVD, sometimes known asresponsible disclosure)[1]is avulnerability disclosuremodel in which a vulnerability or an issue is disclosed to the public only after the responsible parties have been allowed sufficient time topatchor remedy the vulnerability or issue.[2]This coordination distinguishes the CVD model from the "full disclosure" model. Developers of hardware and software often require time and resources to repair their mistakes. Often, it isethical hackerswho find these vulnerabilities.[1]Hackersand computer security scientists have the opinion that it is their social responsibility to make the public aware of vulnerabilities. Hiding problems could cause a feeling offalse security. To avoid this, the involved parties coordinate and negotiate a reasonable period of time for repairing the vulnerability. Depending on the potential impact of the vulnerability, the expected time needed for an emergency fix or workaround to be developed and applied and other factors, this period may vary between a few days and several months. Coordinated vulnerability disclosure may fail to satisfy security researchers who expect to be financially compensated. At the same time, reporting vulnerabilities with the expectation of compensation is viewed by some as extortion.[3][4]While a market for vulnerabilities has developed, vulnerability commercialization (or "bug bounties") remains a hotly debated topic. Today, the two primary players in the commercial vulnerability market are iDefense, which started their vulnerability contributor program (VCP) in 2003, andTippingPoint, with their zero-day initiative (ZDI) started in 2005. These organizations follow the coordinated vulnerability disclosure process with the material bought. Between March 2003 and December 2007 an average 7.5% of the vulnerabilities affecting Microsoft and Apple were processed by either VCP or ZDI.[5]Independent firms financially supporting coordinated vulnerability disclosure by payingbug bountiesincludeFacebook,Google, andBarracuda Networks.[6] Google Project Zerohas a 90-day disclosure deadline which starts after notifying vendors of vulnerability, with details shared in public with the defensive community after 90 days, or sooner if the vendor releases a fix.[7] ZDI has a 120-day disclosure deadline which starts after receiving a response from the vendor.[8] Selectedsecurity vulnerabilitiesresolved by applying coordinated disclosure:
https://en.wikipedia.org/wiki/Coordinated_vulnerability_disclosure
Runtime error detection is a software verification method that analyzes a software application as it executes and reports defects that are detected during that execution. It can be applied during unit testing, component testing, integration testing, system testing (automated/scripted or manual), or penetration testing. Runtime error detection can identify defects that manifest themselves only at runtime (for example, file overwrites) and can zero in on the root causes of the application crashing, running slowly, or behaving unpredictably. Defects commonly detected by runtime error detection include memory and resource leaks, race conditions, and memory access errors such as buffer overflows and uses of uninitialized memory. Runtime error detection tools can only detect errors in the executed control flow of the application.[2]
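As a minimal illustration of the point above that only executed paths can be checked, the sketch below contains an out-of-bounds write that a runtime error detector (for example, a memory-checking tool such as AddressSanitizer) can flag, but only when the faulty branch is actually taken. The program and its command-line trigger are hypothetical.

```cpp
#include <cstring>
#include <iostream>

int main(int argc, char** argv) {
    int counts[8] = {0};

    // Defect: when run with the argument "--extended", the loop bound is 9,
    // writing one element past the end of counts. A runtime error detector can
    // report the overflow, but only on executions that take this branch; runs
    // without the flag execute cleanly and reveal nothing.
    int limit = (argc > 1 && std::strcmp(argv[1], "--extended") == 0) ? 9 : 8;
    for (int i = 0; i < limit; ++i) {
        counts[i] = i;  // out-of-bounds write when i == 8
    }

    std::cout << "last in-bounds value: " << counts[7] << '\n';
    return 0;
}
```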
https://en.wikipedia.org/wiki/Runtime_error_detection
Security testing is a process intended to detect flaws in the security mechanisms of an information system and as such helps enable it to protect data and maintain functionality as intended.[1] Due to the logical limitations of security testing, passing the security testing process is not an indication that no flaws exist or that the system adequately satisfies the security requirements. Typical security requirements may include specific elements of confidentiality, integrity, authentication, availability, authorization and non-repudiation.[2] The actual security requirements tested depend on the security requirements implemented by the system. Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such, a security taxonomy helps us to understand these different approaches and meanings by providing a base level to work from. Integrity of information refers to protecting information from being modified by unauthorized parties. Authentication might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labelling claims to be, or assuring that a computer program is a trusted one. Common terms used for the delivery of security testing include vulnerability assessment, penetration testing, and security audits.
https://en.wikipedia.org/wiki/Security_testing
Incomputer programmingandsoftware testing,smoke testing(alsoconfidence testing,sanity testing,[1]build verification test(BVT)[2][3][4]andbuild acceptance test) is preliminary testing orsanity testingto reveal simple failures severe enough to, for example, reject a prospective software release. Smoke tests are a subset oftest casesthat cover the most important functionality of a component or system, used to aid assessment of whether main functions of the software appear to work correctly.[1][2]When used to determine if a computer program should be subjected to further, more fine-grained testing, a smoke test may be called apretest[5]or anintake test.[1]Alternatively, it is a set of tests run on each new build of aproductto verify that the build is testable before the build is released into the hands of the test team.[6]In theDevOpsparadigm, use of a build verification test step is one hallmark of thecontinuous integrationmaturity stage.[7] For example, a smoke test may address basic questions like "does the program run?", "does the user interface open?", or "does clicking the main button do anything?" The process of smoke testing aims to determine whether the application is so badly broken as to make further immediate testing unnecessary. As the bookLessons Learned in Software Testing[8]puts it, "smoke tests broadly cover product features in a limited time [...] if key features don't work or if key bugs haven't yet been fixed, your team won't waste further time installing or testing".[3] Smoke tests frequently run quickly, giving benefits of faster feedback, rather than running more extensivetest suites, which would naturally take longer. Frequent reintegration with smoke testing is among industrybest practices.[9][need quotation to verify]Ideally, every commit to a source code repository should trigger a Continuous Integration build, to identify regressions as soon as possible. If builds take too long, you might batch up several commits into one build, or very large systems might be rebuilt once a day. Overall, rebuild and retest as often as you can. Smoke testing is also done by testers before accepting a build for further testing.Microsoftclaims that aftercode reviews, "smoke testingis the most cost-effective method for identifying and fixing defects in software".[10] One can perform smoke tests either manually or usingan automated tool. In the case of automated tools, the process that generates the build will often initiate the testing.[citation needed] Smoke tests can befunctional testsorunit tests. Functional tests exercise the complete program with various inputs. Unit tests exercise individual functions, subroutines, or object methods. Functional tests may comprise a scripted series of program inputs, possibly even with an automated mechanism for controlling mouse movements. Unit tests can be implemented either as separate functions within the code itself, or else as a driver layer that links to the code without altering the code being tested.[citation needed] The term originates from the centuries-old practice ofmechanical smoke testing, where smoke was pumped into pipes or machinery to identify leaks, defects, or disconnections. Widely used in plumbing and industrial applications, this method revealed problem areas by observing where smoke escaped. Insoftware development, the term was metaphorically adopted to describe a preliminary round of testing that checks for basic functionality. 
Like its physical counterparts, a software smoke test aims to identify critical failures early, ensuring the system is stable and that all required components are functioning before proceeding to more comprehensive testing, such as end-to-end or load testing. In the context ofelectronics, the term was humorously reinterpreted to describe an initial power-on test for new hardware. This usage alludes to the visible smoke produced by overloaded or improperly connected components during catastrophic failure. While the imagery is memorable, the occurrence of smoke was never an intended or sustainable testing method. Instead, it underscores the importance of performing basic checks to catch critical issues early. For example, Cem Kaner, James Bach, and Brett Pettichord explain inLessons Learned in Software Testing: "The phrase smoke test comes fromelectronic hardware testing. You plug in a new board and turn on the power. If you see smoke coming from the board, turn off the power. You don't have to do any more testing."[3]
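In software, a smoke test can be as small as a check that the freshly built program starts and exits cleanly. The sketch below assumes a hypothetical binary ./myapp with a --version flag; a real build verification test would typically add a handful of similar checks for the product's most important functions.

```cpp
#include <cstdlib>
#include <iostream>

// Minimal build verification ("smoke") test: does the program run at all?
int main() {
    // ./myapp and its --version flag are placeholders for the product under test.
    int status = std::system("./myapp --version > /dev/null 2>&1");
    if (status != 0) {
        std::cerr << "SMOKE TEST FAILED: build does not start cleanly\n";
        return 1;  // reject the build; no point in deeper testing
    }
    std::cout << "smoke test passed: build is testable\n";
    return 0;
}
```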
https://en.wikipedia.org/wiki/Smoke_testing_(software)
System testing, a.k.a. end-to-end (E2E) testing, is testing conducted on a complete software system. System testing describes testing at the system level, in contrast to testing at the system integration, integration, or unit level. System testing often serves the purpose of evaluating the system's compliance with its specified requirements[citation needed] – often drawn from a functional requirement specification (FRS), a system requirement specification (SRS), another type of specification, or several of these. System testing can detect defects in the system as a whole.[citation needed][1] System testing can verify the design, the behavior, and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds of the specified software and hardware requirements.[citation needed]
https://en.wikipedia.org/wiki/System_testing
Insoftware testing,test automationis the use ofsoftwareseparate from the software being tested to control the execution of tests and the comparison of actual outcomes with predicted outcomes.[1]Test automation can automate some repetitive but necessary tasks in a formalized testing process already in place, or perform additional testing that would be difficult to do manually. Test automation is critical forcontinuous deliveryandcontinuous testing.[2] There are many approaches to test automation, however below are the general approaches used widely: One way to generate test cases automatically ismodel-based testingthrough use of a model of the system for test case generation, but research continues into a variety of alternative methodologies for doing so.[citation needed]In some cases, the model-based approach enables non-technical users to create automated business test cases in plain English so that no programming of any kind is needed in order to configure them for multiple operating systems, browsers, and smart devices.[3] Somesoftware testingtasks (such as extensive low-level interfaceregression testing) can be laborious and time-consuming to do manually. In addition, a manual approach might not always be effective in finding certain classes of defects. Test automation offers a possibility to perform these types of testing effectively. Once automated tests have been developed, they can be run quickly and repeatedly many times. This can be a cost-effective method for regression testing of software products that have a long maintenance life. Even minor patches over the lifetime of the application can cause existing features to break which were working at an earlier point in time. API testingis also being widely used by software testers as it enables them to verify requirements independent of their GUI implementation, commonly to test them earlier in development, and to make sure the test itself adheres to clean code principles, especially thesingle responsibility principle. It involves directly testingAPIsas part ofintegration testing, to determine if they meet expectations for functionality, reliability, performance, and security.[4]Since APIs lack aGUI, API testing is performed at themessage layer.[5]API testing is considered critical when an API serves as the primary interface toapplication logic.[6] Many test automation tools provide record and playback features that allow users to interactively record user actions and replay them back any number of times, comparing actual results to those expected. The advantage of this approach is that it requires little or nosoftware development. This approach can be applied to any application that has agraphical user interface. However, reliance on these features poses major reliability and maintainability problems. Relabelling a button or moving it to another part of the window may require the test to be re-recorded. Record and playback also often adds irrelevant activities or incorrectly records some activities.[citation needed] A variation on this type of tool is for testing of web sites. Here, the "interface" is the web page. However, such a framework utilizes entirely different techniques because it is renderingHTMLand listening toDOM Eventsinstead of operating system events.Headless browsersor solutions based onSelenium Web Driverare normally used for this purpose.[7][8][9] Another variation of this type of test automation tool is for testing mobile applications. 
This is very useful given the number of different sizes, resolutions, and operating systems used on mobile phones. For this variation, a framework is used in order to instantiate actions on the mobile device and to gather results of the actions. Another variation is script-less test automation that does not use record and playback, but instead builds a model[clarification needed]of the application and then enables the tester to create test cases by simply inserting test parameters and conditions, which requires no scripting skills. Test automation, mostly using unit testing, is a key feature ofextreme programmingandagile software development, where it is known astest-driven development(TDD) or test-first development. Unit tests can be written to define the functionalitybeforethe code is written. However, these unit tests evolve and are extended as coding progresses, issues are discovered and the code is subjected to refactoring.[10]Only when all the tests for all the demanded features pass is the code considered complete. Proponents argue that it produces software that is both more reliable and less costly than code that is tested by manual exploration.[citation needed]It is considered more reliable because the code coverage is better, and because it is run constantly during development rather than once at the end of awaterfalldevelopment cycle. The developer discovers defects immediately upon making a change, when it is least expensive to fix. Finally,code refactoringis safer when unit testing is used; transforming the code into a simpler form with lesscode duplication, but equivalent behavior, is much less likely to introduce new defects when the refactored code is covered by unit tests. Continuous testingis the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with a software release candidate.[11][12]For Continuous Testing, the scope of testing extends from validating bottom-up requirements or user stories to assessing the system requirements associated with overarching business goals.[13] What to automate, when to automate, or even whether one really needs automation are crucial decisions which the testing (or development) team must make.[14]A multi-vocal literature review of 52 practitioner and 26 academic sources found that five main factors to consider in test automation decision are: 1) System Under Test (SUT), 2) the types and numbers of tests, 3) test-tool, 4) human and organizational topics, and 5) cross-cutting factors. The most frequent individual factors identified in the study were: need for regression testing, economic factors, and maturity of SUT.[15] While the reusability of automated tests is valued by software development companies, this property can also be viewed as a disadvantage. It leads to the so-called"Pesticide Paradox", where repeatedly executed scripts stop detecting errors that go beyond their frameworks. In such cases,manual testingmay be a better investment. This ambiguity once again leads to the conclusion that the decision on test automation should be made individually, keeping in mind project requirements and peculiarities. Testing tools can help automate tasks such as product installation, test data creation, GUI interaction, problem detection (consider parsing or polling agents equipped withtest oracles), defect logging, etc., without necessarily automating tests in an end-to-end fashion. 
One must keep satisfying popular requirements when thinking of test automation: Test automation tools can be expensive and are usually employed in combination with manual testing. Test automation can be made cost-effective in the long term, especially when used repeatedly inregression testing. A good candidate for test automation is a test case for common flow of an application, as it is required to be executed (regression testing) every time an enhancement is made in the application. Test automation reduces the effort associated with manual testing. Manual effort is needed to develop and maintain automated checks, as well as reviewing test results. In automated testing, thetest engineerorsoftware quality assuranceperson must have software coding ability since the test cases are written in the form of source code which when run produce output according to theassertionsthat are a part of it. Some test automation tools allow for test authoring to be done by keywords instead of coding, which do not require programming. A strategy to decide the amount of tests to automate is the test automation pyramid. This strategy suggests to write three types of tests with different granularity. The higher the level, less is the amount of tests to write.[16] One conception of the testing pyramid contains unit, integration, and end-to-end unit tests. According toGoogle's testing blog, unit tests should make up the majority of your testing strategy, with fewer integration tests and only a small amount of end-to-end tests.[19] A test automation framework is an integrated system that sets the rules of automation of a specific product. This system integrates the function libraries, test data sources, object details and various reusable modules. These components act as small building blocks which need to be assembled to represent a business process. The framework provides the basis of test automation and simplifies the automation effort. The main advantage of aframeworkof assumptions, concepts and tools that provide support for automated software testing is the low cost formaintenance. If there is change to anytest casethen only the test case file needs to be updated and thedriver Scriptandstartup scriptwill remain the same. Ideally, there is no need to update the scripts in case of changes to the application. Choosing the right framework/scripting technique helps in maintaining lower costs. The costs associated with test scripting are due to development and maintenance efforts. The approach of scripting used during test automation has effect on costs. Various framework/scripting techniques are generally used: The Testing framework is responsible for:[20] A growing trend in software development is the use ofunit testingframeworks such as thexUnitframeworks (for example,JUnitandNUnit) that allow the execution of unit tests to determine whether various sections of thecodeare acting as expected under various circumstances.Test casesdescribe tests that need to be run on the program to verify that the program runs as expected. Test automation interfaces are platforms that provide a singleworkspacefor incorporating multiple testing tools and frameworks forSystem/Integration testingof application under test. The goal of Test Automation Interface is to simplify the process of mapping tests to business criteria without coding coming in the way of the process. 
Test automation interfaces are expected to improve the efficiency and flexibility of maintaining test scripts.[21] A test automation interface consists of three core modules: the interface engine, the interface environment, and the object repository. Interface engines are built on top of the interface environment. An interface engine consists of a parser and a test runner. The parser parses the object files coming from the object repository into the test-specific scripting language. The test runner executes the test scripts using a test harness.[21] Object repositories are a collection of UI/application object data recorded by the testing tool while exploring the application under test.[21] Tools are specifically designed to target some particular test environment, such as Windows or web automation, etc. Tools serve as a driving agent for an automation process. However, an automation framework is not a tool to perform a specific task, but rather infrastructure that provides the solution where different tools can do their job in a unified manner. This provides a common platform for the automation engineer. There are various types of frameworks, categorized on the basis of the automation component they leverage; they include linear scripting, modular, data-driven, keyword-driven, and hybrid frameworks.
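As an illustration of the xUnit-style unit testing frameworks mentioned earlier, the sketch below uses Google Test, a C++ member of that family; the Multiply function is a stand-in for real production code.

```cpp
#include <gtest/gtest.h>

// Hypothetical production function under test.
int Multiply(int a, int b) { return a * b; }

// Each TEST is an independent, automatically discovered test case.
TEST(MultiplyTest, HandlesPositiveOperands) {
    EXPECT_EQ(Multiply(3, 4), 12);
}

TEST(MultiplyTest, HandlesZero) {
    EXPECT_EQ(Multiply(0, 99), 0);
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();  // non-zero exit code on any failure, suitable for CI
}
```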
https://en.wikipedia.org/wiki/Test_automation
AnABX testis a method of comparing two choices of sensory stimuli to identify detectable differences between them. A subject is presented with two known samples (sampleA, the first reference, and sampleB, the second reference) followed by one unknown sampleXthat is randomly selected from either A or B. The subject is then required to identify X as either A or B. If X cannot be identified reliably with a lowp-valuein a predetermined number of trials, then thenull hypothesiscannot be rejected and it cannot be proven that there is a perceptible difference between A and B. ABX tests can easily be performed asdouble-blind trials, eliminating any possible unconscious influence from the researcher or the test supervisor. Because samples A and B are provided just prior to sample X, the difference does not have to be discerned using long-term memory or past experience. Thus, the ABX test answers whether or not, under the test circumstances, a perceptual difference can be found. ABX tests are commonly used in evaluations of digitalaudio data compressionmethods; sample A is typically an uncompressed sample, and sample B is a compressed version of A. Audiblecompression artifactsthat indicate a shortcoming in the compression algorithm can be identified with subsequent testing. ABX tests can also be used to compare the different degrees of fidelity loss between two different audio formats at a givenbitrate. ABX tests can be used to audition input, processing, and output components as well as cabling: virtually any audio product or prototype design. The history of ABX testing and naming dates back to 1950 in a paper published by two Bell Labs researchers, W. A. Munson and Mark B. Gardner, titledStandardizing Auditory Tests.[1] The purpose of the present paper is to describe a test procedure which has shown promise in this direction and to give descriptions of equipment which have been found helpful in minimizing the variability of the test results. The procedure, which we have called the "ABX" test, is a modification of the method of paired comparisons. An observer is presented with a time sequence of three signals for each judgment he is asked to make. During the first time interval he hears signal A, during the second, signal B, and finally signal X. His task is to indicate whether the sound heard during the X interval was more like that during the A interval or more like that during the B interval. For a threshold test, the A interval is quiet, the B interval is signal, and the X interval is either quiet or signal. The test has evolved to other variations such as subject control over duration and sequence of testing. One such example was the hardware ABX comparator in 1977, built by the ABX company in Troy, Michigan, and documented by one of its founders, David Clark.[2] Refinements to the A/B test The author's first experience with double-blind audibility testing was as a member of the SMWTMS Audio Club in early 1977. A button was provided which would select at random component A or B. Identifying one of these, the X component was greatly hampered by not having the known A and B available for reference. This was corrected by using three interlocked pushbuttons, A, B, and X. Once an X was selected, it would remain that particular A or B until it was decided to move on to another random selection. However, another problem quickly became obvious. There was always an audible relay transition time delay when switching from A to B. 
When switching from A to X, however, the time delay would be missing if X was really A and present if X was really B. This extraneous cue was removed by inserting a fixed-length dropout time whenever any change was made. The dropout time was selected to be 50 ms, which produces a slight but consistent click while allowing subjectively instant comparison. The ABX company is now defunct, and hardware comparators in general are extinct as commercial offerings. Many software tools exist, such as the Foobar ABX plug-in, for performing file comparisons, but hardware equipment testing requires building custom implementations. ABX test equipment utilizing relays to switch between two different hardware paths can help determine whether there are perceptual differences in cables and components. Video, audio and digital transmission paths can be compared. If the switching is microprocessor controlled, double-blind tests are possible. Loudspeaker-level and line-level audio comparisons could be performed on an ABX test device offered for sale as the ABX Comparator by QSC Audio Products from 1998 to 2004. Other hardware solutions have been fabricated privately by individuals or organizations for internal testing. If only one ABX trial were performed, random guessing would incur a 50% chance of choosing the correct answer, the same as flipping a coin. In order to make a statement having some degree of confidence, many trials must be performed. By increasing the number of trials, the likelihood of statistically asserting a person's ability to distinguish A and B is enhanced for a given confidence level. A 95% confidence level is commonly considered statistically significant.[2] The company QSC, in the ABX Comparator user manual, recommended a minimum of ten listening trials in each round of tests.[3] QSC recommended that no more than 25 trials be performed, as subject fatigue can set in, making the test less sensitive (less likely to reveal one's actual ability to discern the difference between A and B).[3] However, a more sensitive test can be obtained by pooling the results from a number of such tests, using separate individuals or tests from the same subject conducted between rest breaks. For a large number of total trials N, a significant result (one with 95% confidence) can be claimed if the number of correct responses exceeds N/2 + √N. Important decisions are normally based on a higher level of confidence, since an erroneous significant result would be claimed in one of 20 such tests simply by chance. The foobar2000 and Amarok audio players support software-based ABX testing, the latter using a third-party script. Lacinato ABX is a cross-platform audio testing tool for Linux, Windows, and 64-bit Mac. Lacinato WebABX is a web-based cross-browser audio ABX tool. The open-source aveX was mainly developed for Linux and also provides test monitoring from a remote computer. ABX patcher is an ABX implementation for Max/MSP. More ABX software can be found at the archived PCABX website. A codec listening test is a scientific study designed to compare two or more lossy audio codecs, usually with respect to perceived fidelity or compression efficiency. ABX is a type of forced-choice testing. A subject's choices can be on merit, i.e. the subject honestly tried to identify whether X seemed closer to A or B, but uninterested or tired subjects might choose randomly without even trying.
If not caught, this may dilute the results of other subjects who intently took the test and subject the outcome toSimpson's paradox, resulting in false summary results. Simply looking at the outcome totals of the test (mout ofnanswers correct) cannot reveal occurrences of this problem. This problem becomes more acute if the differences are small. The user may get frustrated and simply aim to finish the test by voting randomly. In this regard, forced-choice tests such as ABX tend to favor negative outcomes when differences are small if proper protocols are not used to guard against this problem. Best practices call for both the inclusion of controls and the screening of subjects:[5] A major consideration is the inclusion of appropriate control conditions. Typically, control conditions include the presentation of unimpaired audio materials, introduced in ways that are unpredictable to the subjects. It is the differences between judgement of these control stimuli and the potentially impaired ones that allows one to conclude that the grades are actual assessments of the impairments. 3.2.2 Post-screening of subjects Post-screening methods can be roughly separated into at least two classes; one is based on inconsistencies compared with the mean result and another relies on the ability of the subject to make correct identifications. The first class is never justifiable. Whenever a subjective listening test is performed with the test method recommended here, the required information for the second class of post-screening is automatically available. A suggested statistical method for doing this is described in Attachment 1.' The methods are primarily used to eliminate subjects who cannot make the appropriate discriminations. The application of a post-screening method may clarify the tendencies in a test result. However, bearing in mind the variability of subjects’ sensitivities to different artefacts, caution should be exercised. Other flaws include lack of subject training and familiarization with the test and content selected: 4.1 Familiarization or training phase Prior to formal grading, subjects must be allowed to become thoroughly familiar with the test facilities, the test environment, the grading process, the grading scales and the methods of their use. Subjects should also become thoroughly familiar with the artefacts under study. For the most sensitive tests they should be exposed to all the material they will be grading later in the formal grading sessions. During familiarization or training, subjects should be preferably together in groups (say, consisting of three subjects), so that they can interact freely and discuss the artefacts they detect with each other. Other problems might arise from the ABX equipment itself, as outlined by Clark,[2]where the equipment provides atell, allowing the subject to identify the source. Lack of transparency of the ABX fixture creates similar problems. Since auditory tests and many other sensory tests rely onshort-term memory, which only lasts a few seconds, it is critical that the test fixture allows the subject to identify short segments that can be compared quickly. Pops and glitches in switching apparatus likewise must be eliminated, as they may dominate or otherwise interfere with the stimuli being tested in what is stored in the subject's short-term memory. Since ABX testing requires human beings for evaluation of lossy audio codecs, it is time-consuming and costly. 
Therefore, cheaper approaches have been developed, e.g.PEAQ, which is an implementation of theODG. InMUSHRA, the subject is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. A 0–100 rating scale makes it possible to rate very small differences, and the hidden version still provides discrimination checks. Alternative general methods are used indiscrimination testing, such as paired comparison, duo–trio, andtriangle testing. Of these, duo–trio and triangle testing are particularly close to ABX testing. Schematically: In this context, ABX testing is also known as "duo–trio" in "balanced reference" mode – both knowns are presented as references, rather than one alone.[6]
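The statistics described earlier can be made concrete with a short calculation. The sketch below, assuming Python and its standard library only, computes the exact one-sided binomial p-value for a given number of correct answers in an ABX run and compares it with the approximate N/2 + √N rule of thumb; the example figures (12 correct out of 16) are arbitrary.

```python
from math import comb, sqrt

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting at least `correct` answers
    right out of `trials` ABX trials purely by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def rule_of_thumb_threshold(trials: int) -> float:
    """Approximate number of correct answers needed for ~95% confidence."""
    return trials / 2 + sqrt(trials)

if __name__ == "__main__":
    m, n = 12, 16  # e.g. 12 correct answers out of 16 trials
    print(f"p-value for {m}/{n}: {abx_p_value(m, n):.4f}")                      # ~0.0384
    print(f"rule-of-thumb threshold for n={n}: {rule_of_thumb_threshold(n):.1f}")  # 12.0
```

With these example numbers the exact p-value falls below 0.05, in agreement with the rule of thumb, which for 16 trials requires at least 12 correct answers.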
https://en.wikipedia.org/wiki/ABX_test
Inengineeringand its varioussubdisciplines,acceptance testingis a test conducted to determine if the requirements of aspecificationorcontractare met. It may involvechemical tests,physical tests, orperformance tests.[1] Insystems engineering, it may involveblack-box testingperformed on asystem(for example: a piece ofsoftware, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.[2] Insoftware testing, theISTQBdefinesacceptance testingas: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether a system satisfies theacceptance criteria[3]and to enable the user, customers or other authorized entity to determine whether to accept the system. The final test in the QA lifecycle, user acceptance testing, is conducted just before the final release to assess whether the product or application can handle real-world scenarios. By replicating user behavior, it checks if the system satisfies business requirements and rejects changes if certain criteria are not met.[5] Some forms of acceptance testing are,user acceptance testing(UAT), end-user testing,operational acceptance testing(OAT),acceptance test-driven development(ATDD) and field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity.[6] Testing is a set of activities conducted to facilitate the discovery and/or evaluation of properties of one or more items under test.[7]Each test, known as a test case, exercises a set of predefined test activities, developed to drive the execution of the test item to meet test objectives; including correct implementation, error identification, quality verification, and other valued details.[7]The testenvironmentis usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures, and/or documentation intended for or used to perform the testing of software.[7] UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. These tests must include both business logic tests as well as operational environment conditions. The business customers (product owners) are the primarystakeholdersof these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured the development is progressing in the right direction.[8] The acceptance test suite may need to be performed multiple times, as all of the test cases may not be executed within a single test iteration.[9] The acceptance test suite is run using predefined acceptance test procedures to direct the testers on which data to use, the step-by-step processes to follow, and the expected result following execution. The actual results are retained for comparison with the expected results.[9]If the actual results match the expected results for each test case, the test case is said to pass. If the quantity of non-passing test cases does not breach the project's predetermined threshold, the test suite is said to pass. If it does, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer. The anticipated result of a successful test execution: The objective is to provide confidence that the developed product meets both the functional and non-functional requirements. 
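The suite-level pass/fail logic described above can be sketched in a few lines; the function name and the failure threshold below are invented for illustration and are not part of any standard.

```python
from typing import Dict

def suite_passes(case_results: Dict[str, bool], max_failures_allowed: int) -> bool:
    """A test case passes when its actual results match the expected results.
    The suite passes when the number of non-passing cases does not breach
    the project's predetermined threshold."""
    failures = sum(1 for passed in case_results.values() if not passed)
    return failures <= max_failures_allowed

if __name__ == "__main__":
    results = {"create_order": True, "cancel_order": True, "refund_order": False}
    # A threshold of zero allowed failures is a hypothetical project choice.
    print(suite_passes(results, max_failures_allowed=0))  # False
    print(suite_passes(results, max_failures_allowed=1))  # True
```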
The purpose of conducting acceptance testing is that once completed, and provided the acceptance criteria are met, it is expected the sponsors will sign off on the product development/enhancement as satisfying the defined requirements (previously agreed between business and product provider/developer). User acceptance testing (UAT) consists of a process of verifying that a solution works for the user.[10]It is notsystem testing(ensuring software does not crash and meets documented requirements) but rather ensures that the solution will work for the user (i.e. tests that the user accepts the solution); software vendors often refer to this as "Beta testing". This testing should be undertaken by the intendedend user, or asubject-matter expert(SME), preferably the owner or client of the solution under test and provide a summary of the findings for confirmation to proceed after trial or review. Insoftware development, UAT as one of the final stages of a project often occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios.[11] The materials given to the tester must be similar to the materials that the end user will have. Testers should be given real-life scenarios such as the three most common or difficult tasks that the users they represent will undertake.[12] The UAT acts as a final verification of the required business functionality and proper functioning of the system, emulating real-world conditions on behalf of the paying client or a specific large customer. If the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production.[13] User tests, usually performed by clients or by end-users, do not normally focus on identifying simple cosmetic problems such as spelling errors, nor onshowstopperdefects, such assoftware crashes; testers and developers identify and fix these issues during earlierunit testing,integration testing, and system testing phases. UAT should be executed against test scenarios.[14][15]Test scenarios usually differ from System or Functional test cases in that they represent a "player" or "user" journey. The broad nature of the test scenario ensures that the focus is on the journey and not on technical or system-specific details, staying away from "click-by-click" test steps to allow for a variance in users' behavior. Test scenarios can be broken down into logical "days", which are usually where the actor (player/customer/operator) or system (backoffice, front end) changes.[16] In industry, a common UAT is a factory acceptance test (FAT). This test takes place before the installation of the equipment. Most of the time testers not only check that the equipment meets the specification but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test), and a final inspection.[17]The results of these tests give clients confidence in how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system. Operational acceptance testing(OAT) is used to conduct operational readiness (pre-release) of a product, service or system as part of aquality management system. OAT is a common type of non-functionalsoftware testing, used mainly insoftware developmentandsoftware maintenanceprojects. 
This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment.[18] Acceptance testing is also a term used in agile software development methodologies, particularly extreme programming, referring to the functional testing of a user story by the software development team during the implementation phase.[19] The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration, or the development team will report zero progress.[20] Typical types of acceptance testing include the forms noted above, such as user acceptance testing, operational acceptance testing, and field testing. According to the Project Management Institute, acceptance criteria is a "set of conditions that is required to be met before deliverables are accepted."[26] Requirements found in acceptance criteria for a given component of the system are usually very detailed.[27]
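As an illustration of the agile practice described above, an acceptance test for a user story can be written as a black-box check of an expected result. The user story, discount rule, and function names below are hypothetical, and the toy implementation exists only so the example runs:

```python
# Hypothetical user story: "As a customer, I can apply a discount code at checkout."
# checkout_total stands in for the system under test.

def checkout_total(cart_total: float, discount_code: str) -> float:
    """Toy implementation included only so the example is self-contained."""
    if discount_code == "SAVE10":
        return round(cart_total * 0.90, 2)
    return cart_total

def test_discount_code_reduces_total():
    # Scenario specified by the customer: a valid code gives 10% off.
    assert checkout_total(100.00, "SAVE10") == 90.00

def test_unknown_code_leaves_total_unchanged():
    assert checkout_total(100.00, "BOGUS") == 100.00

if __name__ == "__main__":
    test_discount_code_reduces_total()
    test_unknown_code_leaves_total_unchanged()
    print("acceptance tests for the user story passed")
```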
https://en.wikipedia.org/wiki/Acceptance_testing
In ablindorblinded experiment, information which may influence the participants of theexperimentis withheld until after the experiment is complete. Good blinding can reduce or eliminate experimentalbiasesthat arise from a participants' expectations,observer's effect on the participants,observer bias,confirmation bias, and other sources. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. For example, it is not possible to blind a patient to their treatment in a physical therapy intervention. A goodclinical protocolensures that blinding is as effective as possible within ethical and practical constraints. During the course of an experiment, a participant becomesunblindedif they deduce or otherwise obtain information that has been masked to them. For example, a patient who experiences a side effect may correctly guess their treatment, becoming unblinded. Unblinding is common in blinded experiments, particularly in pharmacological trials. In particular, trials onpain medicationandantidepressantsare poorly blinded. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. TheCONSORTreporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies do so.[1] Blinding is an important tool of thescientific method, and is used in many fields of research. In some fields, such asmedicine, it is considered essential.[2]In clinical research, a trial that is not a blinded trial is called anopen trial. The first known blind experiment was conducted by theFrench Royal Commission on Animal Magnetismin 1784 to investigate the claims ofmesmerismas proposed by Charles d'Eslon, a former associate ofFranz Mesmer. In the investigations, the researchers (physically) blindfolded mesmerists and asked them to identify objects that the experimenters had previously filled with "vital fluid". The subjects were unable to do so.[citation needed] In 1817, the first blind experiment recorded to have occurred outside of a scientific setting compared the musical quality of aStradivariusviolin to one with a guitar-like design. A violinist played each instrument while a committee of scientists and musicians listened from another room so as to avoid prejudice.[3][4] An early example of a double-blind protocol was the Nuremberg salt test of 1835 performed by Friedrich Wilhelm von Hoven, Nuremberg's highest-ranking public health official,[5]as well as a close friend ofFriedrich Schiller.[6]This trial contested the effectiveness ofhomeopathicdilution.[5] In 1865,Claude Bernardpublished hisIntroduction to the Study of Experimental Medicine, which advocated for the blinding of researchers.[7]Bernard's recommendation that an experiment's observer should not know the hypothesis being tested contrasted starkly with the prevalentEnlightenment-era attitude that scientific observation can only be objectively valid when undertaken by a well-educated, informed scientist.[8]The first study recorded to have a blinded researcher was conducted in 1907 byW. H. R. Riversand H. N. Webber to investigate the effects of caffeine.[9]The need to blind researchers became widely recognized in the mid-20th century.[10] A number of biases are present when a study is insufficiently blinded. 
Patient-reported outcomes can be different if the patient is not blinded to their treatment.[11]Likewise, failure to blind researchers results inobserver bias.[12]Unblinded data analysts may favor an analysis that supports their existing beliefs (confirmation bias). These biases are typically the result of subconscious influences, and are present even when study participants believe they are not influenced by them.[13] In medical research, the termssingle-blind,double-blindandtriple-blindare commonly used to describe blinding. These terms describe experiments in which (respectively) one, two, or three parties are blinded to some information. Most often, single-blind studies blind patients to theirtreatment allocation, double-blind studies blind both patients and researchers to treatment allocations, and triple-blinded studies blind patients, researcher, and some other third party (such as a monitoring committee) to treatment allocations. However, the meaning of these terms can vary from study to study.[14] CONSORTguidelines state that these terms should no longer be used because they are ambiguous. For instance, "double-blind" could mean that the data analysts and patients were blinded; or the patients and outcome assessors were blinded; or the patients and people offering the intervention were blinded, etc. The terms also fail to convey the information that was masked and the amount of unblinding that occurred. It is not sufficient to specify the number of parties that have been blinded. To describe an experiment's blinding, it is necessary to reportwhohas been blinded towhatinformation, andhow welleach blind succeeded.[15] "Unblinding" occurs in a blinded experiment when information becomes available to one from whom it has been masked. In clinical studies, unblinding may occur unintentionally when a patient deduces their treatment group. Unblinding that occurs before the conclusion of anexperimentis a source ofbias. Some degree of premature unblinding is common in blinded experiments.[16]When a blind is imperfect, its success is judged on aspectrumwithno blind(or complete failure of blinding) on one end, perfect blinding on the other, and poor or good blinding between. Thus, the common view of studies as blinded or unblinded is an example of afalse dichotomy.[17] Success of blinding is assessed by questioning study participants about information that has been masked to them (e.g. did the participant receive the drug orplacebo?). In a perfectly blinded experiment, the responses should be consistent with no knowledge of the masked information. However, if unblinding has occurred, the responses will indicate the degree of unblinding. Since unblindingcannot be measured directly, but must be inferred from participants' responses, its measured value will depend on thenature of the questions asked. As a result, it is not possible to measure unblinding in a way that is completely objective. Nonetheless, it is still possible to make informed judgments about the quality of a blind. Poorly blinded studies rank above unblinded studies and below well-blinded studies in thehierarchy of evidence.[18] Post-study unblinding is the release of masked data upon completion of a study. Inclinical studies, post-study unblinding serves to inform subjects of theirtreatment allocation. Removing a blind upon completion of a study is never mandatory, but is typically performed as a courtesy to study participants. 
Unblinding that occurs after the conclusion of a study is not a source of bias, because data collection and analysis are both complete at this time.[19] Premature unblinding is any unblinding that occurs before the conclusion of a study. In contrast with post-study unblinding, premature unblinding is a source of bias. Acode-break proceduredictates when a subject should be unblinded prematurely. A code-break procedure should only allow for unblinding in cases of emergency. Unblinding that occurs in compliance with code-break procedure is strictly documented and reported.[20] Premature unblinding may also occur when a participant infers from experimental conditions information that has been masked to them. A common cause for unblinding is the presence of side effects (or effects) in the treatment group. In pharmacological trials, premature unblinding can be reduced with the use of anactive placebo, which conceals treatment allocation by ensuring the presence of side effects in both groups.[21]However, side effects are not the only cause of unblinding; any perceptible difference between the treatment and control groups can contribute to premature unblinding.[citation needed] A problem arises in the assessment of blinding because asking subjects to guess masked information may prompt them to try to infer that information. Researchers speculate that this may contribute to premature unblinding.[22]Furthermore, it has been reported that some subjects of clinical trials attempt to determine if they have received an active treatment by gathering information on social media and message boards. While researchers counsel patients not to use social media to discuss clinical trials, their accounts are not monitored. This behavior is believed to be a source of unblinding.[23]CONSORT standards andgood clinical practiceguidelines recommend the reporting of all premature unblinding.[24][25]In practice, unintentional unblinding is rarely reported.[1] Bias due to poor blinding tends to favor the experimental group, resulting in inflated effect size and risk offalse positives.[24]Success or failure of blinding is rarely reported or measured; it is implicitly assumed that experiments reported as "blind" are truly blind.[1]Critics have pointed out that without assessment and reporting, there is no way to know if a blind succeeded. This shortcoming is especially concerning given that even a small error in blinding can produce astatistically significantresult in the absence of any real difference between test groups when a study is sufficientlypowered(i.e. statistical significance is not robust to bias). As such, many statistically significant results inrandomized controlled trialsmay be caused by error in blinding.[26]Some researchers have called for the mandatory assessment of blinding efficacy in clinical trials.[18] Blinding is considered essential in medicine,[27]but is often difficult to achieve. For example, it is difficult to compare surgical and non-surgical interventions in blind trials. In some cases,sham surgerymay be necessary for the blinding process. A goodclinical protocolensures that blinding is as effective as possible within ethical and practical constrains. Studies of blinded pharmacological trials across widely varying domains find evidence of high levels of unblinding. Unblinding has been shown to affect both patients and clinicians. This evidence challenges the common assumption that blinding is highly effective in pharmacological trials. 
Unblinding has also been documented in clinical trials outside of pharmacology.[28] A 2018meta-analysisfound that assessment of blinding was reported in only 23 out of 408 randomized controlled trials for chronic pain (5.6%). The study concluded upon analysis of pooled data that the overall quality of the blinding was poor, and the blinding was "not successful." Additionally, both pharmaceutical sponsorship and the presence of side effects were associated with lower rates of reporting assessment of blinding.[29] Studies have found evidence of extensive unblinding inantidepressanttrials: at least three-quarters of patients were able to correctly guess their treatment assignment.[30]Unblinding also occurs in clinicians.[31]Better blinding of patients and clinicians reduceseffect size. Researchers concluded that unblinding inflates effect size in antidepressant trials.[32][33][34]Some researchers believe that antidepressants are not effective for the treatment of depression and only outperform placebos due tosystematic error. These researchers argue that antidepressants are justactive placebos.[35][36] While the possibility of blinded trials onacupunctureis controversial, a 2003 review of 47randomized controlled trialsfound no fewer than four methods of blinding patients to acupuncture treatment: 1) superficial needling of true acupuncture points, 2) use of acupuncture points which are not indicated for the condition being treated, 3) insertion of needles outside of true acupuncture points, and 4) the use of placebo needles which are designed not to penetrate the skin. The authors concluded that there was "no clear association between type of sham intervention used and the results of the trials."[37] A 2018 study on acupuncture which used needles that did not penetrate the skin as a sham treatment found that 68% of patients and 83% of acupuncturists correctly identified their group allocation. The authors concluded that the blinding had failed, but that more advanced placebos may someday offer the possibility of well-blinded studies in acupuncture.[38] It is standard practice in physics to perform blinded data analysis. After data analysis is complete, one is allowed to unblind the data. A prior agreement to publish the data regardless of the results of the analysis may be made to preventpublication bias.[13] Social science research is particularly prone toobserver bias, so it is important in these fields to properly blind the researchers. In some cases, while blind experiments would be useful, they are impractical or unethical. Blinded data analysis can reduce bias, but is rarely used in social science research.[39] In apolice photo lineup, an officer shows a group of photos to a witness and asks the witness to identify the individual who committed the crime. Since the officer is typically aware of who the suspect is, they may (subconsciously or consciously) influence the witness to choose the individual that they believe committed the crime. There is a growing movement in law enforcement to move to a blind procedure in which the officer who shows the photos to the witness does not know who the suspect is.[40][41] Auditions for symphony orchestras take place behind a curtain so that the judges cannot see the performer. Blinding the judges to the gender of the performers has been shown to increase the hiring of women.[42]Blind tests can also be used to compare the quality of musical instruments.[43][44]
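One common way of implementing the blinded data analysis mentioned above is to add a hidden offset to the quantity of interest before analysts see it, revealing the offset only at unblinding. The sketch below is a generic illustration of that idea, with invented numbers, rather than a description of any specific experiment's protocol:

```python
import random

class BlindedDataset:
    """Wraps measurements with a secret additive offset so that analysts
    cannot see the true central value until the blind is lifted."""
    def __init__(self, measurements, seed: int = 0) -> None:
        rng = random.Random(seed)            # the seed would be held by a third party
        self._offset = rng.uniform(-5.0, 5.0)
        self.blinded = [x + self._offset for x in measurements]

    def unblind(self, blinded_result: float) -> float:
        """Applied once, after the analysis procedure has been frozen."""
        return blinded_result - self._offset

if __name__ == "__main__":
    true_data = [10.1, 9.8, 10.3, 10.0, 9.9]
    blind = BlindedDataset(true_data, seed=42)
    blinded_mean = sum(blind.blinded) / len(blind.blinded)
    print("analysts work with:", round(blinded_mean, 3))
    print("unblinded mean:    ", round(blind.unblind(blinded_mean), 3))
```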
https://en.wikipedia.org/wiki/Blind_experiment
An edge case is a problem or situation that occurs only at an extreme (maximum or minimum) operating parameter. For example, a stereo speaker might noticeably distort audio when played at maximum volume, even in the absence of any other extreme setting or condition. An edge case can be expected or unexpected. In engineering, the process of planning for and gracefully addressing edge cases can be a significant task, and yet this task may be overlooked or underestimated. Edge cases have a number of common causes[1] and can take many basic forms. Non-trivial edge cases can result in the failure of an object that is being engineered. They may not have been foreseen during the design phase, and they may not have been thought possible during normal use of the object. For this reason, attempts to formalize good engineering standards often include information about edge cases. In programming, an edge case typically involves input values that require special handling in an algorithm behind a computer program. As a measure for validating the behavior of computer programs in such cases, unit tests are usually created; they test the boundary conditions of an algorithm, function, or method. A series of edge cases around each "boundary" can be used to give reasonable coverage and confidence, using the assumption that if the code behaves correctly at the edges, it should behave correctly everywhere else.[2] For example, a function that divides two numbers might be tested using both very large and very small numbers. This assumes that if it works for both ends of the magnitude spectrum, it should work correctly in between.[3] Programmers may also create integration tests to address edge cases not covered by unit tests.[4] These tests cover cases which only appear when a system is tested as a whole. For example, while a unit test may ensure that a function correctly calculates a result, an integration test ensures that this function works properly when integrated with a database or an external API. These tests are particularly relevant with increasing system complexity in distributed systems, microservices, and Internet of Things (IoT) devices. With microservices in particular, testing becomes a challenge as integration tests may not cover all microservice endpoints, resulting in uncovered edge cases.[5] Other types of testing which relate to edge cases may include load testing and negative/failure testing. Both methods aim at expanding the test coverage of a system, reducing the likelihood of unexpected edge cases. In test-driven development, edge cases may be determined by system requirements and accounted for by tests before writing code. Such documentation may go inside a product requirements document after discussions with stakeholders and other teams.
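The division example above can be made concrete with boundary-value unit tests covering very large, very small, and zero-adjacent inputs. This is a minimal sketch using Python's built-in unittest module; the divide function itself is invented for the example:

```python
import sys
import unittest

def divide(a: float, b: float) -> float:
    """Function under test, included so the example is self-contained."""
    if b == 0:
        raise ZeroDivisionError("b must be non-zero")
    return a / b

class DivideEdgeCases(unittest.TestCase):
    def test_very_large_operands(self):
        self.assertAlmostEqual(divide(sys.float_info.max, sys.float_info.max), 1.0)

    def test_very_small_operands(self):
        self.assertAlmostEqual(divide(sys.float_info.min, sys.float_info.min), 1.0)

    def test_zero_numerator(self):
        self.assertEqual(divide(0.0, 123.456), 0.0)

    def test_zero_denominator_is_rejected(self):
        with self.assertRaises(ZeroDivisionError):
            divide(1.0, 0.0)

if __name__ == "__main__":
    unittest.main()
```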
https://en.wikipedia.org/wiki/Boundary_testing
Asanity checkorsanity testis a basic test to quickly evaluate whether a claim or the result of a calculation can possibly be true. It is a simple check to see if the produced material is rational (that the material's creator was thinking rationally, applyingsanity). The point of a sanity test is to rule out certain classes of obviously false results, not to catch every possible error. Arule-of-thumborback-of-the-envelope calculationmay be checked to perform the test. The advantage of performing an initial sanity test is that of speedily evaluating basic function. In arithmetic, for example, when multiplying by 9, using thedivisibility rulefor 9 to verify that thesum of digitsof the result is divisible by 9 is a sanity test—it will not catcheverymultiplication error, but is a quick and simple method to discovermanypossible errors. Incomputer science, asanity testis a very brief run-through of the functionality of acomputer program, system, calculation, or other analysis, to assure that part of the system or methodology works roughly as expected. This is often prior to a more exhaustive round of testing. A sanity test can refer to variousorders of magnitudeand other simplerule-of-thumbdevices applied to cross-checkmathematicalcalculations. For example: In software development, a sanity test (a form ofsoftware testingwhich offers "quick, broad, and shallow testing"[1]) evaluates the result of a subset of application functionality to determine whether it is possible and reasonable to proceed with further testing of the entire application.[2]Sanity tests may sometimes be used interchangeably withsmoke tests[3]insofar as both terms denote tests which determine whether it ispossibleandreasonableto continue testing further. On the other hand, a distinction is sometimes made that a smoke test is a non-exhaustive test that ascertains whether the most crucial functions of a programme work before proceeding with further testing whereas a sanity test refers to whether specific functionality such as a particular bug fix works as expected without testing the wider functionality of the software.[citation needed]In other words, a sanity test determines whether the intended result of a code change works correctly while a smoke test ensures that nothing else important was broken in the process. Sanity testing and smoke testing avoid wasting time and effort by quickly determining whether an application is too flawed to merit more rigorousQA testing, but needs more developerdebugging. Groups of sanity tests are often bundled together for automatedunit testingof functions, libraries, or applications prior tomergingdevelopment code into a testing ortrunkversion controlbranch,[4]forautomated building,[5]or forcontinuous integrationandcontinuous deployment.[6] Another common usage ofsanity testis to denote checks which are performedwithinprogramme code, usually on arguments to functions or returns therefrom, to see if the answers can be assumed to be correct. The more complicated the routine, the more important that its response be checked. The trivial case is checking to see whether thereturn valueof a function indicated success or failure, and to therefore cease further processing upon failure. This return value is actually often itself the result of a sanity check. 
For example, if the function attempted to open, write to, and close a file, a sanity check may be used to ensure that it did not fail on any of these actions—which is a sanity check often ignored by programmers.[7] These kinds of sanity checks may be used during development for debugging purposes and also to aid introubleshootingsoftwareruntime errors. For example, in a bank account management application, a sanity check will fail if a withdrawal requests more money than the total account balance rather than allowing the account to go negative (which wouldn't be sane). Another sanity test might be that deposits or purchases correspond to patterns established by historical data—for example, large purchase transactions or ATM withdrawals in foreign locations never before visited by the cardholder may be flagged for confirmation.[citation needed] Sanity checks are also performed upon installation ofstable, productionsoftware code into a new computingenvironmentto ensure that alldependenciesare met, such as a compatibleoperating systemandlinklibraries. When a computing environment has passed all the sanity checks, it's known as a sane environment for the installation programme to proceed with reasonable expectation of success. A"Hello, World!" programis often used as a sanity test for adevelopment environmentsimilarly. Rather than a complicated script running a set of unit tests, if this simple programme fails to compile or execute, it proves that the supporting environment likely has a configuration problem that will preventanycode from compiling or executing. But if "Hello world" executes, then any problems experienced with other programmes likely can be attributed to errors in that application's code rather than the environment. TheAssociation for Computing Machinery,[8]and software projects such asAndroid,[9]MediaWiki[10]andTwitter,[11]discourage use of the phrasesanity checkin favour of other terms such asconfidence test,coherence check, or simplytest, as part of a wider attempt to avoidableistlanguage and increaseinclusivity.
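The argument and return-value checks described above can be illustrated with the bank-account and file-writing examples; the function names and limits here are hypothetical:

```python
def withdraw(balance: float, amount: float) -> float:
    """Sanity-checks the request before changing anything."""
    if amount <= 0:
        raise ValueError("withdrawal amount must be positive")
    if amount > balance:
        # Failing this sanity check prevents the account from going negative.
        raise ValueError("withdrawal exceeds current balance")
    return balance - amount

def save_report(path: str, text: str) -> None:
    """A sanity check often skipped by programmers: verify that file
    operations actually succeeded instead of assuming they did."""
    try:
        with open(path, "w", encoding="utf-8") as handle:
            handle.write(text)
    except OSError as exc:
        raise RuntimeError(f"could not write report to {path}") from exc

if __name__ == "__main__":
    print(withdraw(100.0, 40.0))   # 60.0
    try:
        withdraw(100.0, 150.0)
    except ValueError as exc:
        print("rejected:", exc)
```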
https://en.wikipedia.org/wiki/Sanity_testing
In software quality assurance,performance testingis in general a testing practice performed to determine how a system performs in terms of responsiveness and stability under a particular workload.[1]It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing, a subset of performance engineering, is a computer science practice which strives to build performance standards into the implementation, design and architecture of a system. Load testingis the simplest form of performance testing. A load test is usually conducted to understand the behavior of the system under a specific expected load. This load can be the expected concurrent number of users on theapplicationperforming a specific number oftransactionswithin the set duration. This test will give out the response times of all the important business critical transactions. Thedatabase,application server, etc. are also monitored during the test, this will assist in identifyingbottlenecksin the application software and the hardware that the software is installed on Stress testingis normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system's robustness in terms of extreme load and helps application administrators to determine if the system will perform sufficiently if the current load goes well above the expected maximum. Soak testing, also known as endurance testing, is usually done to determine if the system can sustain the continuous expected load. During soak tests, memory utilization is monitored to detect potential leaks. Also important, but often overlooked is performance degradation, i.e. to ensure that the throughput and/or response times after some long period of sustained activity are as good as or better than at the beginning of the test. It essentially involves applying a significant load to a system for an extended, significant period of time. The goal is to discover how the system behaves under sustained use. Spike testing is done by suddenly increasing or decreasing the load generated by a very large number of users, and observing the behavior of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load. Breakpoint testing is similar to stress testing. An incremental load is applied over time while the system is monitored for predetermined failure conditions. Breakpoint testing is sometimes referred to as Capacity Testing because it can be said to determine the maximum capacity below which the system will perform to its required specifications or Service Level Agreements. The results of breakpoint analysis applied to a fixed environment can be used to determine the optimal scaling strategy in terms of required hardware or conditions that should trigger scaling-out events in a cloud environment. Rather than testing for performance from a load perspective, tests are created to determine the effects of configuration changes to the system's components on the system's performance and behavior. A common example would be experimenting with different methods ofload-balancing. Isolation testing is not unique to performance testing but involves repeating a test execution that resulted in a system problem. Such testing can often isolate and confirm the fault domain. 
This is a relatively new form of performance testing when global applications such as Facebook, Google and Wikipedia, are performance tested from load generators that are placed on the actual target continent whether physical machines or cloud VMs. These tests usually requires an immense amount of preparation and monitoring to be executed successfully. Performance testing can serve different purposes: Many performance tests are undertaken without setting sufficiently realistic, goal-oriented performance goals. The first question from a business perspective should always be, "why are we performance-testing?". These considerations are part of thebusiness caseof the testing. Performance goals will differ depending on the system's technology and purpose, but should always include some of the following: If a system identifies end-users by some form of log-in procedure then a concurrency goal is highly desirable. By definition this is the largest number of concurrent system users that the system is expected to support at any given moment. The work-flow of a scripted transaction may impact trueconcurrencyespecially if the iterative part contains the log-in and log-out activity. If the system has no concept of end-users, then performance goal is likely to be based on a maximum throughput or transaction rate. This refers to the time taken for one system node to respond to the request of another. A simple example would be a HTTP 'GET' request from browser client to web server. In terms of response time this is what allload testingtools actually measure. It may be relevant to set server response time goals between all nodes of the system. Load-testing tools have difficulty measuring render-response time, since they generally have no concept of what happens within anodeapart from recognizing a period of time where there is no activity 'on the wire'. To measure render response time, it is generally necessary to include functionaltest scriptsas part of the performance test scenario. Many load testing tools do not offer this feature. It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. SeePerformance Engineeringfor more details. However, performance testing is frequently not performed against a specification; e.g., no one will have expressed what the maximum acceptable response time for a given population of users should be. Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the "weakest link" – there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server monitors, which can be analyzed together with the raw performance statistics. Without such instrumentation one might have to have someone crouched overWindows Task Managerat the server to see how much CPU load the performance tests are generating (assuming a Windows system is under test). 
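As a rough illustration of how server response time is measured from the client side, the sketch below times HTTP GET requests issued by a handful of simulated virtual users, using only the Python standard library. The target URL, user count, and request count are placeholders; real load-testing tools add ramp-up, think time, pacing, and far richer instrumentation:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"   # placeholder; point at the system under test
VIRTUAL_USERS = 10
REQUESTS_PER_USER = 5

def timed_get(url: str) -> float:
    """Return the elapsed wall-clock time for one GET request, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

def virtual_user(url: str):
    return [timed_get(url) for _ in range(REQUESTS_PER_USER)]

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        timings = [t for user in pool.map(virtual_user, [URL] * VIRTUAL_USERS) for t in user]
    timings.sort()
    print(f"samples: {len(timings)}")
    print(f"mean response time: {sum(timings) / len(timings):.3f} s")
    # Rough 95th percentile by index; production tools use proper percentile math.
    print(f"95th percentile:    {timings[int(0.95 * (len(timings) - 1))]:.3f} s")
```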
Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally. It can also be done in-house, althoughrouterswould then need to be configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the system from realistic points. For example, if 50% of a system's user base will be accessing the system via a 56K modem connection and the other half over aT1, then the load injectors (computers that simulate real users) should either inject load over the same mix of connections (ideal) or simulate the network latency of such connections, following the same user profile. It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95 percentile response time, then an injector configuration could be used to test whether the proposed system met that specification. Performance specifications should ask the following questions, at a minimum: A stable build of the system which must resemble the production environment as closely as is possible. To ensure consistent results, the performance testing environment should be isolated from other environments, such asuser acceptance testing(UAT) or development. As a best practice it is always advisable to have a separate performance testing environment resembling the production environment as much as possible. In performance testing, it is often crucial for the test conditions to be similar to the expected actual use. However, in practice this is hard to arrange and not wholly possible, since production systems are subjected to unpredictable workloads. Test workloads may mimic occurrences in the production environment as far as possible, but only in the simplest systems can one exactly replicate this workload variability. Loosely-coupled architectural implementations (e.g.:SOA) have created additional complexities with performance testing. To truly replicate production-like states, enterprise services or assets that share a commoninfrastructureor platform require coordinated performance testing, with all consumers creating production-like transaction volumes and load on shared infrastructures or platforms. Because this activity is so complex and costly in money and time, some organizations now use tools to monitor and simulate production-like conditions (also referred as "noise") in their performance testing environments (PTE) to understand capacity and resource requirements and verify / validate quality attributes. It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true in the case of functional testing, but even more so with performance testing, due to the end-to-end nature of its scope. It is crucial for a performance test team to be involved as early as possible, because it is time-consuming to acquire and prepare the testing environment and other key performance requisites. Performance testing is mainly divided into two main categories: This part of performance testing mainly deals with creating/scripting the work flows of key identified business processes. This can be done using a wide variety of tools. 
Each of the tools mentioned in the above list (which is neither exhaustive nor complete) either employs a scripting language (C, Java, JS) or some form of visual representation (drag and drop) to create and simulate end-user workflows. Most of the tools allow for something called "Record & Replay", wherein the performance tester launches the testing tool, hooks it onto a browser or thick client, and captures all the network transactions which happen between the client and server. In doing so, a script is developed which can be enhanced or modified to emulate various business scenarios. This forms the other face of performance testing. With performance monitoring, the behavior and response characteristics of the application under test are observed. Several parameters, notably server hardware parameters, are usually monitored during a performance test execution. As a first step, the patterns generated by these parameters provide a good indication of where the bottleneck lies. To determine the exact root cause of the issue, software engineers use tools such as profilers to measure what parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response times. Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load: to start with a few virtual users and increase the number over time to a predetermined maximum. The test result shows how the performance varies with the load, given as number of users vs. response time. Various tools are available to perform such tests. Tools in this category usually execute a suite of tests which emulate real users against the system. Sometimes the results can reveal oddities, e.g., that while the average response time might be acceptable, there are outliers of a few key transactions that take considerably longer to complete – something that might be caused by inefficient database queries, pictures, etc. Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded. Does the system crash? How long does it take to recover if a large load is reduced? Does its failure cause collateral damage? Analytical performance modeling is a method to model the behavior of a system in a spreadsheet. The model is fed with measurements of transaction resource demands (CPU, disk I/O, LAN, WAN), weighted by the transaction mix (business transactions per hour). The weighted transaction resource demands are added up to obtain the hourly resource demands and divided by the hourly resource capacity to obtain the resource loads. Using the response time formula (R = S/(1 − U), where R = response time, S = service time, and U = load), response times can be calculated and calibrated with the results of the performance tests. Analytical performance modeling allows evaluation of design options and system sizing based on actual or anticipated business use. It is therefore much faster and cheaper than performance testing, though it requires a thorough understanding of the hardware platforms.
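The response time formula above can be applied directly. The following sketch is a simplification of the spreadsheet-based modeling just described, with made-up transaction demand figures: it derives a resource load U from weighted transaction demands and then computes R = S / (1 − U):

```python
def resource_load(demands_sec_per_txn, txns_per_hour, capacity_sec_per_hour=3600.0):
    """Hourly resource demand divided by hourly resource capacity (the utilization U)."""
    hourly_demand = sum(d * v for d, v in zip(demands_sec_per_txn, txns_per_hour))
    return hourly_demand / capacity_sec_per_hour

def response_time(service_time: float, utilization: float) -> float:
    """R = S / (1 - U); only meaningful while U < 1."""
    if utilization >= 1.0:
        raise ValueError("resource is saturated (U >= 1); response time is unbounded")
    return service_time / (1.0 - utilization)

if __name__ == "__main__":
    # Hypothetical transaction mix: CPU demand per transaction (s) and volume (txn/h).
    cpu_demand = [0.050, 0.120]      # e.g. a "search" and a "checkout" transaction
    volume = [20_000, 5_000]

    u = resource_load(cpu_demand, volume)          # (1000 + 600) / 3600 ≈ 0.44
    print(f"CPU utilization U = {u:.2f}")
    print(f"R for 0.05 s of service time = {response_time(0.050, u):.3f} s")
```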
Tasks to perform such a test build on the measurements and calculations described above. According to the Microsoft Developer Network, the performance testing methodology consists of a defined series of activities.
https://en.wikipedia.org/wiki/Software_performance_testing
Stress testingis a form of deliberately intense or thorough testing, used to determine the stability of a given system, critical infrastructure or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results. Reasons can include: Reliability engineersoften test items under expected stress or even under accelerated stress in order to determine the operating life of the item or to determine modes of failure.[1] The term "stress" may have a more specific meaning in certain industries, such as material sciences, and therefore stress testing may sometimes have a technical meaning – one example is infatigue testingfor materials. Inanimal biology, there are various forms ofbiological stressandbiological stress testing, such as thecardiac stress testin humans, often administered forbiomedicalreasons. Inexercise physiology, training zones are often determined in relation to metabolic stress protocols, quantifyingenergy production,oxygen uptake, orblood chemistryregimes. Fatigue testingis a specialised form ofmechanical testingthat is performed by applyingcyclic loadingto acouponor structure. These tests are used either to generatefatiguelife and crack growth data, identify critical locations or demonstrate the safety of a structure that may be susceptible to fatigue. Fatigue tests are used on a range of components from coupons through to full size test articles such asautomobilesandaircraft. Fatigue tests on coupons are typically conducted usingservo hydraulic test machineswhich are capable of applying largevariable amplitudecyclic loads.[4]Constant amplitudetesting can also be applied by simpler oscillating machines. Thefatigue lifeof a coupon is the number of cycles it takes to break the coupon. This data can be used for creating stress-life or strain-life curves. The rate of crack growth in a coupon can also be measured, either during the test or afterward usingfractography. Testing of coupons can also be carried out insideenvironmental chamberswhere the temperature, humidity and environment that may affect the rate of crack growth can be controlled. Because of the size and unique shape of full size test articles, specialtest rigsare built to apply loads through a series of hydraulic or electricactuators. Actuators aim to reproduce the significant loads experienced by a structure, which in the case of aircraft, may consist of manoeuvre, gust,buffetand ground-air-ground (GAG) loading. A representative sample or block of loading is applied repeatedly until thesafe lifeof the structure has been demonstrated or failures occur which need to be repaired. Instrumentation such asload cells,strain gaugesanddisplacement gaugesare installed on the structure to ensure the correct loading has been applied. Periodicinspectionsof the structure around criticalstress concentrationssuch as holes and fittings are made to determine the time detectable cracks were found and to ensure any cracking that does occur, does not affect other areas of the test article. Because not all loads can be applied, any unbalanced structural loads are typically reacted out to the test floor through non-critical structure such as the undercarriage. Critical infrastructure (CI) such as highways, railways, electric power networks, dams, port facilities, major gas pipelines or oil refineries are exposed to multiple natural and human-induced hazards and stressors, includingearthquakes,landslides,floods,tsunami,wildfires,climate changeeffects orexplosions. 
These stressors and abrupt events can cause failures and losses, and hence, can interrupt essential services for the society and the economy.[6]Therefore, CI owners and operators need to identify and quantify the risks posed by the CIs due to different stressors, in order to define mitigation strategies[7]and improve theresilienceof the CIs.[8][9]Stress tests are advanced and standardised tools for hazard andrisk assessmentof CIs, that include both low-probability high-consequence (LP-HC) events and so-called extreme orrare events, as well as the systematic application of these new tools to classes of CI. Stress testing is the process of assessing the ability of a CI to maintain a certain level of functionality under unfavourable conditions, while stress tests consider LP-HC events, which are not always accounted for in the design and risk assessment procedures, commonly adopted by public authorities or industrial stakeholders. A multilevel stress test methodology for CI has been developed in the framework of the European research project STREST,[10]consisting of four phases:[11] Phase 1:Preassessment, during which the data available on the CI (risk context) and on the phenomena of interest (hazard context) are collected. The goal and objectives, the time frame, the stress test level and the total costs of the stress test are defined. Phase 2:Assessment, during which the stress test at the component and the system scope is performed, including fragility[12]and risk[13]analysis of the CIs for the stressors defined in Phase 1. The stress test can result in three outcomes: Pass, Partly Pass and Fail, based on the comparison of the quantified risks to acceptable risk exposure levels and a penalty system. Phase 3:Decision, during which the results of the stress test are analyzed according to the goal and objectives defined in Phase 1. Critical events (events that most likely cause the exceedance of a given level of loss) and risk mitigation strategies are identified. Phase 4:Report, during which the stress test outcome and risk mitigation guidelines based on the findings established in Phase 3 are formulated and presented to the stakeholders. Infinance, astress testis an analysis or simulation designed to determine the ability of a givenfinancial instrumentorfinancial institutionto deal with aneconomic crisis. Instead of doing financial projection on a "best estimate" basis, a company or its regulators may do stress testing where they look at how robust a financial instrument is in certain crashes, a form ofscenario analysis. They may test the instrument under, for example, the following stresses: This type of analysis has become increasingly widespread, and has been taken up by various governmental bodies (such as thePRAin the UK or inter-governmental bodies such as theEuropean Banking Authority(EBA) and theInternational Monetary Fund) as a regulatory requirement on certain financial institutions to ensure adequate capital allocation levels to cover potential losses incurred during extreme, but plausible, events. The EBA's regulatory stress tests have been referred to as "a walk in the park" bySaxo Bank's Chief Economist.[15] Acardiac stress testis a cardiological examination that evaluates the cardiovascular system's response to external stress within a controlled clinical setting. 
This stress response can be induced through physical exercise (usually on a treadmill) or by intravenous pharmacological stimulation of the heart rate.[16] As the heart works progressively harder (is stressed), it is monitored using an electrocardiogram (ECG) monitor, which records the heart's electrical rhythms and broader electrophysiology. Pulse rate, blood pressure and symptoms such as chest discomfort or fatigue are simultaneously monitored by attending clinical staff, who question the patient throughout the procedure about pain and perceived discomfort. Abnormalities in blood pressure, heart rate or the ECG, or worsening physical symptoms, could be indicative of coronary artery disease.[17] Stress testing does not accurately diagnose all cases of coronary artery disease, and can often indicate that it exists in people who do not have the condition. The test can also detect heart abnormalities such as arrhythmias, and conditions affecting electrical conduction within the heart, such as various types of fascicular blocks.[18] A contraction stress test (CST) is performed near the end of pregnancy (34 weeks' gestation) to determine how well the fetus will cope with the contractions of childbirth. The aim is to induce contractions and monitor the fetus to check for heart rate abnormalities using a cardiotocograph. A CST is one type of antenatal fetal surveillance technique.
https://en.wikipedia.org/wiki/Stress_testing
Unit testing, a.k.a. component or module testing, is a form of software testing by which isolated source code is tested to validate expected behavior.[1] Unit testing describes tests that are run at the unit level, in contrast to testing at the integration or system level.[11] Unit testing, as a principle for separately testing smaller parts of large software systems, dates back to the early days of software engineering. In June 1956, at the US Navy's Symposium on Advanced Programming Methods for Digital Computers, H.D. Benington presented the SAGE project. It featured a specification-based approach where the coding phase was followed by "parameter testing" to validate component subprograms against their specification, followed then by "assembly testing" for parts put together.[2][3] In 1964, a similar approach was described for the software of the Mercury project, where individual units developed by different programmers underwent "unit tests" before being integrated.[4] In 1969, testing methodologies appeared more structured, with unit tests, component tests and integration tests collectively validating individual parts written separately and their progressive assembly into larger blocks.[5] Some public standards adopted in the late 1960s, such as MIL-STD-483[6] and MIL-STD-490, contributed further to a wide acceptance of unit testing in large projects. Unit testing was in those times interactive[3] or automated,[7] using either coded tests or capture-and-replay testing tools. In 1989, Kent Beck described a testing framework for Smalltalk (later called SUnit) in "Simple Smalltalk Testing: With Patterns". In 1997, Kent Beck and Erich Gamma developed and released JUnit, a unit test framework that became popular with Java developers.[8] Google embraced automated testing around 2005–2006.[9] A unit is defined as a single behaviour exhibited by the system under test (SUT), usually corresponding to a requirement[definition needed]. While a unit may correspond to a single function or module (in procedural programming) or a single method or class (in object-oriented programming), functions/methods and modules/classes do not necessarily correspond to units. From the system requirements perspective only the perimeter of the system is relevant, thus only entry points to externally visible system behaviours define units.[clarification needed][10] Unit tests can be performed manually or via automated test execution. Automated testing has benefits such as running tests often, running tests without staffing cost, and consistent and repeatable testing. Testing is often performed by the programmer who writes and modifies the code under test, and unit testing may be viewed as part of the process of writing code. A parameterized test is a test that accepts a set of values, enabling the test to run with multiple, different input values. A testing framework that supports parameterized tests provides a way to encode parameter sets and to run the test with each set. Use of parameterized tests can reduce test code duplication. Parameterized tests are supported by TestNG, JUnit,[14] XUnit and NUnit, as well as by various JavaScript test frameworks.[citation needed] Parameters for the unit tests may be coded manually or, in some cases, automatically generated by the test framework.
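As an illustration of the parameterized tests just described, the following is a minimal sketch using JUnit 5's @ParameterizedTest and @CsvSource annotations; the Adder class and its sum method are the same hypothetical example referred to later in this article, and are assumed rather than defined here.

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Sketch of a parameterized unit test: one test method run once per input set.
class AdderParameterizedTest {
    @ParameterizedTest
    @CsvSource({
        "1, 1, 2",
        "2, 3, 5",
        "-4, 4, 0"
    })
    void sumAddsTwoIntegers(int a, int b, int expected) {
        // Each row of the CsvSource above is one execution of this test.
        assertEquals(expected, Adder.sum(a, b));
    }
}

Here the framework itself supplies the parameter sets to the test method, which is the mechanism described in the paragraph above.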
In recent years support was added for writing more powerful (unit) tests, leveraging the concept of theories, test cases that execute the same steps, but using test data generated at runtime, unlike regular parameterized tests that use the same execution steps with input sets that are pre-defined.[citation needed] Sometimes, in the agile software development, unit testing is done peruser storyand comes in the later half of the sprint after requirements gathering and development are complete. Typically, the developers or other members from the development team, such asconsultants, will write step-by-step 'test scripts' for the developers to execute in the tool. Test scripts are generally written to prove the effective and technical operation of specific developed features in the tool, as opposed to full fledged business processes that would be interfaced by theend user, which is typically done duringuser acceptance testing. If the test-script can be fully executed from start to finish without incident, the unit test is considered to have "passed", otherwise errors are noted and the user story is moved back to development in an 'in-progress' state. User stories that successfully pass unit tests are moved on to the final steps of the sprint - Code review, peer review, and then lastly a 'show-back' session demonstrating the developed tool to stakeholders. In test-driven development (TDD), unit tests are written while the production code is written. Starting with working code, the developer adds test code for a required behavior, then addsjust enoughcode to make the test pass, then refactors the code (including test code) as makes sense and then repeats by adding another test. Unit testing is intended to ensure that the units meet theirdesignand behave as intended.[15] By writing tests first for the smallest testable units, then the compound behaviors between those, one can build up comprehensive tests for complex applications.[15] One goal of unit testing is to isolate each part of the program and show that the individual parts are correct.[1]A unit test provides a strict, writtencontractthat the piece of code must satisfy. Unit testing finds problems early in thedevelopment cycle. This includes both bugs in the programmer's implementation and flaws or missing parts of the specification for the unit. The process of writing a thorough set of tests forces the author to think through inputs, outputs, and error conditions, and thus more crisply define the unit's desired behavior.[citation needed] The cost of finding a bug before coding begins or when the code is first written is considerably lower than the cost of detecting, identifying, and correcting the bug later. Bugs in released code may also cause costly problems for the end-users of the software.[16][17][18]Code can be impossible or difficult to unit test if poorly written, thus unit testing can force developers to structure functions and objects in better ways. Unit testing enables more frequent releases in software development. By testing individual components in isolation, developers can quickly identify and address issues, leading to faster iteration and release cycles.[19] Unit testing allows the programmer torefactorcode or upgrade system libraries at a later date, and make sure the module still works correctly (e.g., inregression testing). The procedure is to write test cases for allfunctionsandmethodsso that whenever a change causes a fault, it can be identified quickly. Unit tests detect changes which may break adesign contract. 
Unit testing may reduce uncertainty in the units themselves and can be used in abottom-uptesting style approach. By testing the parts of a program first and then testing the sum of its parts,integration testingbecomes much easier.[citation needed] Some programmers contend that unit tests provide a form of documentation of the code. Developers wanting to learn what functionality is provided by a unit, and how to use it, can review the unit tests to gain an understanding of it.[citation needed] Test cases can embody characteristics that are critical to the success of the unit. These characteristics can indicate appropriate/inappropriate use of a unit as well as negative behaviors that are to be trapped by the unit. A test case documents these critical characteristics, although many software development environments do not rely solely upon code to document the product in development.[citation needed] In some processes, the act of writing tests and the code under test, plus associated refactoring, may take the place of formal design. Each unit test can be seen as a design element specifying classes, methods, and observable behavior.[citation needed] Testing will not catch every error in the program, because it cannot evaluate every execution path in any but the most trivial programs. Thisproblemis a superset of thehalting problem, which isundecidable. The same is true for unit testing. Additionally, unit testing by definition only tests the functionality of the units themselves. Therefore, it will not catch integration errors or broader system-level errors (such as functions performed across multiple units, or non-functional test areas such asperformance). Unit testing should be done in conjunction with othersoftware testingactivities, as they can only show the presence or absence of particular errors; they cannot prove a complete absence of errors. To guarantee correct behavior for every execution path and every possible input, and ensure the absence of errors, other techniques are required, namely the application offormal methodsto prove that a software component has no unexpected behavior.[citation needed] An elaborate hierarchy of unit tests does not equal integration testing. Integration with peripheral units should be included in integration tests, but not in unit tests.[citation needed]Integration testing typically still relies heavily on humanstesting manually; high-level or global-scope testing can be difficult to automate, such that manual testing often appears faster and cheaper.[citation needed] Software testing is a combinatorial problem. For example, every Boolean decision statement requires at least two tests: one with an outcome of "true" and one with an outcome of "false". As a result, for every line of code written, programmers often need 3 to 5 lines of test code.[citation needed]This obviously takes time and its investment may not be worth the effort. There are problems that cannot easily be tested at all – for example those that arenondeterministicor involve multiplethreads. In addition, code for a unit test is as likely to be buggy as the code it is testing.Fred BrooksinThe Mythical Man-Monthquotes: "Never go to sea with two chronometers; take one or three."[20]Meaning, if twochronometerscontradict, how do you know which one is correct? Another challenge related to writing the unit tests is the difficulty of setting up realistic and useful tests. 
It is necessary to create relevant initial conditions so the part of the application being tested behaves like part of the complete system. If these initial conditions are not set correctly, the test will not be exercising the code in a realistic context, which diminishes the value and accuracy of unit test results.[citation needed] To obtain the intended benefits from unit testing, rigorous discipline is needed throughout the software development process. It is essential to keep careful records not only of the tests that have been performed, but also of all changes that have been made to the source code of this or any other unit in the software. Use of aversion controlsystem is essential. If a later version of the unit fails a particular test that it had previously passed, the version-control software can provide a list of the source code changes (if any) that have been applied to the unit since that time.[citation needed] It is also essential to implement a sustainable process for ensuring that test case failures are reviewed regularly and addressed immediately.[21]If such a process is not implemented and ingrained into the team's workflow, the application will evolve out of sync with the unit test suite, increasing false positives and reducing the effectiveness of the test suite. Unit testing embedded system software presents a unique challenge: Because the software is being developed on a different platform than the one it will eventually run on, you cannot readily run a test program in the actual deployment environment, as is possible with desktop programs.[22] Unit tests tend to be easiest when a method has input parameters and some output. It is not as easy to create unit tests when a major function of the method is to interact with something external to the application. For example, a method that will work with a database might require a mock up of database interactions to be created, which probably won't be as comprehensive as the real database interactions.[23][better source needed] Below is an example of a JUnit test suite. It focuses on theAdderclass. The test suite usesassertstatements to verify the expected result of various input values to thesummethod. Using unit-tests as a design specification has one significant advantage over other design methods: The design document (the unit-tests themselves) can itself be used to verify the implementation. The tests will never pass unless the developer implements a solution according to the design. Unit testing lacks some of the accessibility of a diagrammatic specification such as aUMLdiagram, but they may be generated from the unit test using automated tools. Most modern languages have free tools (usually available as extensions toIDEs). Free tools, like those based on thexUnitframework, outsource to another system the graphical rendering of a view for human consumption.[24] Unit testing is the cornerstone ofextreme programming, which relies on an automatedunit testing framework. This automated unit testing framework can be either third party, e.g.,xUnit, or created within the development group. Extreme programming uses the creation of unit tests fortest-driven development. The developer writes a unit test that exposes either a software requirement or a defect. This test will fail because either the requirement isn't implemented yet, or because it intentionally exposes a defect in the existing code. Then, the developer writes the simplest code to make the test, along with other tests, pass. 
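The JUnit test suite for the Adder class referred to earlier is not reproduced in this text; the following is a plausible minimal sketch of such a suite, written in the test-first style just described. The exact signature Adder.sum(int, int) is an assumption made for the example.

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Minimal JUnit suite for a hypothetical Adder.sum(int, int) method.
// In test-driven development these assertions would be written first and would
// fail until just enough production code is added to make them pass.
class AdderTest {
    @Test
    void sumOfTwoPositiveNumbers() {
        assertEquals(5, Adder.sum(2, 3));
    }

    @Test
    void sumWithZero() {
        assertEquals(7, Adder.sum(7, 0));
    }

    @Test
    void sumOfTwoNegativeNumbers() {
        assertEquals(-3, Adder.sum(-1, -2));
    }
}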
Most code in a system is unit tested, but not necessarily all paths through the code. Extreme programming mandates a "test everything that can possibly break" strategy over the traditional "test every execution path" method. This leads developers to write fewer tests than classical methods would require, but this is less a problem than a restatement of fact, as classical methods have rarely been followed methodically enough for all execution paths to have been thoroughly tested.[citation needed] Extreme programming simply recognizes that testing is rarely exhaustive (because it is often too expensive and time-consuming to be economically viable) and provides guidance on how to effectively focus limited resources. Crucially, the test code is considered a first-class project artifact in that it is maintained at the same quality as the implementation code, with all duplication removed. Developers release unit testing code to the code repository in conjunction with the code it tests. Extreme programming's thorough unit testing allows the benefits mentioned above, such as simpler and more confident code development and refactoring, simplified code integration, accurate documentation, and more modular designs. These unit tests are also constantly run as a form of regression test. Unit testing is also critical to the concept of Emergent Design. As emergent design is heavily dependent upon refactoring, unit tests are an integral component.[citation needed] An automated testing framework provides features for automating test execution and can accelerate writing and running tests. Frameworks have been developed for a wide variety of programming languages. Generally, frameworks are third-party and are not distributed with a compiler or integrated development environment (IDE). Tests can be written without using a framework, by exercising the code under test with assertions, exception handling, and other control flow mechanisms to verify behavior and report failure. Some note that testing without a framework is valuable since there is a barrier to entry for the adoption of a framework; having some tests is better than none, but once a framework is in place, adding tests can be easier.[25] In some frameworks advanced test features are missing and must be hand-coded. Some programming languages directly support unit testing: their grammar allows the direct declaration of unit tests without importing a library (whether third party or standard), and the Boolean conditions of the unit tests can be expressed in the same syntax as Boolean expressions used in non-test code, such as what is used for if and while statements. Languages with built-in unit testing support include, for example, D and Rust; languages whose standard libraries supply a unit testing framework include Go and Python; and languages without built-in support, such as Java and C++, rely on established third-party unit testing libraries or frameworks.
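To illustrate the earlier point that tests can be written without a framework, the following sketch exercises the same hypothetical Adder.sum method using only ordinary control flow to verify behavior and report failure; it is an illustration of the idea rather than a recommended practice.

// Framework-free test: plain control flow is used to verify behaviour and
// report failure, without any third-party testing library.
public class AdderPlainTest {
    public static void main(String[] args) {
        check(Adder.sum(2, 3) == 5, "2 + 3 should equal 5");
        check(Adder.sum(-1, 1) == 0, "-1 + 1 should equal 0");
        System.out.println("All checks passed.");
    }

    private static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError("Test failed: " + message);
        }
    }
}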
https://en.wikipedia.org/wiki/Unit_testing
Dynamic application security testing (DAST) is a non-functional testing process used to identify security weaknesses and vulnerabilities in an application. This testing process can be carried out either manually or by using automated tools. Manual assessment of an application involves human intervention to identify security flaws which might slip past an automated tool; business logic errors, race conditions, and certain zero-day vulnerabilities can usually only be identified through manual assessment. By contrast, a DAST tool is a program which communicates with a web application through the web front-end in order to identify potential security vulnerabilities in the web application and architectural weaknesses.[1] It performs a black-box test. Unlike static application security testing tools, DAST tools do not have access to the source code and therefore detect vulnerabilities by actually performing attacks. DAST tools allow sophisticated scans, detecting vulnerabilities with minimal user interaction once configured with the host name, crawling parameters and authentication credentials. These tools attempt to detect vulnerabilities in query strings, headers, fragments, verbs (GET/POST/PUT) and DOM injection. DAST tools facilitate the automated review of a web application with the express purpose of discovering security vulnerabilities, and are required in order to comply with various regulatory requirements. Web application scanners can look for a wide variety of vulnerabilities, such as input/output validation flaws (e.g. cross-site scripting and SQL injection), specific application problems and server configuration mistakes. Commercial scanners are a category of web-assessment tools which need to be purchased; some include free features, but most need to be bought for full access to the tool's power. Open-source scanners are often free of cost to the user. These tools can detect vulnerabilities in finalized release candidate versions prior to shipping. Scanners simulate a malicious user by attacking and probing, identifying results which are not part of the expected result set, allowing for a realistic attack simulation.[2] The big advantage of these types of tools is that they can scan year-round, constantly searching for vulnerabilities. With new vulnerabilities being discovered regularly, this allows companies to find and patch vulnerabilities before they can be exploited.[3] As dynamic testing tools, web scanners are not language-dependent, and a web application scanner is able to scan engine-driven web applications. Attackers use the same tools, so if the tools can find a vulnerability, so can attackers.[4] While scanning with a DAST tool, data may be overwritten or malicious payloads injected into the subject site. Sites should therefore be scanned in a production-like but non-production environment to ensure accurate results while protecting the data in the production environment. Because the tool implements a dynamic testing method, it cannot cover 100% of the source code of the application, and hence cannot exercise the application exhaustively. The penetration tester should look at the coverage of the web application, or of its attack surface, to know whether the tool was configured correctly or was able to understand the web application. The tool also cannot implement all variants of attacks for a given vulnerability: tools generally have a predefined list of attacks and do not generate attack payloads tailored to the tested web application.
Some tools are also quite limited in their understanding of the behavior of applications with dynamic content such as JavaScript and Flash.
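A highly simplified sketch of the kind of probing such scanners automate is shown below: it injects a marker payload into a query-string parameter and inspects the response for evidence that the input is reflected unescaped, a crude indicator of possible cross-site scripting. The target URL and payload are hypothetical, and a real DAST tool does far more (crawling, authentication, many attack variants, safer handling of test data).

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Toy dynamic probe: sends a marker payload in a query parameter and checks
// whether it comes back unescaped in the response body.
public class NaiveDastProbe {
    public static void main(String[] args) throws Exception {
        String payload = "<script>alert('probe')</script>";            // hypothetical marker payload
        String target = "https://staging.example.com/search?q="        // hypothetical non-production host
                + URLEncoder.encode(payload, StandardCharsets.UTF_8);

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(target)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.body().contains(payload)) {
            System.out.println("Possible unescaped reflection (needs manual confirmation): " + target);
        }
    }
}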
https://en.wikipedia.org/wiki/Web_application_security_scanner
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of software testing that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing, an internal perspective of the system is used to design test cases. The tester chooses inputs to exercise paths through the code and determine the expected outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT). White-box testing can be applied at the unit, integration and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is used for integration and system testing more frequently today. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements. Where white-box testing is design-driven,[1] that is, driven exclusively by agreed specifications of how each component of software is required to behave (as in DO-178C and ISO 26262 processes), white-box test techniques can accomplish assessment for unimplemented or missing requirements. White-box test design techniques include code coverage criteria such as control flow testing, data flow testing, branch testing, path testing, statement coverage, decision coverage and modified condition/decision coverage. White-box testing is a method of testing the application at the level of the source code; test cases are derived through the use of the design techniques mentioned above. White-box testing is the use of these techniques as guidelines to create an error-free environment by examining the code. These techniques are the building blocks of white-box testing, whose essence is the careful testing of the application at the source code level to reduce hidden errors later on.[2] The different techniques exercise every visible path of the source code to minimize errors and create an error-free environment. The whole point of white-box testing is the ability to know which line of code is being executed and to identify what the correct output should be.[2] White-box testing's basic procedures require the tester to have an in-depth knowledge of the source code being tested. The programmer must have a deep understanding of the application to know what kinds of test cases to create so that every visible path is exercised for testing. Once the source code is understood, it can be analyzed for test cases to be created; broadly, this involves analysing the source code and its requirements, designing test cases that exercise the paths of interest, and executing those tests and evaluating the results. A more modern view is that the dichotomy between white-box testing and black-box testing has blurred and is becoming less relevant. Whereas "white-box" originally meant using the source code, and black-box meant using requirements, tests are now derived from many documents at various levels of abstraction. The real point is that tests are usually designed from an abstract structure such as the input space, a graph, or logical predicates, and the question is what level of abstraction we derive that abstract structure from.[4] That can be the source code, requirements, input space descriptions, or one of dozens of types of design models.
Therefore, the "white-box / black-box" distinction is less important and the terms are less relevant.[citation needed] Inpenetration testing, white-box testing refers to a method where awhite hat hackerhas full knowledge of the system being attacked.[6]The goal of a white-box penetration test is to simulate a malicious insider who has knowledge of and possibly basic credentials for the target system. For such a penetration test, administrative credentials are typically provided in order to analyse how or which attacks can impact high-privileged accounts.[7]Source code can be made available to be used as a reference for the tester. When the code is a target of its own, this is not (only) a penetration test but asource code security audit(or security review).[8]
https://en.wikipedia.org/wiki/White-box_testing
Inmathematics,statistics, andcomputational modelling, agrey box model[1][2][3][4]combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results, to models that need only parameter values from data or existing literature.[5]Thus, almost all models are grey box models as opposed toblack boxwhere no model form is assumed orwhite boxmodels that are purely theoretical. Some models assume a special form such as alinear regression[6][7]orneural network.[8][9]These have special analysis methods. In particularlinear regressiontechniques[10]are much more efficient than most non-linear techniques.[11][12]The model can bedeterministicorstochastic(i.e. containing random components) depending on its planned use. The general case is anon-linear modelwith a partial theoretical structure and some unknown parts derived from data. Models with unlike theoretical structures need to be evaluated individually,[1][13][14]possibly usingsimulated annealingorgenetic algorithms. Within a particular model structure,parameters[14][15]or variable parameter relations[5][16]may need to be found. For a particular structure it is arbitrarily assumed that the data consists of sets of feed vectorsf, product vectorsp, and operating condition vectorsc.[5]Typicallycwill contain values extracted fromf, as well as other values. In many cases a model can be converted to a function of the form:[5][17][18] where the vector functionmgives the errors between the datap, and the model predictions. The vectorqgives some variable parameters that are the model's unknown parts. The parametersqvary with the operating conditionscin a manner to be determined.[5][17]This relation can be specified asq=AcwhereAis a matrix of unknown coefficients, andcas inlinear regression[6][7]includes aconstant termand possibly transformed values of the original operating conditions to obtain non-linear relations[19][20]between the original operating conditions andq. It is then a matter of selecting which terms inAare non-zero and assigning their values. The model completion becomes anoptimizationproblem to determine the non-zero values inAthat minimizes the error termsm(f,p,Ac)over the data.[1][16][21][22][23] Once a selection of non-zero values is made, the remaining coefficients inAcan be determined by minimizingm(f,p,Ac)over the data with respect to the nonzero values inA, typically bynon-linear least squares. Selection of the nonzero terms can be done by optimization methods such assimulated annealingandevolutionary algorithms. Also thenon-linear least squarescan provide accuracy estimates[11][15]for the elements ofAthat can be used to determine if they are significantly different from zero, thus providing a method ofterm selection.[24][25] It is sometimes possible to calculate values ofqfor each data set, directly or bynon-linear least squares. Then the more efficientlinear regressioncan be used to predictqusingcthus selecting the non-zero values inAand estimating their values. Once the non-zero values are locatednon-linear least squarescan be used on the original modelm(f,p,Ac)to refine these values .[16][21][22] A third method ismodel inversion,[5][17][18]which converts the non-linearm(f,p,Ac) into an approximate linear form in the elements ofA, that can be examined using efficient term selection[24][25]and evaluation of the linear regression.[10]For the simple case of a singleqvalue (q=aTc) and an estimateq*ofq. 
Putting dq = aᵀc − q* and linearising the model about q* gives an expression in which aᵀ appears linearly with all other terms known, so it can be analyzed by linear regression techniques. For more than one parameter the method extends in a direct manner.[5][18][17] After checking that the model has been improved, this process can be repeated until convergence. This approach has the advantages that it does not require the parameters q to be determinable from an individual data set and that the linear regression is on the original error terms.[5] Where sufficient data is available, division of the data into a separate model construction set and one or two evaluation sets is recommended. This can be repeated using multiple selections of the construction set, and the resulting models averaged or used to evaluate prediction differences. A statistical test such as chi-squared on the residuals is not particularly useful.[26] The chi-squared test requires known standard deviations, which are seldom available, and failed tests give no indication of how to improve the model.[11] There are a range of methods to compare both nested and non-nested models. These include comparison of model predictions with repeated data. An attempt to predict the residuals m(f, p, Ac) from the operating conditions c using linear regression will show whether the residuals can be predicted.[21][22] Residuals that cannot be predicted offer little prospect of improving the model using the current operating conditions.[5] Terms that do predict the residuals are prospective terms to incorporate into the model to improve its performance.[21] The model inversion technique above can be used as a method of determining whether a model can be improved. In this case selection of nonzero terms is not so important and linear prediction can be done using the significant eigenvectors of the regression matrix. The values in A determined in this manner need to be substituted into the nonlinear model to assess improvements in the model errors. The absence of a significant improvement indicates that the available data is not able to improve the current model form using the defined parameters.[5] Extra parameters can be inserted into the model to make this test more comprehensive.
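The model-completion step described in this section can be summarised as an optimization problem; the following LaTeX restatement paraphrases the procedure above, with the data sets indexed by i, and adds nothing beyond it:

\hat{A} \;=\; \operatorname*{arg\,min}_{A}\; \sum_{i} \bigl\| \, m(f_{i},\, p_{i},\, A c_{i}) \, \bigr\|^{2},

subject to a chosen subset of the entries of A being held at zero (term selection), with the remaining non-zero entries estimated by non-linear least squares and the selection itself explored by methods such as simulated annealing or evolutionary algorithms, as described above.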
https://en.wikipedia.org/wiki/Grey_box_model
In mathematics, extrapolation is a type of estimation, beyond the original observation range, of the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced. By doing so, one makes an assumption of the unknown[1] (for example, a driver may extrapolate road conditions beyond what is currently visible, and these extrapolations may be correct or incorrect). The extrapolation method can be applied in the interior reconstruction problem. A sound choice of which extrapolation method to apply relies on a priori knowledge of the process that created the existing data points. Some experts have proposed the use of causal forces in the evaluation of extrapolation methods.[2] Crucial questions are, for example, whether the data can be assumed to be continuous, smooth, possibly periodic, etc. Linear extrapolation means creating a tangent line at the end of the known data and extending it beyond that limit. Linear extrapolation will only provide good results when used to extend the graph of an approximately linear function or not too far beyond the known data. If the two data points nearest the point x∗{\displaystyle x_{*}} to be extrapolated are (xk−1,yk−1){\displaystyle (x_{k-1},y_{k-1})} and (xk,yk){\displaystyle (x_{k},y_{k})}, linear extrapolation gives the function y(x∗)=yk−1+((x∗−xk−1)/(xk−xk−1))(yk−yk−1){\displaystyle y(x_{*})=y_{k-1}+{\frac {x_{*}-x_{k-1}}{x_{k}-x_{k-1}}}(y_{k}-y_{k-1})} (which is identical to linear interpolation if xk−1<x∗<xk{\displaystyle x_{k-1}<x_{*}<x_{k}}). It is possible to include more than two points, averaging the slope of the linear interpolant by regression-like techniques on the data points chosen to be included; this is similar to linear prediction. A polynomial curve can be created through the entire known data or just near the end (two points for linear extrapolation, three points for quadratic extrapolation, etc.). The resulting curve can then be extended beyond the end of the known data. Polynomial extrapolation is typically done by means of Lagrange interpolation or using Newton's method of finite differences to create a Newton series that fits the data; the resulting polynomial may be used to extrapolate the data. High-order polynomial extrapolation must be used with due care: for many data sets, anything above order 1 (linear extrapolation) may yield unusable values, and an error estimate of the extrapolated value will grow with the degree of the polynomial extrapolation. This is related to Runge's phenomenon. A conic section can be created using five points near the end of the known data. If the conic section created is an ellipse or circle, when extrapolated it will loop back and rejoin itself. An extrapolated parabola or hyperbola will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation could be done with a conic sections template (on paper) or with a computer. French curve extrapolation is a method suitable for any distribution that has a tendency to be exponential, but with accelerating or decelerating factors.[3] This method has been used successfully in providing forecast projections of the growth of HIV/AIDS in the UK since 1987 and of variant CJD in the UK for a number of years.
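A direct implementation of the two-point linear extrapolation formula given above might look as follows; the method name and the guard against coincident abscissae are choices made for this sketch.

// Two-point linear extrapolation:
// y(xStar) = y1 + (xStar - x1) / (x2 - x1) * (y2 - y1)
static double linearExtrapolate(double x1, double y1, double x2, double y2, double xStar) {
    if (x1 == x2) {
        throw new IllegalArgumentException("The two known points must have distinct x values.");
    }
    return y1 + (xStar - x1) / (x2 - x1) * (y2 - y1);
}

For example, the points (1, 2) and (2, 4) extrapolated to x = 4 give y = 8; the same expression reproduces linear interpolation when xStar lies between x1 and x2.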
Another study has shown that extrapolation can produce the same quality of forecasting results as more complex forecasting strategies.[4] A sequence extrapolation can be created from three points of a sequence together with their "moment" or "index"; this type of extrapolation yields exact predictions for a large proportion of the sequences in a known series database (OEIS).[5] Typically, the quality of a particular method of extrapolation is limited by the assumptions about the function made by the method. If the method assumes the data are smooth, then a non-smooth function will be poorly extrapolated. In terms of complex time series, some experts have discovered that extrapolation is more accurate when performed through the decomposition of causal forces.[6] Even for proper assumptions about the function, the extrapolation can diverge severely from the function. The classic example is truncated power series representations of sin(x) and related trigonometric functions. For instance, taking only data from near x = 0, we may estimate that the function behaves as sin(x) ~ x. In the neighborhood of x = 0, this is an excellent estimate. Away from x = 0, however, the extrapolation moves arbitrarily far from the x-axis while sin(x) remains in the interval [−1, 1]; that is, the error increases without bound. Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over a larger interval near x = 0, but will produce extrapolations that eventually diverge away from the x-axis even faster than the linear approximation. This divergence is a specific property of extrapolation methods and is only circumvented when the functional forms assumed by the extrapolation method (inadvertently or intentionally, due to additional information) accurately represent the nature of the function being extrapolated. For particular problems, this additional information may be available, but in the general case, it is impossible to satisfy all possible function behaviors with a workably small set of potential behaviors. In complex analysis, a problem of extrapolation may be converted into an interpolation problem by the change of variable z^=1/z{\displaystyle {\hat {z}}=1/z}. This transform exchanges the part of the complex plane inside the unit circle with the part of the complex plane outside of the unit circle. In particular, the compactification point at infinity is mapped to the origin and vice versa. Care must be taken with this transform, however, since the original function may have had "features", for example poles and other singularities, at infinity that were not evident from the sampled data. Another problem of extrapolation is loosely related to the problem of analytic continuation, where (typically) a power series representation of a function is expanded at one of its points of convergence to produce a power series with a larger radius of convergence. In effect, a set of data from a small region is used to extrapolate a function onto a larger region. Again, analytic continuation can be thwarted by function features that were not evident from the initial data. Also, one may use sequence transformations like Padé approximants and Levin-type sequence transformations as extrapolation methods that lead to a summation of power series that are divergent outside the original radius of convergence. In this case, one often obtains rational approximants. Extrapolation arguments are informal and unquantified arguments which assert that something is probably true beyond the range of values for which it is known to be true.
For example, we believe in the reality of what we see through magnifying glasses because it agrees with what we see with the naked eye but extends beyond it; we believe in what we see through light microscopes because it agrees with what we see through magnifying glasses but extends beyond it; and similarly for electron microscopes. Such arguments are widely used in biology in extrapolating from animal studies to humans and from pilot studies to a broader population.[7] Likeslippery slopearguments, extrapolation arguments may be strong or weak depending on such factors as how far the extrapolation goes beyond the known range.[8]
https://en.wikipedia.org/wiki/Extrapolation
In computer science, static program analysis (also known as static analysis or static simulation) is the analysis of computer programs performed without executing them, in contrast with dynamic program analysis, which is performed on programs during their execution.[1][2] The term is usually applied to analysis performed by an automated tool, with human analysis typically being called "program understanding", program comprehension, or code review. In the last of these, software inspection and software walkthroughs are also used. In most cases the analysis is performed on some version of a program's source code, and in other cases on some form of its object code. The sophistication of the analysis performed by tools varies from those that only consider the behaviour of individual statements and declarations[3] to those that include the complete source code of a program in their analysis. The uses of the information obtained from the analysis vary from highlighting possible coding errors (e.g., the lint tool) to formal methods that mathematically prove properties about a given program (e.g., that its behaviour matches that of its specification). Software metrics and reverse engineering can be described as forms of static analysis. Deriving software metrics and static analysis are increasingly deployed together, especially in the creation of embedded systems, by defining so-called software quality objectives.[4] A growing commercial use of static analysis is in the verification of properties of software used in safety-critical computer systems and the locating of potentially vulnerable code.[5] For example, industries such as medical device, nuclear, aviation and automotive software development have identified the use of static code analysis as a means of improving the quality of increasingly sophisticated and complex software. A study in 2012 by VDC Research reported that 28.7% of the embedded software engineers surveyed use static analysis tools and 39.7% expect to use them within 2 years.[9] A study from 2010 found that 60% of the interviewed developers in European research projects made use of at least the basic static analyzers built into their IDE; however, only about 10% employed an additional (and perhaps more advanced) analysis tool.[10] In the application security industry the name static application security testing (SAST) is also used. SAST is an important part of Security Development Lifecycles (SDLs), such as the SDL defined by Microsoft,[11] and a common practice in software companies.[12] The OMG (Object Management Group) published a study regarding the types of software analysis required for software quality measurement and assessment. This document, "How to Deliver Resilient, Secure, Efficient, and Easily Changed IT Systems in Line with CISQ Recommendations", describes three levels of software analysis.[13] A further level of software analysis can be defined: formal methods, the term applied to the analysis of software (and computer hardware) whose results are obtained purely through the use of rigorous mathematical methods. The mathematical techniques used include denotational semantics, axiomatic semantics, operational semantics, and abstract interpretation.
By a straightforward reduction to the halting problem, it is possible to prove that, for any Turing-complete language, finding all possible run-time errors in an arbitrary program (or more generally any kind of violation of a specification on the final result of a program) is undecidable: there is no mechanical method that can always answer truthfully whether an arbitrary program may or may not exhibit runtime errors. This result dates from the works of Church, Gödel and Turing in the 1930s (see: Halting problem and Rice's theorem). As with many undecidable questions, one can still attempt to give useful approximate solutions. Some of the implementation techniques of formal static analysis include model checking, data-flow analysis, abstract interpretation, Hoare logic and symbolic execution.[14] Data-driven static analysis leverages extensive codebases to infer coding rules and improve the accuracy of the analysis.[16][17] For instance, one can use all Java open-source packages available on GitHub to learn good analysis strategies. The rule inference can use machine learning techniques.[18] It is also possible to learn from a large number of past fixes and warnings.[16] Static analyzers produce warnings. For certain types of warnings, it is possible to design and implement automated remediation techniques. For example, Logozzo and Ball have proposed automated remediations for the C# analyzer cccheck.[19]
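As a concrete illustration of the kind of defect the analyses above can find without executing the program, consider this invented Java fragment; a data-flow or abstract-interpretation-based checker can report that the final dereference may occur on a path where the variable is still null.

// Invented example: a static analyzer can flag the possible null dereference
// below by tracking the values 'name' may hold on each path, without ever
// running the code.
static int nameLength(boolean hasName) {
    String name = null;
    if (hasName) {
        name = "example";
    }
    return name.length();   // warning: 'name' may be null when hasName is false
}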
https://en.wikipedia.org/wiki/Static_program_analysis
Dynamic scoringis a forecasting technique forgovernment revenues, expenditures, andbudget deficitsthat incorporates predictions about the behavior of people and organizations based on changes infiscal policy, usuallytax rates. Dynamic scoring depends on models of the behavior ofeconomic agentswhich predict how they would react once the tax rate or other policy change goes into effect. This means the uncertainty induced in predictions is greater to the degree that the proposed policy is unlike current policy. Unfortunately, any such model depends heavily on judgment, and there is no evidence that it is more effective or accurate.[1] For example, a dynamic scoring model may includeeconometric modelof a transitional phase as the population adapts to the new policy, rather than the so-calledstatic-scoring[2]alternative of standard assumption about behavior of people being immediately and directly sensitive to prices. The outcome of the dynamic analysis is therefore heavily dependent on assumptions about future behaviors and rates of change. The dynamic analysis is potentially more accurate than the alternative, if theeconometricmodel correctly captureshowhouseholds and firms will react to a policy changes. This has been attacked as assumption-driven compared to static scoring which makes simpler assumptions about behavior change due to the introduction of a new policy. Using dynamic scoring has been promoted byRepublican legislatorsto argue thatsupply-sidetax policy, for example theBush tax cutsof 2001[3]and 2011 GOPPath to Prosperityproposal,[4]return higher benefits in terms ofGDPgrowth and revenue increases than are predicted from static scoring. Some economists[5]argue that their dynamic scoring conclusions are overstated,[6]pointing out thatCongressional Budget Office(CBO) practices already include some dynamic scoring elements and that to include more may lead to politicization of the department.[7] On January 6, 2013, the version of thePro-Growth Budgeting Act of 2013included in theBudget and Accounting Transparency Act of 2014passed theUnited States House of Representativesas part of their Rules adopted in House Resolution 5, passed with the exclusive support of theRepublican Party (United States)by a vote of 234-172.[8]The same rules package for the year had other controversial provisions funded.[9]The bill would require theCongressional Budget Officeto use dynamic scoring to provide a macroeconomic impact analysis for bills that are estimated to have a large budgetary effect.[10]The text of the provision read: (a) An estimate provided by the Congressional Budget Office under section 402 of the Congressional Budget Act of 1974 for any major legislation shall, to the extent practicable, incorporate the budgetary effects of changes in economic output, employment, capital stock, and other macroeconomic variables resulting from such legislation. (b) An estimate provided by the Joint Committee on Taxation to the Director of the Congressional Budget Office under section 201(f) of the Congressional Budget Act of 1974 for any major legislation shall, to the extent practicable, incorporate the budgetary effects of changes in economic output, employment, capital stock, and other macroeconomic variables resulting from such legislation. (c) An estimate referred to in this clause shall, to the extent practicable, include-- (d) As used in this clause-- These provisions were removed in January 2019 for the 116th Congress by H. Res. 
6 section 102(u).[12] The Kansas state government cut personal income taxes to stimulate economic growth, depending on optimistic assumptions from dynamic scoring for state revenue. Authors of the plan claimed that "cutting taxes can have a near immediate and permanent impact,"[13] arguing for tax cuts over rebuilding roads or improving the quality of schools. In addition, the tax on "pass-through" businesses was eliminated. After continual revenue deficits, the largest sales tax increase in Kansas history, downgrades from Moody's and Standard & Poor's, and economic performance that lagged neighboring states, the election of 2016 became a referendum on tax policy, and the legislature increased income taxes over the governor's veto.[14][15][16] Kansas's "rainy day" fund reported levels $570 million lower than before the tax cut,[17] even though Kansas had directed more tax revenue to it.
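A toy numerical contrast between static and dynamic scoring, invented purely for illustration: static scoring applies the new rate to an unchanged tax base, while dynamic scoring lets the base respond to the rate change through an assumed behavioral elasticity. The figures and the elasticity below are hypothetical and are not drawn from this article.

// Toy comparison of static vs. dynamic scoring of a rate cut.
// All figures and the elasticity are hypothetical.
public class ScoringSketch {
    public static void main(String[] args) {
        double base = 1000.0;      // taxable income before the change
        double oldRate = 0.30;
        double newRate = 0.25;
        double elasticity = 0.4;   // assumed response of the base to the net-of-tax rate

        // Static score: behaviour, and hence the base, is assumed unchanged.
        double staticRevenue = newRate * base;

        // Dynamic score: the base grows as the net-of-tax rate (1 - t) rises.
        double dynamicBase = base * Math.pow((1 - newRate) / (1 - oldRate), elasticity);
        double dynamicRevenue = newRate * dynamicBase;

        System.out.printf("Revenue at the old rate: %.1f%n", oldRate * base);
        System.out.printf("Static score:            %.1f%n", staticRevenue);
        System.out.printf("Dynamic score:           %.1f%n", dynamicRevenue);
    }
}

Under these made-up numbers the behavioral response recovers only part of the statically scored revenue loss; a model assuming a larger elasticity would recover more, which is the direction of disagreement the debate described above turns on.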
https://en.wikipedia.org/wiki/Dynamic_scoring
Ineconomics, theLaffer curveillustrates a theoretical relationship betweenratesoftaxationand the resulting levels of the government'stax revenue. The Laffer curve assumes that no tax revenue is raised at the extreme tax rates of 0% and 100%, meaning that there is a tax rate between 0% and 100% that maximizes government tax revenue.[a][1][2] The shape of the curve is a function of taxable incomeelasticity—i.e.,taxable incomechanges in response to changes in the rate of taxation. As popularized bysupply-side economistArthur Laffer, the curve is typically represented as a graph that starts at 0% tax with zero revenue, rises to a maximum rate of revenue at an intermediate rate of taxation, and then falls again to zero revenue at a 100% tax rate. However, the shape of the curve is uncertain and disputed among economists.[3] One implication of the Laffer curve is that increasing tax rates beyond a certain point is counter-productive for raising further tax revenue. Particularly in the United States,conservativeshave used the Laffer curve to argue that lower taxes may increase tax revenue. However, the hypothetical maximum revenue point of the Laffer curve for any given market cannot be observed directly and can only be estimated—such estimates are often controversial. According toThe New Palgrave Dictionary of Economics, estimates of revenue-maximizing income tax rates have varied widely, with a mid-range of around 70%.[4]The shape of the Laffer curve may also differ between different global economies.[5] The Laffer curve was popularized in the United States with policymakers following an afternoon meeting withFord AdministrationofficialsDick CheneyandDonald Rumsfeldin 1974, in whichArthur Lafferreportedly sketched the curve on a napkin to illustrate his argument.[6]The term "Laffer curve" was coined byJude Wanniski, who was also present at the meeting. The basic concept was not new; Laffer himself notes antecedents in the writings of the 14th-century social philosopherIbn Khaldunand others.[7] Ibn Khaldun, a 14th-century philosopher, wrote in his workTheMuqaddimah: "It should be known that at the beginning of the dynasty, taxation yields a large revenue from small assessments. At the end of the dynasty, taxation yields a small revenue from large assessments." Laffer states that he did not invent the concept, citing numerous antecedents, including theMuqaddimahby 14th-centuryIslamicscholarIbn Khaldun,[7][8]John Maynard Keynes[7]andAdam Smith.[9]Andrew Mellon,Secretary of the Treasuryfrom 1921 to 1932, articulated a similar policy idea in 1924.[10] Laffer's name began to be associated with the idea after an article was published inNational Affairsin 1978 that linked him to the idea.[9]In theNational Affairsarticle,Jude Wanniskirecalled a 1974 dinner meeting at the Two Continents Restaurant in theWashington HotelwithArthur Laffer, Wanniski,Dick Cheney,Donald Rumsfeld, and his deputy press secretary Grace-Marie Arnett.[9][7]In this meeting, Laffer, arguing against PresidentGerald Ford's tax increase, reportedly sketched the curve on a napkin to illustrate the concept.[6]Cheney did not accept the idea immediately, but it caught the imaginations of those present.[11]Laffer professes no recollection of this napkin, but writes: "I used the so-called Laffer Curve all the time in my classes and with anyone else who would listen to me".[7] There are historical precedents other than those cited by Laffer.Ferdinando Galianiwrote inDella Moneta(1751) that "It is an enormous error ... 
to believe that an impost always yields more revenue as it becomes heavier".[12]He gave the example of a toll on late-night entry to a town which would be less remunerative if set unreasonably high.David Humeexpressed similar arguments in his essayOf Taxesin 1756, as did fellow Scottish economistAdam Smithtwenty years later.[13] At the time of theIrish famineof the mid-1840s,Edward Twisletonsuggested that lower local taxes in Ireland would increase the amount of taxes successfully collected towards relief. An analysis of actual collection rates has indicated that areas with higher rates did collect a lesser proportion of the tax due.[14] The Democratic party embraced this argument in the 1880s when high revenue from import tariffs raised during the Civil War (1861–1865) led to federal budget surpluses. The Republican party, which was then based in the protectionist industrial Northeast, argued that cutting rates would lower revenues. In 1924, Secretary of TreasuryAndrew Mellonwrote: "It seems difficult for some to understand that high rates of taxation do not necessarily mean large revenue to the government, and that more revenue may often be obtained by lower rates". Exercising his understanding that "73% of nothing is nothing", he pushed for the reduction of the top income tax bracket from 73% to an eventual 24% (as well as tax breaks for lower brackets). Mellon was one of the wealthiest people in the United States, the third-highest income-tax payer in the mid-1920s, behindJohn D. RockefellerandHenry Ford.[15]While he served as Secretary of the U.S. Treasury Department his wealth peaked at around US$300–400 million. Personal income tax receipts rose from US$719 million in 1921 to over US$1billionin 1929, an average increase of 4.2% per year over an 8-year period, which supporters attribute to the rate cut.[16] In 2012, economists surveyed by theUniversity of Chicagorejected the viewpoint that the Laffer curve's postulation of increased tax revenue through a rate cut applies to federalUS income taxesof the time in the medium term. When asked whether a "cut in federal income tax rates in the US right now would raise taxable income enough so that the annual total tax revenue would be higher within five years than without the tax cut", none of the economists surveyed agreed and 71% disagreed.[17]According to Harvard University economistJeffrey Frankel, a substantial majority of economists reject the proposition that income taxes are so high in the United States that tax cuts will pay for themselves.[18] One of the conceptual uses of the Laffer curve is to determine the rate of taxation that will raise the maximum revenue (in other words, "optimizing" revenue collection). The revenue maximizing tax rate should not be confused with theoptimal taxrate, which economists use to describe tax rates in a tax system that raises a given amount of revenue with the fewest distortions to the economy.[19] In 2017, Jacob Lundberg of theUppsala Universityestimated Laffer curves for 27OECDcountries, with top income-tax rates maximising tax revenue ranging from 60 to 61% (Austria, Luxembourg, Netherlands, Poland, Sweden) to 74–76% (Germany, Switzerland, UK, US). Most countries appear to have set their highest tax rates below the peak rate, while five countries are exceeding it (Austria, Belgium, Denmark, Finland, Sweden).[20] Writing in 2010,John Quigginsaid, "To the extent that there was an economic response to the Reagan tax cuts, and to those of George W. 
Bush twenty years later, it seems largely to have been a Keynesian demand-side response, to be expected when governments provide households with additional net income in the context of a depressed economy."[21]A 1999 study by University of Chicago economistAustan Goolsbee, which examined major changes in high income tax rates in the United States from the 1920s onwards found no evidence that the United States was to the right of the peak of the Laffer curve.[22] In the early 1980s,Edgar L. Feigeand Robert T. McGee developed a macroeconomic model from which they derived a Laffer curve. According to the model, the shape and position of the Laffer curve depend upon the strength of supply side effects, the progressivity of the tax system and the size of the unobserved economy.[24][25][26]Economist Paul Pecorino presented a model in 1995 that predicted the peak of the Laffer curve occurred at tax rates around 65%.[27]A draft paper by Y. Hsing looking at the United States economy between 1959 and 1991 placed the revenue-maximizing average federal tax rate between 32.67% and 35.21%.[28]A 1981 article published in theJournal of Political Economypresented a model integrating empirical data that indicated that the point of maximum tax revenue in Sweden in the 1970s would have been 70%.[29]A 2011 study by Trabandt and Uhlig published in theJournal of Monetary Economicsestimated a 70% revenue maximizing rate, and estimated that the US and most European economies were on the left of the Laffer curve (in other words, that raising taxes would raise further revenue).[23]A 2005 study concluded that with the exception of Sweden, no major OECD country could increase revenue by reducing the marginal tax rate.[30] The New Palgrave Dictionary of Economicsreports that a comparison of academic studies yields a range of revenue maximizing rates that centers around 70%.[4] The Laffer curve has also been extended to taxation of goods and services. In their 2018Econometricapaper, Miravete, Seim, and Thurk, show that in non-competitive markets, the strategic pricing response of firms is important to consider when estimating the Laffer curve.[31]The authors show that firms increase their prices in response to a decrease in thead valorem tax, leading to less of a quantity increase than would otherwise be expected. The net effect is to flatten the Laffer curve and move the revenue maximum point to the right. In 2005, the United StatesCongressional Budget Office(CBO) released a paper called "Analyzing the Economic and Budgetary Effects of a 10 Percent Cut in Income Tax Rates." This paper considered the impact of a stylized reduction of 10% in the then existing marginal rate offederal income taxin the US (for example, if those facing a 25% marginal federal income tax rate had it lowered to 22.5%). Unlike earlier research, the CBO paper estimates the budgetary impact of possiblemacroeconomiceffects of tax policies, that is, it attempts to account for how reductions in individual income tax rates might affect the overall future growth of the economy, and therefore influence future government tax revenues; and ultimately, impact deficits or surpluses. In the paper's most generous estimated growth scenario, only 28% of the projected lost revenue from the lower tax rate would be recouped over a 10-year period after a 10% across-the-board reduction in all individual income tax rates. In other words, deficits would increase by nearly the same amount as the tax cut in the first five years, with limited feedback revenue thereafter. 
Through increased budget deficits, the tax cuts primarily benefiting the wealthy will be paid for—plus interest—bytaxes borne relatively evenly by all taxpayers.[32]The paper points out that these projected shortfalls in revenue would have to be made up by federal borrowing: the paper estimates that the federal government would pay an extra US$200billionin interest over the decade covered by the paper's analysis.[33][34]In 2019, economists at theJoint Committee on Taxationrevisited the macroeconomic and budgetary response to the stylized 10% reduction in statutory ordinary income tax rates, but from the levels set byP.L. 115-97.[35]While incorporating additional tax detail within the modeling framework relative to previous analyses, the paper similarly estimates that this policy change would result in increased budget deficits - both in the short- and long-run - after accounting for revenue feedback from macroeconomic changes. Following the reduction of the top rate of income tax in the UK from 50% to 45% in 2013,HMRCestimated the cost of the tax reduction to be about £100 million (out of an income for this group of around £90 billion), but with large uncertainty on both sides.Robert Chote, the chairman of the UKOffice for Budget Responsibilitycommented that Britain was "strolling across the summit of the Laffer curve", implying that UK tax rates had been close to the optimum rate.[36][37] Laffer has presented the examples of Russia and the Baltic states, which instituted aflat taxwith rates lower than 35% around the same time that their economies started growing. He has similarly referred to the economic outcome of theKemp-Roth tax cuts, theKennedy tax cuts, the 1920s tax cuts, and the changes in UScapital gains taxstructure in 1997.[7]Some have also citedHauser's Law, which postulates that US federal revenues, as a percentage of GDP, have remained stable at approximately 19.5% over the period 1950 to 2007 despite changes in marginal tax rates over the same period.[38]Others however, have called Hauser's Law "misleading" and contend that tax changes have had large effects on tax revenues.[39] In 2012, based on Laffer curve arguments, Kansas GovernorSam Brownbackgreatly reduced state tax rates in what has been called theKansas experiment.[40][41][42]Laffer was paid $75,000 to advise in the creation of Brownback's tax cut plan, and gave Brownback his full endorsement, stating that what Brownback was doing was "truly revolutionary."[40]The state, which had previously had a budget surplus, experienced a budget deficit of about $200 million in 2012. Drastic cuts to state funding for education and infrastructure followed[43]before the tax cut was repealed in 2017 by a bipartisan super majority in the Kansas legislature.[40] Supply-side economics rose in popularity among Republican Party politicians from 1977 onwards. Prior to 1977, Republicans were more split on tax reduction, with some worrying that tax cuts would fuel inflation and exacerbate deficits.[44] Supply-side economics is a school of macroeconomic thought that argues that overall economic well-being is maximized by lowering the barriers to producing goods and services (the "Supply Side" of the economy). By lowering such barriers, consumers are thought to benefit from a greater supply of goods and services at lower prices. 
Typical supply-side policy would advocate generally lower income tax and capital gains tax rates (to increase the supply of labor and capital), smaller government and a lower regulatory burden on enterprises (to lower costs). Although tax policy is often mentioned in relation to supply-side economics, supply-side economists are concerned with all impediments to the supply of goods and services and not just taxation.[45] In their economics textbookPrinciples of Economics(7th edition), economistsKarl E. CaseofWellesley CollegeandRay FairofYale Universitystate "The Laffer curve shows the relationship between tax rates and tax revenues. Supply-side economists use it to argue that it is possible to generate higher revenues by cutting tax rates, but evidence does not appear to support this."[46][26] The Laffer curve andsupply-side economicsinspiredReaganomicsand theKemp-Roth Tax Cutof 1981. Supply-side advocates of tax cuts claimed that lower tax rates would generate more tax revenue because theUnited States government'smarginal income tax ratesprior to the legislation were on theright-handside of the curve. This assertion was derided byGeorge H. W. Bushas "voodoo economics" while running against Reagan for the Presidential nomination in 1980.[47]During the Reagan presidency, the top marginal rate of tax in the United States fell from 70% to 28%. David Stockman, Ronald Reagan's budget director during his first administration and one of the early proponents of supply-side economics, was concerned that the administration did not pay enough attention to cutting government spending. He maintained that the Laffer curve was not to be taken literally—at least not in the economic environment of the 1980s United States. InThe Triumph of Politics, he writes: "[T]he whole California gang had taken [the Laffer curve] literally (and primitively). The way they talked, they seemed to expect that once the supply-side tax cut was in effect, additional revenue would start to fall, manna-like, from the heavens. Since January, I had been explaining that there is no literal Laffer curve."[48]Stockman also said that "Laffer wasn't wrong, he just didn't go far enough" (in paying attention to government spending).[49] Some have criticized elements of Reaganomics on the basis of equity. For example, economistJohn Kenneth Galbraithbelieved that theReagan administrationactively used the Laffer curve "to lower taxes on the affluent".[50]Some critics point out that tax revenues almost always rise every year, and during Reagan's two terms increases in tax revenue were more shallow than increases during presidencies where top marginal tax rates were higher.[51]Critics also point out that since the Reagan tax cuts,income has not significantly increasedfor the rest of the population. This assertion is supported by studies that show the income of the top 1% nearly doubling during the Reagan years, while income for other income levels increased only marginally; income actually decreased for the bottom quintile.[52]However, a 2018 study by the Congressional Budget Office showed average household income rising 68.8% for the bottom quintile after government transfers (in the form of various income support and in-kind programmes, subsidies, and taxes) from 1979 to 2014. 
This same study showed the middle quintile's income rising 41.5% after government transfers and taxes.[53] The Congressional Budget Office has estimated that extending the Bush tax cuts of 2001–2003 beyond their 2010 expiration would increase deficits by $1.8 trillion over the following decade.[54] Economist Paul Krugman contended that supply-side adherents did not fully believe that the United States income tax rate was on the "backwards-sloping" side of the curve, and yet they still advocated lowering taxes to encourage investment of personal savings.[55] Supply-side economists respond that simple descriptions of the Laffer curve are usually intended for pedagogical purposes only and do not represent the complex economic responses to tax policy that supply-side analysis emphasizes. Although the simplified Laffer curve is usually illustrated as a straightforward, symmetrical and continuous bell-shaped curve, the real curve may be skewed or lop-sided to either side of the maximum. Given complex and sudden changes to tax policy over time, the response of tax revenue to tax rates may vary dramatically and is not necessarily even continuous over time, for example when new legislation abruptly changes tax revenue expectations.[56][57] Laffer explains the model in terms of two interacting effects of taxation: an "arithmetic effect" and an "economic effect".[7] The "arithmetic effect" assumes that tax revenue raised is the tax rate multiplied by the revenue available for taxation (the tax base); revenue R is therefore equal to t × B, where t is the tax rate and B is the taxable base (R = t × B). At a 0% tax rate, the model states that no tax revenue is raised. The "economic effect" assumes that the tax rate will affect the tax base itself. At the extreme of a 100% tax rate, the government collects zero revenue because taxpayers change their behavior in response to the tax rate: either they lose their incentive to work, or they find a way to avoid paying taxes. Thus, the "economic effect" of a 100% tax rate is to reduce the tax base to zero. If this is the case, then somewhere between 0% and 100% lies a tax rate that maximizes revenue. Graphical representations of the curve sometimes appear to put the rate at around 50%, if the tax base reacts to the tax rate linearly, but the revenue-maximizing rate could theoretically be any percentage greater than 0% and less than 100%. Similarly, the curve is often drawn as a parabola, but there is no reason that this is necessarily the case. The effect of changes in tax can be cast in terms of elasticities, where the revenue-maximizing elasticity of the tax base with respect to the tax rate is equal to 1 in absolute value. This is shown by differentiating R with respect to t and grouping terms to reveal that the rate of change of R with respect to t is equal to the elasticity of the tax base plus one, all multiplied by the tax base. Thus, as the elasticity exceeds one in absolute value, revenues begin to fall. The problem is similar to that of the monopolist, who must never increase prices beyond the point at which the elasticity of demand exceeds one in absolute value. Wanniski noted that all economic activity would be unlikely to cease at 100% taxation, but that it would switch from the exchange of money to barter.
He also noted that there can be special circumstances in which economic activity can continue for a period at a near 100% taxation rate (for example, inwar economy).[13] Various efforts have been made to quantify the relationship between tax revenue and tax rates (for example, in the United States by theCongressional Budget Office).[33]While the interaction between tax rates and tax revenue is generally accepted, the precise nature of this interaction is debated. In practice, the shape of a hypothetical Laffer curve for a given economy can only be estimated. The relationship between tax rate and tax revenue is likely to vary from one economy to another and depends on the elasticity of supply for labor, as well as various other factors. Even in the same economy, the characteristics of the curve could vary over time. Complexities such asprogressive taxesand possible differences in the incentive to work for different income groups complicate the task of estimation. The structure of the curve may also be changed by policy decisions. For example, if tax loopholes andtax sheltersare made more readily available by legislation, the point at which revenue begins to decrease with increased taxation is likely to become lower. Laffer presented the curve as a pedagogical device to show that in some circumstances, a reduction in tax rates will actually increase government revenue and not need to be offset by decreased government spending or increased borrowing. For a reduction in tax rates to increase revenue, the current tax rate would need to be higher than the revenue maximizing rate. In 2007, Laffer said that the curve should not be the sole basis for raising or lowering taxes.[58] Supply-siders argue that in a high tax rate environment, lowering tax rates would result in either increased revenues or smaller revenue losses than one would expect relying on only static estimates of the previous tax base.[59][60] This led supply-siders to advocate large reductions in marginal income and capital gains tax rates to encourage greater investment, which would produce more supply. Jude Wanniski and many others advocate a zero capital gains rate.[56][61]The increased aggregate supply would result in increased aggregate demand, hence the term "supply-side economics". Laffer assumes that the government's revenue is a continuous function of the tax rate. However, in some theoretical models, the Laffer curve can be discontinuous, leading to an inability to devise a revenue-maximizing tax rate solution.[62]Additionally, the Laffer curve depends on the assumption that tax revenue is used to provide a public good that is separable in utility and separate from labor supply, which may not be true in practice.[63] The Laffer curve as presented is simplistic in that it assumes a single tax rate and a single labor supply. Actual systems of public finance are more complex, and there is serious doubt about the relevance of considering a single marginal tax rate.[4]In addition, revenue may well be amultivalued functionof tax rate; for instance, an increase in tax rate to a certain percentage may not result in the same revenue as a decrease in tax rate to the same percentage (a kind ofhysteresis). Furthermore, the Laffer curve does not take explicitly into account the nature of thetax avoidancetaking place. 
It is possible that if all producers are endowed with two survival factors in the market (the ability to produce efficiently and the ability to avoid tax), then the revenues raised under tax avoidance can be greater than without avoidance, and thus the Laffer curve maximum is found to lie farther to the right than previously thought. The reason is that if producers with low productive ability (high production costs) tend to have strong avoidance ability as well, a uniform tax on producers in effect becomes a tax that discriminates on the ability to pay. If avoidance ability and productive ability are unrelated, however, this result disappears.[64] More generally, among other criticisms, the Laffer curve has been criticized as difficult to apply to a real national economy, and attempts to apply it in practice have produced controversial outcomes. Empirical efforts to model the curve since its proposal have generally found that the tax rates actually used by governments lie to the left of the revenue-maximizing turning point. More significantly, several attempts to move tax rates toward the rate suggested by Laffer curve arguments resulted in a significant decrease in national tax revenue: lowering tax rates led to an increase in the government budget deficit. The occurrence of this phenomenon is most famously attributed to the Reagan administration (1981–1989), during which the government deficit increased by approximately $2 trillion.[65]
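The elasticity condition described in the model discussion above can be restated compactly. A minimal derivation, consistent with that description (with ε denoting the elasticity of the tax base with respect to the tax rate, treating the base B as a function of the rate t):

    R(t) = t \cdot B(t), \qquad
    \frac{dR}{dt} = B(t) + t\,\frac{dB}{dt}
                  = B(t)\left(1 + \frac{t}{B(t)}\frac{dB}{dt}\right)
                  = B(t)\,(1 + \varepsilon)

Revenue rises as long as ε > −1, peaks where ε = −1 (elasticity of 1 in absolute value), and falls once the base shrinks proportionally faster than the rate grows.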
https://en.wikipedia.org/wiki/Laffer_curve
In computer security, a billion laughs attack is a type of denial-of-service (DoS) attack which is aimed at parsers of XML documents.[1] It is also referred to as an XML bomb or as an exponential entity expansion attack.[2] The example attack consists of defining 10 entities, each defined as consisting of 10 of the previous entity, with the document consisting of a single instance of the largest entity, which expands to one billion copies of the first entity. Versions with a larger number of entities also exist. In the most frequently cited example, the first entity is the string "lol", hence the name "billion laughs". At the time this vulnerability was first reported, the computer memory used by a billion instances of the string "lol" would likely exceed that available to the process parsing the XML. While the original form of the attack was aimed specifically at XML parsers, the term may be applicable to similar subjects as well.[1] The problem was first reported as early as 2002,[3] but began to be widely addressed in 2008.[4] Defenses against this kind of attack include capping the memory allocated in an individual parser if loss of the document is acceptable, or treating entities symbolically and expanding them lazily only when (and to the extent) their content is to be used. When an XML parser loads this document, it sees that it includes one root element, "lolz", that contains the text "&lol9;". However, "&lol9;" is a defined entity that expands to a string containing ten "&lol8;" strings. Each "&lol8;" string is a defined entity that expands to ten "&lol7;" strings, and so on. After all the entity expansions have been processed, this small (< 1 KB) block of XML will actually contain 10^9 = one billion "lol"s, taking up almost 3 gigabytes of memory.[5] The billion laughs attack described above can take an exponential amount of space or time. The quadratic blowup variation causes quadratic growth in resource requirements by simply repeating a large entity over and over again, to avoid countermeasures that detect heavily nested entities.[6] (See computational complexity theory for comparisons of different growth classes.) A "billion laughs" attack could exist for any file format that can contain macro expansions, for example this YAML bomb: This crashed earlier versions of Go because the Go YAML processor (contrary to the YAML spec) expands references as if they were macros. The Go YAML processor was modified to fail parsing if the result object becomes too large. Enterprise software like Kubernetes has been affected by this attack through its YAML parser.[7][8] For this reason, either a parser with intentionally limited capabilities (such as StrictYAML) or a file format that does not allow references is often preferred for data arriving from untrusted sources.[9][failed verification]
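The XML document and the YAML bomb referred to above are usually quoted in forms like the following. These are reconstructions of the commonly cited examples: the entity names lol through lol9 and the root element lolz follow the description in the text, and the YAML anchor names a through i are the conventional ones rather than taken from a specific incident.

    <?xml version="1.0"?>
    <!DOCTYPE lolz [
      <!ENTITY lol "lol">
      <!ELEMENT lolz (#PCDATA)>
      <!ENTITY lol1 "&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;&lol;">
      <!ENTITY lol2 "&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;&lol1;">
      <!ENTITY lol3 "&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;&lol2;">
      <!ENTITY lol4 "&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;&lol3;">
      <!ENTITY lol5 "&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;&lol4;">
      <!ENTITY lol6 "&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;&lol5;">
      <!ENTITY lol7 "&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;&lol6;">
      <!ENTITY lol8 "&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;&lol7;">
      <!ENTITY lol9 "&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;&lol8;">
    ]>
    <lolz>&lol9;</lolz>

The corresponding YAML bomb uses anchors and aliases instead of entities; each line multiplies the previous one by nine:

    a: &a ["lol","lol","lol","lol","lol","lol","lol","lol","lol"]
    b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
    c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
    d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
    e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
    f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
    g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
    h: &h [*g,*g,*g,*g,*g,*g,*g,*g,*g]
    i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h]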
https://en.wikipedia.org/wiki/Billion_laughs
In computer security and programming, a buffer over-read[1][2] or out-of-bounds read[3] is an anomaly where a program, while reading data from a buffer, overruns the buffer's boundary and reads (or tries to read) adjacent memory. This is a special case of violation of memory safety. Buffer over-reads can be triggered, as in the Heartbleed bug, by maliciously crafted inputs that are designed to exploit a lack of bounds checking to read parts of memory not intended to be accessible. They may also be caused by programming errors alone. Buffer over-reads can result in erratic program behavior, including memory access errors, incorrect results, a crash, or a breach of system security. Thus, they are the basis of many software vulnerabilities and can be maliciously exploited to access privileged information.[citation needed] At other times, buffer over-reads not caused by malicious input can lead to crashes if they trigger invalid page faults. For example, widespread IT outages in 2024 were caused by an out-of-bounds memory error in cybersecurity software developed by CrowdStrike.[4] Programming languages commonly associated with buffer over-reads include C and C++, which provide no built-in protection against using pointers to access data in any part of virtual memory, and which do not automatically check that reading data from a block of memory is safe; respective examples are attempting to read more elements than are contained in an array, or failing to append a trailing terminator to a null-terminated string. Bounds checking can prevent buffer over-reads,[5] while fuzz testing can help detect them.
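A minimal C sketch of the two respective examples named above: reading more elements than an array contains, and calling a string function on a character array that lacks its terminating NUL. The function and variable names are illustrative only, and the program deliberately exhibits undefined behaviour to show the anomaly.

    #include <stdio.h>
    #include <string.h>

    /* Sums n elements; nothing stops the caller from passing n larger than the array. */
    static int sum_first_n(const int *buf, size_t n) {
        int sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += buf[i];                        /* over-read when n exceeds the real length */
        return sum;
    }

    int main(void) {
        int values[4] = {1, 2, 3, 4};
        printf("%d\n", sum_first_n(values, 8));   /* asks for 8 of 4 elements: buffer over-read */

        char name[4] = {'A', 'd', 'a', 'm'};      /* no room left for the trailing '\0' */
        printf("%zu\n", strlen(name));            /* strlen keeps reading adjacent memory */
        return 0;
    }

A bounds check at the call site, for example clamping n to the known array length, is the kind of protection the article refers to.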
https://en.wikipedia.org/wiki/Buffer_over-read
Coding conventionsare a set of guidelines for a specificprogramming languagethat recommendprogramming style, practices, and methods for each aspect of a program written in that language. These conventions usually cover file organization,indentation,comments,declarations,statements,white space,naming conventions,programming practices,programming principles,programming rules of thumb, architectural best practices, etc. These are guidelines forsoftware structural quality.Software programmersare highly recommended to follow these guidelines to help improve thereadabilityof theirsource codeand makesoftware maintenanceeasier. Coding conventions are only applicable to the human maintainers andpeer reviewersof a software project. Conventions may be formalized in a documented set of rules that an entire team or company follows,[1]or may be as informal as the habitual coding practices of an individual. Coding conventions are not enforced bycompilers. Reducing the cost ofsoftware maintenanceis the most often cited reason for following coding conventions. In the introductory section on code conventions for the Java programming language, Sun Microsystems offers the following reasoning:[2] Code conventions are important to programmers for a number of reasons: Software peer reviewfrequently involves reading source code. This type of peer review is primarily adefectdetection activity. By definition, only the original author of a piece of code has read the source file before the code is submitted for review. Code that is written using consistent guidelines is easier for other reviewers to understand and assimilate, improving the efficacy of the defect detection process. Even for the original author, consistently coded software eases maintainability. There is no guarantee that an individual will remember the precise rationale for why a particular piece of code was written in a certain way long after the code was originally written. Coding conventions can help. Consistent use ofwhitespaceimproves readability and reduces the time it takes to understand the software. Where coding conventions have been specifically designed to produce high-quality code, and have then been formally adopted, they then become coding standards. Specific styles, irrespective of whether they are commonly adopted, do not automatically produce good quality code. Complexity is a factor going against security.[4] The management of complexity includes the following basic principle: minimize the amount of code written during the project development. This prevents unnecessary work which prevents unnecessary cost, both upfront and downstream. This is simply because if there is less code, it is less work not only to create the application, but also to maintain it. Complexity is managed both at the design stage (how the project is architectured) and at the development stage (by having simpler code). If the coding is kept basic and simple then the complexity will be minimised. Very often this is keeping the coding as 'physical' as possible - coding in a manner that is very direct and not highly abstract. This produces optimal code that is easy to read and follow. Complexity can also be avoided simply by not using complicated tools for simple jobs. The more complex the code is the more likely it is to be buggy, the more difficult the bugs are to find and the more likely there are to be hidden bugs. Refactoringrefers to a software maintenance activity wheresource codeis modified to improve readability or improve its structure. 
Software is often refactored to bring it into conformance with a team's stated coding standards after its initial release. Any change that does not alter the behavior of the software can be considered refactoring. Common refactoring activities are changing variable names, renaming methods, moving methods or whole classes andbreaking large methods(orfunctions) into smaller ones. Agile software development methodologiesplan for regular (or even continuous) refactoring making it an integral part of the teamsoftware development process.[5] Coding conventions allow programmers to have simple scripts or programs whose job is to process source code for some purpose other than compiling it into an executable. It is common practice to count the software size (Source lines of code) to track current project progress or establish a baseline for futureproject estimates. Consistent coding standards can, in turn, make the measurements more consistent. Specialtagswithinsource code commentsare often used to process documentation, two notable examples arejavadocanddoxygen. The tools specify the use of a set of tags, but their use within a project is determined by convention. Coding conventions simplify writing new software whose job is to process existing software. Use ofstatic code analysishas grown consistently since the 1950s. Some of the growth of this class of development tools stems from increased maturity and sophistication of the practitioners themselves (and the modern focus onsafetyandsecurity), but also from the nature of the languages themselves. All software practitioners must grapple with the problem of organizing and managing a large number of sometimes complex instructions. For all but the smallest software projects, source code (instructions) are partitioned into separatefilesand frequently among manydirectories. It was natural for programmers to collect closely related functions (behaviors) in the same file and to collect related files into directories. As software development shifted from purelyprocedural programming(such as found inFORTRAN) towards moreobject-orientedconstructs (such as found inC++), it became the practice to write the code for a single (public) class in a single file (the 'one class per file' convention).[6][7]Java has gone one step further - the Java compiler returns an error if it finds more than one public class per file. A convention in one language may be a requirement in another. Language conventions also affect individual source files. Each compiler (or interpreter) used to process source code is unique. The rules a compiler applies to the source creates implicit standards. For example, Python code is much more consistently indented than, say Perl, because whitespace (indentation) is actually significant to the interpreter. Python does not use the brace syntax Perl uses to delimit functions. Changes in indentation serve as the delimiters.[8][9]Tcl, which uses a brace syntax similar to Perl or C/C++ to delimit functions, does not allow the following, which seems fairly reasonable to a C programmer: The reason is that in Tcl, curly braces are not used only to delimit functions as in C or Java. More generally, curly braces are used to group words together into a single argument.[10][11]In Tcl, thewordwhiletakes two arguments, aconditionand anaction. In the example above,whileis missing its second argument, itsaction(because the Tcl also uses the newline character to delimit the end of a command). 
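The construct that Tcl rejects, referred to above, is usually illustrated as a while loop written with its opening brace on the following line, C-style. A reconstruction along those lines (the variable name i is illustrative):

    while {$i < 10}
    {
        puts $i
        incr i
    }

Because the newline after the condition terminates the command, while receives only one argument and the interpreter reports an error; the idiomatic Tcl form keeps the body's opening brace on the same line as the condition.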
There are a large number of coding conventions; see Coding Style for numerous examples and discussion. Common coding conventions may cover areas such as file organization, indentation, comments, declarations, naming conventions, white space, and programming practices. Coding standards include the CERT C Coding Standard, MISRA C, and High Integrity C++.
https://en.wikipedia.org/wiki/Coding_conventions
Incomputing,end-of-file(EOF)[1]is a condition in a computeroperating systemwhere no more data can be read from a data source. The data source is usually called afileorstream. In theC standard library, the character-reading functions such asgetcharreturn a value equal to the symbolic value (macro)EOFto indicate that an end-of-file condition has occurred. The actual value ofEOFis implementation-dependent and must be negative (it is commonly −1, such as inglibc[2]). Block-reading functions return the number of bytes read, and if this is fewer than asked for, then the end of file was reached or an error occurred (checking oferrnoor dedicated function, such asferroris required to determine which). Input from a terminal never really "ends" (unless the device is disconnected), but it is useful to enter more than one "file" into a terminal, so a key sequence is reserved to indicate end of input. InUNIX, the translation of the keystroke to EOF is performed by the terminal driver, so a program does not need to distinguish terminals from other input files. By default, the driver converts aControl-Dcharacter at the start of a line into an end-of-file indicator. To insert an actual Control-D (ASCII 04) character into the input stream, the user precedes it with a "quote" command character (usuallyControl-V).AmigaDOSis similar but uses Control-\ instead of Control-D. InDOSandWindows(and inCP/Mand manyDECoperating systems such as thePDP-6monitor,[3]RT-11,VMSorTOPS-10[4]), reading from the terminal will never produce an EOF. Instead, programs recognize that the source is a terminal (or other "character device") and interpret a given reserved character or sequence as an end-of-file indicator; most commonly, this is anASCIIControl-Z, code 26. Some MS-DOS programs, including parts of the Microsoft MS-DOS shell (COMMAND.COM) and operating-system utility programs (such asEDLIN), treat a Control-Z in a text file as marking the end of meaningful data, and/or append a Control-Z to the end when writing a text file. This was done for two reasons: In the ANSI X3.27-1969magnetic tapestandard, the end of file was indicated by atape mark, which consisted of a gap of approximately 3.5 inches of tape followed by a single byte containing the character0x13(hex) fornine-track tapesand017(octal) forseven-track tapes.[5]Theend-of-tape, commonly abbreviated asEOT, was indicated by two tape marks. This was the standard used, for example, onIBM 360. Thereflective stripthat was used to announce impending physical end of tape was also called anEOTmarker.
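A short C sketch of the conventions just described: getchar() comparing against the EOF macro, and a block read whose short count is disambiguated with ferror()/feof(). The file name is a placeholder.

    #include <stdio.h>

    int main(void) {
        int c;                               /* int, not char, so EOF (a negative value) is representable */
        while ((c = getchar()) != EOF)
            putchar(c);

        FILE *f = fopen("input.bin", "rb");  /* placeholder file name */
        if (f != NULL) {
            unsigned char buf[4096];
            size_t n = fread(buf, 1, sizeof buf, f);
            if (n < sizeof buf) {            /* short count: end of file or error */
                if (ferror(f))
                    perror("fread");         /* error path; errno describes the cause */
                /* otherwise feof(f) is true and the end of the file was reached */
            }
            fclose(f);
        }
        return 0;
    }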
https://en.wikipedia.org/wiki/End-of-file
A ping of death is a type of attack on a computer system that involves sending a malformed or otherwise malicious ping to a computer.[1] In this attack, a host sends hundreds of ping requests with a packet size that is large or illegal to another host, to try to take it offline or to keep it preoccupied responding with ICMP Echo replies.[2] A correctly formed ping packet is typically 56 bytes in size, or 64 bytes when the Internet Control Message Protocol (ICMP) header is considered, and 84 bytes including the Internet Protocol (IP) version 4 header. However, any IPv4 packet (including pings) may be as large as 65,535 bytes. Some computer systems were never designed to properly handle a ping packet larger than the maximum packet size, because it violates the Internet Protocol.[3][4] Like other large but well-formed packets, a ping of death is fragmented into groups of 8 octets before transmission. However, when the target computer reassembles the malformed packet, a buffer overflow can occur, causing a system crash and potentially allowing the injection of malicious code. The excessive size prevents the machine from processing the packet effectively, disrupting operating system processes and leading to reboots or crashes.[5] In early implementations of TCP/IP, this bug was easy to exploit and could affect a wide variety of systems including Unix, Linux, Mac, Windows, and peripheral devices. As systems began filtering out pings of death through firewalls and other detection methods, a different kind of ping attack known as ping flooding later appeared, which floods the victim with so many ping requests that normal traffic fails to reach the system (a basic denial-of-service attack). The ping of death attack has been largely neutralized by advancements in technology. Devices produced after 1998 include defenses against such attacks,[specify] rendering them resilient to this specific threat. However, a variant targeting IPv6 packets on Windows systems was later identified, leading Microsoft to release a patch in mid-2013.[6] The maximum length of an IPv4 packet, including the IP header, is 65,535 (2^16 − 1) bytes,[3] a limitation imposed by the 16-bit Total Length field in the IP header. The underlying data link layer almost always limits the maximum frame size (see MTU); in Ethernet, this is typically 1500 bytes. In such a case, a large IP packet is split across multiple IP packets (also known as IP fragments), so that each fragment fits within the imposed limit. The receiver of the IP fragments reassembles them into the complete IP packet and continues processing it as usual. When fragmentation is performed, each IP fragment needs to carry information about which part of the original IP packet it contains. This information is kept in the Fragment Offset field of the IP header. The field is 13 bits long and contains the offset of the data in the current IP fragment within the original IP packet. The offset is given in units of 8 bytes, which allows a maximum offset of 65,528 ((2^13 − 1) × 8). Adding 20 bytes of IP header then gives 65,548 bytes, which exceeds the maximum packet length. This means that an IP fragment with the maximum offset should carry no more than 7 bytes of data, or it would exceed the maximum packet length. A malicious user can send an IP fragment with the maximum offset and with much more data than 8 bytes (as large as the physical layer allows it to be).
When the receiver assembles all IP fragments, it will end up with an IP packet which is larger than 65,535 bytes. This may possibly overflow memory buffers which the receiver allocated for the packet, and can cause various problems. As is evident from the description above, the problem has nothing to do withICMP, which is used only as payload, big enough to exploit the problem. It is a problem in the reassembly process of IP fragments, which may contain any type of protocol (TCP,UDP,IGMP, etc.). The correction of the problem is to add checks in the reassembly process. The check for each incoming IP fragment makes sure that the sum of "Fragment Offset" and "Total length" fields in the IP header of each IP fragment is smaller or equal to 65,535. If the sum is greater, then the packet is invalid, and the IP fragment is ignored. This check is performed by somefirewalls, to protect hosts that do not have the bug fixed. Another fix for the problem is using a memory buffer larger than 65,535 bytes for the re-assembly of the packet. (This is essentially a breaking of the specification, since it adds support for packets larger than those allowed.) In 2013, an IPv6 version of the ping of death vulnerability was discovered inMicrosoft Windows. Windows TCP/IP stack did not handle memory allocation correctly when processing incoming malformedICMPv6packets, which could cause remote denial of service. This vulnerability was fixed in MS13-065 in August 2013.[7][8]TheCVE-IDfor this vulnerability isCVE-2013-3183.[9]In 2020, another bug (CVE-2020-16898) in ICMPv6 was found aroundRouter Advertisement, which could even lead toremote code execution.[10]
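A simplified sketch of the reassembly check described above, as a firewall or IP stack might apply it. The structure and field names are illustrative, not taken from any particular implementation.

    #include <stdbool.h>
    #include <stdint.h>

    #define IPV4_MAX_PACKET 65535u           /* largest legal IPv4 packet, header included */

    struct ip_fragment {
        uint16_t fragment_offset;            /* 13-bit Fragment Offset field, in 8-byte units */
        uint16_t total_length;               /* Total Length field of this fragment, in bytes */
    };

    /* Returns false for a fragment that would push the reassembled packet past
       65,535 bytes; such a fragment is invalid and is simply ignored. */
    static bool fragment_is_valid(const struct ip_fragment *frag) {
        uint32_t end = (uint32_t)frag->fragment_offset * 8u + frag->total_length;
        return end <= IPV4_MAX_PACKET;
    }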
https://en.wikipedia.org/wiki/Ping_of_death
Aport scanneris an application designed to probe aserverorhostfor openports. Such an application may be used byadministratorsto verifysecuritypolicies of theirnetworksand byattackersto identifynetwork servicesrunning on a host and exploit vulnerabilities. Aport scanorportscanis a process that sends client requests to a range of server port addresses on a host, with the goal of finding an active port; this is not a nefarious process in and of itself.[1]The majority of uses of a port scan are not attacks, but rather simple probes to determine services available on a remote machine. Toportsweepis to scan multiple hosts for a specific listening port. The latter is typically used to search for a specific service, for example, anSQL-basedcomputer wormmay portsweep looking for hosts listening onTCPport 1433.[2] The design and operation of theInternetis based on theInternet Protocol Suite, commonly also calledTCP/IP. In this system, network services are referenced using two components: a host address and a port number. There are 65535 distinct and usable port numbers, numbered 1 … 65535. (Port zero is not a usable port number.) Most services use one, or at most a limited range of, port numbers. Some port scanners scan only the most common port numbers, or ports most commonly associated with vulnerable services, on a given host. The result of a scan on a port is usually generalized into one of three categories: Open ports present two vulnerabilities of whichadministratorsmust be wary: Filtered ports do not tend to present vulnerabilities. All forms of port scanning rely on the assumption that the targeted host is compliant withRFC. Although this is the case most of the time, there is still a chance a host might send back strange packets or even generatefalse positiveswhen the TCP/IP stack of the host is non-RFC-compliant or has been altered. This is especially true for less common scan techniques that areOS-dependent (FIN scanning, for example).[3]TheTCP/IP stack fingerprintingmethod also relies on these types of different network responses from a specific stimulus to guess the type of the operating system the host is running. The simplest port scanners use the operating system's network functions and are generally the next option to go to when SYN is not a feasible option (described next).Nmapcalls this mode connect scan, named after the Unix connect() system call. If a port is open, the operating system completes theTCPthree-way handshake, and the port scanner immediately closes the connection to avoid performing aDenial-of-service attack.[3]Otherwise an error code is returned. This scan mode has the advantage that the user does not require special privileges. However, using the OS network functions prevents low-level control, so this scan type is less common. This method is "noisy", particularly if it is a "portsweep": the services can log the sender IP address andIntrusion detection systemscan raise an alarm. SYNscan is another form of TCP scanning. Rather than using the operating system's network functions, the port scanner generates raw IP packets itself, and monitors for responses. This scan type is also known as "half-open scanning", because it never actually opens a full TCP connection. The port scanner generates a SYN packet. If the target port is open, it will respond with a SYN-ACK packet. The scanner host responds with an RST packet, closing the connection before the handshake is completed.[3]If the port is closed but unfiltered, the target will instantly respond with an RST packet. 
The use of raw networking has several advantages, giving the scanner full control of the packets sent and the timeout for responses, and allowing detailed reporting of the responses. There is debate over which scan is less intrusive on the target host. SYN scan has the advantage that the individual services never actually receive a connection. However, the RST during the handshake can cause problems for some network stacks, in particular simple devices like printers. There are no conclusive arguments either way. UDP scanning is also possible, although there are technical challenges.UDPis aconnectionlessprotocol so there is no equivalent to a TCP SYN packet. However, if a UDP packet is sent to a port that is not open, the system will respond with anICMPport unreachable message. Most UDP port scanners use this scanning method, and use the absence of a response to infer that a port is open. However, if a port is blocked by afirewall, this method will falsely report that the port is open. If the port unreachable message is blocked, all ports will appear open. This method is also affected by ICMPrate limiting.[4] An alternative approach is to send application-specific UDP packets, hoping to generate an application layer response. For example, sending a DNS query to port 53 will result in a response, if a DNS server is present. This method is much more reliable at identifying open ports. However, it is limited to scanning ports for which an application specific probe packet is available. Some tools (e.g.,Nmap,Unionscan[5]) generally have probes for less than 20 UDP services, while some commercial tools have as many as 70. In some cases, a service may be listening on the port, but configured not to respond to the particular probe packet. ACK scanning is one of the more unusual scan types, as it does not exactly determine whether the port is open or closed, but whether the port is filtered or unfiltered. This is especially good when attempting to probe for the existence of a firewall and its rulesets. Simple packet filtering will allow established connections (packets with the ACK bit set), whereas a more sophisticated stateful firewall might not.[6] Rarely used because of its outdated nature, window scanning is fairly untrustworthy in determining whether a port is opened or closed. It generates the same packet as an ACK scan, but checks whether the window field of the packet has been modified. When the packet reaches its destination, a design flaw attempts to create a window size for the packet if the port is open, flagging the window field of the packet with 1's before it returns to the sender. Using this scanning technique with systems that no longer support this implementation returns 0's for the window field, labeling open ports as closed.[7] Since SYN scans are not surreptitious enough, firewalls are, in general, scanning for and blocking packets in the form of SYN packets.[3]FIN packetscan bypass firewalls without modification. Closed ports reply to a FIN packet with the appropriate RST packet, whereas open ports ignore the packet on hand. This is typical behavior due to the nature of TCP, and is in some ways an inescapable downfall.[8] Some more unusual scan types exist. These have various limitations and are not widely used.Nmapsupports most of these.[6] ManyInternet service providersrestrict their customers' ability to perform port scans to destinations outside of their home networks. 
This is usually covered in theterms of serviceoracceptable use policyto which the customer must agree.[9][10]Some ISPs implementpacket filtersortransparent proxiesthat prevent outgoing service requests to certain ports. For example, if an ISP provides a transparent HTTP proxy on port 80, port scans of any address will appear to have port 80 open, regardless of the target host's actual configuration. The information gathered by a port scan has many legitimate uses including network inventory and the verification of the security of a network. Port scanning can, however, also be used to compromise security. Many exploits rely upon port scans to find open ports and send specific data patterns in an attempt to trigger a condition known as abuffer overflow. Such behavior can compromise the security of a network and the computers therein, resulting in the loss or exposure of sensitive information and the ability to do work.[3] The threat level caused by a port scan can vary greatly according to the method used to scan, the kind of port scanned, its number, the value of the targeted host and the administrator who monitors the host. But a port scan is often viewed as a first step for an attack, and is therefore taken seriously because it can disclose much sensitive information about the host.[11]Despite this, the probability of a port scan alone followed by a real attack is small. The probability of an attack is much higher when the port scan is associated with avulnerability scan.[12] Because of the inherently open and decentralized architecture of the Internet, lawmakers have struggled since its creation to define legal boundaries that permit effective prosecution ofcybercriminals. Cases involving port scanning activities are an example of the difficulties encountered in judging violations. Although these cases are rare, most of the time the legal process involves proving that an intent to commit a break-in or unauthorized access existed, rather than just the performance of a port scan. In June 2003, an Israeli, Avi Mizrahi, was accused by the Israeli authorities of the offense of attempting the unauthorized access of computer material. He had port scanned theMossadwebsite. He was acquitted of all charges on February 29, 2004. The judge ruled that these kinds of actions should not be discouraged when they are performed in a positive way.[13] A 17-year-old Finn was accused of attempted computer break-in by a major Finnish bank. On April 9, 2003, he was convicted of the charge by theSupreme Court of Finlandand ordered to pay US$12,000 for the expense of the forensic analysis made by the bank. 
In 1998, he had port scanned the bank network in an attempt to access the closed network, but failed to do so.[14] In 2006, the UK Parliament had voted an amendment to theComputer Misuse Act 1990such that a person is guilty of an offence who "makes, adapts, supplies or offers to supply any article knowing that it is designed or adapted for use in the course of or in connection with an offence under section 1 or 3 [of the CMA]".[15]Nevertheless, the area of effect of this amendment is blurred, and widely criticized by Security experts as such.[16] Germany, with theStrafgesetzbuch§ 202a,b,c also has a similar law, and the Council of the European Union has issued a press release stating they plan to pass a similar one too, albeit more precise.[17] In December 1999, Scott Moulton was arrested by the FBI and accused of attempted computer trespassing under Georgia's Computer Systems Protection Act andComputer Fraud and Abuse Act of America. At this time, his IT service company had an ongoing contract with Cherokee County of Georgia to maintain and upgrade the 911 center security. He performed several port scans on Cherokee County servers to check their security and eventually port scanned a web server monitored by another IT company, provoking a tiff which ended up in a tribunal. He was acquitted in 2000, with judge Thomas Thrash ruling inMoulton v. VC3(N.D.Ga.2000)[18]that there was no damage impairing the integrity and availability of the network.[19]
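As an illustration of the connect scan described earlier, the simplest technique using the operating system's connect() call: a completed three-way handshake means the port is open, and the scanner closes the connection immediately. This is a minimal C sketch; the target address and port range are placeholders, and no timeout handling or parallelism is shown.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        const char *target = "192.0.2.10";            /* placeholder (TEST-NET-1) address */
        for (int port = 1; port <= 1024; port++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return 1;

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_port = htons((uint16_t)port);
            inet_pton(AF_INET, target, &addr.sin_addr);

            /* connect() succeeding means the handshake completed, i.e. the port is open. */
            if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
                printf("port %d open\n", port);

            close(fd);                                /* close immediately, as the text describes */
        }
        return 0;
    }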
https://en.wikipedia.org/wiki/Port_scanner
A"return-to-libc" attackis acomputer securityattack usually starting with abuffer overflowin which a subroutinereturn addresson acall stackis replaced by an address of a subroutine that is already present in theprocessexecutable memory, bypassing theno-execute bitfeature (if present) and ridding the attacker of the need toinjecttheir own code. The first example of this attack in the wild was contributed byAlexander Peslyakon theBugtraqmailing list in 1997.[1] OnPOSIX-compliantoperating systemstheC standard library("libc") is commonly used to provide a standardruntime environmentfor programs written in theC programming language. Although the attacker could make the code return anywhere,libcis the most likely target, as it is almost always linked to the program, and it provides useful calls for an attacker (such as thesystemfunction used to execute shell commands). Anon-executablestack can prevent some buffer overflow exploitation, however it cannot prevent a return-to-libc attack because in the return-to-libc attack only existing executable code is used. On the other hand, these attacks can only call preexisting functions.Stack-smashing protectioncan prevent or obstruct exploitation as it may detect the corruption of the stack and possibly flush out the compromised segment. "ASCII armoring" is a technique that can be used to obstruct this kind of attack. With ASCII armoring, all the system libraries (e.g., libc) addresses contain aNULL byte(0x00). This is commonly done by placing them in the first0x01010101bytes of memory (a few pages more than 16 MB, dubbed the "ASCII armor region"), as every address up to (but not including) this value contains at least one NULL byte. This makes it impossible to emplace code containing those addresses using string manipulation functions such asstrcpy(). However, this technique does not work if the attacker has a way to overflow NULL bytes into the stack. If the program is too large to fit in the first 16MB, protection may be incomplete.[2]This technique is similar to another attack known asreturn-to-pltwhere, instead of returning to libc, the attacker uses the Procedure Linkage Table (PLT) functions loaded in theposition-independent code(e.g.,system@plt, execve@plt, sprintf@plt, strcpy@plt).[3] Address space layout randomization(ASLR) makes this type of attack extremely unlikely to succeed on64-bit machinesas the memory locations of functions are random. For32-bit systems, however, ASLR provides little benefit since there are only 16 bits available for randomization, and they can be defeated bybrute forcein a matter of minutes.[4]
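A conceptual C sketch of the kind of defect such an attack starts from. The overflow itself is ordinary; what makes it a return-to-libc attack is that the overwritten return address is pointed at an existing libc routine such as system() rather than at injected code. This is an illustration only, not a working exploit.

    #include <string.h>

    void greet(const char *name) {
        char buf[64];
        strcpy(buf, name);   /* no bounds check: a long name overruns buf and can
                                overwrite the saved return address on the stack */
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            greet(argv[1]);  /* attacker-controlled input reaches the overflow */
        return 0;
    }

In a classic payload on a traditional 32-bit calling convention, the bytes that land on the saved return address hold the address of system(), followed by a fake return address and a pointer to a string such as "/bin/sh", so only code already present in the process is executed.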
https://en.wikipedia.org/wiki/Return-to-libc_attack
A safety-critical system[2] or life-critical system is a system whose failure or malfunction may result in one or more of the following outcomes: death or serious injury to people, loss of or severe damage to equipment or property, or environmental harm.[3][4] A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved.[5] Safety-related systems are those that do not have full responsibility for controlling hazards such as loss of life, severe injury or severe environmental damage. The malfunction of a safety-involved system would only be that hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive in the United Kingdom.[6] Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10^9) hours of operation.[7][8] Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based. Safety-critical systems are a concept often used together with the Swiss cheese model to represent (usually in a bow-tie diagram) how a threat can escalate to a major accident through the failure of multiple critical barriers. This use has become common especially in the domain of process safety, in particular when applied to oil and gas drilling and production, both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation.[9] Several reliability regimes for safety-critical systems exist. Software engineering for safety-critical systems is particularly difficult. There are three aspects which can be applied to aid the engineering of software for life-critical systems. The first is process engineering and management. The second is selecting the appropriate tools and environment for the system; this allows the system developer to test the system effectively by emulation and observe its effectiveness. The third is addressing any legal and regulatory requirements, such as Federal Aviation Administration requirements for aviation. Setting a standard under which a system is required to be developed forces designers to stick to the requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for industry in general (IEC 61508) and for the automotive (ISO 26262), medical (IEC 62304) and nuclear (IEC 61513) industries specifically. The standard approach is to carefully code, inspect, document, test, verify and analyze the system. Another approach is to certify a production system and a compiler, and then generate the system's code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements.[12] All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors. The technology requirements can go beyond avoidance of failure and can even facilitate medical intensive care (which deals with healing patients) and life support (which is for stabilizing patients).
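The reliability target quoted above can also be stated as a failure-rate bound; a small worked illustration (the 10-hour mission time is an arbitrary example, not from the text):

    \lambda_{\text{catastrophic}} \le 10^{-9} \text{ per operating hour}
    \quad\Rightarrow\quad
    P(\text{catastrophic failure in a 10-hour mission}) \approx \lambda t \le 10^{-8}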
https://en.wikipedia.org/wiki/Safety-critical_system
This is a list of operating systems specifically focused on security. Similar concepts include security-evaluated operating systems that have achieved certification from an auditing organization, and trusted operating systems that provide sufficient support for multilevel security and evidence of correctness to meet a particular set of requirements.
https://en.wikipedia.org/wiki/Security-focused_operating_system
Incomputer science,self-modifying code(SMCorSMoC) iscodethat alters its owninstructionswhile it isexecuting– usually to reduce theinstruction path lengthand improveperformanceor simply to reduce otherwiserepetitively similar code, thus simplifyingmaintenance. The term is usually only applied to code where the self-modification is intentional, not in situations where code accidentally modifies itself due to an error such as abuffer overflow. Self-modifying code can involve overwriting existing instructions or generating new code at run time and transferring control to that code. Self-modification can be used as an alternative to the method of "flag setting" and conditional program branching, used primarily to reduce the number of times a condition needs to be tested. The method is frequently used for conditionally invokingtest/debuggingcode without requiring additionalcomputational overheadfor everyinput/outputcycle. The modifications may be performed: In either case, the modifications may be performed directly to themachine codeinstructions themselves, byoverlayingnew instructions over the existing ones (for example: altering a compare and branch to anunconditional branchor alternatively a 'NOP'). In theIBM System/360 architecture, and its successors up toz/Architecture, an EXECUTE (EX) instructionlogicallyoverlays the second byte of its target instruction with the low-order 8 bits ofregister1. This provides the effect of self-modification although the actual instruction in storage is not altered. Self-modification can be accomplished in a variety of ways depending upon the programming language and its support for pointers and/or access to dynamic compiler or interpreter 'engines': Self-modifying code is quite straightforward to implement when usingassembly language. Instructions can be dynamically created inmemory(or else overlaid over existing code in non-protected program storage),[1]in a sequence equivalent to the ones that a standard compiler may generate as theobject code. With modern processors, there can be unintendedside effectson theCPU cachethat must be considered. The method was frequently used for testing 'first time' conditions, as in this suitably commentedIBM/360assemblerexample. It uses instruction overlay to reduce theinstruction path lengthby (N×1)−1 where N is the number of records on the file (−1 being theoverheadto perform the overlay). Alternative code might involve testing a "flag" each time through. The unconditional branch is slightly faster than a compare instruction, as well as reducing the overall path length. In later operating systems for programs residing inprotected storagethis technique could not be used and so changing the pointer to thesubroutinewould be used instead. The pointer would reside indynamic storageand could be altered at will after the first pass to bypass the OPEN (having to load a pointer first instead of a direct branch & link to the subroutine would add N instructions to the path length – but there would be a corresponding reduction of N for the unconditional branch that would no longer be required). Below is an example inZilog Z80assembly language. The code increments register "B" in range [0,5]. The "CP" compare instruction is modified on each loop. Self-modifying code is sometimes used to overcome limitations in a machine's instruction set. For example, in theIntel 8080instruction set, one cannot input a byte from an input port that is specified in a register. 
The input port is statically encoded in the instruction itself, as the second byte of a two byte instruction. Using self-modifying code, it is possible to store a register's contents into the second byte of the instruction, then execute the modified instruction in order to achieve the desired effect. Some compiled languages explicitly permit self-modifying code. For example, the ALTER verb inCOBOLmay be implemented as a branch instruction that is modified during execution.[2]Somebatchprogramming techniques involve the use of self-modifying code.ClipperandSPITBOLalso provide facilities for explicit self-modification. The Algol compiler onB6700 systemsoffered an interface to the operating system whereby executing code could pass a text string or a named disc file to the Algol compiler and was then able to invoke the new version of a procedure. With interpreted languages, the "machine code" is the source text and may be susceptible to editing on-the-fly: inSNOBOLthe source statements being executed are elements of a text array. Other languages, such asPerlandPython, allow programs to create new code at run-time and execute it using anevalfunction, but do not allow existing code to be mutated. The illusion of modification (even though no machine code is really being overwritten) is achieved by modifying function pointers, as in this JavaScript example: Lisp macrosalso allow runtime code generation without parsing a string containing program code. The Push programming language is agenetic programmingsystem that is explicitly designed for creating self-modifying programs. While not a high level language, it is not as low level as assembly language.[3] Prior to the advent of multiple windows, command-line systems might offer a menu system involving the modification of a running command script. Suppose aDOSscript (or "batch") file MENU.BAT contains the following:[4][nb 1] Upon initiation of MENU.BAT from the command line, SHOWMENU presents an on-screen menu, with possible help information, example usages and so forth. Eventually the user makes a selection that requires a commandSOMENAMEto be performed: SHOWMENU exits after rewriting the file MENU.BAT to contain Because the DOS command interpreter does not compile a script file and then execute it, nor does it read the entire file into memory before starting execution, nor yet rely on the content of a record buffer, when SHOWMENU exits, the command interpreter finds a new command to execute (it is to invoke the script fileSOMENAME, in a directory location and via a protocol known to SHOWMENU), and after that command completes, it goes back to the start of the script file and reactivates SHOWMENU ready for the next selection. Should the menu choice be to quit, the file would be rewritten back to its original state. Although this starting state has no use for the label, it, or an equivalent amount of text is required, because the DOS command interpreter recalls the byte position of the next command when it is to start the next command, thus the re-written file must maintain alignment for the next command start point to indeed be the start of the next command. 
Aside from the convenience of a menu system (and possible auxiliary features), this scheme means that the SHOWMENU.EXE system is not in memory when the selected command is activated, a significant advantage when memory is limited.[4][5] Control tableinterpreterscan be considered to be, in one sense, 'self-modified' by data values extracted from the table entries (rather than specificallyhand codedinconditional statementsof the form "IF inputx = 'yyy'"). Some IBMaccess methodstraditionally used self-modifyingchannel programs, where a value, such as a disk address, is read into an area referenced by a channel program, where it is used by a later channel command to access the disk. TheIBM SSEC, demonstrated in January 1948, had the ability to modify its instructions or otherwise treat them exactly like data. However, the capability was rarely used in practice.[6]In the early days of computers, self-modifying code was often used to reduce use of limited memory, or improve performance, or both. It was also sometimes used to implement subroutine calls and returns when the instruction set only provided simple branching or skipping instructions to vary thecontrol flow.[7][8]This use is still relevant in certain ultra-RISCarchitectures, at least theoretically; see for exampleone-instruction set computer.Donald Knuth'sMIXarchitecture also used self-modifying code to implement subroutine calls.[9] Self-modifying code can be used for various purposes: Pseudocodeexample: Self-modifying code, in this case, would simply be a matter of rewriting the loop like this: Note that two-state replacement of theopcodecan be easily written as 'xor var at address with the value "opcodeOf(Inc) xor opcodeOf(dec)"'. Choosing this solution must depend on the value ofNand the frequency of state changing. Suppose a set of statistics such as average, extrema, location of extrema, standard deviation, etc. are to be calculated for some large data set. In a general situation, there may be an option of associating weights with the data, so each xiis associated with a wiand rather than test for the presence of weights at every index value, there could be two versions of the calculation, one for use with weights and one not, with one test at the start. Now consider a further option, that each value may have associated with it a Boolean to signify whether that value is to be skipped or not. This could be handled by producing four batches of code, one for each permutation and code bloat results. Alternatively, the weight and the skip arrays could be merged into a temporary array (with zero weights for values to be skipped), at the cost of processing and still there is bloat. However, with code modification, to the template for calculating the statistics could be added as appropriate the code for skipping unwanted values, and for applying weights. There would be no repeated testing of the options and the data array would be accessed once, as also would the weight and skip arrays, if involved. Self-modifying code is more complex to analyze than standard code and can therefore be used as a protection againstreverse engineeringandsoftware cracking. Self-modifying code was used to hide copy protection instructions in 1980s disk-based programs for systems such asIBM PC compatiblesandApple II. For example, on an IBM PC, thefloppy diskdrive access instructionint 0x13would not appear in the executable program's image but it would be written into the executable's memory image after the program started executing. 
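To make the instruction-overlay and operand-patching ideas above concrete, here is a minimal sketch in C for x86-64 Linux (not one of the historical examples): it generates a tiny routine at run time, calls it, then overwrites the immediate operand inside the generated instructions and calls it again. The opcode bytes and the use of a single RWX mapping are illustrative assumptions for the sketch, not a recommended production technique.

/* Run-time code generation and in-place patching, x86-64 Linux sketch. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <sys/mman.h>

int main(void) {
    /* machine code for:  mov eax, imm32 ; ret   (returns imm32 = 42) */
    uint8_t code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* obtain a writable and executable page (note the W^X discussion below) */
    uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);

    int (*fn)(void) = (int (*)(void))buf;
    printf("%d\n", fn());                 /* prints 42 */

    /* "self-modification": overwrite the immediate operand in place */
    int32_t new_imm = 1000;
    memcpy(buf + 1, &new_imm, sizeof new_imm);
    __builtin___clear_cache((char *)buf, (char *)buf + sizeof code);
    printf("%d\n", fn());                 /* prints 1000 */
    return 0;
}

On x86 the cache-flush builtin is effectively a no-op, but it is included because, as noted above, architectures without coupled data and instruction caches require the modifying code to synchronize them explicitly.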
Self-modifying code is also sometimes used by programs that do not want to reveal their presence, such ascomputer virusesand someshellcodes. Viruses and shellcodes that use self-modifying code mostly do this in combination withpolymorphic code. Modifying a piece of running code is also used in certain attacks, such asbuffer overflows. Traditionalmachine learningsystems have a fixed, pre-programmed learningalgorithmto adjust theirparameters. However, since the 1980sJürgen Schmidhuberhas published several self-modifying systems with the ability to change their own learning algorithm. They avoid the danger of catastrophic self-rewrites by making sure that self-modifications will survive only if they are useful according to a user-givenfitness,errororrewardfunction.[14] TheLinux kernelnotably makes wide use of self-modifying code; it does so to be able to distribute a single binary image for each major architecture (e.g.IA-32,x86-64, 32-bitARM,ARM64...) while adapting the kernel code in memory during boot depending on the specific CPU model detected, e.g. to be able to take advantage of new CPU instructions or to work around hardware bugs.[15][16]To a lesser extent, theDR-DOSkernel also optimizes speed-critical sections of itself at loadtime depending on the underlying processor generation.[10][11][nb 2] Regardless, at ameta-level, programs can still modify their own behavior by changing data stored elsewhere (seemetaprogramming) or via use ofpolymorphism. The Synthesiskernelpresented inAlexia Massalin'sPh.D.thesis[17][18]is a tinyUnixkernel that takes astructured, or evenobject oriented, approach to self-modifying code, where code is created for individualquajects, like filehandles. Generating code for specific tasks allows the Synthesis kernel to (as a JIT interpreter might) apply a number ofoptimizationssuch asconstant foldingorcommon subexpression elimination. The Synthesis kernel was very fast, but was written entirely in assembly. The resulting lack of portability has prevented Massalin's optimization ideas from being adopted by any production kernel. However, the structure of the techniques suggests that they could be captured by a higher levellanguage, albeit one more complex than existing mid-level languages. Such a language and compiler could allow development of faster operating systems and applications. Paul Haeberliand Bruce Karsh have objected to the "marginalization" of self-modifying code, and optimization in general, in favor of reduced development costs.[19] On architectures without coupled data and instruction cache (for example, someSPARC, ARM, andMIPScores) the cache synchronization must be explicitly performed by the modifying code (flush data cache and invalidate instruction cache for the modified memory area). In some cases short sections of self-modifying code execute more slowly on modern processors. This is because a modern processor will usually try to keep blocks of code in its cache memory. Each time the program rewrites a part of itself, the rewritten part must be loaded into the cache again, which results in a slight delay, if the modifiedcodeletshares the same cache line with the modifying code, as is the case when the modified memory address is located within a few bytes to the one of the modifying code. 
The cache invalidation issue on modern processors usually means that self-modifying code would still be faster only when the modification will occur rarely, such as in the case of a state switching inside an inner loop.[citation needed] Most modern processors load the machine code before they execute it, which means that if an instruction that is too near theinstruction pointeris modified, the processor will not notice, but instead execute the code as it wasbeforeit was modified. Seeprefetch input queue(PIQ). PC processors must handle self-modifying code correctly for backwards compatibility reasons but they are far from efficient at doing so.[citation needed] Because of the security implications of self-modifying code, all of the majoroperating systemsare careful to remove such vulnerabilities as they become known. The concern is typically not that programs will intentionally modify themselves, but that they could be maliciously changed by anexploit. One mechanism for preventing malicious code modification is an operating system feature calledW^X(for "writexorexecute"). This mechanism prohibits a program from making any page of memory both writable and executable. Some systems prevent a writable page from ever being changed to be executable, even if write permission is removed.[citation needed]Other systems provide a 'back door' of sorts, allowing multiple mappings of a page of memory to have different permissions. A relatively portable way to bypass W^X is to create a file with all permissions, then map the file into memory twice. On Linux, one may use an undocumented SysV shared memory flag to get executable shared memory without needing to create a file.[citation needed] Self-modifying code is harder to read and maintain because the instructions in the source program listing are not necessarily the instructions that will be executed. Self-modification that consists of substitution offunction pointersmight not be as cryptic, if it is clear that the names of functions to be called are placeholders for functions to be identified later. Self-modifying code can be rewritten as code that tests aflagand branches to alternative sequences based on the outcome of the test, but self-modifying code typically runs faster. Self-modifying code conflicts with authentication of the code and may require exceptions to policies requiring that all code running on a system be signed. Modified code must be stored separately from its original form, conflicting with memory management solutions that normally discard the code in RAM and reload it from the executable file as needed. On modern processors with aninstruction pipeline, code that modifies itself frequently may run more slowly, if it modifies instructions that the processor has already read from memory into the pipeline. On some such processors, the only way to ensure that the modified instructions are executed correctly is to flush the pipeline and reread many instructions. Self-modifying code cannot be used at all in some environments, such as the following:
https://en.wikipedia.org/wiki/Self-modifying_code
In the context ofsoftware engineering,software qualityrefers to two related but distinct notions:[citation needed] Many aspects of structural quality can be evaluated onlystaticallythrough the analysis of the software's inner structure, its source code (seeSoftware metrics),[3]at the unit level, and at the system level (sometimes referred to as end-to-end testing[4]), which is in effect how its architecture adheres to sound principles ofsoftware architectureoutlined in a paper on the topic byObject Management Group(OMG).[5] Some structural qualities, such asusability, can beassessedonlydynamically(users or others acting on their behalf interact with the software or, at least, some prototype or partial implementation; even the interaction with a mock version made in cardboard represents a dynamic test because such version can be considered a prototype). Other aspects, such as reliability, might involve not only the software but also the underlying hardware, therefore, it can be assessed both statically and dynamically (stress test).[citation needed] Usingautomated testsandfitness functionscan help to maintain some of the quality related attributes.[6] Functional quality is typically assessed dynamically but it is also possible to use static tests (such assoftware reviews).[citation needed] Historically, the structure, classification, and terminology of attributes and metrics applicable tosoftware quality managementhave been derived or extracted from theISO 9126and the subsequentISO/IEC 25000standard.[7]Based on these models (see Models), theConsortium for IT Software Quality(CISQ) has defined five major desirable structural characteristics needed for a piece of software to providebusiness value:[8]Reliability, Efficiency, Security, Maintainability, and (adequate) Size.[9][10][11] Software quality measurement quantifies to what extent a software program or system rates along each of these five dimensions. An aggregated measure of software quality can be computed through a qualitative or a quantitative scoring scheme or a mix of both and then a weighting system reflecting the priorities. This view of software quality being positioned on a linear continuum is supplemented by the analysis of "critical programming errors" that under specific circumstances can lead to catastrophic outages or performance degradations that make a given system unsuitable for use regardless of rating based on aggregated measurements. Such programming errors found at the system level represent up to 90 percent of production issues, whilst at the unit-level, even if far more numerous, programming errors account for less than 10 percent of production issues (see alsoNinety–ninety rule). As a consequence, code quality without the context of the whole system, asW. Edwards Demingdescribed it, has limited value.[citation needed] To view, explore, analyze, and communicate software quality measurements, concepts and techniques ofinformation visualizationprovide visual, interactive means useful, in particular, if several software quality measures have to be related to each other or to components of a software or system. For example,software mapsrepresent a specialized approach that "can express and combine information about software development, software quality, and system dynamics".[12] Software quality also plays a role in the release phase of a software project. 
Specifically, the quality and establishment of the release processes (also patch processes),[13][14] configuration management[15] are important parts of an overall software engineering process.[16][17][18] Software quality is motivated by at least two main perspectives: For some, software quality is the "capability of a software product to conform to requirements,"[36][37] while for others it can be synonymous with customer- or value-creation[38][39] or even defect level.[40] Software quality measurements can be split into three parts: process quality, product quality (which includes internal and external properties) and, lastly, quality in use, which is the effect of the software.[41] ASQ uses the following definition: Software quality describes the desirable attributes of software products. There are two main approaches: defect management and quality attributes.[42] Software Assurance (SA) covers both the property and the process to achieve it:[43] The Project Management Institute's PMBOK Guide "Software Extension" defines not "Software quality" itself, but Software Quality Assurance (SQA) as "a continuous process that audits other software processes to ensure that those processes are being followed (includes for example a software quality management plan)," whereas Software Quality Control (SQC) means "taking care of applying methods, tools, techniques to ensure satisfaction of the work products toward quality requirements for a software under development or modification."[44] The first definition of quality in recorded history is from Shewhart at the beginning of the 20th century: "There are two common aspects of quality: one of them has to do with the consideration of the quality of a thing as an objective reality independent of the existence of man. The other has to do with what we think, feel or sense as a result of the objective reality. In other words, there is a subjective side of quality."[45] Kitchenham and Pfleeger, further reporting the teachings of David Garvin, identify five different perspectives on quality:[46][47] The problem inherent in attempts to define the quality of a product, almost any product, was stated by the master Walter A. Shewhart. The difficulty in defining quality is to translate the future needs of the user into measurable characteristics, so that a product can be designed and turned out to give satisfaction at a price that the user will pay. This is not easy, and as soon as one feels fairly successful in the endeavor, he finds that the needs of the consumer have changed, competitors have moved in, etc.[51] Quality is a customer determination, not an engineer's determination, not a marketing determination, nor a general management determination. It is based on the customer's actual experience with the product or service, measured against his or her requirements -- stated or unstated, conscious or merely sensed, technically operational or entirely subjective -- and always representing a moving target in a competitive market.[52] The word quality has multiple meanings. Two of these meanings dominate the use of the word: 1. Quality consists of those product features which meet the need of customers and thereby provide product satisfaction. 2. Quality consists of freedom from deficiencies.
Nevertheless, in a handbook such as this it is convenient to standardize on a short definition of the word quality as "fitness for use".[53] Tom DeMarcohas proposed that "a product's quality is a function of how much it changes the world for the better."[citation needed]This can be interpreted as meaning that functional quality and user satisfaction are more important than structural quality in determining software quality. Another definition, coined byGerald Weinbergin Quality Software Management: Systems Thinking, is "Quality is value to some person."[54][55] One of the challenges in defining quality is that "everyone feels they understand it"[56]and otherdefinitions of software qualitycould be based on extending the various descriptions of the concept of quality used in business. Software quality also often gets mixed-up withQuality Assuranceor Problem Resolution Management[57]orQuality Control[58]orDevOps. It does overlap with these areas (see also PMI definitions), but it is distinctive as it does not solely focus on testing but also on processes, management, improvements, assessments, etc.[58] Although the concepts presented in this section are applicable to both structural and functional software quality, measurement of the latter is essentially performed throughsoftware testing.[59]Testing is not enough: According to one study, "individual programmers are less than 50% efficient at finding bugs in their own software. And most forms of testing are only 35% efficient. This makes it difficult to determine [software] quality."[60] Software quality measurement is about quantifying to what extent a system or software possesses desirable characteristics. This can be performed through qualitative or quantitative means or a mix of both. In both cases, for each desirable characteristic, there are a set of measurable attributes the existence of which in a piece of software or system tend to be correlated and associated with this characteristic. For example, an attribute associated with portability is the number of target-dependent statements in a program. More precisely, using theQuality Function Deploymentapproach, these measurable attributes are the "hows" that need to be enforced to enable the "whats" in the Software Quality definition above. The structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from theISO 9126-3and the subsequent ISO/IEC 25000:2005 quality model. The main focus is on internal structural quality. Subcategories have been created to handle specific areas like business application architecture and technical characteristics such as data access and manipulation or the notion of transactions. The dependence tree between software quality characteristics and their measurable attributes is represented in the diagram on the right, where each of the 5 characteristics that matter for the user (right) or owner of the business system depends on measurable attributes (left): Correlations between programming errors and production defects unveil that basic code errors account for 92 percent of the total errors in the source code. These numerous code-level issues eventually count for only 10 percent of the defects in production. 
Bad software engineering practices at the architecture levels account for only 8 percent of total defects, but consume over half the effort spent on fixing problems, and lead to 90 percent of the serious reliability, security, and efficiency issues in production.[61][62] Many of the existing software measures count structural elements of the application that result from parsing the source code for such individual instructions[63]tokens[64]control structures (Complexity), and objects.[65] Software quality measurement is about quantifying to what extent a system or software rates along these dimensions. The analysis can be performed using a qualitative or quantitative approach or a mix of both to provide an aggregate view [using for example weighted average(s) that reflect relative importance between the factors being measured]. This view of software quality on a linear continuum has to be supplemented by the identification of discreteCritical Programming Errors. These vulnerabilities may not fail a test case, but they are the result of bad practices that under specific circumstances can lead to catastrophic outages, performance degradations, security breaches, corrupted data, and myriad other problems[66]that make a given system de facto unsuitable for use regardless of its rating based on aggregated measurements. A well-known example of vulnerability is theCommon Weakness Enumeration,[67]a repository of vulnerabilities in the source code that make applications exposed to security breaches. The measurement of critical application characteristics involves measuring structural attributes of the application's architecture, coding, and in-line documentation, as displayed in the picture above. Thus, each characteristic is affected by attributes at numerous levels of abstraction in the application and all of which must be included in calculating the characteristic's measure if it is to be a valuable predictor of quality outcomes that affect the business. The layered approach to calculating characteristic measures displayed in the figure above was first proposed by Boehm and his colleagues at TRW (Boehm, 1978)[68]and is the approach taken in the ISO 9126 and 25000 series standards. These attributes can be measured from the parsed results of a static analysis of the application source code. Even dynamic characteristics of applications such as reliability and performance efficiency have their causal roots in the static structure of the application. Structural quality analysis and measurement is performed through the analysis of thesource code, thearchitecture,software framework,database schemain relationship to principles and standards that together define the conceptual and logical architecture of a system. This is distinct from the basic, local, component-level code analysis typically performed bydevelopment toolswhich are mostly concerned with implementation considerations and are crucial duringdebuggingandtestingactivities. The root causes of poor reliability are found in a combination of non-compliance with good architectural and coding practices. This non-compliance can be detected by measuring the static quality attributes of an application. Assessing the static attributes underlying an application's reliability provides an estimate of the level of business risk and the likelihood of potential application failures and defects the application will experience when placed in operation. 
Assessing reliability requires checks of at least the following software engineering best practices and technical attributes: Depending on the application architecture and the third-party components used (such as external libraries or frameworks), custom checks should be defined along the lines drawn by the above list of best practices to ensure a better assessment of the reliability of the delivered software. As with Reliability, the causes of performance inefficiency are often found in violations of good architectural and coding practice which can be detected by measuring the static quality attributes of an application. These static attributes predict potential operational performance bottlenecks and future scalability problems, especially for applications requiring high execution speed for handling complex algorithms or huge volumes of data. Assessing performance efficiency requires checking at least the following software engineering best practices and technical attributes: Software quality includessoftware security.[70]Many security vulnerabilities result from poor coding and architectural practices such as SQL injection or cross-site scripting.[71][72]These are well documented in lists maintained by CWE,[73]and the SEI/Computer Emergency Center(CERT)at Carnegie Mellon University.[69] Assessing security requires at least checking the following software engineering best practices and technical attributes: Maintainability includes concepts of modularity, understandability, changeability, testability, reusability, and transferability from one development team to another. These do not take the form of critical issues at the code level. Rather, poor maintainability is typically the result of thousands of minor violations with best practices in documentation, complexity avoidance strategy, and basic programming practices that make the difference between clean and easy-to-read code vs. unorganized and difficult-to-read code.[79] Assessing maintainability requires checking the following software engineering best practices and technical attributes: Maintainability is closely related to Ward Cunningham's concept oftechnical debt, which is an expression of the costs resulting of a lack of maintainability. Reasons for why maintainability is low can be classified as reckless vs. prudent and deliberate vs. inadvertent,[80][81]and often have their origin in developers' inability, lack of time and goals, their carelessness and discrepancies in the creation cost of and benefits from documentation and, in particular, maintainablesource code.[82] Measuring software size requires that the whole source code be correctly gathered, including database structure scripts, data manipulation source code, component headers, configuration files etc. There are essentially two types of software sizes to be measured, the technical size (footprint) and the functional size: The function point analysis sizing standard is supported by the International Function Point Users Group (IFPUG). It can be applied early in the software development life-cycle and it is not dependent on lines of code like the somewhat inaccurate Backfiring method. The method is technology agnostic and can be used for comparative analysis across organizations and across industries. Since the inception of Function Point Analysis, several variations have evolved and the family of functional sizing techniques has broadened to include such sizing measures as COSMIC, NESMA, Use Case Points, FP Lite, Early and Quick FPs, and most recently Story Points. 
Function Point has a history of statistical accuracy, and has been used as a common unit of work measurement in numerous application development management (ADM) or outsourcing engagements, serving as the "currency" by which services are delivered and performance is measured. One common limitation to the Function Point methodology is that it is a manual process and therefore it can be labor-intensive and costly in large scale initiatives such as application development or outsourcing engagements. This negative aspect of applying the methodology may be what motivated industry IT leaders to form the Consortium for IT Software Quality focused on introducing a computable metrics standard for automating the measuring of software size while the IFPUG keep promoting a manual approach as most of its activity rely on FP counters certifications. CISQdefines Sizing as to estimate the size of software to support cost estimating, progress tracking or other related software project management activities. Two standards are used:Automated Function Pointsto measure the functional size of software andAutomated Enhancement Pointsto measure the size of both functional and non-functional code in one measure.[83] Critical Programming Errors are specific architectural and/or coding bad practices that result in the highest, immediate or long term, business disruption risk.[84] These are quite often technology-related and depend heavily on the context, business objectives and risks. Some may consider respect for naming conventions while others – those preparing the ground for a knowledge transfer for example – will consider it as absolutely critical. Critical Programming Errors can also be classified per CISQ Characteristics. Basic example below: Newer proposals for quality models such asSqualeand Quamoco[85]propagate a direct integration of the definition of quality attributes and measurement. By breaking down quality attributes or even defining additional layers, the complex, abstract quality attributes (such as reliability or maintainability) become more manageable and measurable. Those quality models have been applied in industrial contexts but have not received widespread adoption. Notes Bibliography
https://en.wikipedia.org/wiki/Software_quality
Blind return-oriented programming (BROP) is an exploit technique which can successfully create an exploit even if the attacker does not possess the target binary. BROP attacks shown by Bittau et al. have defeated address space layout randomization (ASLR) and stack canaries on 64-bit systems. With current improvements in OS security and hardware security features, such as the Linux PaX project, straightforward code injection has become largely infeasible. Security researchers therefore conceived a new attack, which they named return-oriented programming, to defeat NX (non-executable) memory. This attack relies on affecting program flow by controlling the stack, especially return addresses. Gadgets are the fundamental units of this attack. A gadget is a short sequence of instructions ending in a return instruction, along with a certain state of the stack. A gadget can perform an operation like loading a word from memory into a register, or a more complex operation like a conditional jump. With a large enough target binary, a Turing-complete collection of gadgets can be constructed, which is more than enough to get a shellcode executed. One assumption which ROP makes is that the attacker possesses the target binaries and hence knows the addresses of the gadgets beforehand. There are three new scenarios for which BROP[1] can be relevant. They are: The attack assumes that there is a service on the server which has a known stack vulnerability and that the service restarts after a crash. Return instruction pointers are usually protected by stack canaries. A stack canary causes the program to crash if its value is modified by a buffer overrun. In the BROP model of attack, the buffer overrun is carried out byte by byte. Each try at the overrun results either in a program crash or in continued execution. A program crash implies that the stack value was incorrectly guessed, so within 256 tries (128 on average) each byte of the stack value can be determined. On 64-bit machines, 4 such stack reads would be required to leak the canary. Once the canary is leaked, the return instruction pointer can be perturbed in the same way. It may, however, be noted that though the estimation of the stack canary is exact, the same cannot be said about the return instruction address. The attacker would be satisfied to be able to leak any address within the text segment of the address space. This stage is the heart of the attack. The objective in this phase is to initiate a write system call, sending a dump of the binary to the attacker. The write system call has three parameters: socket, buffer, and length. As x86-64 calling conventions require the parameters to be passed through registers, appropriate pop instructions into rsi, rdi and rdx would be needed to set up the arguments for the write system call. Instruction sequences like pop rdi; ret and the like would be helpful in this regard. A simple ROP version of the write system call would be: One problem with this methodology is that, even if a useful gadget is found in the address space, when it returns, the address it takes from the stack will, with high probability, point to non-executable memory and crash the program. To remedy this, the BROP proposers conceived stop gadgets. A stop gadget is anything that causes the program to block, like an infinite loop or a blocking system call (like sleep). A stop gadget also leaves worker processes affected by the attack stuck in an infinite loop rather than crashing, allowing the attacker to carry on the attack. What is mentioned above is the bare-bones methodology of the attack.
In reality, a few optimizations can be carried out which help in carrying out the attack efficiently. Primary among them is the use of Procedure Linkage Tables (PLTs) to track down the write system call instead of passing the system call number to the syscall function. Others include using strcmp to populate the RDX register, as pop rdx; ret instruction sequences are extremely rare. Once write is found in the PLT, the attacker can dump the contents of the target binary to find more gadgets. The attacker can then use conventional ROP gadget search techniques to gather enough gadgets and create a shellcode. Once they have the shellcode, the exploited system can be brought under full control with root access. A key assumption in the BROP attack is that the server restarts after each crash and, when restarting, does not re-randomize its address space. So enabling re-randomization of the address space at startup can provide almost complete protection against BROP. Another technique, used by NetBSD and Linux, is to sleep on crash. This slows down the attack considerably and allows the system administrator to look into any suspicious activity. Apart from this, control-flow integrity, the conventional protection against ROP-style control-flow hijacking attacks, can also provide provable prevention, but at a significant performance overhead. Another attack similar in nature to BROP is just-in-time ROP (JIT-ROP). It is also based on information disclosure and is likewise able to defeat address space layout randomization. Both BROP and JIT-ROP attempt to locate gadgets in the binary in order to mount a ROP attack, where the goal is to exploit some type of data leak. However, unlike BROP, JIT-ROP is not interactive and does not adapt to crash/no-crash feedback; rather, the attacker sends a script which discovers gadgets and then constructs an attack for delivery. Also, JIT-ROP requires two different vulnerabilities (one heap and one stack) to be known in advance of the attack, while BROP only requires awareness of a stack vulnerability.[2]
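As an illustration of the byte-by-byte stack reading described above, the following C sketch simulates the attacker's search loop. The try_overflow helper, the padding size, and the canary value are hypothetical stand-ins for the network round trip to a crash-restarting service; this is a sketch of the search logic only, not a working exploit.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Simulated target: "crashes" (returns 1) iff the bytes written past the
   padding disagree with the real canary.  Stands in for the remote service. */
#define PAD 512
static const uint8_t secret_canary[8] = {0x00,0x41,0x13,0x37,0xde,0xad,0xbe,0xef};

static int try_overflow(const uint8_t *payload, size_t len) {
    for (size_t i = PAD; i < len && i < PAD + 8; i++)
        if (payload[i] != secret_canary[i - PAD])
            return 1;                     /* canary smashed -> crash   */
    return 0;                             /* service keeps running     */
}

int main(void) {
    uint8_t payload[PAD + 8];
    memset(payload, 'A', PAD);            /* fill the buffer up to the canary */

    uint8_t canary[8];
    for (size_t i = 0; i < 8; i++) {      /* leak one canary byte per round */
        for (int guess = 0; guess < 256; guess++) {
            payload[PAD + i] = (uint8_t)guess;
            /* overflow exactly one byte further than the previous round:
               a wrong guess corrupts the canary (crash), the right guess
               leaves it intact (no crash) */
            if (!try_overflow(payload, PAD + i + 1)) {
                canary[i] = (uint8_t)guess;
                break;
            }
        }
    }

    printf("leaked canary (low byte first): ");
    for (size_t i = 0; i < 8; i++) printf("%02x ", canary[i]);
    printf("\n");
    /* the saved frame pointer and return address can be probed the same way */
    return 0;
}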
https://en.wikipedia.org/wiki/Blind_return_oriented_programming
JIT sprayingis a class ofcomputer security exploitthat circumvents the protection ofaddress space layout randomizationanddata execution preventionby exploiting the behavior ofjust-in-time compilation.[1]It has been used to exploit thePDFformat[2]andAdobe Flash.[3] Ajust-in-time compiler(JIT) by definition produces code as its data. Since the purpose is to produce executable data, a JIT compiler is one of the few types of programs that cannot be run in a no-executable-data environment. Because of this, JIT compilers are normally exempt from data execution prevention. A JIT spray attack doesheap sprayingwith the generated code. To produce exploit code from JIT, an idea from Dion Blazakis[4]is used. The input program, usuallyJavaScriptorActionScript, typically contains numerous constant values that can be erroneously executed as code. For example, theXORoperation could be used:[5] JIT then will transform bytecode to native x86 code like: The attacker then uses a suitable bug to redirect code execution into the newly generated code. For example, abuffer overfloworuse after freebug could allow the attack to modify afunction pointeror return address. This causes the CPU to execute instructions in a way that was unintended by the JIT authors. The attacker is usually not even limited to the expected instruction boundaries; it is possible to jump into the middle of an intended instruction to have the CPU interpret it as something else. As with non-JITROPattacks, this may be enough operations to usefully take control of the computer. Continuing the above example, jumping to the second byte of the "mov" instruction results in an "inc" instruction: x86andx86-64allow jumping into the middle of an instruction, but not fixed-length architectures likeARM. To protect against JIT spraying, the JIT code can be disabled or made less predictable for the attacker.[4]
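The following C sketch for x86-64 Linux illustrates the core misalignment trick: the same bytes that a JIT might emit for an attacker-chosen constant decode as different instructions when entered one byte into the "mov". The byte sequence is hand-written here for illustration; it is not the output of a real JIT.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <sys/mman.h>

int main(void) {
    /* As emitted:          B8 90 90 90 90   mov eax, 0x90909090
                            C3               ret
       Entered 1 byte in:   90 90 90 90      nop nop nop nop
                            C3               ret
       i.e. the attacker-controlled immediate itself becomes executable code. */
    uint8_t code[] = { 0xB8, 0x90, 0x90, 0x90, 0x90, 0xC3 };

    uint8_t *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);

    uint32_t (*as_intended)(void) = (uint32_t (*)(void))buf;
    void     (*misaligned)(void)  = (void (*)(void))(buf + 1);

    printf("intended entry returns 0x%08x\n", as_intended());  /* 0x90909090 */
    misaligned();   /* executes the four 0x90 bytes as NOPs, then returns */
    printf("misaligned entry executed the immediate bytes as code\n");
    return 0;
}

In a real attack the sprayed constants would encode useful instructions rather than NOPs, and the entry point would be reached through a corrupted function pointer instead of a deliberate offset call.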
https://en.wikipedia.org/wiki/JIT_spraying
Sigreturn-oriented programming(SROP) is acomputer security exploittechnique that allows an attacker to execute code in presence of security measures such asnon-executable memoryand code signing.[1]It was presented for the first time at the 35thIEEE Symposium on Security and Privacyin 2014 where it won thebest student paper award.[2]This technique employs the same basic assumptions behind thereturn-oriented programming(ROP) technique: an attacker controlling thecall stack, for example through astack buffer overflow, is able to influence thecontrol flowof the program through simple instruction sequences calledgadgets. The attack works bypushinga forgedsigcontextstructure[3]on the call stack, overwriting the original return address with the location of a gadget that allows the attacker to call thesigreturn[4]system call.[5]Often just a single gadget is needed to successfully put this attack into effect. This gadget may reside at a fixed location, making this attack simple and effective, with a setup generally simpler and more portable than the one needed by the plain return-oriented programming technique.[1] Sigreturn-oriented programming can be considered aweird machinesince it allows code execution outside the original specification of the program.[1] Sigreturn-oriented programming (SROP) is a technique similar to return-oriented programming (ROP), since it employscode reuseto execute code outside the scope of the original control flow. In this sense, the adversary needs to be able to carry out astack smashingattack, usually through a stack buffer overflow, to overwrite the return address contained inside the call stack. If mechanisms such asdata execution preventionare employed, it won't be possible for the attacker to just place ashellcodeon the stack and cause the machine to execute it by overwriting the return address. With such protections in place, the machine won't execute any code present in memory areas marked as writable and non-executable. Therefore, the attacker will need to reuse code already present in memory. Most programs do not contain functions that will allow the attacker to directly carry out the desired action (e.g., obtain access to ashell), but the necessary instructions are often scattered around memory.[6] Return-oriented programming requires these sequences of instructions, called gadgets, to end with aRETinstruction. In this way, the attacker can write a sequence of addresses for these gadgets to the stack, and as soon as aRETinstruction in one gadget is executed, the control flow will proceed to the next gadget in the list. This attack is made possible by howsignalsare handled in mostPOSIX-like systems. Whenever a signal is delivered, the kernel needs tocontext switchto the installed signal handler. To do so, the kernel saves the current execution context in a frame on the stack.[5][6]The structure pushed onto the stack is an architecture-specific variant of thesigcontextstructure, which holds various data comprising the contents of the registers at the moment of the context switch. When the execution of the signal handler is completed, thesigreturn()system call is called. 
Calling thesigreturnsyscall means being able to easily set the contents of registers using a single gadget that can be easily found on most systems.[1] There are several factors that characterize an SROP exploit and distinguish it from a classical return-oriented programming exploit.[7] First, ROP is dependent on available gadgets, which can be very different in distinctbinaries, thus making chains of gadget non-portable.Address space layout randomization(ASLR) makes it hard to use gadgets without aninformation leakageto get their exact positions in memory. AlthoughTuring-completeROP compilers exist,[8]it is usually non-trivial to create a ROP chain.[7] SROP exploits are usually portable across different binaries with minimal or no effort and allow easily setting the contents of the registers, which could be non-trivial or unfeasible for ROP exploits if the needed gadgets are not present.[6]Moreover, SROP requires a minimal number of gadgets and allows constructing effective shellcodes by chaining system calls. These gadgets are always present in memory, and in some cases are always at fixed locations:[7] An example of the kind of gadget needed for SROP exploits can always be found in thevirtual dynamic shared object(VDSO) memory area on x86-Linuxsystems: On someLinux kernelversions, ASLR can be disabled by setting the limit for the stack size to unlimited,[9]effectively bypassing ASLR and allowing easy access to the gadget present in a VDSO. For Linux kernels prior to version 3.3, it is also possible to find a suitable gadget inside the vsyscall page, which is a mechanism to accelerate the access to certain system calls often used by legacy programs and resides always at a fixed location. It is possible to use gadgets to write into the contents of the stack frames, thereby constructing aself-modifying program. Using this technique, it is possible to devise a simplevirtual machine, which can be used as the compilation target for aTuring-completelanguage. An example of such an approach can be found in Bosman's paper, which demonstrates the construction of an interpreter for a language similar to theBrainfuck programming language. The language provides a program counterPC, a memory pointerP, and a temporary register used for 8-bit additionA. This means that complexbackdoorsor obfuscated attacks can also be devised.[1] A number of techniques exists to mitigate SROP attacks, relying onaddress space layout randomization,canariesandcookies, orshadow stacks. Address space layout randomization makes it harder to use suitable gadgets by making their locations unpredictable. A mitigation for SROP calledsignal cookieshas been proposed. It consists of a way of verifying that the sigcontext structure has not been tampered with by the means of a random cookieXORedwith the address of the stack location where it is to be stored. In this way, thesigreturnsyscall just needs to verify the cookie's existence at the expected location, effectively mitigating SROP with a minimal impact on performances.[1][10] In Linux kernel versions greater than 3.3, the vsyscall interface is emulated, and any attempt to directly execute gadgets in the page will result in an exception.[11][12] Grsecurity is a set of patches for theLinux kernelto harden and improve system security.[13]It includes the so-called return-address protection (RAP) to help protect against code reuse attacks.[14] Starting in 2016,Intelis developing aControl-flow Enforcement Technology(CET) to help mitigate and prevent stack-hopping exploits. 
CET works by implementing a shadow stack in RAM which will only contain return addresses, protected by the CPU'smemory management unit.[15][16]
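As a rough illustration of the mechanism the attack abuses, the following C program for x86-64 Linux/glibc installs a signal handler and inspects the register state that the kernel saves in a frame on the user stack; it is this per-register frame that an SROP exploit forges before invoking sigreturn. The program only reads the frame and is not itself an exploit; the register names assume the x86-64 glibc ucontext layout.

#define _GNU_SOURCE
#include <stdio.h>
#include <signal.h>
#include <ucontext.h>

static void handler(int sig, siginfo_t *info, void *ctx) {
    ucontext_t *uc = ctx;
    /* the saved general-purpose registers sit in ordinary, writable stack memory */
    printf("signal %d: saved RIP=%#llx RSP=%#llx RAX=%#llx\n", sig,
           (unsigned long long)uc->uc_mcontext.gregs[REG_RIP],
           (unsigned long long)uc->uc_mcontext.gregs[REG_RSP],
           (unsigned long long)uc->uc_mcontext.gregs[REG_RAX]);
    (void)info;
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;          /* handler receives the ucontext */
    sigaction(SIGUSR1, &sa, NULL);

    raise(SIGUSR1);                    /* kernel builds the frame, handler runs,
                                          then sigreturn restores this context */
    puts("back in main after sigreturn");
    return 0;
}

An attacker mounting SROP does not wait for a real signal: they lay out a fake frame of this shape on the smashed stack and return into a sigreturn gadget, so the kernel loads every register from attacker-chosen values.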
https://en.wikipedia.org/wiki/Sigreturn-oriented_programming
Incomputer science,threaded codeis a programming technique where thecodehas a form that essentially consists entirely of calls tosubroutines. It is often used incompilers, which may generate code in that form or be implemented in that form themselves. The code may be processed by aninterpreteror it may simply be a sequence ofmachine codecall instructions. Threaded code has betterdensitythan code generated by alternative generation techniques and by alternativecalling conventions. Incachedarchitectures, it mayexecuteslightly slower.[citation needed]However, a program that is small enough to fit in acomputer processor'scachemay run faster than a larger program that suffers manycache misses.[1]Small programs may also be faster atthread switching, when other programs have filled the cache. Threaded code is best known for its use in many compilers ofprogramming languages, such asForth, many implementations ofBASIC, some implementations ofCOBOL, early versions ofB,[2]and other languages for smallminicomputersand foramateur radio satellites.[citation needed] The common way to make computer programs is to use acompilerto translatesource code(written in somesymbolic language) tomachine code. The resultingexecutableis typically fast but, because it is specific to ahardwareplatform, it isn't portable. A different approach is to generateinstructionsfor avirtual machineand to use aninterpreteron each hardware platform. The interpreter instantiates the virtual machine environment and executes the instructions. Thus the interpreter, compiled to machine code, provides an abstraction layer for "interpreted languages" that only need little compilation to conform to that layer (compilation may be confined to generating anAbstract Syntax Tree) or even need no compilation at all (if the layer is designed to consume raw source code.) Early computers had relatively little memory. For example, mostData General Nova,IBM 1130, and many of the firstmicrocomputershad only 4 kB of RAM installed. Consequently, a lot of time was spent trying to find ways to reduce a program's size, to fit in the available memory. One solution is to use an interpreter which reads the symbolic language a bit at a time, and calls functions to perform the actions. As the source code is typically muchdenserthan the resulting machine code, this can reduce overall memory use. This was the reasonMicrosoft BASICis an interpreter:[a]its own code had to share the 4 kB memory of machines like theAltair 8800with the user's source code. A compiler translates from a source language to machine code, so the compiler, source, and output must all be in memory at the same time. In an interpreter, there is no output. Threaded code is a formatting style for compiled code that minimizes memory use. Instead of writing out every step of an operation at its every occurrence in the program, as was common inmacro assemblersfor instance, the compiler writes each common bit of code into a subroutine. Thus, each bit exists in only one place in memory (see "Don't repeat yourself"). The top-level application in these programs may consist of nothing but subroutine calls. Many of these subroutines, in turn, also consist of nothing but lower-level subroutine calls. Mainframes and some early microprocessors such as theRCA 1802required several instructions to call a subroutine. In the top-level application and in many subroutines, that sequence is constantly repeated, with only the subroutine address changing from one call to the next. 
This means that a program consisting of many function calls may have considerable amounts of repeated code as well. To address this, threaded code systems used pseudo-code to represent function calls in a single operator. At run time, a tiny "interpreter" would scan over the top-level code, extract the subroutine's address in memory, and call it. In other systems, this same basic concept is implemented as abranch table,dispatch table, orvirtual method table, all of which consist of a table of subroutine addresses. During the 1970s, hardware designers spent considerable effort to make subroutine calls faster and simpler. On the improved designs, only a single instruction is expended to call a subroutine, so the use of a pseudo-instruction saves no room.[citation needed]Additionally, the performance of these calls is almost free of additional overhead. Today, though almost all programming languages focus on isolating code into subroutines, they do so for code clarity and maintainability, not to save space. Threaded code systems save room by replacing that list of function calls, where only the subroutine address changes from one call to the next, with a list of execution tokens, which are essentially function calls with the call opcode(s) stripped off, leaving behind only a list of addresses.[3][4][5][6][7] Over the years, programmers have created many variations on that "interpreter" or "small selector". The particular address in the list of addresses may be extracted using an index,general-purpose registerorpointer. The addresses may be direct or indirect, contiguous or non-contiguous (linked by pointers), relative or absolute, resolved at compile time or dynamically built. No single variation is "best" for all situations. To save space, programmers squeezed the lists of subroutine calls into simple lists of subroutine addresses, and used a small loop to call each subroutine in turn. For example, the following pseudocode uses this technique to add two numbers A and B. In the example, the list is labeledthreadand a variableip(Instruction Pointer) tracks our place within the list. Another variablesp(Stack Pointer) contains an address elsewhere in memory that is available to hold a value temporarily. The calling loop attopis so simple that it can be repeated inline at the end of each subroutine. Control now jumps once, from the end of a subroutine to the start of another, instead of jumping twice viatop. For example: This is calleddirect threaded code(DTC). Although the technique is older, the first widely circulated use of the term "threaded code" is probably James R. Bell's 1973 article "Threaded Code".[8] In 1970,Charles H. Mooreinvented a more compact arrangement,indirect threaded code(ITC), for his Forth virtual machine. Moore arrived at this arrangement becauseNovaminicomputers had anindirection bitin every address, which made ITC easy and fast. Later, he said that he found it so convenient that he propagated it into all later Forth designs.[9] Today, some Forth compilers generate direct-threaded code while others generate indirect-threaded code. The executables act the same either way. Practically all executable threaded code uses one or another of these methods for invoking subroutines (each method is called a "threading model"). Addresses in the thread are the addresses of machine language. This form is simple, but may have overheads because the thread consists only of machine addresses, so all further parameters must be loaded indirectly from memory. 
Some Forth systems produce direct-threaded code. On many machines direct-threading is faster than subroutine threading (see reference below). An example of astack machinemight execute the sequence "push A, push B, add". That might be translated to the following thread and routines, whereipis initialized to the address labeledthread(i.e., the address where&pushAis stored). Alternatively, operands may be included in the thread. This can remove some indirection needed above, but makes the thread larger: Indirect threading uses pointers to locations that in turn point to machine code. The indirect pointer may be followed by operands which are stored in the indirect "block" rather than storing them repeatedly in the thread. Thus, indirect code is often more compact than direct-threaded code. The indirection typically makes it slower, though usually still faster than bytecode interpreters. Where the handler operands include both values and types, the space savings over direct-threaded code may be significant. Older FORTH systems typically produce indirect-threaded code. For example, if the goal is to execute "push A, push B, add", the following might be used. Here,ipis initialized to address&thread, each code fragment (push,add) is found by double-indirecting throughipand an indirect block; and any operands to the fragment are found in the indirect block following the fragment's address. This requires keeping thecurrentsubroutine inip, unlike all previous examples where it contained thenextsubroutine to be called. So-called "subroutine-threaded code" (also "call-threaded code") consists of a series of machine-language "call" instructions (or addresses of functions to "call", as opposed to direct threading's use of "jump"). Early compilers forALGOL, Fortran, Cobol and some Forth systems often produced subroutine-threaded code. The code in many of these systems operated on a last-in-first-out (LIFO) stack of operands, for which compiler theory was well-developed. Most modern processors have special hardware support for subroutine "call" and "return" instructions, so the overhead of one extra machine instruction per dispatch is somewhat diminished. Anton Ertl, theGforthcompiler's co-creator, stated that "in contrast to popular myths, subroutine threading is usually slower than direct threading".[10]However, Ertl's most recent tests[1]show that subroutine threading is faster than direct threading in 15 out of 25 test cases. More specifically, he found that direct threading is the fastest threading model on Xeon, Opteron, and Athlon processors, indirect threading is fastest on Pentium M processors, and subroutine threading is fastest on Pentium 4, Pentium III, and PPC processors. As an example of call threading for "push A, push B, add": Token-threaded code implements the thread as a list of indices into a table of operations; the index width is naturally chosen to be as small as possible for density and efficiency. 1 byte / 8-bits is the natural choice for ease of programming, but smaller sizes like 4-bits, or larger like 12 or 16 bits, can be used depending on the number of operations supported. As long as the index width is chosen to be narrower than a machine pointer, it will naturally be more compact than the other threading types without much special effort by the programmer. It is usually half to three-fourths the size of other threadings, which are themselves a quarter to an eighth the size of non-threaded code. The table's pointers can either be indirect or direct. 
Some Forth compilers produce token-threaded code. Some programmers consider the "p-code" generated by somePascalcompilers, as well as thebytecodesused by.NET,Java, BASIC and someCcompilers, to be token-threading. A common approach, historically, isbytecode, which typically uses 8-bit opcodes with a stack-based virtual machine. The archetypal bytecodeinterpreteris known as a "decode and dispatch interpreter" and follows the form: If the virtual machine uses only byte-size instructions,decode()is simply a fetch fromthread, but often there are commonly used 1-byte instructions plus some less-common multibyte instructions (seecomplex instruction set computer), in which casedecode()is more complex. The decoding of single byte opcodes can be very simply and efficiently handled by a branch table using the opcode directly as an index. For instructions where the individual operations are simple, such as "push" and "add", theoverheadinvolved in deciding what to execute is larger than the cost of actually executing it, so such interpreters are often much slower than machine code. However, for more complex ("compound") instructions, the overhead percentage is proportionally less significant. There are times when token-threaded code can sometimes run faster than the equivalent machine code when that machine code ends up being too large to fit in the physical CPU's L1 instruction cache. The highercode densityof threaded code, especially token-threaded code, can allow it to fit entirely in the L1 cache when it otherwise would not have, thereby avoiding cache thrashing. However, threaded code consumes both instruction cache (for the implementation of each operation) as well as data cache (for the bytecode and tables) unlike machine code which only consumes instruction cache; this means threaded code will eat into the budget for the amount of data that can be held for processing by the CPU at any given time. In any case, if the problem being computed involves applying a large number of operations to a small amount of data then using threaded code may be an ideal optimization.[4] Huffman threaded code consists of lists of tokens stored asHuffman codes. A Huffman code is a variable-length string of bits that identifies a unique token. A Huffman-threaded interpreter locates subroutines using an index table or a tree of pointers that can be navigated by the Huffman code. Huffman-threaded code is one of the most compact representations known for a computer program. The index and codes are chosen by measuring the frequency of calls to each subroutine in the code. Frequent calls are given the shortest codes. Operations with approximately equal frequencies are given codes with nearly equal bit-lengths. Most Huffman-threaded systems have been implemented as direct-threaded Forth systems, and used to pack large amounts of slow-running code into small, cheapmicrocontrollers. Most published[11]uses have been in smart cards, toys, calculators, and watches. The bit-oriented tokenized code used inPBASICcan be seen as a kind of Huffman-threaded code. An example is string threading, in which operations are identified by strings, usually looked up by a hash table. This was used in Charles H. Moore's earliest Forth implementations and in theUniversity of Illinois's experimental hardware-interpreted computer language. It is also used inBashforth. 
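A minimal direct-threaded interpreter along the lines described earlier can be sketched in C using the GCC/Clang "labels as values" extension; the thread below encodes the running example "push A, push B, add". The label names, stack layout, and values are illustrative assumptions, not taken from any particular Forth system.

/* Direct-threaded dispatch sketch (requires GCC or Clang for &&label / goto*). */
#include <stdio.h>

int main(void) {
    long stack[8];
    long *sp = stack;              /* data-stack pointer */
    long A = 2, B = 3;

    /* the "thread": a list of code addresses, one per operation */
    void *thread[] = { &&pushA, &&pushB, &&add, &&halt };
    void **ip = thread;            /* instruction pointer into the thread */

#define NEXT goto **ip++           /* the dispatch ("top") loop, copied inline */

    NEXT;                          /* start executing the thread */

pushA: *sp++ = A;              NEXT;
pushB: *sp++ = B;              NEXT;
add:   sp[-2] += sp[-1]; --sp; NEXT;
halt:  printf("result = %ld\n", sp[-1]);   /* prints 5 */
    return 0;
}

Each routine ends by jumping straight to the next address in the thread rather than returning to a central loop, which is what distinguishes direct threading from the call-based and switch-based dispatch styles discussed above.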
HP'sRPL, first introduced in theHP-18Ccalculator in 1986, is a type of proprietary hybrid (direct-threaded and indirect-threaded)threaded interpretive language(TIL)[12]that, unlike other TILs, allows embedding of RPL "objects" into the "runstream", i.e. the stream of addresses through which the interpreter pointer advances. An RPL "object" can be thought of as a special data type whose in-memory structure contains an address to an "object prolog" at the start of the object, and then data or executable code follows. The object prolog determines how the object's body should be executed or processed. Using the "RPL inner loop",[13]which was invented and patented[14]by William C. Wickes in 1986 and published in 1988, execution follows like so:[15] This can be represented more precisely by: Where above, O is the current object pointer, I is the interpreter pointer, Δ is the length of one address word and the "[]" operator stands for "dereference". When control is transferred to an object pointer or an embedded object, execution continues as follows: On HP'sSaturnmicroprocessors that use RPL, there is a third level of indirection made possible by an architectural / programming trick which allows faster execution.[13] In all interpreters, a branch simply changes the thread pointer (ip) to a different address in the thread. A conditional jump-if-zero branch that jumps only if the top-of-stack value is zero could be implemented as shown below. This example uses the embedded parameter version of direct threading so the&thread[123]line is the destination of where to jump if the condition is true, so it must be skipped (ip++) over if the branch is not taken. Separating the data and return stacks in a machine eliminates a great deal of stack management code, substantially reducing the size of the threaded code. The dual-stack principle originated three times independently: forBurroughs large systems,Forth, andPostScript. It is used in someJava virtual machines. Threeregistersare often present in a threaded virtual machine. Another one exists for passing data betweensubroutines('words'). These are: Often, threadedvirtual machines, such as implementations of Forth, have a simple virtual machine at heart, consisting of threeprimitives. Those are: In an indirect-threaded virtual machine, the one given here, the operations are:
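The jump-if-zero handler described earlier in this passage (its original listing is not preserved in this copy) could be sketched as follows, reusing the GNU C labels-as-values style of the earlier example, with the branch target embedded in the thread immediately after the handler's entry:

```c
#include <stdio.h>

int main(void) {
    int stack[16], *sp = stack;
    void *thread[6];
    void **ip = thread;

    /* thread: push 0, branch-if-zero to thread[5], (skipped) push 1, done */
    thread[0] = &&push0;
    thread[1] = &&brz;
    thread[2] = &thread[5];        /* embedded parameter: the branch target */
    thread[3] = &&push1;
    thread[4] = &&done;
    thread[5] = &&done;

    goto **ip++;

push0: *sp++ = 0; goto **ip++;
push1: *sp++ = 1; goto **ip++;
brz:
    if (*--sp == 0)
        ip = (void **)*ip;         /* taken: continue at the embedded target */
    else
        ip++;                      /* not taken: skip over the embedded target */
    goto **ip++;
done:
    printf("stack depth: %d\n", (int)(sp - stack));   /* prints 0: "push 1" was skipped */
    return 0;
}
```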
https://en.wikipedia.org/wiki/Threaded_code
Cross-application scripting(CAS) is a vulnerability affecting desktop applications that don't check input in an exhaustive way. CAS allows an attacker to insert data that modifies the behaviour of a particular desktop application. This makes it possible to extract data from inside of the users' systems. Attackers may gain the full privileges of the attacked application when exploiting CAS vulnerabilities; the attack is to some degree independent of the underlying operating system and hardware architecture. Initially discovered by Emanuele Gentili and presented with two other researchers (Alessandro Scoscia and Emanuele Acri) that had participated in the study of the technique and its implications, it was presented for the first time during the Security Summit 2010 inMilan.[1][2][3] Theformat string attackis very similar in concept to this attack and CAS could be considered as a generalization of this attack method. Some aspects of this technique have been previously demonstrated inclickjackingtechniques. Like web interfaces, modern frameworks for the realization of graphical applications (in particularGTK+andQt) allow the use of tags inside their ownwidgets. If an attacker gains the possibility to inject tags, he gains the ability to manipulate the appearance and behaviour of the application. Exactly the same phenomenon was seen with the use ofcross-site scripting(XSS) in web pages, which is why this kind of behavior has been named cross-application scripting (CAS). Typically desktop applications get a considerable amount of input and support a large number of features, certainly more than any web interface. This makes it harder for the developer to check whether all the input a program might get from untrusted sources is filtered correctly. If cross-application scripting is the application equivalent for XSS in web applications, then cross-application request forgery (CARF) is the equivalent ofcross-site request forgery(CSRF) in desktop applications. In CARF the concept of “link” and “protocol” inherited from the web has been extended because it involves components of the graphical environment and, in some cases, of the operating system. Exploiting vulnerabilities amenable to CSRF requires interaction from the user. This requirement isn't particularly limiting because the user can be easily led to execute certain actions if the graphical interface is altered the right way. Many misleading changes in the look of applications can be obtained with the use of CAS: a new kind of “phishing”, whose dangerousness is amplified by a lack of tools to detect this kind of attack outside of websites or emails. In contrast to XSS techniques, that can manipulate and later execute commands in the users' browser, with CAS it is possible to talk directly to the operating system, and not just its graphical interface.
https://en.wikipedia.org/wiki/Cross-application_scripting
Cross-site scripting(XSS)[a]is a type of securityvulnerabilitythat can be found in someweb applications. XSS attacks enable attackers toinjectclient-side scripts into web pages viewed by other users. A cross-site scripting vulnerability may be used by attackers to bypassaccess controlssuch as thesame-origin policy. During the second half of 2007, XSSed documented 11,253 site-specific cross-site vulnerabilities, compared to 2,134 "traditional" vulnerabilities documented bySymantec.[1]XSS effects vary in range from petty nuisance to significant security risk, depending on the sensitivity of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's ownernetwork. OWASPconsiders the term cross-site scripting to be amisnomer. It initially was an attack that was used for breaching data across sites, but gradually started to include other forms of data injection attacks.[2] Security on the web depends on a variety of mechanisms, including an underlying concept of trust known as thesame-origin policy. This states that if content from one site (such ashttps://mybank.example1.com) is granted permission to access resources (like cookies etc.) on a web browser, then content from any URL with the same (1)URI scheme(e.g. ftp, http, or https), (2)host name,and(3)port numberwill share these permissions. Content from URLs where any of these three attributes are different will have to be granted permissions separately.[3] Cross-site scripting attacks use known vulnerabilities inweb-based applications, theirservers, or the plug-in systems on which they rely. Exploiting one of these, attackers fold malicious content into the content being delivered from the compromised site. When the resulting combined content arrives at the client-side web browser, it has all been delivered from the trusted source, and thus operates under the permissions granted to that system. By finding ways of injecting malicious scripts into web pages, an attacker can gain elevated access-privileges to sensitive page content, to session cookies, and to a variety of other information maintained by the browser on behalf of the user. Cross-site scripting attacks are a case ofcode injection. Microsoftsecurity-engineers introduced the term "cross-site scripting" in January 2000.[4][non-primary source needed]The expression "cross-site scripting" originally referred to the act of loading the attacked, third-party web application from an unrelated attack-site, in a manner that executes a fragment of JavaScript prepared by the attacker in thesecurity contextof the targeted domain (taking advantage of areflectedornon-persistentXSS vulnerability). The definition gradually expanded to encompass other modes of code injection, including persistent and non-JavaScript vectors (includingActiveX,Java,VBScript,Flash, or evenHTMLscripts), causing some confusion to newcomers to the field ofinformation security.[5] XSS vulnerabilities have been reported and exploited since the 1990s. Prominent sites affected in the past include the social-networking sitesTwitter[6]andFacebook.[7]Cross-site scripting flaws have since surpassedbuffer overflowsto become the most common publicly reported security vulnerability,[8]with some researchers in 2007 estimating as many as 68% of websites are likely open to XSS attacks.[9] There is no single, standardized classification of cross-site scripting flaws, but most experts distinguish between at least two primary flavors of XSS flaws:non-persistentandpersistent. 
Some sources further divide these two groups intotraditional(caused by server-side code flaws) andDOM-based(in client-side code). Thenon-persistent(orreflected) cross-site scripting vulnerability is by far the most basic type of web vulnerability.[10]These holes show up when the data provided by a web client,[11]most commonly in HTTP query parameters (e.g. HTML form submission), is used immediately by server-side scripts to parse and display a page of results for and to that user, without properlysanitizingthe content.[12] Because HTML documents have a flat, serial structure that mixes control statements, formatting, and the actual content, any non-validated user-supplied data included in the resulting page without proper HTML encoding, may lead to markup injection.[10][12]A classic example of a potential vector is a site search engine: if one searches for a string, the search string will typically be redisplayed verbatim on the result page to indicate what was searched for. If this response does not properlyescapeor reject HTML control characters, a cross-site scripting flaw will ensue.[13] A reflected attack is typically delivered via email or a neutral web site. The bait is an innocent-looking URL, pointing to a trusted site but containing the XSS vector. If the trusted site is vulnerable to the vector, clicking the link can cause the victim's browser to execute the injected script. Thepersistent(orstored) XSS vulnerability is a more devastating variant of a cross-site scripting flaw: it occurs when the data provided by the attacker is saved by the server, and then permanently displayed on "normal" pages returned to other users in the course of regular browsing, without proper HTML escaping. A classic example of this is with online message boards where users are allowed to post HTML formatted messages for other users to read.[12] For example, suppose there is a dating website where members scan the profiles of other members to see if they look interesting. For privacy reasons, this site hides everybody's real name and email. These are kept secret on the server. The only time a member's real name andemailare in the browser is when the member issigned in, and they can't see anyone else's. Suppose that Mallory, an attacker, joins the site and wants to figure out the real names of the people she sees on the site. To do so, she writes a script designed to run from other users' browsers whentheyvisitherprofile. The script then sends a quick message to her own server, which collects this information. To do this, for the question "Describe your Ideal First Date", Mallory gives a short answer (to appear normal), but the text at the end of her answer is her script to steal names and emails. If the script is enclosed inside a<script>element, it won't be shown on the screen. Then suppose that Bob, a member of the dating site, reaches Mallory's profile, which has her answer to the First Date question. Her script is run automatically by the browser and steals a copy of Bob's real name and email directly from his own machine. Persistent XSS vulnerabilities can be more significant than other types because an attacker's malicious script is rendered automatically, without the need to individually target victims or lure them to a third-party website. 
Particularly in the case of social networking sites, the code would be further designed to self-propagate across accounts, creating a type of client-sideworm.[14] The methods of injection can vary a great deal; in some cases, the attacker may not even need to directly interact with the web functionality itself to exploit such a hole. Any data received by the web application (via email, system logs, IM etc.) that can be controlled by an attacker could become an injection vector. XSS vulnerabilities were originally found in applications that performed all data processing on the server side. User input (including an XSS vector) would be sent to the server, and then sent back to the user as a web page. The need for an improved user experience resulted in popularity of applications that had a majority of the presentation logic (maybe written inJavaScript) working on the client-side that pulled data, on-demand, from the server usingAJAX. As the JavaScript code was also processing user input and rendering it in the web page content, a new sub-class of reflected XSS attacks started to appear that was calledDOM-based cross-site scripting. In a DOM-based XSS attack, the malicious data does not touch the web server. Rather, it is being reflected by the JavaScript code, fully on the client side.[15] An example of a DOM-based XSS vulnerability is the bug found in 2011 in a number ofjQueryplugins.[16]Prevention strategies for DOM-based XSS attacks include very similar measures to traditional XSS prevention strategies but implemented inJavaScriptcode and contained in web pages (i.e. input validation and escaping).[17]SomeJavaScript frameworkshave built-in countermeasures against this and other types of attack — for exampleAngularJS.[18] Self-XSSis a form of XSS vulnerability that relies onsocial engineeringin order to trick the victim into executing malicious JavaScript code in their browser. Although it is technically not a true XSS vulnerability due to the fact it relies on socially engineering a user into executing code rather than a flaw in the affected website allowing an attacker to do so, it still poses the same risks as a regular XSS vulnerability if properly executed.[19] Mutated XSS happens when the attacker injects something that is seemingly safe but is rewritten and modified by the browser while parsing the markup. This makes it extremely hard to detect or sanitize within the website's application logic. An example is rebalancing unclosed quotation marks or even adding quotation marks to unquoted parameters on parameters to CSS font-family. There are several escaping schemes that can be used depending on where the untrusted string needs to be placed within an HTML document including HTML entity encoding, JavaScript escaping, CSS escaping, andURL (or percent) encoding.[20]Most web applications that do not need to accept rich data can use escaping to largely eliminate the risk of XSS attacks in a fairly straightforward manner. Performing HTML entity encoding only on thefive XML significant charactersis not always sufficient to prevent many forms of XSS attacks, security encoding libraries are usually easier to use.[20] Someweb template systemsunderstand the structure of the HTML they produce and automatically pick an appropriate encoder.[21][22] Many operators of particular web applications (e.g. forums and webmail) allow users to utilize a limited subset of HTML markup. 
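As a concrete illustration of the escaping discussed above, the following C sketch encodes the five XML-significant characters before untrusted text (here, a reflected search term) is written into a page; the function name and example string are illustrative.

```c
#include <stdio.h>

static void html_escape(const char *in, FILE *out) {
    for (; *in != '\0'; in++) {
        switch (*in) {
        case '&':  fputs("&amp;", out);  break;
        case '<':  fputs("&lt;", out);   break;
        case '>':  fputs("&gt;", out);   break;
        case '"':  fputs("&quot;", out); break;
        case '\'': fputs("&#39;", out);  break;
        default:   fputc(*in, out);      break;
        }
    }
}

int main(void) {
    fputs("You searched for: ", stdout);
    html_escape("<script>alert(1)</script>", stdout);   /* rendered as text, not executed */
    fputc('\n', stdout);
    return 0;
}
```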
When accepting HTML input from users (say,<b>very</b> large), output encoding (such as&lt;b&gt;very&lt;/b&gt; large) will not suffice since the user input needs to be rendered as HTML by the browser (so it shows as "verylarge", instead of "<b>very</b> large"). Stopping an XSS attack when accepting HTML input from users is much more complex in this situation. Untrusted HTML input must be run through anHTML sanitizationengine to ensure that it does not contain XSS code. Many validations rely on parsing out (blacklisting) specific "at risk" HTML tags such as theiframe tag, link and the script tag. There are several issues with this approach, for example sometimes seemingly harmless tags can be left out which when utilized correctly can still result in an XSS Another popular method is to strip user input of " and ' however this can also be bypassed as the payload can be concealed withobfuscation. Besides content filtering, other imperfect methods for cross-site scripting mitigation are also commonly used. One example is the use of additional security controls when handlingcookie-based user authentication. Many web applications rely on session cookies for authentication between individual HTTP requests, and because client-side scripts generally have access to these cookies, simple XSS exploits can steal these cookies.[23]To mitigate this particular threat (though not the XSS problem in general), many web applications tie session cookies to the IP address of the user who originally logged in, then only permit that IP to use that cookie.[24]This is effective in most situations (if an attacker is only after the cookie), but obviously breaks down in situations where an attacker is behind the sameNATedIP address orweb proxyas the victim, or the victim is changing his or hermobile IP.[24] Another mitigation present inInternet Explorer(since version 6),Firefox(since version 2.0.0.5),Safari(since version 4),Opera(since version 9.5) andGoogle Chrome, is anHttpOnlyflag which allows a web server to set a cookie that is unavailable to client-side scripts. While beneficial, the feature can neither fully prevent cookie theft nor prevent attacks within the browser.[25] WhileWeb 2.0andAjaxdevelopers require the use of JavaScript,[26]some web applications are written to allow operation without the need for any client-side scripts.[27]This allows users, if they choose, to disable scripting in their browsers before using the application. In this way, even potentially malicious client-side scripts could be inserted unescaped on a page, and users would not be susceptible to XSS attacks. Some browsers or browser plugins can be configured to disable client-side scripts on a per-domain basis. This approach is of limited value if scripting is allowed by default, since it blocks bad sites onlyafterthe user knows that they are bad, which is too late. Functionality that blocks all scripting and external inclusions by default and then allows the user to enable it on a per-domain basis is more effective. 
This has been possible for a long time in Internet Explorer (since version 4) by setting up its so called "Security Zones",[28]and in Opera (since version 9) using its "Site Specific Preferences".[29]A solution for Firefox and otherGecko-based browsers is the open sourceNoScriptadd-on which, in addition to the ability to enable scripts on a per-domain basis, provides some XSS protection even when scripts are enabled.[30] The most significant problem with blocking all scripts on all websites by default is substantial reduction in functionality and responsiveness (client-side scripting can be much faster than server-side scripting because it does not need to connect to a remote server and the page orframedoes not need to be reloaded).[31]Another problem with script blocking is that many users do not understand it, and do not know how to properly secure their browsers. Yet another drawback is that many sites do not work without client-side scripting, forcing users to disable protection for that site and opening their systems to vulnerabilities.[32]The Firefox NoScript extension enables users to allow scripts selectively from a given page while disallowing others on the same page. For example, scripts from example.com could be allowed, while scripts from advertisingagency.com that are attempting to run on the same page could be disallowed.[33] Content Security Policy(CSP) allows HTML documents to opt in to disabling some scripts while leaving others enabled.[34]The browser checks each script against a policy before deciding whether to run it. As long as the policy only allows trustworthy scripts and disallowsdynamic code loading, the browser will not run programs from untrusted authors regardless of the HTML document's structure. Modern CSP policies allow usingnoncesto mark scripts in the HTML document as safe to run instead of keeping the policy entirely separate from the page content.[35][36]As long as trusted nonces only appear on trustworthy scripts, the browser will not run programs from untrusted authors. Some large application providers report having successfully deployed nonce-based policies.[37][38] Trusted types[39]changesWeb APIsto check that values have beentrademarkedas trusted.  As long as programs only trademark trustworthy values, an attacker who controls a JavaScriptstring valuecannot cause XSS.  Trusted types are designed to beauditablebyblue teams. Another defense approach is to use automated tools that will remove XSS malicious code in web pages, these tools usestatic analysisand/or pattern matching methods to identify malicious codes potentially and secure them using methods like escaping.[40] When a cookie is set with theSameSite=Strictparameter, it is stripped from all cross-origin requests. When set withSameSite=Lax, it is stripped from all non-"safe" cross-origin requests (that is, requests other than GET, OPTIONS, and TRACE which have read-only semantics).[41]The feature is implemented inGoogle Chromesince version 63 andFirefoxsince version 60.[42]
https://en.wikipedia.org/wiki/Cross-site_scripting
printfis aC standard libraryfunctionthatformatstextand writes it tostandard output. The function accepts a formatc-stringargumentand avariablenumber of value arguments that the functionserializesper the format string. Mismatch between the format specifiers and count andtypeof values results inundefined behaviorand possibly programcrashor othervulnerability. The format string isencodedas atemplate languageconsisting of verbatim text andformat specifiersthat each specify how to serialize a value. As the format string is processed left-to-right, a subsequent value is used for each format specifier found. A format specifier starts with a%character and has one or more following characters that specify how to serialize a value. The standard library provides other, similar functions that form a family ofprintf-likefunctions. The functions share the same formatting capabilities but provide different behavior such as output to a different destination or safety measures that limit exposure to vulnerabilities. Functions of the printf-family have been implemented in other programming contexts (i.e.languages) with the same or similarsyntaxandsemantics. ThescanfC standard library function complements printf by providing formatted input (a.k.a.lexing, a.k.a.parsing) via a similar format string syntax. The name,printf, is short forprint formattedwhereprintrefers to output to aprinteralthough the function is not limited to printer output. Today, print refers to output to any text-based environment such as aterminalor afile. Early programming languages likeFortranused special statements with different syntax from other calculations to build formatting descriptions.[1]In this example, the format is specified on line601, and thePRINT[a]command refers to it by line number: Hereby: An output with input arguments100,200, and1500.25might look like this: In 1967,BCPLappeared.[2]Its library included thewritefroutine.[3]An example application looks like this: Hereby: In 1968,ALGOL 68had a more function-likeAPI, but still used special syntax (the$delimiters surround special formatting syntax): In contrast to Fortran, using normal function calls and data types simplifies the language and compiler, and allows the implementation of the input/output to be written in the same language. These advantages were thought to outweigh the disadvantages (such as a complete lack oftype safetyin many instances) up until the 2000s, and in most newer languages of that era I/O is not part of the syntax. People have since learned[4]that this potentially results in consequences, ranging from security exploits to hardware failures (e.g., phone's networking capabilities being permanently disabled after trying to connect to an access point named "%p%s%s%s%s%n"[5]). Modern languages, such asC++20and later, tend to include format specifications as a part of the language syntax,[6]which restore type safety in formatting to an extent, and allow the compiler to detect some invalid combinations of format specifiers and data types at compile time. In 1973,printfwas included as a C standard library routine as part ofVersion 4 Unix.[7] In 1990, theprintfshellcommand, modeled after the C standard library function, was included with4.3BSD-Reno.[8]In 1991, aprintfcommand was included with GNU shellutils (now part ofGNU Core Utilities). The need to do something about the range of problems resulting from lack of type safety has prompted attempts to make the C++ compilerprintf-aware. 
The-Wformatoption ofGCCallows compile-time checks toprintfcalls, enabling the compiler to detect a subset of invalid calls (and issue either a warning or an error, stopping the compilation altogether, depending on other flags).[9] Since the compiler is inspectingprintfformat specifiers, enabling this effectively extends the C++ syntax by making formatting a part of it. To address usability issues with the existingC++input/output support, as well as avoid safety issues of printf[10]theC++ standard librarywas revised[11]to support a new type-safe formatting starting withC++20.[12]The approach ofstd::formatresulted from incorporating Victor Zverovich'slibfmt[13]API into the language specification[14](Zverovich wrote[15]the first draft of the new format proposal); consequently,libfmtis an implementation of the C++20 format specification. InC++23, another function,std::print, was introduced that combines formatting and outputting and therefore is a functional replacement forprintf().[16] As the format specification has become a part of the language syntax, a C++ compiler is able to prevent invalid combinations of types and format specifiers in many cases. Unlike the-Wformatoption, this is not an optional feature. The format specification oflibfmtandstd::formatis, in itself, an extensible "mini-language" (referred to as such in the specification),[17]an example of adomain-specific language. As such,std::print, completes a historical cycle; bringing the state-of-the-art (as of 2024) back to what it was in the case of Fortran's firstPRINTimplementation in the 1950s. Formatting of a value is specified as markup in the format string. For example, the following outputsYour age isand then the value of the variableagein decimal format. The syntax for a format specifier is: The parameter field is optional. If included, then matching specifiers to values isnotsequential. The numeric valuenselects the n-th value parameter. This is aPOSIXextension; notC99.[citation needed] This field allows for using the same value multiple times in a format string instead of having to pass the value multiple times. If a specifier includes this field, then subsequent specifiers must also. For example, outputs:17 0x11; 16 0x10 This field is particularly useful forlocalizingmessages to differentnatural languagesthat use differentword orders. InWindows API, support for this feature is via a different function,printf_p. The flags field can be zero or more of (in any order): The width field specifies theminimumnumber of characters to output. If the value can be represented in fewer characters, then the value is left-padded with spaces so that output is the number of characters specified. If the value requires more characters, then the output is longer than the specified width. A value is never truncated. For example,printf("%3d",12);specifies a width of 3 and outputs12with a space on the left to output 3 characters. The callprintf("%3d",1234);outputs1234which is 4 characters long since that is the minimum width for that value even though the width specified is 3. If the width field is omitted, the output is the minimum number of characters for the value. If the field is specified as*, then the width value is read from the list of values in the call.[18]For example,printf("%*d",3,10);outputs10where the second parameter,3, is the width (matches with*) and10is the value toserialize(matches withd). 
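A short sketch of these fields in use; the variable age follows the "Your age is" example above, and the other values are illustrative. The expected output is shown in the comments.

```c
#include <stdio.h>

int main(void) {
    int age = 27;
    printf("Your age is %d\n", age);   /* Your age is 27 */

    printf("[%5d]\n",  42);            /* [   42]  width 5, right-aligned */
    printf("[%-5d]\n", 42);            /* [42   ]  '-' flag: left-aligned */
    printf("[%05d]\n", 42);            /* [00042]  '0' flag: zero-padded  */
    printf("[%+d]\n",  42);            /* [+42]    '+' flag: sign always shown */
    printf("[%*d]\n", 5, 42);          /* [   42]  width taken from an argument */
    return 0;
}
```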
Though not part of the width field, a leading zero is interpreted as the zero-padding flag mentioned above, and a negative value is treated as the positive value in conjunction with the left-alignment-flag also mentioned above. The width field can be used to format values as a table (tabulated output). But, columns do not align if any value is larger than fits in the width specified. For example, notice that the last line value (1234) does not fit in the first column of width 3 and therefore the column is not aligned. The precision field usually specifies amaximumlimit of the output, depending on the particular formatting type. Forfloating-pointnumeric types, it specifies the number of digits to the right of the decimal point to which the output should be rounded; for%gand%Git specifies the total number ofsignificant digits(before and after the decimal, not including leading or trailing zeroes) to round to. For thestring type, it limits the number of characters that should be output, after which the string is truncated. The precision field may be omitted, or a numeric integer value, or a dynamic value when passed as another argument when indicated by an asterisk (*). For example,printf("%.*s",3,"abcdef");outputsabc. The length field can be omitted or be any of: For floating-point types, this is ignored.floatarguments are always promoted todoublewhen used in avarargscall.[19] Platform-specific length options came to exist prior to widespread use of the ISO C99 extensions, including: ISO C99 includes theinttypes.hheader file that includes a number ofmacrosfor platform-independentprintfcoding. For example:printf("%"PRId64,t);specifies decimal format for a64-bit signed integer. Since the macros evaluate to astring literal, and the compilerconcatenatesadjacent string literals, the expression"%"PRId64compiles to a single string. Macros include: The type field can be any of: A common way to handle formatting with a custom data type is to format the custom data type value into astring, then use the%sspecifier to include the serialized value in a larger message. Some printf-like functions allow extensions to theescape-character-basedmini-language, thus allowing the programmer to use a specific formatting function for non-builtin types. One is the (nowdeprecated)glibc'sregister_printf_function(). However, it is rarely used due to the fact that it conflicts withstatic format string checking. Another isVstr custom formatters, which allows adding multi-character format names. Some applications (like theApache HTTP Server) include their own printf-like function, and embed extensions into it. However these all tend to have the same problems thatregister_printf_function()has. TheLinux kernelprintkfunction supports a number of ways to display kernel structures using the generic%pspecification, byappendingadditional format characters.[23]For example,%pI4prints anIPv4 addressin dotted-decimal form. This allows static format string checking (of the%pportion) at the expense of full compatibility with normal printf. Extra value arguments are ignored, but if the format string has more format specifiers than value arguments passed, the behavior is undefined. For some C compilers, an extra format specifier results in consuming a value even though there isn't one which allows theformat string attack. Generally, for C, arguments arepassed on the stack. If too few arguments are passed, then printf can read past the end of the stack frame, thus allowing an attacker to read the stack. 
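The following sketch contrasts the unsafe pattern that gives rise to the format string attack mentioned above with the safe constant-format idiom; the helper name is illustrative.

```c
#include <stdio.h>

static void log_message(const char *user_supplied) {
    /* Dangerous: if user_supplied were used as the format string, specifiers
       such as "%x" or "%n" inside it would take effect with no matching
       arguments (undefined behavior):
           printf(user_supplied);
       Safe: keep the format constant and pass the untrusted text as data. */
    printf("%s\n", user_supplied);
}

int main(void) {
    log_message("%x %x %n");   /* printed verbatim by the safe call */
    return 0;
}
```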
Some compilers, likethe GNU Compiler Collection, willstatically checkthe format strings of printf-like functions and warn about problems (when using the flags-Wallor-Wformat). GCC will also warn about user-defined printf-style functions if the non-standard "format"__attribute__is applied to the function. The format string is often astring literal, which allowsstatic analysisof the function call. However, the format string can be the value of avariable, which allows for dynamic formatting but also a security vulnerability known as anuncontrolled format stringexploit. Although an output function on the surface,printfallows writing to a memory location specified by an argument via%n. This functionality is occasionally used as a part of more elaborate format-string attacks.[24] The%nfunctionality also makesprintfaccidentallyTuring-completeeven with a well-formed set of arguments. A game of tic-tac-toe written in the format string is a winner of the 27thIOCCC.[25] Variants ofprintfin the C standard library include: fprintfoutputs to afileinstead of standard output. sprintfwrites to astring bufferinstead of standard output. snprintfprovides a level of safety oversprintfsince the caller provides a lengthnthat is the length of the output buffer in bytes (including space for the trailing nul). asprintfprovides for safety by accepting a stringhandle(char**) argument. The functionallocatesa buffer of sufficient size to contain the formatted text and outputs the buffer via the handle. For each function of the family, including printf, there is also a variant that accepts a singleva_listargument rather than a variable list of arguments. Typically, these variants start with "v". For example:vprintf,vfprintf,vsprintf. Generally, printf-like functions return the number of bytes output or -1 to indicate failure.[26] The following list includes notable programming languages that provide (directly or via a standard library) functionality that is the same or similar to the C printf-like functions. Excluded are languages that use format strings that deviate from the style in this article (such asAMPLandElixir), languages that inherit their implementation from theJVMor other environment (such asClojureandScala), and languages that do not have a standard native printf implementation but have external libraries which emulate printf behavior (such asJavaScript).
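A small sketch of the bounded behaviour of snprintf described above: the output never exceeds the caller-supplied size, and the return value reports how long the untruncated output would have been (values illustrative).

```c
#include <stdio.h>

int main(void) {
    char buf[8];
    int needed = snprintf(buf, sizeof buf, "value=%d", 123456);

    printf("buf=\"%s\" needed=%d\n", buf, needed);
    /* buf holds the truncated "value=1"; needed is 12, and needed >= sizeof buf
       signals that truncation occurred. */
    return 0;
}
```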
https://en.wikipedia.org/wiki/Printf
scanf, short for scan formatted, is aCstandard libraryfunctionthat reads andparsestext fromstandard input. The function accepts a format string parameter that specifies the layout of inputtext. The function parses input text and loads values into variables based ondata type. Similar functions, with other names, predate C, such asreadfinALGOL 68. Input format strings are complementary to output format strings (seeprintf), which provide formatted output (templating). Mike Lesk'sportable input/output library, includingscanf, officially became part of Unix inVersion 7.[1] Thescanffunction reads input for numbers and otherdatatypesfromstandard input. The following C code reads a variable number of unformatted decimalintegersfrom standard input and prints each of them out on separate lines: For input: The output is: To print out a word: No matter what the data type the programmer wants the program to read, the arguments (such as&nabove) must bepointerspointing to memory. Otherwise, the function will not perform correctly because it will be attempting to overwrite the wrong sections of memory, rather than pointing to the memory location of the variable you are attempting to get input for. In the last example an address-of operator (&) isnotused for the argument: aswordis the name of anarrayofchar, as such it is (in all contexts in which it evaluates to an address) equivalent to a pointer to the first element of the array. While the expression&wordwould numerically evaluate to the same value, semantically, it has an entirely different meaning in that it stands for the address of the whole array rather than an element of it. This fact needs to be kept in mind when assigningscanfoutput to strings. Asscanfis designated to read only from standard input, many programming languages withinterfaces, such asPHP, have derivatives such assscanfandfscanfbut notscanfitself. The formattingplaceholdersinscanfare more or less the same as that inprintf, its reverse function. As in printf, the POSIX extensionn$is defined.[2] There are rarely constants (i.e., characters that are not formattingplaceholders) in a format string, mainly because a program is usually not designed to read known data, althoughscanfdoes accept these if explicitly specified. The exception is one or morewhitespace characters, which discards all whitespace characters in the input.[2] Some of the most commonly used placeholders follow: The above can be used in compound with numeric modifiers and thel,Lmodifiers which stand for "long" and "long long" in between the percent symbol and the letter. There can also be numeric values between the percent symbol and the letters, preceding thelongmodifiers if any, that specifies the number of characters to be scanned. An optionalasterisk(*) right after the percent symbol denotes that the datum read by this format specifier is not to be stored in a variable. No argument behind the format string should be included for this dropped variable. Theffmodifier in printf is not present in scanf, causing differences between modes of input and output. Thellandhhmodifiers are not present in the C90 standard, but are present in the C99 standard.[3] An example of a format string is The above format string scans the first seven characters as a decimal integer, then reads the remaining as a string until a space, newline, or tab is found, then consumes whitespace until the first non-whitespace character is found, then consumes that character, and finally scans the remaining characters as adouble. 
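The literal format string did not survive in this copy, but from the description it might be reconstructed roughly as below; sscanf is used so the example is self-contained, and a width is added to %s purely as a safety measure (see the following paragraph).

```c
#include <stdio.h>

int main(void) {
    int n;
    char word[32];
    char c;
    double d;

    /* at most seven digits as a decimal integer, a whitespace-delimited string,
       skip whitespace and take one character, then a double */
    int matched = sscanf("1234567abc   x 3.5", "%7d%31s %c%lf", &n, word, &c, &d);

    printf("%d: %d %s %c %f\n", matched, n, word, c, d);   /* 4: 1234567 abc x 3.500000 */
    return 0;
}
```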
Therefore, a robust program must check whether the scanf call succeeded and take appropriate action. If the input was not in the correct format, the erroneous data will still be on the input stream and must be discarded before new input can be read. An alternative method, which avoids this, is to use fgets and then examine the string read in. The last step can be done by sscanf, for example. In the case of the many float type characters a, e, f, g, many implementations choose to collapse most into the same parser. Microsoft MSVCRT does it with e, f, g,[4] while glibc does so with all four.[2] ISO C99 includes the inttypes.h header file that includes a number of macros for use in platform-independent scanf coding. These must be outside double-quotes, e.g. scanf("%" SCNd64 "\n", &t); Example macros include: scanf is vulnerable to format string attacks. Great care should be taken to ensure that the formatting string includes limitations for string and array sizes. In most cases the input string size from a user is arbitrary and cannot be determined before the scanf function is executed. This means that %s placeholders without length specifiers are inherently insecure and exploitable for buffer overflows. Another potential problem is allowing dynamic formatting strings, for example formatting strings stored in configuration files or other user-controlled files. In this case the allowed input length of string sizes cannot be specified unless the formatting string is checked beforehand and limitations are enforced. Related to this are additional or mismatched formatting placeholders which do not match the actual vararg list. These placeholders might be partially extracted from the stack or contain undesirable or even insecure pointers, depending on the particular implementation of varargs.
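A minimal sketch of those precautions: the %s conversion is bounded to the destination buffer and the return value is checked before the results are trusted (names are illustrative).

```c
#include <stdio.h>

int main(void) {
    char name[64];
    int age;

    /* "%63s" leaves room for the terminating null byte in the 64-byte buffer */
    if (scanf("%63s %d", name, &age) != 2) {
        fprintf(stderr, "malformed input\n");
        /* the offending input is still pending on stdin and would have to be
           read and discarded before retrying */
        return 1;
    }
    printf("%s is %d\n", name, age);
    return 0;
}
```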
https://en.wikipedia.org/wiki/Scanf
Incomputing,syslog(/ˈsɪslɒɡ/) is a standard formessage logging. It allows separation of the software that generates messages, the system that stores them, and the software that reports and analyzes them. Each message is labeled with a facility code, indicating the type of system generating the message, and is assigned a severity level. Computer system designers may use syslog for system management and security auditing as well as general informational, analysis, and debugging messages. A wide variety of devices, such as printers, routers, and message receivers across many platforms use the syslog standard. This permits the consolidation of logging data from different types of systems in a central repository. Implementations of syslog exist for many operating systems. When operating over a network, syslog uses aclient-serverarchitecture where asyslog serverlistens for and logs messages coming from clients. Syslog was developed in the 1980s byEric Allmanas part of theSendmailproject.[1]It was readily adopted by other applications and has since become the standard logging solution onUnix-likesystems.[2]A variety of implementations also exist on other operating systems and it is commonly found in network devices, such asrouters.[3] Syslog originally functioned as ade facto standard, without any authoritative published specification, and many implementations existed, some of which were incompatible. TheInternet Engineering Task Forcedocumented the status quo in RFC 3164 in August 2001. It was standardized by RFC 5424 in March 2009.[4] Various companies have attempted to claim patents for specific aspects of syslog implementations.[5][6]This has had little effect on the use and standardization of the protocol.[citation needed] The information provided by the originator of a syslog message includes the facility code and the severity level. The syslog software adds information to the information header before passing the entry to the syslog receiver. Such components include an originator process ID, atimestamp, and the hostname orIP addressof the device. A facility code is used to specify the type of system that is logging the message. Messages with different facilities may be handled differently.[7]The list of facilities available is described by the standard:[4]: 9 The mapping between facility code and keyword is not uniform in different operating systems and syslog implementations.[8] The list of severities of issues is also described by the standard:[4]: 10 The meaning of severity levels other thanEmergencyandDebugare relative to the application. For example, if the purpose of the system is to process transactions to update customer account balance information, an error in the final step should be assignedAlertlevel. However, an error occurring in an attempt to display theZIP codeof the customer may be assignedErroror evenWarninglevel. The server process that handles display of messages usually includes all lower (more severe) levels when the display of less severe levels is requested. That is, if messages are separated by individual severity, aWarninglevel entry will also be included when filtering forNotice,InfoandDebugmessages.[12] In RFC 3164, the message component (known as MSG) was specified as having these fields:TAG, which should be the name of the program or process that generated the message, andCONTENTwhich contains the details of the message. Described in RFC 5424,[4]"MSG is what was called CONTENT in RFC 3164. The TAG is now part of the header, but not as a single field. 
The TAG has been split into APP-NAME, PROCID, and MSGID. This does not totally resemble the usage of TAG, but provides the same functionality for most of the cases." Popular syslog tools such asNXLog,Rsyslogconform to this new standard. The content field should be encoded in aUTF-8character set and octet values in the traditionalASCII control character rangeshould be avoided.[13][4] Generated log messages may be directed to various destinations includingconsole, files, remote syslog servers, or relays. Most implementations provide a command line utility, often calledlogger, as well as asoftware library, to send messages to the log.[14] To display and monitor the collected logs one needs to use a client application or access the log file directly on the system. The basic command line tools aretailandgrep. The log servers can be configured to send the logs over the network (in addition to the local files). Some implementations include reporting programs for filtering and displaying of syslog messages. When operating over a network, syslog uses aclient-serverarchitecture where the server listens on awell-knownorregistered portfor protocol requests from clients. Historically the most common transport layer protocol for network logging has beenUser Datagram Protocol(UDP), with the server listening on port 514.[15]Because UDP lacks congestion control mechanisms,Transmission Control Protocol(TCP) port 6514 is used;Transport Layer Securityis also required in implementations and recommended for general use.[16][17] Since each process, application, and operating system was written independently, there is little uniformity to the payload of the log message. For this reason, no assumption is made about its formatting or contents. A syslog message is formatted (RFC 5424 gives theAugmented Backus–Naur form(ABNF) definition), but its MSG field is not. The network protocol issimplex communication, with no means of acknowledging the delivery to the originator. Various groups are working on draft standards detailing the use of syslog for more than just network and security event logging, such as its proposed application within the healthcare environment.[18] Regulations, such as theSarbanes–Oxley Act,PCI DSS,HIPAA, and many others, require organizations to implement comprehensive security measures, which often include collecting and analyzing logs from many different sources. The syslog format has proven effective in consolidating logs, as there are many open-source and proprietary tools for reporting and analysis of these logs. Utilities exist for conversion fromWindows Event Logand other log formats to syslog. Managed Security Service Providersattempt to apply analytical techniques and artificial intelligence algorithms to detect patterns and alert customers to problems.[19] The Syslog protocol is defined byRequest for Comments(RFC) documents published by theInternet Engineering Task Force(Internet standards). The following is a list of RFCs that define the syslog protocol:[20]
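On POSIX systems the library interface referred to above is <syslog.h>; the sketch below logs two messages under an illustrative identifier. From shell scripts, the logger utility mentioned above serves the same purpose, e.g. logger -p user.info "service started".

```c
#include <syslog.h>

int main(void) {
    /* "exampled" becomes the tag/APP-NAME; LOG_PID appends the process ID;
       LOG_USER is the facility code. */
    openlog("exampled", LOG_PID, LOG_USER);

    syslog(LOG_INFO, "service started");
    syslog(LOG_ERR, "cannot open configuration file: %s", "example.conf");

    closelog();
    return 0;
}
```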
https://en.wikipedia.org/wiki/Syslog
Improper input validation[1] or unchecked user input is a type of vulnerability in computer software that may be used for security exploits.[2] This vulnerability is caused when "[t]he product does not validate or incorrectly validates input that can affect the control flow or data flow of a program."[1] Examples include buffer overflows, cross-site scripting, SQL injection, and uncontrolled format strings.
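A minimal sketch of what such validation can look like in C: a numeric field from an untrusted source is accepted only if it parses completely and falls within an expected range (names and limits are illustrative).

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int parse_port(const char *text, int *out) {
    char *end;
    errno = 0;
    long value = strtol(text, &end, 10);

    if (errno != 0 || end == text || *end != '\0')
        return -1;                     /* not a complete, well-formed number */
    if (value < 1 || value > 65535)
        return -1;                     /* outside the valid port range */
    *out = (int)value;
    return 0;
}

int main(void) {
    int port;
    if (parse_port("8080; rm -rf /", &port) != 0)   /* hostile trailing data */
        puts("rejected");                            /* prints "rejected" */
    return 0;
}
```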
https://en.wikipedia.org/wiki/Improper_input_validation
Incomputer science, atype punningis any programming technique that subverts or circumvents thetype systemof aprogramming languagein order to achieve an effect that would be difficult or impossible to achieve within the bounds of the formal language. InCandC++, constructs such aspointertype conversionandunion— C++ addsreferencetype conversion andreinterpret_castto this list — are provided in order to permit many kinds of type punning, although some kinds are not actually supported by the standard language. In thePascalprogramming language, the use ofrecordswithvariantsmay be used to treat a particular data type in more than one manner, or in a manner not normally permitted. One classic example of type punning is found in theBerkeley socketsinterface. The function to bind an opened but uninitialized socket to anIP addressis declared as follows: Thebindfunction is usually called as follows: The Berkeley sockets library fundamentally relies on the fact that inC, a pointer tostruct sockaddr_inis freely convertible to a pointer tostruct sockaddr; and, in addition, that the two structure types share the same memory layout. Therefore, a reference to the structure fieldmy_addr->sin_family(wheremy_addris of typestruct sockaddr*) will actually refer to the fieldsa.sin_family(wheresais of typestruct sockaddr_in). In other words, the sockets library uses type punning to implement a rudimentary form ofpolymorphismorinheritance. Often seen in the programming world is the use of "padded" data structures to allow for the storage of different kinds of values in what is effectively the same storage space. This is often seen when two structures are used in mutual exclusivity for optimization. Not all examples of type punning involve structures, as the previous example did. Suppose we want to determine whether afloating-pointnumber is negative. We could write: However, supposing that floating-point comparisons are expensive, and also supposing thatfloatis represented according to theIEEE floating-point standard, and integers are 32 bits wide, we could engage in type punning to extract thesign bitof the floating-point number using only integer operations: Note that the behaviour will not be exactly the same: in the special case ofxbeingnegative zero, the first implementation yieldsfalsewhile the second yieldstrue. Also, the first implementation will returnfalsefor anyNaNvalue, but the latter might returntruefor NaN values with the sign bit set. Lastly we have the problem wherein the storage of the floating point data may be in big endian or little endian memory order and thus the sign bit could be in the least significant byte or the most significant byte. Therefore the use of type punning with floating point data is a questionable method with unpredictable results. This kind of type punning is more dangerous than most. Whereas the former example relied only on guarantees made by the C programming language about structure layout and pointer convertibility, the latter example relies on assumptions about a particular system's hardware. The C99 Language Specification ( ISO9899:1999 ) has the following warning in section 6.3.2.3 Pointers : "A pointer to an object or incomplete type may be converted to a pointer to a different object or incomplete type. If the resulting pointer is not correctly aligned for the pointed-to type, the behavior is undefined." Therefore one should be very careful with the use of type punning. 
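For completeness, two ways of inspecting the sign bit that stay within the language rules are sketched below, assuming a 32-bit IEEE-754 float: copying the representation with memcpy, and the union-based punning that the next passage describes (allowed in C, undefined behavior in C++). Like the pointer version above, both report true for negative zero and for NaNs with the sign bit set.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static bool is_negative_memcpy(float x) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);    /* copy the object representation */
    return (bits >> 31) != 0;          /* IEEE-754 sign bit */
}

static bool is_negative_union(float x) {
    union { float f; uint32_t u; } pun;
    pun.f = x;                         /* write one member ... */
    return (pun.u >> 31) != 0;         /* ... read the other (type punning) */
}

int main(void) {
    printf("%d %d\n", is_negative_memcpy(-1.5f), is_negative_union(-0.0f));  /* 1 1 */
    return 0;
}
```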
Some situations, such astime-criticalcode that the compiler otherwise fails tooptimize, may require dangerous code. In these cases, documenting all such assumptions incomments, and introducingstatic assertionsto verify portability expectations, helps to keep the codemaintainable. Practical examples of floating-point punning includefast inverse square rootpopularized byQuake III, fast FP comparison as integers,[1]and finding neighboring values by incrementing as an integer (implementingnextafter).[2] In addition to the assumption about bit-representation of floating-point numbers, the above floating-point type-punning example also violates the C language's constraints on how objects are accessed:[3]the declared type ofxisfloatbut it is read through an expression of typeunsigned int. On many common platforms, this use of pointer punning can create problems if different pointers arealigned in machine-specific ways. Furthermore, pointers of different sizes canalias accesses to the same memory, causing problems that are unchecked by the compiler. Even when data size and pointer representation match, however, compilers can rely on the non-aliasing constraints to perform optimizations that would be unsafe in the presence of disallowed aliasing. A naive attempt at type-punning can be achieved by using pointers: (The following running example assumes IEEE-754 bit-representation for typefloat.) The C standard's aliasing rules state that an object shall have its stored value accessed only by an lvalue expression of a compatible type.[4]The typesfloatandint32_tare not compatible, therefore this code's behavior isundefined. Although on GCC and LLVM this particular program compiles and runs as expected, more complicated examples may interact with assumptions made bystrict aliasingand lead to unwanted behavior. The option-fno-strict-aliasingwill ensure correct behavior of code using this form of type-punning, although using other forms of type punning is recommended.[5] In C, but not in C++, it is sometimes possible to perform type punning via aunion. Accessingmy_union.iafter most recently writing to the other member,my_union.d, is an allowed form of type-punning in C,[6]provided that the member read is not larger than the one whose value was set (otherwise the read hasunspecified behavior[7]). The same is syntactically valid but hasundefined behaviorin C++,[8]however, where only the last-written member of aunionis considered to have any value at all. For another example of type punning, seeStride of an array. InC++20, thestd::bit_castfunction allows type punning with no undefined behavior. It also allows the function be labeledconstexpr. A variant record permits treating a data type as multiple kinds of data depending on which variant is being referenced. In the following example,integeris presumed to be 16 bit, whilelongintandrealare presumed to be 32, while character is presumed to be 8 bit: In Pascal, copying a real to an integer converts it to the truncated value. This method would translate the binary value of the floating-point number into whatever it is as a long integer (32 bit), which will not be the same and may be incompatible with the long integer value on some systems. These examples could be used to create strange conversions, although, in some cases, there may be legitimate uses for these types of constructs, such as for determining locations of particular pieces of data. 
In the following example a pointer and a longint are both presumed to be 32 bit: Where "new" is the standard routine in Pascal for allocating memory for a pointer, and "hex" is presumably a routine to print the hexadecimal string describing the value of an integer. This would allow the display of the address of a pointer, something which is not normally permitted. (Pointers cannot be read or written, only assigned.) Assigning a value to an integer variant of a pointer would allow examining or writing to any location in system memory: This construct may cause a program check or protection violation if address 0 is protected against reading on the machine the program is running upon or the operating system it is running under. The reinterpret cast technique from C/C++ also works in Pascal. This can be useful, when eg. reading dwords from a byte stream, and we want to treat them as float. Here is a working example, where we reinterpret-cast a dword to a float: InC#(and other .NET languages), type punning is a little harder to achieve because of the type system, but can be done nonetheless, using pointers or struct unions. C# only allows pointers to so-called native types, i.e. any primitive type (exceptstring), enum, array or struct that is composed only of other native types. Note that pointers are only allowed in code blocks marked 'unsafe'. Struct unions are allowed without any notion of 'unsafe' code, but they do require the definition of a new type. RawCILcan be used instead of C#, because it doesn't have most of the type limitations. This allows one to, for example, combine two enum values of a generic type: This can be circumvented by the following CIL code: ThecpblkCIL opcode allows for some other tricks, such as converting a struct to a byte array:
https://en.wikipedia.org/wiki/Reinterpret_cast
Incomputer science,type conversion,[1][2]type casting,[1][3]type coercion,[3]andtype juggling[4][5]are different ways of changing anexpressionfrom onedata typeto another. An example would be the conversion of anintegervalue into afloating pointvalue or its textual representation as astring, and vice versa. Type conversions can take advantage of certain features oftype hierarchiesordata representations. Two important aspects of a type conversion are whether it happensimplicitly(automatically) orexplicitly,[1][6]and whether the underlying data representation is converted from one representation into another, or a given representation is merelyreinterpretedas the representation of another data type.[6][7]In general, bothprimitiveandcompound data typescan be converted. Eachprogramming languagehas its own rules on how types can be converted. Languages withstrong typingtypically do little implicit conversion and discourage the reinterpretation of representations, while languages withweak typingperform many implicit conversions between data types. Weak typing language often allow forcing thecompilerto arbitrarily interpret a data item as having different representations—this can be a non-obvious programming error, or a technical method to directly deal with underlying hardware. In most languages, the wordcoercionis used to denote animplicitconversion, either during compilation or duringrun time. For example, in an expression mixing integer and floating point numbers (like 5 + 0.1), the compiler will automatically convert integer representation into floating point representation so fractions are not lost. Explicit type conversions are either indicated by writing additional code (e.g. adding type identifiers or calling built-inroutines) or by coding conversion routines for the compiler to use when it otherwise would halt with a type mismatch. In mostALGOL-like languages, such asPascal,Modula-2,AdaandDelphi,conversionandcastingare distinctly different concepts. In these languages,conversionrefers to either implicitly or explicitly changing a value from one data type storage format to another, e.g. a 16-bit integer to a 32-bit integer. The storage needs may change as a result of the conversion, including a possible loss of precision or truncation. The wordcast, on the other hand, refers to explicitly changing theinterpretationof thebit patternrepresenting a value from one type to another. For example, 32 contiguous bits may be treated as an array of 32 Booleans, a 4-byte string, an unsigned 32-bit integer or an IEEE single precision floating point value. Because the stored bits are never changed, the programmer must know low level details such as representation format, byte order, and alignment needs, to meaningfully cast. In the C family of languages andALGOL 68, the wordcasttypically refers to anexplicittype conversion (as opposed to an implicit conversion), causing some ambiguity about whether this is a re-interpretation of a bit-pattern or a real data representation conversion. More important is the multitude of ways and rules that apply to what data type (or class) is located by a pointer and how a pointer may be adjusted by the compiler in cases like object (class) inheritance. Adaprovides a generic library function Unchecked_Conversion.[8] Implicit type conversion, also known ascoercionortype juggling, is an automatic type conversion by thecompiler. Someprogramming languagesallow compilers to provide coercion; others require it. 
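For instance, in C the compiler supplies these conversions automatically; the following sketch (values illustrative) shows coercion in a mixed expression like the 5 + 0.1 case above, the truncation discussed in the next passage, and integer promotion.

```c
#include <stdio.h>

int main(void) {
    int    i = 5;
    double d = i + 0.1;        /* i is implicitly converted to double: 5.1 */
    printf("%f\n", d);

    int truncated = 7.9;       /* double to int: the fraction is discarded, giving 7 */
    printf("%d\n", truncated);

    char c = 'A';
    int  promoted = c + 1;     /* c is promoted to int before the addition: 66 */
    printf("%d\n", promoted);
    return 0;
}
```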
In a mixed-type expression, data of one or more subtypes can be converted to a supertype as needed at runtime so that the program will run correctly. For example, it is legal C code to mix the variables d, l, and i from the sketch above in one expression: although d, l, and i belong to different data types, they will be automatically converted to a common data type each time a comparison or assignment is executed.

This behavior should be used with caution, as unintended consequences can arise. Data can be lost when converting representations from floating-point to integer, as the fractional components of the floating-point values will be truncated (rounded toward zero). Conversely, precision can be lost when converting representations from integer to floating-point, since a floating-point type may be unable to exactly represent all possible values of some integer type. For example, float might be an IEEE 754 single precision type, which cannot represent the integer 16777217 exactly, while a 32-bit integer type can. This can lead to unintuitive behavior: on compilers that implement float as IEEE single precision and int as at least 32 bits, an int variable i_value holding 16777217 will compare equal to a float variable f_value holding 16777216.0, i.e. the comparison i_value == f_value yields 1 (true). This odd behavior is caused by an implicit conversion of i_value to float when it is compared with f_value. The conversion causes loss of precision, which makes the values equal before the comparison. The important takeaway is that implicit conversions in mixed-type expressions can silently lose information, so such comparisons and assignments deserve particular care.

One special case of implicit type conversion is type promotion, where an object is automatically converted into another data type representing a superset of the original type. Promotions are commonly used with types smaller than the native type of the target platform's arithmetic logic unit (ALU), before arithmetic and logical operations, to make such operations possible, or more efficient if the ALU can work with more than one type. C and C++ perform such promotion for objects of Boolean, character, wide character, enumeration, and short integer types, which are promoted to int, and for objects of type float, which are promoted to double. Unlike some other type conversions, promotions never lose precision or modify the value stored in the object. Java performs similar promotions, for example promoting byte, short, and char operands to int in arithmetic expressions.

Explicit type conversion, also called type casting, is a type conversion which is explicitly defined within a program (instead of being done automatically according to the rules of the language for implicit type conversion); it is requested by the user in the program. There are several kinds of explicit conversion. In object-oriented programming languages, objects can also be downcast: a reference of a base class is cast to one of its derived classes.

In C#, type conversion can be made in a safe or unsafe (i.e., C-like) manner, the former called a checked type cast.[9] In C++ a similar effect can be achieved using C++-style cast syntax.

In Eiffel, the notion of type conversion is integrated into the rules of the type system. The Assignment Rule says that an assignment such as x := y is valid if and only if the type of its source expression, y in this case, is compatible with the type of its target entity, x in this case. In this rule, compatible with means that the type of the source expression either conforms to or converts to that of the target. Conformance of types is defined by the familiar rules for polymorphism in object-oriented programming; for example, in the assignment x := y, the type of y conforms to the type of x if the class upon which y is based is a descendant of that upon which x is based.
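A rough Eiffel sketch of conformance-based assignment; the classes ANIMAL and DOG are illustrative assumptions (DOG is taken to inherit from ANIMAL and to be creatable with default_create):

class
    CONFORMANCE_DEMO

feature

    demo
            -- An assignment whose source type conforms to its target type.
        local
            a: ANIMAL
            d: DOG
        do
            create d
            a := d
                -- Valid by the Assignment Rule: DOG is a descendant of ANIMAL,
                -- so the type of `d' conforms to the type of `a'.
        end

end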
The actions of type conversion in Eiffel, specifically converts to and converts from, are defined as follows: a type U based on a class CU converts to a type T based on a class CT (and T converts from U) if either CT has a conversion (creation) procedure that accepts objects of type U, or CU has a conversion query that yields objects of type T.

Eiffel is a fully compliant language for the Microsoft .NET Framework. Before the development of .NET, Eiffel already had extensive class libraries. Using the .NET type libraries, particularly with commonly used types such as strings, poses a conversion problem. Existing Eiffel software uses the string classes (such as STRING_8) from the Eiffel libraries, but Eiffel software written for .NET must use the .NET string class (System.String) in many cases, for example when calling .NET methods which expect items of the .NET type to be passed as arguments. So, the conversion of these types back and forth needs to be as seamless as possible. Consider a situation in which two strings are declared, one of each type: my_string of type STRING_8 and my_system_string of type SYSTEM_STRING (the Eiffel-compliant alias for System.String). Because System.String does not conform to STRING_8, the assignment my_string := my_system_string is valid only if System.String converts to STRING_8.

The Eiffel class STRING_8 has a conversion procedure make_from_cil for objects of type System.String. Conversion procedures are also always designated as creation procedures (similar to constructors), so make_from_cil appears in both the creation and conversion clauses of STRING_8. The presence of the conversion procedure makes the assignment my_string := my_system_string semantically equivalent to the creation instruction create my_string.make_from_cil (my_system_string), in which my_string is constructed as a new object of type STRING_8 with content equivalent to that of my_system_string. To handle an assignment with the original source and target reversed, my_system_string := my_string, the class STRING_8 also contains a conversion query to_cil which will produce a System.String from an instance of STRING_8; that assignment then becomes equivalent to my_system_string := my_string.to_cil.

In Eiffel, the setup for type conversion is included in the class code, but it then appears to happen as automatically as implicit type conversion in client code. This includes not just assignments but other types of attachments as well, such as argument (parameter) substitution.

Rust provides no implicit type conversion (coercion) between primitive types, but explicit type conversion (casting) can be performed using the as keyword.[10]

A related concept in static type systems is the type assertion, which instructs the compiler to treat an expression as being of a certain type, disregarding its own inference. A type assertion may be safe (a runtime check is performed) or unsafe, and it does not convert the value from one data type to another. In TypeScript, a type assertion is written with the as keyword,[11] for example asserting that the value returned by document.getElementById is an HTMLCanvasElement. document.getElementById is declared to return an HTMLElement, but the programmer may know that in a given case it always returns an HTMLCanvasElement, which is a subtype of HTMLElement. If that is not the case, subsequent code which relies on the behaviour of HTMLCanvasElement will not perform correctly, as in TypeScript there is no runtime checking for type assertions. In TypeScript there is also no general way to check whether a value is of a certain type at runtime, as there is no runtime type support. However, it is possible to write a user-defined function through which the programmer tells the compiler whether a value is of a certain type or not. Such a function is called a type guard and is declared with a return type of x is Type, where x is a parameter or this, in place of boolean. This allows unsafe type assertions to be contained in the checker function instead of being littered around the codebase.

In Go, a type assertion can be used to access a concrete type value from an interface value, as sketched below.
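A minimal Go sketch of both forms of type assertion; the names follow the i and T used in the surrounding text, with string and int standing in for concrete types:

package main

import "fmt"

func main() {
	var i interface{} = "hello" // an interface value currently holding a string

	s := i.(string) // single-return form: panics if i does not hold a string
	fmt.Println(s)

	n, ok := i.(int)   // two-return form: never panics; ok reports whether i held an int
	fmt.Println(n, ok) // prints "0 false": n gets int's zero value because i holds a string

	// f := i.(float64) // would panic at run time: i does not hold a float64
}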
It is a safe assertion: it is checked at run time, and it will panic (in the case of one return value) or return the type's zero value together with false (if two return values are used) if the value is not of that concrete type.[12] Such a type assertion tells the system that i holds a value of type T; if it does not, the single-return form panics.

Many programming languages support union types, which can hold a value of multiple types. Untagged unions are provided in some languages with loose type-checking, such as C and PL/I, but also in the original Pascal. These can be used to interpret the bit pattern of one type as a value of another type, as sketched at the end of this section.

In hacking, typecasting is the misuse of type conversion to temporarily change a variable's data type from how it was originally defined.[13] This provides opportunities for hackers, since after a variable is "typecast" to a different data type, the compiler will treat the hacked variable as the new data type for that specific operation.[14]
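As a sketch of the untagged-union reinterpretation mentioned above, here in C; the union and variable names are illustrative, and the exact printed bits assume a 32-bit unsigned int and IEEE 754 floats:

#include <stdio.h>

/* an untagged union: both members occupy the same storage */
union pun {
    float    f;
    unsigned u;
};

int main(void)
{
    union pun p;
    p.f = 1.0f;
    /* read the bit pattern of the float back as an unsigned integer;
       C permits this kind of punning through unions (C++ is stricter) */
    printf("0x%08X\n", p.u);   /* typically prints 0x3F800000 */
    return 0;
}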
https://en.wikipedia.org/wiki/Const_cast