Mousetrapping is a technique that prevents users from exiting a website through standard means. It is frequently used by malicious websites and is often seen on tech support scam sites.[1][2]
Mousetrapping can be executed through various means. A website may launch an endless series of pop-up ads or redirects, or it may re-launch itself in a window that cannot be easily closed. Sometimes these windows run like stand-alone applications and render the taskbar and browser menu inaccessible. Some websites also employ browser hijackers to reset the user's homepage.[3]
The Federal Trade Commission has brought suits against mousetrappers, charging that the practice is a deceptive and unfair competitive practice in violation of section 5 of the FTC Act.[4] Typically, mousetrappers register URLs with misspelled names of celebrities (e.g. BrittnaySpears.com) or companies (e.g. BettyCroker.com and WallStreetJournel.com).[5] Thus, if someone seeking the Betty Crocker website typed BettyCroker, the user would become ensnared in the mousetrapper's system. Once the viewer is at the site, a JavaScript redirect, or a click induced by (for example) promises of free samples, sends the viewer to the regular site of the mousetrapper's client-advertiser, who (the FTC said in the Zuccarini case) pays 10 to 25 cents for each potential customer captured and redirected. An FTC press release explaining why the agency opposes mousetrapping states:
Schemes that capture consumers and hold them at sites against their will while exposing Internet users, including children, to solicitations for gambling, psychics, lotteries, and pornography must be stopped.
https://en.wikipedia.org/wiki/Mousetrapping
Slopsquatting is a type of cybersquatting: the practice of registering a non-existent software package name that a large language model (LLM) may hallucinate in its output, so that someone may unknowingly copy-paste and install the package without realizing it is fake.[1] Attempting to install a non-existent package should result in an error, but attackers have exploited such names for their gain, much as in typosquatting.[2]
The term was coined by Python Software Foundation Developer-in-Residence Seth Larson and popularized in April 2025 by Andrew Nesbitt on Mastodon.[1]
The potential for slopsquatting was detailed in the academic paper "We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs".[1][3] Among the paper's main findings: 19.7% of LLM-recommended packages did not exist; open-source models hallucinated far more frequently (21.7% on average, compared with 5.2% for commercial models); CodeLlama 7B and CodeLlama 34B hallucinated in over a third of outputs; and across all models, the researchers observed over 205,000 unique hallucinated package names.
In 2024, security researcher Bar Lanyado noted that LLMs hallucinated a package named "huggingface-cli".[4][5] While this name is identical to the command used for the command-line version of Hugging Face Hub, it is not the name of the package; the software is correctly installed with pip install -U "huggingface_hub[cli]". Lanyado tested the potential for slopsquatting by uploading an empty package under this hallucinated name. In three months, it had received over 30,000 downloads.[5] The hallucinated package name was also used in the README file of a repo for research conducted by Alibaba.[6]
Feross Aboukhadijeh, CEO of security firm Socket, warns that software engineers practicing vibe coding may be susceptible to slopsquatting, either by using generated code without reviewing it or by letting an AI assistant tool install the non-existent package.[2] There has not yet been a reported case of slopsquatting being used in a cyber attack.
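One simple defense against slopsquatting is to check whether a recommended package name actually exists in the registry before installing it. The sketch below uses only the standard library; the helper names are illustrative, and it assumes PyPI's public JSON API, which returns 404 for unregistered names. Names are normalized per PEP 503 before the lookup.

```python
import re
import urllib.error
import urllib.request

def normalize(name: str) -> str:
    """Normalize a package name per PEP 503: lowercase, runs of -, _, . become -."""
    return re.sub(r"[-_.]+", "-", name).lower()

def exists_on_pypi(name: str) -> bool:
    """Return True if the (normalized) package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{normalize(name)}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown name: possibly a hallucination
            return False
        raise
```

Note that mere existence is no guarantee of safety: as the Lanyado experiment shows, an attacker may already have registered the hallucinated name, so the package's author and contents still need review.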
https://en.wikipedia.org/wiki/Slopsquatting
URL shortening is a technique on the World Wide Web in which a Uniform Resource Locator (URL) may be made substantially shorter and still direct to the required page. This is achieved by using a redirect which links to the web page that has a long URL. For example, the URL "https://en.wikipedia.org/wiki/URL_shortening" can be shortened to "https://w.wiki/U". Often the redirect domain name is shorter than the original one. A friendly URL may be desired for messaging technologies that limit the number of characters in a message (for example SMS), for reducing the amount of typing required if the reader is copying a URL from a print source, for making it easier for a person to remember, or for the intention of a permalink. In November 2009, the shortened links of the URL shortening service Bitly were accessed 2.1 billion times.[1]
Other uses of URL shortening are to "beautify" a link, track clicks, or disguise the underlying address. This is because the URL shortener can redirect to just about any web domain, even malicious ones. So, although disguising the underlying address may be desired for legitimate business or personal reasons, it is open to abuse.[2] Some URL shortening service providers have found themselves on spam blocklists because of the use of their redirect services by sites trying to bypass those very same blocklists. Some websites prevent short, redirected URLs from being posted.
There are several reasons to use URL shortening. Often regular unshortened links may be aesthetically unpleasing. Many web developers pass descriptive attributes in the URL to represent data hierarchies, command structures, transaction paths or session information. This can result in URLs that are hundreds of characters long and that contain complex character patterns. Such URLs are difficult to memorize, type out or distribute. As a result, long URLs must be copied and pasted for reliability. Thus, short URLs may be more convenient for websites or hard copy publications (e.g. a printed magazine or a book), the latter often requiring that very long strings be broken into multiple lines (as is the case with some e-mail software or internet forums) or truncated.
On Twitter and some instant messaging services, there is a limit to the number of characters a message can carry. However, Twitter now shortens links automatically using its own URL shortening service, t.co, so there is no need to use a separate URL shortening service just to shorten URLs in a tweet. On other such services, using a URL shortener can allow linking to web pages which would otherwise violate this constraint. Some shortening services, such as goo.gl, tinyurl.com, and bit.ly, can generate URLs that are human-readable, although the resulting strings are longer than those generated by a length-optimized service. Finally, URL shortening sites provide detailed information on the clicks a link receives, which can be simpler than setting up an equally powerful server-side analytics engine, and unlike the latter, does not require any access to the server.
URLs encoded in two-dimensional barcodes such as QR codes are often shortened by a URL shortener in order to reduce the printed area of the code, or to allow printing at lower density in order to improve scanning reliability.
Some websites create short links to make sharing links via instant messaging easier, and to make it cheaper to send them via SMS. This can be done online, at the web pages of a URL shortening service; doing it in batch via bulk upload, or on demand, may require the use of an API.
A few well-known websites have set up their own URL shortening services for their own use – for example, Twitter with t.co,[3] Telegram with t.me, Google with g.co,[4] and GoDaddy with x.co.[5]
In URL shortening, every long URL is associated with a unique key, which is the part after its top-level domain name. For example, https://tinyurl.com/m3q2xt has a key of m3q2xt. These keys are usually case-sensitive, and using the wrong case may lead to a different destination URL. Not all redirection is treated equally; the redirection instruction sent to a browser can contain in its header HTTP response status codes such as 301 (moved permanently), 302 (found), 307 (temporary redirect) or 308 (permanent redirect).
There are several techniques to implement URL shortening. Keys can be generated in base 36, assuming 26 letters and 10 numbers. In this case, each character in the sequence will be 0, 1, 2, ..., 9, a, b, c, ..., y, z. Alternatively, if uppercase and lowercase letters are differentiated, then each character can represent a single digit within a number of base 62 (26 + 26 + 10). To form the key, a hash function can be applied, or a random number generated, so that the key sequence is not predictable. Alternatively, users may propose their own custom keys. For example, https://example.com/product?ref=01652&type=shirt can be shortened to https://tinyurl.com/exampleshirt.
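As a sketch of the base-62 approach described above (illustrative, not any particular service's implementation), the following encodes an integer, for example a database row ID, into a short key:

```python
import string

# 10 digits + 26 lowercase + 26 uppercase = 62 symbols.
BASE62_ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode_base62(n: int) -> str:
    """Encode a non-negative integer as a base-62 key in its shortest form."""
    if n == 0:
        return BASE62_ALPHABET[0]
    digits = []
    while n:
        n, remainder = divmod(n, 62)
        digits.append(BASE62_ALPHABET[remainder])
    return "".join(reversed(digits))  # most significant symbol first

def decode_base62(key: str) -> int:
    """Invert encode_base62: turn a key back into the integer it encodes."""
    n = 0
    for char in key:
        n = n * 62 + BASE62_ALPHABET.index(char)
    return n
```

Encoding sequential IDs this way makes keys predictable and leaks how many links the service has issued, which is one reason, as noted above, a service may hash or randomize instead.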
Not all URI schemes are capable of being shortened as of 2011, although URI schemes such as http, https, ftp, ftps, mailto, mms, rtmp, rtmpt, ed2k, pop, imap, nntp, news, ldap, gopher, dict and dns are handled by URL shortening services. Typically, data: and javascript: URLs are not supported for security reasons (to combat attacks like cross-site scripting and session hijacking). Some URL shortening services support the forwarding of mailto URLs, as an alternative to address munging, to avoid unwanted harvesting by web crawlers or bots. This may sometimes be done using short, CAPTCHA-protected URLs, but this is not common.[6]
Makers of URL shorteners usually register domain names with less popular or esoteric top-level domains in order to achieve a short URL and a catchy name, often using domain hacks.
This results in URL shorteners being registered in many different countries, leaving no relation between the country where the domain has been registered and the URL shortener itself or the shortened links. Top-level domains of countries such as Libya (.ly), Samoa (.ws), Mongolia (.mn), Malaysia (.my) and Liechtenstein (.li) have been used, as well as many others. In some cases, the political or cultural aspects of the country in charge of the top-level domain may become an issue for users and owners,[7] but this is not usually the case.
Services may record inbound statistics, which may be viewed publicly by others.[8]
While many providers claim their shortened URLs won't expire for as long as the service is provided, they may decide to discontinue the service at any time.
A permanent URL is not necessarily a good thing. There are security implications, and obsolete short URLs remain in existence and may be circulated long after they cease to point to a relevant or even extant destination. Sometimes a short URL is useful simply to give to someone over the telephone for one-off access or a file download, and is no longer needed within a couple of minutes.
Some providers offer expiration on shortened URLs. This may include URLs that expire after a certain amount of time, on a certain date or after a certain number of usages.[citation needed]
A Microsoft Security Brief recommends the creation of short-lived URLs, but for reasons explicitly of security rather than convenience.[9]
An early reference is US Patent 6,957,224, which describes
...a system, method and computer program product for providing links to remotely located information in a network of remotely connected computers. A uniform resource locator (URL) is registered with a server. A shorthand link is associated with the registered URL. The associated shorthand link and URL are logged in a registry database. When a request is received for a shorthand link, the registry database is searched for an associated URL. If the shorthand link is found to be associated with a URL, the URL is fetched, otherwise an error message is returned.[10]
The patent was filed in September 2000; while the patent was issued in 2005, US patent applications are made public within 18 months of filing.
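The register/lookup scheme the patent describes can be sketched in a few lines. This is a toy in-memory version with illustrative names; a real service would back the registry with a database.

```python
import secrets
import string

ALPHABET = string.digits + string.ascii_letters  # the 62 base-62 symbols

class ShortLinkRegistry:
    """Toy registry: associate shorthand keys with long URLs."""

    def __init__(self, key_length: int = 6):
        self.key_length = key_length
        self._db = {}  # the "registry database" of key -> URL

    def register(self, long_url: str) -> str:
        """Log a new (key, URL) pair and return the shorthand key."""
        while True:
            key = "".join(secrets.choice(ALPHABET) for _ in range(self.key_length))
            if key not in self._db:  # retry on the (rare) collision
                self._db[key] = long_url
                return key

    def lookup(self, key: str) -> str:
        """Fetch the URL for a key; raise KeyError (the 'error message' case) if absent."""
        return self._db[key]
```

The random keys make the sequence unpredictable, at the cost of a collision check on each registration.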
Another reference to URL shortening was in 2001.[11] The first notable URL shortening service, TinyURL, was launched in 2002. Its popularity influenced the creation of at least 100 similar websites,[12] although most are simply domain alternatives. Initially, Twitter automatically translated URLs longer than twenty-six characters using TinyURL, although it began using bit.ly instead in 2009[13] and later developed its own URL shortening service, t.co.
On 14 August 2009, WordPress announced the wp.me URL shortener for use when referring to any WordPress.com blog post.[14] In November 2009, shortened links on bit.ly were accessed 2.1 billion times.[15] Around that time, bit.ly and TinyURL were the most widely used URL-shortening services.[15]
One service, tr.im, stopped generating short URLs in 2009, blaming a lack of revenue-generating mechanisms to cover costs and Twitter's default use of the bit.ly shortener, and questioning whether other shortening services could remain profitable in the longer term.[16] It resumed for a time,[17] then closed.
The shortest possible long-term URLs were generated by NanoURL from December 2009 until about 2011, associated with the top-level .to (Tonga) domain, in the form http://to./xxxx, where xxxx represents a sequence of random numbers and letters.[18]
On 14 December 2009, Google announced a service called Google URL Shortener at goo.gl, which originally was only available for use through Google products (such as Google Toolbar and FeedBurner)[19] and extensions for Google Chrome.[20] On 21 December 2009, Google introduced a YouTube URL shortener, youtu.be.[21] From September 2010, Google URL Shortener became available via a direct interface. The goo.gl service provides analytics details and a QR code generator.[citation needed] On 30 March 2018, Google announced that it is "turning down support for goo.gl over the coming weeks and replacing it with Firebase Dynamic Links" (although existing goo.gl links will continue to function).[22] On July 18, 2024, Google announced that existing Google URL Shortener URLs will stop working as of August 25, 2025. Google will add an interstitial page to warn users about this starting August 23, 2024.[23]
The main advantage of a short link is its brevity. Depending on the transcription used, it might be more easily communicated and entered without error. To some extent it can obscure the destination of the URL; this may be advantageous, disadvantageous, or irrelevant.
Short URLs often circumvent the intended use of top-level domains for indicating the country of origin; domain registration in many countries requires proof of physical presence within that country, although a redirected URL has no such guarantee.
URL shortening may be utilized by spammers or for illicit internet activities. As a result, many shorteners have been removed from online registries or shut down by web hosts or internet service providers.
According to Tonic Corporation, the registry for .to domains, it is "very serious about keeping domains spam free" and may remove URL shortening services from their registry if the service is abused.[24]
In addition, "u.nu" made the following announcement upon closing operations:
The last straw came on September 3, 2010, when the server was disconnected without notice by our hosting provider in response to reports of a number of links to child pornography sites. The disconnection of the server caused us serious problems, and to be honest, the level and nature of the abuse has become quite demoralizing. Given the choice between spending time and money to find a different home, or just giving up, the latter won out.[25]
Google's url-shortener discussion group has frequently included messages from frustrated users reporting that specific shortened URLs have been disabled after they were reported as spam.[26]
A study in May 2012 showed that 61% of URL shorteners had shut down (614 of 1002).[27]The most common cause cited was abuse by spammers.
The convenience offered by URL shortening also introduces potential problems, which have led to criticism of the use of these services. Short URLs, for example, will be subject to link rot if the shortening service stops working; all URLs related to the service will become broken. It is a legitimate concern that many existing URL shortening services may not have a sustainable business model in the long term.[15] In late 2009, the Internet Archive started the "301 Works" project,[28] together with (initially) twenty collaborating companies, whose short URLs will be preserved by the project.[15]
Shortened internet links typically use ccTLD domains, and are therefore often under the jurisdiction of a nation other than where the service provider is located. Libya, for instance, exercised its control over the .ly domain in October 2010 to shut down vb.ly for violating Libyan pornography laws. Failure to predict such problems with URL shorteners, and investment in URL shortening companies, may reflect a lack of due diligence.[29]
In April 2009, TinyURL was reported to be blocked in Saudi Arabia.[30] Yahoo! Answers blocks postings that contain TinyURLs,[citation needed] and Wikipedia does not accept links from any URL shortening service in its articles.[31] The Reddit community strongly discourages—and in some subreddits, outright bans—URL shortening services for link submissions, because they disguise the origin domain name and whether the link has previously been submitted to Reddit, and there are few or no legitimate reasons to use link shorteners for Reddit link submissions.[32]
A short URL obscures the target address and can be used to redirect to an unexpected site. Examples of this are "rickrolling" and redirecting to shock sites or affiliate websites. A short URL can also allow blocked URLs to be accessed, bypassing site blocklists; this facilitates redirection of a user to blocked scam pages or pages containing malware or XSS attacks. TinyURL tries to disable spam-related links from redirecting.[33] ZoneAlarm, however, has warned its users: "TinyURL may be unsafe. This website has been known to distribute spyware." TinyURL countered this problem by offering an option to view a link's destination before using a shortened URL. This ability is installed on the browser via the TinyURL website and requires the use of cookies.[34] A destination preview may also be obtained by prefixing the word "preview" to the URL of the TinyURL; for example, the destination of https://tinyurl.com/8kmfp is revealed by entering https://preview.tinyurl.com/8kmfp. Other URL shortening services provide a similar destination display.[35] Security professionals suggest that users check a short URL's destination before accessing it,[36] following an instance where the shortening service cli.gs was compromised, exposing millions of users to security uncertainties.[37] There are several web applications that can display the destination URL of a shortened URL.[citation needed]
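A minimal version of such a destination-checking tool can be written with only the standard library: issue the request but refuse to follow the redirect, and report the Location header instead. This is a sketch; the function name is illustrative.

```python
import urllib.error
import urllib.request
from typing import Optional

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from following redirects so we can inspect them instead."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # returning None makes the opener raise HTTPError

def resolve_once(short_url: str) -> Optional[str]:
    """Return the Location a short URL redirects to, without visiting it."""
    opener = urllib.request.build_opener(_NoRedirect)
    try:
        with opener.open(short_url, timeout=10):
            return None  # 2xx: this URL is already the final destination
    except urllib.error.HTTPError as err:
        if err.code in (301, 302, 303, 307, 308):
            return err.headers.get("Location")
        raise
```

Because the target is never fetched, this reveals where a short URL leads without exposing the user to the destination page.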
Some URL shortening services filter their links through bad-site screening services such as Google Safe Browsing. However, many sites that accept user-submitted content block links to certain domains in order to cut down on spam, and for this reason, known URL redirection services are often themselves added to spam blocklists.
Another privacy problem is that many services' shortened URL format is small enough that it is vulnerable to brute-force search. Many people use URL shorteners when they share links to private content, and in fact many web services like Google Maps have offered automatic generation of shortened links for driving directions that reveal personal information like home addresses and sensitive destinations like "clinics for specific diseases (including cancer and mental diseases), addiction treatment centers, abortion providers, correctional and juvenile detention facilities, payday and car-title lenders, gentlemen's clubs, etc."[38][39]
Short URLs, although making it easier to access what might otherwise be a very long URL or user-space on an ISP server, add an additional layer of complexity to the process of retrieving web pages. Every access requires more requests (at least one more DNS lookup, though it may be cached, and one more HTTP/HTTPS request), thereby increasing latency, the time taken to access the page, and also the risk of failure, since the shortening service may become unavailable. Another operational limitation of URL shortening services is that browsers do not resend POST bodies when a redirect is encountered. This can be overcome by making the service a reverse proxy, or by elaborate schemes involving cookies and buffered POST bodies, but such techniques present security and scaling challenges, and are therefore not used on extranets or Internet-scale services.[original research?]
Open source and commercial scripts are also available for redirecting and shortening links, usually written in PHP as a web application or as a plugin for one of the popular applications such as WordPress. Such scripts avoid many issues with shortening services, keep the domain name as part of the shortened link, and can be made private.
https://en.wikipedia.org/wiki/URL_shortening
A server is a computer that provides information to other computers called "clients" on a computer network.[1] This architecture is called the client–server model. Servers can provide various functionalities, often called "services", such as sharing data or resources among multiple clients or performing computations for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device.[2] Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers.[3]
Client–server systems are most often implemented by (and often identified with) the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client, typically with a result or acknowledgment. Designating a computer as "server-class hardware" implies that it is specialized for running servers. This often implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many relatively simple, replaceable server components.
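The request–response cycle can be made concrete with a few lines of socket code. This is a toy sketch (illustrative names, one request only): a server thread accepts a connection and answers it, while the client blocks until the response arrives.

```python
import socket
import threading

def serve_once(sock: socket.socket) -> None:
    """Accept one connection, read the request, and send a response back."""
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(f"echo: {request}".encode())  # the 'response' step

def request(host: str, port: int, message: str) -> str:
    """Client side: send a request and block until the response arrives."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(message.encode())
        conn.shutdown(socket.SHUT_WR)  # signal the end of the request
        chunks = []
        while data := conn.recv(1024):
            chunks.append(data)
    return b"".join(chunks).decode()

# Demo: server in a background thread, client in the main thread.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()
reply = request("127.0.0.1", listener.getsockname()[1], "ping")
```

Real servers differ mainly in looping over many concurrent connections and speaking a standard protocol such as HTTP, but the request-then-response shape is the same.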
The use of the word server in computing comes from queueing theory,[4] where it dates to the mid-20th century, being notably used in Kendall (1953) (along with "service"), the paper that introduced Kendall's notation. In earlier papers, such as Erlang (1909), more concrete terms such as "[telephone] operators" are used.
In computing, "server" dates at least to RFC 5 (1969),[5] one of the earliest documents describing ARPANET (the predecessor of the Internet), and is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4,[6] contrasting "serving-host" with "using-host".
The Jargon File defines server in the common sense of a process performing service for requests, usually remote,[7] with the 1981 version reading:[8]
SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a computer other than the one on which the server runs.
The average utilization of a server in the early 2000s was 5 to 15%, but with the adoption of virtualization this figure started to increase, as consolidation reduced the number of servers needed.[9]
Strictly speaking, the term server refers to a computer program or process (running program). Through metonymy, it refers to a device used for (or a device dedicated to) running one or several server programs. On a network, such a device is called a host. In addition to server, the words serve and service (as verb and as noun respectively) are frequently used, though servicer and servant are not.[a] The word service (noun) may refer to the abstract form of functionality, e.g. Web service. Alternatively, it may refer to a computer program that turns a computer into a server, e.g. Windows service. Originally used as "servers serve users" (and "users use servers"), in the sense of "obey", today one often says that "servers serve data", in the same sense as "give". For instance, web servers "serve [up] web pages to users" or "service their requests".
The server is part of the client–server model; in this model, a server serves data for clients. The nature of communication between a client and server is request and response. This is in contrast with the peer-to-peer model, in which the relationship is on-demand reciprocation. In principle, any computerized process that can be used or called by another process (particularly remotely, particularly to share a resource) is a server, and the calling process or processes are clients. Thus any general-purpose computer connected to a network can host servers. For example, if files on a device are shared by some process, that process is a file server. Similarly, web server software can run on any capable computer, and so a laptop or a personal computer can host a web server.
While request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub-sub server, subscribing to specified types of messages; this initial registration may be done by request–response. Thereafter, the pub-sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response.[10]
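The contrast with request–response can be seen in a toy in-process broker (illustrative names; real pub-sub systems such as MQTT brokers or Redis pub/sub do the same thing across a network):

```python
from collections import defaultdict
from typing import Callable

class PubSubBroker:
    """Toy broker: clients register interest once, then messages are pushed to them."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        """The one-time registration step (this part may itself be request-response)."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        """Push the message to every matching subscriber; no further requests needed."""
        for callback in self._subscribers[topic]:
            callback(message)
```

After `subscribe`, the client never asks again: every later `publish` on a matching topic reaches it unprompted, which is exactly the push behavior described above.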
The role of a server is to share data as well as to share resources and distribute work. A server computer can serve its own computer programs as well; depending on the scenario, this could be part of a quid pro quo transaction, or simply a technical possibility. The following table shows several scenarios in which a server is used.
Almost the entire structure of the Internet is based upon a client–server model. High-level root nameservers, DNS, and routers direct the traffic on the internet. There are millions of servers connected to the Internet, running continuously throughout the world,[13] and virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers. There are exceptions that do not use dedicated servers; for example, peer-to-peer file sharing and some implementations of telephony (e.g. pre-Microsoft Skype).
Hardware requirements for servers vary widely, depending on the server's purpose and its software. Servers often are more powerful and expensive than the clients that connect to them.
The name server is used both for the hardware and the software. For hardware, it is usually limited to mean high-end machines, although server software can run on a variety of hardware.
Since servers are usually accessed over a network, many run unattended without a computer monitor, input device, audio hardware or USB interfaces. Many servers do not have a graphical user interface (GUI). They are configured and managed remotely. Remote management can be conducted via various methods including Microsoft Management Console (MMC), PowerShell, SSH and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO.
Large traditional single servers would need to be run for long periods without interruption. Availability would have to be very high, making hardware reliability and durability extremely important. Mission-critical enterprise servers would be very fault-tolerant and use specialized hardware with low failure rates in order to maximize uptime. Uninterruptible power supplies might be incorporated to guard against power failure. Servers typically include hardware redundancy such as dual power supplies, RAID disk systems, and ECC memory,[14] along with extensive pre-boot memory testing and verification. Critical components might be hot swappable, allowing technicians to replace them on the running server without shutting it down; and to guard against overheating, servers might have more powerful fans or use water cooling. They will often be able to be configured, powered up and down, or rebooted remotely, using out-of-band management, typically based on IPMI. Server casings are usually flat and wide, designed to be rack-mounted, either on 19-inch racks or on Open Racks.
These types of servers are often housed in dedicated data centers. These will normally have very stable power and Internet connections and increased security. Noise is also less of a concern, but power consumption and heat output can be a serious issue. Server rooms are equipped with air conditioning devices.
A server farm or server cluster is a collection of computer servers maintained by an organization to supply server functionality far beyond the capability of a single device. Modern data centers are now often built of very large clusters of much simpler servers,[15] and there is a collaborative effort, the Open Compute Project, around this concept.
A class of small specialist servers called network appliances are generally at the low end of the scale, often being smaller than common desktop computers.
A mobile server has a portable form factor, e.g. a laptop.[16] In contrast to large data centers or rack servers, the mobile server is designed for on-the-road or ad hoc deployment into emergency, disaster or temporary environments where traditional servers are not feasible due to their power requirements, size, and deployment time.[17] The main beneficiaries of so-called "server on the go" technology include network managers, software or database developers, training centers, military personnel, law enforcement, forensics, emergency relief groups, and service organizations.[18] To facilitate portability, features such as the keyboard, display, battery (uninterruptible power supply, to provide power redundancy in case of failure), and mouse are all integrated into the chassis.
On the Internet, the dominant operating systems among servers are UNIX-like open-source distributions, such as those based on Linux and FreeBSD,[19] with Windows Server also having a significant share. Proprietary operating systems such as z/OS and macOS Server are also deployed, but in much smaller numbers. Servers that run Linux are commonly used as web servers or database servers, while Windows Server is often used for networks made up of Windows clients.
Specialist server-oriented operating systems have traditionally had features such as:
In practice, today many desktop and server operating systems share similarcode bases, differing mostly in configuration.
In 2010, data centers (servers, cooling, and other electrical infrastructure) were responsible for 1.1–1.5% of electrical energy consumption worldwide and 1.7–2.2% in the United States.[21][needs update] One estimate is that total energy consumption for information and communications technology saves more than five times its carbon footprint[22] in the rest of the economy by increasing efficiency.
Global energy consumption is increasing due to the increasing demand for data and bandwidth. The Natural Resources Defense Council (NRDC) states that data centers used 91 billion kilowatt-hours (kWh) of electrical energy in 2013, which amounts to about 3% of global electricity usage.[23][needs update]
Environmental groups have focused on the carbon emissions of data centers, which account for 200 million metric tons of carbon dioxide per year.
https://en.wikipedia.org/wiki/Server_(computing)
An application server is a server that hosts applications[1] or software that delivers a business application through a communication protocol.[2] For a typical web application, the application server sits behind the web servers.
An application server framework is a service layer model. It includes software components available to a software developer through an application programming interface. An application server may have features such as clustering, fail-over, and load-balancing. The goal is for developers to focus on the business logic.[3]
Jakarta EE(formerly Java EE or J2EE) defines the core set of API and features ofJava application servers.
The Jakarta EE infrastructure is partitioned into logical containers.
Microsoft's .NET positions their middle-tier applications and services infrastructure in the Windows Server operating system and the .NET Framework technologies in the role of an application server.[4] The Windows Application Server role includes Internet Information Services (IIS) to provide web server support, the .NET Framework to provide application support, ASP.NET to provide server-side scripting, COM+ for application component communication, Message Queuing for multithreaded processing, and the Windows Communication Foundation (WCF) for application communication.[5]
PHP application servers run and managePHPapplications.
Mobile application servers provide data delivery to mobile devices.
Core capabilities of mobile application services include
Although most standards-based infrastructure (including SOAs) is designed to connect independently of any vendor, product, or technology, most enterprises have trouble connecting back-end systems to mobile applications, because mobile devices add the following technological challenges:[6]
An application server can be deployed:
{Table Web Interfaces}
|
https://en.wikipedia.org/wiki/Application_server
|
Web server software allows computers to act as web servers. The first web servers supported only static files, such as HTML (and images), but now they commonly allow embedding of server-side applications.
Some web application frameworks include simple HTTP servers. For example, the Django framework provides runserver, and PHP has a built-in server. These are generally intended only for use during initial development. A production server will require a more robust HTTP front-end such as one of the servers listed here.
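As an illustration of such a development-only server, Python's own standard library can serve the current directory in a few lines. This is a hedged sketch in the same spirit as runserver or PHP's built-in server, not the implementation of either, and it is equally unsuitable for production:

```python
# Minimal development-only web server built on Python's standard
# library, comparable in spirit to Django's runserver or PHP's
# built-in server. It serves files from the current directory and,
# like those tools, lacks the robustness needed for production.
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_dev_server(port: int = 8000) -> HTTPServer:
    # Build (but do not start) the server; call .serve_forever() to run it.
    return HTTPServer(("127.0.0.1", port), SimpleHTTPRequestHandler)

# Usage: make_dev_server().serve_forever()
```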
Some features may be intentionally left out of a web server to avoid featuritis. For example:
|
https://en.wikipedia.org/wiki/Comparison_of_web_server_software
|
A web server is computer software and underlying hardware that accepts requests via HTTP (the network protocol created to distribute web content) or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so.[1][2][3][4][5]
The hardware used to run a web server can vary according to the volume of requests that it needs to handle. At the low end of the range are embedded systems, such as a router that runs a small web server as its configuration interface. A high-traffic Internet website might handle requests with hundreds of servers that run on racks of high-speed computers.[6]
A resource sent from a web server can be a pre-existing file (static content) available to the web server, or it can be generated at the time of the request (dynamic content) by another program that communicates with the server software. The former usually can be served faster and can be more easily cached for repeated requests, while the latter supports a broader range of applications.
Technologies such as REST and SOAP, which use HTTP as a basis for general computer-to-computer communication, as well as support for WebDAV extensions, have extended the application of web servers well beyond their original purpose of serving human-readable pages.
This is a very brief history of web server programs, so some information necessarily overlaps with the histories of web browsers, the World Wide Web, and the Internet; therefore, for the sake of clarity, some key historical information reported below may be similar to that found in one or more of the above-mentioned history articles.[7]
In March 1989, Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The proposal, titled "HyperText and CERN", asked for comments, and it was read by several people. In October 1990 the proposal was reformulated and enriched (with Robert Cailliau as co-author), and finally approved.[8][9][10]
Between late 1990 and early 1991 the project resulted in Berners-Lee and his developers writing and testing several software libraries along with three programs, which initially ran on NeXTSTEP OS installed on NeXT workstations:[11][12][10]
Those early browsers retrieved web pages written in a simple early form of HTML from web server(s) using a new basic communication protocol that was named HTTP 0.9.
In August 1991 Tim Berners-Lee announced the birth of WWW technology and encouraged scientists to adopt and develop it.[13] Soon after, those programs, along with their source code, were made available to people interested in their usage.[11] Although the source code was not formally licensed or placed in the public domain, CERN informally allowed users and developers to experiment and further develop on top of them. Berners-Lee started promoting the adoption and usage of those programs along with their porting to other operating systems.[10]
In December 1991, the first web server outside Europe was installed at SLAC (U.S.A.).[12] This was a very important event because it started trans-continental web communications between web browsers and web servers.
In 1991–1993, the CERN web server program continued to be actively developed by the WWW group; meanwhile, thanks to the availability of its source code and the public specifications of the HTTP protocol, many other implementations of web servers started to be developed.
In April 1993, CERN issued a public official statement stating that the three components of Web software (the basic line-mode client, the web server, and the library of common code), along with their source code, were put in the public domain.[14] This statement freed web server developers from any possible legal issue about the development of derivative work based on that source code (a threat that in practice never existed).
At the beginning of 1994, the most notable among new web servers was NCSA httpd, which ran on a variety of Unix-based OSs and could serve dynamically generated content by implementing the POST HTTP method and the CGI to communicate with external programs. These capabilities, along with the multimedia features of NCSA's Mosaic browser (also able to manage HTML forms in order to send data to a web server), highlighted the potential of web technology for publishing and distributed computing applications.
In the second half of 1994, the development of NCSA httpd stalled to the point that a group of external software developers, webmasters, and other professionals interested in that server started to write and collect patches, thanks to the NCSA httpd source code being in the public domain. At the beginning of 1995 those patches were all applied to the last release of the NCSA source code and, after several tests, the Apache HTTP Server project was started.[17][18]
At the end of 1994, a new commercial web server, named Netsite, was released with specific features. It was the first of many similar products that were developed first by Netscape, then by Sun Microsystems, and finally by Oracle Corporation.
In mid-1995, the first version of IIS was released, for Windows NT OS, by Microsoft. This marked the entry, into the field of World Wide Web technologies, of a very important commercial developer and vendor that has played, and still plays, a key role on both sides (client and server) of the web.
In the second half of 1995, the CERN and NCSA web servers started to decline (in global percentage usage) because of the widespread adoption of new web servers that had a much faster development cycle along with more features, more fixes applied, and better performance than the previous ones.
At the end of 1996, there were already over fifty known (different) web server software programs available to everybody who wanted to own an Internet domain name and/or to host websites.[20] Many of them were short-lived and were replaced by other web servers.
The publication of RFCs about protocol versions HTTP/1.0 (1996) and HTTP/1.1 (1997, 1999) forced most web servers to comply (not always completely) with those standards. The use of TCP/IP persistent connections (HTTP/1.1) required web servers both to increase the maximum number of concurrent connections allowed and to improve their level of scalability.
Between 1996 and 1999, Netscape Enterprise Server and Microsoft's IIS emerged among the leading commercial options, whereas among the freely available and open-source programs Apache HTTP Server held the lead as the preferred server (because of its reliability and its many features).
In those years there was also another commercial, highly innovative and thus notable web server called Zeus (now discontinued) that was known as one of the fastest and most scalable web servers available on the market, at least until the first decade of the 2000s, despite its low percentage of usage.
Apache was the most used web server from mid-1996 to the end of 2015 when, after a few years of decline, it was surpassed initially by IIS and then by Nginx. Afterward IIS dropped to much lower percentages of usage than Apache (see also market share).
From 2005–2006, Apache started to improve its speed and scalability by introducing new performance features (e.g. event MPM and new content cache).[21][22] As those new performance improvements were initially marked as experimental, they were not enabled by its users for a long time, and so Apache suffered even more from the competition of commercial servers and, above all, of other open-source servers which had meanwhile achieved far superior performance (mostly when serving static content) since the beginning of their development, and which by the time of the Apache decline could also offer a long list of well-tested advanced features.
In fact, a few years after 2000, not only other commercial and highly competitive web servers (e.g. LiteSpeed) emerged, but also many other open-source programs, often of excellent quality and very high performance, among which should be noted Hiawatha, Cherokee HTTP server, Lighttpd, and Nginx, with other derived/related products also available with commercial support.
Around 2007–2008, most popular web browsers increased their previous default limit of 2 persistent connections per host-domain (a limit recommended by RFC 2616)[23] to 4, 6, or 8 persistent connections per host-domain, in order to speed up the retrieval of heavy web pages with lots of images, and to mitigate the problem of the shortage of persistent connections dedicated to dynamic objects used for bi-directional notifications of events in web pages.[24] Within a year, these changes, on average, nearly tripled the maximum number of persistent connections that web servers had to manage. This trend (of increasing the number of persistent connections) definitely gave a strong impetus to the adoption of reverse proxies in front of slower web servers, and it also gave one more chance to the emerging new web servers that could show all their speed and their capability to handle very high numbers of concurrent connections without requiring too many hardware resources (expensive computers with lots of CPUs, RAM, and fast disks).[25]
In 2015, RFCs published the new protocol version HTTP/2, and as the implementation of the new specifications was not trivial at all, a dilemma arose among developers of less popular web servers (e.g. with a percentage of usage lower than 1–2%) about adding or not adding support for that new protocol version.[26][27]
In fact, supporting HTTP/2 often required radical changes to their internal implementation due to many factors (practically always required encrypted connections, capability to distinguish between HTTP/1.x and HTTP/2 connections on the same TCP port, binary representation of HTTP messages, message priority, compression of HTTP headers, use of streams also known as TCP/IP sub-connections and related flow-control, etc.), and so a few developers of those web servers opted for not supporting the new HTTP/2 version (at least in the near future), also for these main reasons:[26][27]
Instead, developers of the most popular web servers rushed to offer the availability of the new protocol, not only because they had the workforce and the time to do so, but also because usually their previous implementation of the SPDY protocol could be reused as a starting point, and because most used web browsers implemented it very quickly for the same reason. Another reason that prompted those developers to act quickly was that webmasters felt the pressure of the ever-increasing web traffic and they really wanted to install and to try – as soon as possible – something that could drastically lower the number of TCP/IP connections and speed up access to hosted websites.[28]
In 2020–2021 the HTTP/2 dynamics about its implementation (by top web servers and popular web browsers) were partly replicated after the publication of advanced drafts of the future RFC about the HTTP/3 protocol.
The following technical overview should be considered only as an attempt to give a few very limited examples about some features that may be implemented in a web server and some of the tasks that it may perform, in order to present a sufficiently broad scenario about the topic.
A web server program plays the role of a server in a client–server model by implementing one or more versions of the HTTP protocol, often including the HTTPS secure variant and other features and extensions that are considered useful for its planned usage.
The complexity and the efficiency of a web server program may vary a lot depending on (e.g.):[1]
Although web server programs differ in how they are implemented, most of them offer the following common features.
These are basic features that most web servers usually have.
A few other more advanced and popular features (only a very short selection) are the following.
A web server program, when it is running, usually performs several general tasks (e.g.):[1]
Web server programs are able to:[29][30][31]
Once an HTTP request message has been decoded and verified, its values can be used to determine whether that request can be satisfied or not. This requires many other steps, including security checks.
Web server programs usually perform some type of URL normalization (on the URL found in most HTTP request messages) in order to:
The term URL normalization refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed, including the conversion of the scheme and host to lowercase. Among the most important normalizations are the removal of "." and ".." path segments and adding trailing slashes to a non-empty path component.
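A minimal sketch of the normalizations just described — lowercasing the scheme and host, and removing "." and ".." path segments — might look as follows. Real servers apply further rules (percent-encoding case, default-port removal, trailing slashes) that are omitted here:

```python
# Sketch of URL normalization: lowercase scheme and host, and remove
# "." / ".." path segments. Real web servers apply additional rules
# (percent-encoding case, default port removal, etc.) not shown here.
from urllib.parse import urlsplit, urlunsplit

def normalize_url(url: str) -> str:
    parts = urlsplit(url)
    segments = []
    for seg in parts.path.split("/"):
        if seg == ".":
            continue            # "." refers to the current segment: drop it
        if seg == "..":
            if len(segments) > 1:
                segments.pop()  # ".." removes the previous segment,
            continue            # but never climbs above the root
        segments.append(seg)
    path = "/".join(segments) or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       path, parts.query, parts.fragment))
```

For example, `normalize_url("HTTP://WWW.Example.COM/a/./b/../c")` yields `"http://www.example.com/a/c"`.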
"URL mapping is the process by which a URL is analyzed to figure out what resource it is referring to, so that that resource can be returned to the requesting client. This process is performed with every request that is made to a web server, with some of the requests being served with a file, such as an HTML document, or a gif image, others with the results of running a CGI program, and others by some other process, such as a built-in module handler, a PHP document, or a Java servlet."[32][needs update]
In practice, web server programs that implement advanced features, beyond simple static content serving (e.g. URL rewrite engine, dynamic content serving), usually have to figure out how that URL has to be handled, e.g. as a:
One or more configuration files of the web server may specify the mapping of parts of the URL path (e.g. initial parts of the file path, filename extension and other path components) to a specific URL handler (file, directory, external program or internal module).[33]
When a web server implements one or more of the above-mentioned advanced features, then the path part of a valid URL may not always match an existing file system path under the website directory tree (a file or a directory in the file system), because it can refer to a virtual name of an internal or external module processor for dynamic requests.
Web server programs are able to translate a URL path (all or part of it) that refers to a physical file system path into an absolute path under the target website's root directory.[33]
The website's root directory may be specified by a configuration file or by some internal rule of the web server, using the name of the website, which is the host part of the URL found in the HTTP client request.[33]
Path translation to the file system is done for the following types of web resources:
The web server takes the path found in the requested URL (HTTP request message) and appends it to the path of the (host) website's root directory. On an Apache server, this is commonly /home/www/website (on Unix machines, usually it is /var/www/website). See the following examples of how this may result.
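This translation step can be sketched as follows, using the illustrative Apache-style root path from the text and a common guard against ".." sequences escaping the document root (a sketch, not any server's actual implementation):

```python
# Sketch of URL-path-to-file-system translation. The document root
# below is the illustrative Apache-style path used in the text; a real
# server reads it from its configuration.
import os.path

DOC_ROOT = "/home/www/website"

def translate_path(url_path: str, root: str = DOC_ROOT) -> str:
    # Append the URL path to the site root, collapse "." / ".."
    # segments, then refuse any result outside the root directory.
    candidate = os.path.normpath(os.path.join(root, url_path.lstrip("/")))
    if candidate != root and not candidate.startswith(root + os.sep):
        raise PermissionError("URL path escapes the document root")
    return candidate
```

For example, `/path/file.html` maps to `/home/www/website/path/file.html`, while `/../etc/passwd` is rejected.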
URL path translation for a static file request
Example of a static request of an existing file specified by the following URL:
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
The result is the local file system resource:
The web server then reads the file, if it exists, and sends a response to the client's web browser. The response will describe the content of the file and contain the file itself, or an error message will be returned saying that the file does not exist or that its access is forbidden.
URL path translation for a directory request (without a static index file)
Example of an implicit dynamic request of an existing directory specified by the following URL:
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
The result is the local directory path:
The web server then verifies the existence of the directory; if it exists and can be accessed, it tries to find an index file (which in this case does not exist), and so it passes the request to an internal module or a program dedicated to directory listings, finally reading the data output and sending a response to the client's web browser. The response will describe the content of the directory (the list of contained subdirectories and files), or an error message will be returned saying that the directory does not exist or that its access is forbidden.
URL path translation for a dynamic program request
For a dynamic request, the URL path specified by the client should refer to an existing external program (usually an executable file with a CGI) used by the web server to generate dynamic content.[34]
Example of a dynamic request using a program file to generate output:
The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:
The result is the local file path of the program (in this example, a PHP program):
The web server executes that program, passing in the path-info and the query string action=view&orderby=thread&date=2021-10-15 so that the program has the info it needs to run. (In this case, it will return an HTML document containing a view of forum entries ordered by thread from October 15, 2021.) In addition, the web server reads the data sent by the external program and resends that data to the client that made the request.
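For illustration, the query string from this example can be decoded with a standard library call, the way a CGI-style program would see it after reading the QUERY_STRING environment variable (a sketch, not part of any particular server):

```python
# Decoding the example query string the way a CGI-style program
# receives it (e.g. via the QUERY_STRING environment variable).
from urllib.parse import parse_qs

query = "action=view&orderby=thread&date=2021-10-15"
# parse_qs maps each name to a list of values; take the first of each.
params = {name: values[0] for name, values in parse_qs(query).items()}
# params == {"action": "view", "orderby": "thread", "date": "2021-10-15"}
```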
Once a request has been read, interpreted, and verified, it has to be managed depending on its method, its URL, and its parameters, which may include values of HTTP headers.
In practice, the web server has to handle the request by using one of these response paths:[33]
If a web server program is capable of serving static content and it has been configured to do so, then it is able to send file content whenever a request message has a valid URL path matching (after URL mapping, URL translation and URL redirection) that of an existing file under the root directory of a website, and the file has attributes which match those required by the internal rules of the web server program.[33]
That kind of content is called static because usually it is not changed by the web server when it is sent to clients and because it remains the same until it is modified (file modification) by some program.
NOTE: when serving static content only, a web server program usually does not change the file contents of served websites (as they are only read and never written) and so it suffices to support only these HTTP methods:
Responses with static file content can be sped up by a file cache.
If a web server program receives a client request message with a URL whose path matches that of an existing directory, and that directory is accessible, and serving directory index file(s) is enabled, then a web server program may try to serve the first of the known (or configured) static index file names (a regular file) found in that directory; if no index file is found or other conditions are not met, then an error message is returned.
The most used names for static index files are: index.html, index.htm and Default.htm.
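The lookup just described can be sketched as trying the commonly used index file names in a configured order (the names and their order here are illustrative, as is the fallback behavior):

```python
# Sketch of static index file lookup: return the first configured
# index file name that exists in the requested directory, or None so
# the caller can fall back to a directory listing or an error page.
import os

INDEX_NAMES = ("index.html", "index.htm", "Default.htm")

def find_index(directory: str):
    for name in INDEX_NAMES:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```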
If a web server program receives a client request message with a URL whose path matches the file name of an existing file, and that file is accessible by the web server program and its attributes match the internal rules of the web server program, then the web server program can send that file to the client.
Usually, for security reasons, most web server programs are pre-configured to serve only regular files, and to avoid using special file types like device files, along with symbolic links or hard links to them. The aim is to avoid undesirable side effects when serving static web resources.[35]
If a web server program is capable of serving dynamic content and it has been configured to do so, then it is able to communicate with the proper internal module or external program (associated with the requested URL path) in order to pass to it the parameters of the client request. After that, the web server program reads from it its data response (that it has generated, often on the fly) and then it resends it to the client program that made the request.[citation needed]
NOTE: when serving static and dynamic content, a web server program usually also has to support the following HTTP method in order to be able to safely receive data from client(s) and so to be able to host also websites with interactive form(s) that may send large data sets (e.g. lots of data entry or file uploads) to the web server / external programs / modules:
In order to be able to communicate with its internal modules and/or external programs, a web server program must have implemented one or more of the many available gateway interface(s) (see also Web Server Gateway Interfaces used for dynamic content).
The three standard and historical gateway interfaces are the following.
A web server program may be capable of managing the dynamic generation (on the fly) of a directory index list of files and sub-directories.[36]
If a web server program is configured to do so, and a requested URL path matches an existing directory, and its access is allowed, and no static index file is found under that directory, then a web page (usually in HTML format) containing the list of files and/or subdirectories of the above-mentioned directory is dynamically generated (on the fly). If it cannot be generated, an error is returned.
Some web server programs allow the customization of directory listings by allowing the usage of a web page template (an HTML document containing placeholders, e.g. $(FILE_NAME), $(FILE_SIZE), etc., that are replaced with the field values of each file entry found in the directory by the web server), e.g. index.tpl, or the usage of HTML and embedded source code that is interpreted and executed on the fly, e.g. index.asp, and/or by supporting the usage of dynamic index programs such as CGIs, SCGIs, FCGIs, e.g. index.cgi, index.php, index.fcgi.
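A dynamically generated listing of this kind — analogous to expanding $(FILE_NAME) and $(FILE_SIZE) placeholders for each entry — can be sketched as follows (the HTML layout is illustrative):

```python
# Sketch of dynamic directory-listing generation: build a small HTML
# fragment with one entry per file, filling in name and size much as
# a $(FILE_NAME) / $(FILE_SIZE) template would be expanded.
import html
import os

def listing_page(directory: str) -> str:
    rows = []
    for name in sorted(os.listdir(directory)):
        size = os.path.getsize(os.path.join(directory, name))
        # Escape file names so they cannot inject HTML into the page.
        rows.append(f"<li>{html.escape(name)} ({size} bytes)</li>")
    return "<ul>\n" + "\n".join(rows) + "\n</ul>"
```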
Usage of dynamically generated directory listings is usually avoided or limited to a few selected directories of a website, because that generation takes many more OS resources than sending a static index page.
The main usage of directory listings is to allow the download of files (usually when their names, sizes, modification date-times or file attributes may change randomly / frequently) as they are, without requiring the user to provide further information.[37]
An external program or an internal module (processing unit) can execute some sort of application function that may be used to get data from or to store data to one or more data repositories, e.g.:[citation needed]
A processing unit can return any kind of web content, also by using data retrieved from a data repository, e.g.:[citation needed]
In practice, whenever there is content that may vary depending on one or more parameters contained in the client request or in configuration settings, it is usually generated dynamically.
Web server programs are able to send response messages as replies to client request messages.[29]
An error response message may be sent because a request message could not be successfully read, decoded, analyzed, or executed.[30]
NOTE: the following sections are reported only as examples to help to understand what a web server, more or less, does; these sections are by no means exhaustive or complete.
A web server program may reply to a client request message with many kinds of error messages; anyway, these errors are divided mainly into two categories:
When an error response / message is received by a client browser, if it is related to the main user request (e.g. a URL of a web resource such as a web page), then usually that error message is shown in some browser window / message.
A web server program may be able to verify whether the requested URL path:[40]
If the authorization / access rights feature has been implemented and enabled and access to web resource is not granted, then, depending on the required access rights, a web server program:
A web server program may have the capability of performing URL redirections to new URLs (new locations), which consists of replying to a client request message with a response message containing a new URL suited to access a valid or existing web resource (the client should redo the request with the new URL).[41]
URL redirection of location is used:[41]
Example 1: a URL path points to a directory name but it does not have a final slash '/', so the web server sends a redirect to the client in order to instruct it to redo the request with the fixed path name.[36]
From: /directory1/directory2
To: /directory1/directory2/
Example 2: a whole set of documents has been moved inside the website in order to reorganize their file system paths.
From: /directory1/directory2/2021-10-08/
To: /directory1/directory2/2021/10/08/
Example 3: a whole set of documents has been moved to a new website and now it is mandatory to use secure HTTPS connections to access them.
From: http://www.example.com/directory1/directory2/2021-10-08/
To: https://docs.example.com/directory1/2021-10-08/
The above examples are only a few of the possible kinds of redirection.
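The trailing-slash case from Example 1 can be sketched as a small handler returning the status code and Location header for the redirect. This is a sketch only; a real server would also first verify that the path actually names a directory:

```python
# Sketch of the Example 1 redirect: a directory request missing its
# final slash gets a 301 response whose Location header holds the
# fixed path. A real server would first check the path is a directory.
def redirect_for_directory(path: str):
    if not path.endswith("/"):
        return 301, {"Location": path + "/"}
    return None  # path already ends with '/': no redirect needed
```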
A web server program is able to reply to a valid client request message with a successful message, optionally containing the requested web resource data.[42]
If web resource data is sent back to the client, then it can be static content or dynamic content depending on how it has been retrieved (from a file or from the output of some program / module).
In order to speed up web server responses by lowering average HTTP response times and the hardware resources used, many popular web servers implement one or more content caches, each one specialized in a content category.[43][44]
Content is usually cached by its origin, e.g.:
Historically, static content found in files which had to be accessed frequently, randomly and quickly has been stored mostly on electro-mechanical disks since the mid-late 1960s / 1970s; regrettably, reads from and writes to those kinds of devices have always been considered very slow operations when compared to RAM speed, and so, since early OSs, first disk caches and then also OS file cache sub-systems were developed to speed up I/O operations on frequently accessed data / files.
Even with the aid of an OS file cache, the relative / occasional slowness of I/O operations involving directories and files stored on disks soon became a bottleneck in the increase of performance expected from top-level web servers, especially since the mid-late 1990s, when web Internet traffic started to grow exponentially along with the constant increase in speed of Internet / network lines.
The problem of how to further efficiently speed up the serving of static files, thus increasing the maximum number of requests/responses per second (RPS), started to be studied / researched in the mid-1990s, with the aim of proposing useful cache models that could be implemented in web server programs.[45]
In practice, nowadays, many popular / high-performance web server programs include their own userland file cache, tailored for web server usage and using their specific implementation and parameters.[46][47][48]
The widespread adoption of RAID and/or fast solid-state drives (storage hardware with very high I/O speed) has slightly reduced, but of course not eliminated, the advantage of having a file cache incorporated in a web server.
Dynamic content, output by an internal module or an external program, may not always change very frequently (given a unique URL with keys / parameters) and so, maybe for a while (e.g. from 1 second to several hours or more), the resulting output can be cached in RAM or even on a fast disk.[49]
The typical usage of a dynamic cache is when a website has dynamic web pages about news, weather, images, maps, etc. that do not change frequently (e.g. every n minutes) and that are accessed by a huge number of clients per minute / hour; in those cases it is useful to return cached content too (without calling the internal module or the external program) because clients often do not have an updated copy of the requested content in their browser caches.[50]
Anyway, in most cases those kinds of caches are implemented by external servers (e.g. a reverse proxy) or by storing dynamic data output in separate computers managed by specific applications (e.g. memcached), in order not to compete for hardware resources (CPU, RAM, disks) with the web server(s).[51][52]
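A minimal sketch of such a dynamic-content cache — keyed by URL, with a time-to-live so entries that change "every n minutes" are served from memory in between regenerations (class and parameter names here are illustrative):

```python
# Sketch of a dynamic-content cache with a time-to-live (TTL): content
# generated for a URL is kept in RAM and served again until it expires,
# avoiding repeated calls to the internal module or external program.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (expiry_timestamp, content)

    def get(self, url: str):
        entry = self._store.get(url)
        if entry is not None and entry[0] > time.monotonic():
            return entry[1]  # still fresh: serve the cached copy
        return None          # missing or expired: caller regenerates

    def put(self, url: str, content: str) -> None:
        self._store[url] = (time.monotonic() + self.ttl, content)
```

On a cache miss (None), the server would call the processing unit, then put the fresh output back into the cache.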
Web server software can either be incorporated into the OS and executed in kernel space, or it can be executed in user space (like other regular applications).
Web servers that run in kernel mode (usually called kernel-space web servers) can have direct access to kernel resources and so they can be, in theory, faster than those running in user mode; anyway, there are disadvantages in running a web server in kernel mode, e.g. difficulties in developing (debugging) the software, and run-time critical errors may lead to serious problems in the OS kernel.
Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they might not always be satisfied because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean using more buffer/data copies (between user space and kernel space), which can lead to a decrease in the performance of a user-mode web server.
Nowadays almost all web server software is executed in user mode (because many of the aforementioned small disadvantages have been overcome by faster hardware, new OS versions, much faster OS system calls and new optimized web server software). See also comparison of web server software to discover which of them run in kernel mode or in user mode (also referred to as kernel space or user space).
To improve the user experience (on the client/browser side), a web server should reply quickly (as soon as possible) to client requests; unless content response is throttled (by configuration) for certain types of files (e.g. big or huge files), returned data content should also be sent as fast as possible (high transfer speed).
In other words, a web server should always be very responsive, even under a high load of web traffic, in order to keep the total user wait (the sum of browser time + network time + web server response time) for a response as low as possible.
For web server software, the main key performance metrics (measured under varying operating conditions) usually include at least the following:[53]
Among the operating conditions, the number (1 .. n) of concurrent client connections used during a test is an important parameter, because it makes it possible to correlate the concurrency level supported by the web server with the results of the tested performance metrics.
The specific web server software design and model adopted (e.g.):
... and other programming techniques, such as (e.g.):
... used to implement a web server program can strongly affect performance, and in particular the scalability level that can be achieved under heavy load or when using high-end hardware (many CPUs, disks and lots of RAM).
In practice, some web server software models may require more OS resources (especially more CPUs and more RAM) than others in order to work well and achieve target performance.
There are many operating conditions that can affect the performance of a web server; performance values may vary depending on (e.g.):
The performance of a web server is typically benchmarked by using one or more of the available automated load testing tools.
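An automated load test of the kind such tools perform can be sketched with the Python standard library alone. Everything here (the throwaway local server, the 10-connection / 100-request workload) is invented for illustration, not taken from any real benchmarking tool:

```python
import http.server
import socketserver
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# A minimal local web server that always returns a small fixed body.
class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), QuietHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

def one_request():
    """Issue a single GET and return its latency in seconds."""
    t0 = time.perf_counter()
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as r:
        r.read()
    return time.perf_counter() - t0

# Issue 100 requests over 10 concurrent client connections.
t_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(lambda _: one_request(), range(100)))
elapsed = time.perf_counter() - t_start

print(f"requests/sec: {100 / elapsed:.1f}")
print(f"mean latency: {sum(latencies) / len(latencies) * 1000:.2f} ms")
server.shutdown()
```

Varying `max_workers` here corresponds to varying the number of concurrent client connections, the operating-condition parameter discussed above.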
A web server (program installation) usually has pre-defined load limits for each combination of operating conditions, partly because it is limited by OS resources and partly because it can handle only a limited number of concurrent client connections (usually between 2 and several tens of thousands for each active web server process; see also the C10k problem and the C10M problem).
When a web server is near to or over its load limits, it becomes overloaded and may become unresponsive.
At any time, web servers can be overloaded due to one or more of the following causes (e.g.):
The symptoms of an overloaded web server are usually the following (e.g.):
To partially overcome the load limits described above and to prevent overload, most popular websites use common techniques such as the following (e.g.):
Caveats about using HTTP/2 and HTTP/3 protocols
Even though the newer HTTP protocols (HTTP/2 and HTTP/3) usually generate less network traffic for each request/response, they may require more OS resources (i.e. RAM and CPU) from the web server software (because of encrypted data, many stream buffers and other implementation details). Besides this, HTTP/2, and perhaps HTTP/3 too, depending also on the settings of the web server and of the client program, may not be the best option for uploading big or huge files at very high speed, because their data streams are optimized for concurrency of requests; in many cases, using HTTP/1.1 TCP/IP connections may lead to better results / higher upload speeds (your mileage may vary).[58][59]
Below are the latest statistics on the market share of all sites for the top web servers on the Internet, according to Netcraft.
NOTE: (*) percentage rounded to integer number, because its decimal values are not publicly reported by source page (only its rounded value is reported in graph).
Standard web server gateway interfaces used for dynamic content:
A few other web server interfaces (server- or programming-language-specific) used for dynamic content:
|
https://en.wikipedia.org/wiki/HTTP_server
|
HTTP compressionis a capability that can be built intoweb serversandweb clientsto improve transfer speed and bandwidth utilization.[1]
HTTP data is compressed before it is sent from the server: compliant browsers announce which methods they support to the server before downloading the correct format; browsers that do not support a compliant compression method download the data uncompressed. The most common compression schemes include gzip and Brotli; a full list of available schemes is maintained by the IANA.[2]
There are two different ways compression can be done in HTTP. At a lower level, a Transfer-Encoding header field may indicate the payload of an HTTP message is compressed. At a higher level, a Content-Encoding header field may indicate that a resource being transferred,cached, or otherwise referenced is compressed. Compression using Content-Encoding is more widely supported than Transfer-Encoding, and some browsers do not advertise support for Transfer-Encoding compression to avoid triggering bugs in servers.[3]
The negotiation is done in two steps, described in RFC 2616 and RFC 9110:
1. Theweb clientadvertises which compression schemes it supports by including a list of tokens in theHTTP request. ForContent-Encoding, the list is in a field calledAccept-Encoding; forTransfer-Encoding, the field is calledTE.
2. If the server supports one or more compression schemes, the outgoing data may be compressed by one or more methods supported by both parties. If this is the case, the server will add aContent-EncodingorTransfer-Encodingfield in the HTTP response with the used schemes, separated by commas.
Theweb serveris by no means obligated to use any compression method – this depends on the internal settings of the web server and also may depend on the internal architecture of the website in question.
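The server side of this two-step negotiation can be sketched as follows. The function name and sample page are invented for illustration, and only the gzip token (out of the registered schemes) is handled:

```python
import gzip

def negotiate_encoding(accept_encoding: str, body: bytes):
    """Return (payload, response_headers) honouring the client's
    Accept-Encoding list: compress with gzip when offered, otherwise
    fall back to the identity (uncompressed) form."""
    offered = [t.split(";")[0].strip().lower()
               for t in accept_encoding.split(",") if t.strip()]
    if "gzip" in offered:
        return gzip.compress(body), {"Content-Encoding": "gzip"}
    # The server is never obligated to compress; send data as-is.
    return body, {}

page = b"<html>" + b"hello world " * 200 + b"</html>"

gz_body, gz_headers = negotiate_encoding("gzip, deflate, br", page)
raw_body, raw_headers = negotiate_encoding("identity", page)

print(gz_headers.get("Content-Encoding"))   # gzip
print(len(gz_body) < len(raw_body))         # True: repetitive HTML shrinks
```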
The official list of tokens available to servers and clients is maintained by IANA,[4]and it includes:
In addition to these, a number of unofficial or non-standardized tokens are used in the wild by either servers or clients:
Many content delivery networks also implement HTTP compression to speed up the delivery of resources to end users.
HTTP compression can also be achieved using the functionality of server-side scripting languages like PHP, or programming languages like Java.
Various online tools exist to verify a working implementation of HTTP compression. These online tools usually request multiple variants of a URL, each with different request headers (with varying Accept-Encoding content). HTTP compression is considered to be implemented correctly when the server returns a document in a compressed format.[18]By comparing the sizes of the returned documents, the effective compression ratio can be calculated (even between different compression algorithms).
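The size comparison such tools perform can be reproduced offline. Below is a sketch comparing two of the IANA-registered schemes on an invented sample document (Brotli is omitted because it is not in the Python standard library):

```python
import gzip
import zlib

document = (b"<html><body>" + b"<p>Lorem ipsum dolor sit amet.</p>" * 100
            + b"</body></html>")

# Sizes of the same document under different content codings.
sizes = {
    "identity": len(document),
    "gzip": len(gzip.compress(document)),
    "deflate (zlib stream)": len(zlib.compress(document)),
}

for name, size in sizes.items():
    ratio = sizes["identity"] / size
    print(f"{name:>22}: {size:5d} bytes  (ratio {ratio:.1f}x)")
```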
A 2009 article by Google engineers Arvind Jain and Jason Glasgow states that more than 99 person-years are wasted[19]daily due to increase in page load time when users do not receive compressed content. This occurs when anti-virus software interferes with connections to force them to be uncompressed, where proxies are used (with overcautious web browsers), where servers are misconfigured, and where browser bugs stop compression being used. Internet Explorer 6, which drops to HTTP 1.0 (without features like compression or pipelining) when behind a proxy – a common configuration in corporate environments – was the mainstream browser most prone to failing back to uncompressed HTTP.[19]
Another problem found while deploying HTTP compression on large scale is due to thedeflateencoding definition: while HTTP 1.1 defines thedeflateencoding as data compressed with deflate (RFC 1951) inside azlibformatted stream (RFC 1950), Microsoft server and client products historically implemented it as a "raw" deflated stream,[20]making its deployment unreliable.[21][22]For this reason, some software, including the Apache HTTP Server, only implementsgzipencoding.
Compression allows a form ofchosen plaintextattack to be performed: if an attacker can inject any chosen content into the page, they can know whether the page contains their given content by observing the size increase of the encrypted stream. If the increase is smaller than expected for random injections, it means that the compressor has found a repeat in the text, i.e. the injected content overlaps the secret information. This is the idea behind CRIME.
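This size oracle can be demonstrated in a toy form with any DEFLATE implementation. The secret and guesses below are invented, and the sketch only illustrates the principle; a real CRIME attack observes encrypted record lengths on the wire rather than calling a compressor directly:

```python
import zlib

# Secret the attacker wants to recover, embedded in the response body.
secret = b"secret-token=7f3a9c2b8d"

def observed_length(injected: bytes) -> int:
    """Compressed length of a response containing both the secret and
    attacker-controlled content (a stand-in for the on-wire size)."""
    body = b"<html>" + injected + secret + b"</html>"
    return len(zlib.compress(body, 9))

# A guess overlapping the secret compresses better than a wrong one,
# because the compressor replaces the repeat with a back-reference.
good_guess = observed_length(b"secret-token=7f3a")
bad_guess = observed_length(b"secret-token=QRST")

print(good_guess, bad_guess)  # the overlapping guess is not larger
```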
In 2012, a general attack against the use of data compression, calledCRIME, was announced. While the CRIME attack could work effectively against a large number of protocols, including but not limited to TLS, and application-layer protocols such as SPDY or HTTP, only exploits against TLS and SPDY were demonstrated and largely mitigated in browsers and servers. The CRIME exploit against HTTP compression has not been mitigated at all, even though the authors of CRIME have warned that this vulnerability might be even more widespread than SPDY and TLS compression combined.
In 2013, a new instance of the CRIME attack against HTTP compression, dubbed BREACH, was published. A BREACH attack can extract login tokens, email addresses or other sensitive information from TLS encrypted web traffic in as little as 30 seconds (depending on the number of bytes to be extracted), provided the attacker tricks the victim into visiting a malicious web link.[23]All versions of TLS and SSL are at risk from BREACH regardless of the encryption algorithm or cipher used.[24]Unlike previous instances ofCRIME, which can be successfully defended against by turning off TLS compression or SPDY header compression, BREACH exploits HTTP compression which cannot realistically be turned off, as virtually all web servers rely upon it to improve data transmission speeds for users.[23]
As of 2016, the TIME attack and the HEIST attack are now public knowledge.[25][26][27][28]
|
https://en.wikipedia.org/wiki/HTTP_compression
|
Aweb application(orweb app) isapplication softwarethat is created withweb technologiesand runs via aweb browser.[1][2]Web applications emerged during the late 1990s and allowed for the server todynamicallybuild a response to the request, in contrast tostatic web pages.[3]
Web applications are commonly distributed via aweb server. There are several different tier systems that web applications use to communicate between the web browsers, the client interface, and server data. Each system has its own uses as they function in different ways. However, there are many security risks that developers must be aware of during development; proper measures to protect user data are vital.
Web applications are often constructed with the use of aweb application framework.Single-page applications (SPAs)andprogressive web apps (PWAs)are two architectural approaches to creating web applications that provide auser experiencesimilar tonative apps, including features such as smooth navigation, offline support, and faster interactions.
The concept of a "web application" was first introduced in the Java language in the Servlet Specification version 2.2, which was released in 1999. At that time, both JavaScript andXMLhad already been developed, but theXMLHttpRequestobject had only been recently introduced on Internet Explorer 5 as anActiveXobject.[citation needed]Beginning around the early 2000s, applications such as "Myspace(2003),Gmail(2004),Digg(2004), [and]Google Maps(2005)," started to make their client sides more and more interactive. A web page script is able to contact the server for storing/retrieving data without downloading an entire web page. The practice became known as Ajax in 2005.
In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own pre-compiled client program which served as itsuser interfaceand had to be separately installed on each user'spersonal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to thesupportcost and decreasingproductivity. Additionally, both the client and server components of the application were bound tightly to a particularcomputer architectureandoperating system, which madeportingthem to other systems prohibitively expensive for all but the largest applications.
Later, in 1995,Netscapeintroduced theclient-side scriptinglanguage calledJavaScript, which allowed programmers to adddynamic elementsto the user interface that ran on the client side. Essentially, instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such asinput validationor showing/hiding parts of the page.
"Progressive web apps", the term coined by designer Frances Berriman andGoogle Chromeengineer Alex Russell in 2015, refers to apps taking advantage of new features supported by modern browsers, which initially run inside a web browser tab but later can run completely offline and can be launched without entering the app URL in the browser.
Traditional PC applications are typically single-tiered, residing solely on the client machine. In contrast, web applications inherently facilitate a multi-tiered architecture. Though many variations are possible, the most common structure is thethree-tieredapplication. In its most common form, the three tiers are calledpresentation,applicationandstorage. The first tier, presentation, refers to a web browser itself. The second tier refers to any engine using dynamic web content technology (such asASP,CGI,ColdFusion,Dart,JSP/Java,Node.js,PHP,PythonorRuby on Rails). The third tier refers to a database that stores data and determines the structure of a user interface. Essentially, when using the three-tiered system, the web browser sends requests to the engine, which then services them by making queries and updates against the database and generates a user interface.
The 3-tier solution may fall short when dealing with more complex applications, and may need to be replaced with the n-tiered approach; the greatest benefit of which is howbusiness logic(which resides on the application tier) is broken down into a more fine-grained model.[4]Another benefit would be to add an integration tier, which separates the data tier and provides an easy-to-use interface to access the data.[4]For example, the client data would be accessed by calling a "list_clients()" function instead of making anSQLquery directly against the client table on the database. This allows the underlying database to be replaced without making any change to the other tiers.[4]
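A minimal sketch of such an integration-tier accessor, using an in-memory SQLite database and an invented client table:

```python
import sqlite3

# Toy database standing in for the data tier.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO client (name) VALUES (?)",
               [("Acme",), ("Globex",)])

def list_clients():
    """Integration-tier accessor: callers never see the SQL or the
    schema, so the underlying database can be swapped out without
    changing the other tiers."""
    return [row[0]
            for row in db.execute("SELECT name FROM client ORDER BY id")]

print(list_clients())  # ['Acme', 'Globex']
```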
There are some who view a web application as a two-tier architecture. This can be a "smart" client that performs all the work and queries a "dumb" server, or a "dumb" client that relies on a "smart" server.[4]The client would handle the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would be on one of them or on both.[4]While this increases the scalability of the applications and separates the display and the database, it still does not allow for true specialization of layers, so most applications will outgrow this model.[4]
Security breaches in these kinds of applications are a major concern because they can involve both enterprise information and private customer data. Protecting these assets is an important part of any web application, and there are some key operational areas that must be included in the development process.[5]This includes processes for authentication, authorization, asset handling, input, and logging and auditing. Building security into applications from the beginning is sometimes more effective and less disruptive in the long run.
Writing web applications is simplified with the use ofweb application frameworks. These frameworks facilitaterapid application developmentby allowing a development team to focus on the parts of their application which are unique to their goals without having to resolve common development issues such as user management.[6]
In addition, there is potential for the development of applications onInternet operating systems, although currently there are not many viable platforms that fit this model.[citation needed]
|
https://en.wikipedia.org/wiki/Web_application
|
LAMP (Linux, Apache, MySQL, Perl/PHP/Python) is one of the most common software stacks for the web's most popular applications. Its generic software stack model has largely interchangeable components.[1]
Each letter in the acronym stands for one of its fouropen-sourcebuilding blocks:
The components of the LAMP stack are present in thesoftware repositoriesof mostLinux distributions.
The acronym LAMP was coined by Michael Kunze in the December 1998 issue ofComputertechnik, a German computing magazine, as he demonstrated that a bundle offree and open-source software"could be a feasible alternative to expensive commercial packages".[2]Since then,O'Reilly MediaandMySQLteamed up to popularize the acronym and evangelize its use.[3]One of the first open-source software stacks for the web, the term and the concept became popular. The stack is capable of hosting a variety of web frameworks and applications, such asWordPressandDrupal.[4]
The LAMP model has been adapted to other componentry, though typically consisting offree and open-source software. With the growing use of the archetypal LAMP, variations andretronymsappeared for other combinations of operating system, web server, database, and software language. For example, an equivalent installation on theMicrosoft Windowsoperating system family is known asWAMP. An alternative runningIISin place of Apache is calledWIMP. Variants involving other operating systems include DAMP, which uses the Darwin operating system.[5]
The web server or database management system also varies. LEMP is a version where Apache has been replaced with the more lightweight web serverNginx.[6]A version where MySQL has been replaced byPostgreSQLis called LAPP, or sometimes by keeping the original acronym, LAMP (Linux / Apache / Middleware (Perl, PHP, Python, Ruby) / PostgreSQL).[7]
The LAMP bundle can be combined with many other free and open-source software packages, including:
As another example, the software whichWikipediaand otherWikimedia Foundationprojects use for theirunderlying infrastructureis a customized LAMP stack with additions such asLinux Virtual Server(LVS) forload balancingandCephandSwiftfor distributed object storages.[citation needed]
Linux is aUnix-likecomputeroperating systemassembled under the model offree and open-source softwaredevelopment and distribution. MostLinux distributions, as collections of software based around theLinux kerneland often around apackage management system, provide complete LAMP setups through their packages. According to W3Techs in October 2013, 58.5% of web server market share is shared betweenDebianandUbuntu, whileRHEL,FedoraandCentOStogether shared 37.3%.[8]
The role of LAMP's web server has been traditionally supplied by Apache, and has since included other web servers such asNginx.
Apache is developed and maintained by an open community of developers under the auspices of theApache Software Foundation. Released under theApache License, Apache isopen-source software. A wide variety of features are supported, and many of them are implemented ascompiledmoduleswhich extend the core functionality of Apache. These can range from server-side programming language support to authentication.
MySQL's original role as the LAMP'srelational database management systemhas since been alternately provisioned by others likePostgreSQL, MariaDB (a community-developedforkof MySQL developed by its original developers), and evenNoSQLdatabases likeMongoDB.
MySQL is amultithreaded,multi-user,SQLdatabase management system,[9]acquired bySun Microsystemsin 2008, which was then acquired byOracle Corporationin 2010.[10]Since its early years, the MySQL team has made itssource codeavailable under the terms of theGNU General Public License, as well as under a variety ofproprietaryagreements.
PostgreSQLis also anACID-compliantobject-relational databasemanagement system developed by PostgreSQL Global Development Group.
MongoDB is aNoSQLdatabase that eschews the traditionalrelational databasestructure in favor ofJSON-like documents with dynamic schemas (calling the formatBSON), making the integration of data in certain types of applications easier and faster.
PHP's role as the LAMP's application programming language has also been performed by other languages such as Perl and Python.
PHP is aserver-side scriptinglanguage designed forweb developmentbut also used as ageneral-purpose programming language. PHP code isinterpretedby a web server via a PHP processor module, which generates the resulting web page. PHP commands can optionally be embedded directly into anHTMLsource document rather than calling an external file to process data. It has also evolved to include acommand-line interfacecapability and can be used in standalonegraphical applications.[11]PHP isfree softwarereleased under the terms ofPHP License, which is incompatible with theGNU General Public License(GPL) due to the restrictions PHP License places on the usage of the termPHP.[12]
Perlis a family ofhigh-level, general-purpose, interpreted,dynamic programming languages. The languages in this family include Perl 5 andRaku.[13]They provide advanced text processing facilities without the arbitrary data-length limits of many contemporaryUnix command line tools,[14]facilitating manipulation oftext files. Perl 5 gained widespread popularity in the late 1990s as aCGI scriptinglanguage for the Web, in part due to itsparsingabilities.[15]
Pythonis a widely used general-purpose, high-level,interpreted, programming language.[16]Python supports multipleprogramming paradigms, includingobject-oriented,imperative,functionalandproceduralparadigms. It features adynamic typesystem, automaticmemory management, astandard library, and strict use ofwhitespace.[17]Like otherdynamic languages, Python is often used as ascripting language, but is also used in a wide range of non-scripting contexts.
Specific approaches are required for websites that serve large numbers of requests, or provide services that demand highuptime. High-availability approaches for the LAMP stack may involve multiple web and database servers, combined with additional components that perform logical aggregation of resources provided by each of the servers, as well as distribution of the workload across multiple servers. The aggregation of web servers may be provided by placing a load balancer in front of them, for example by usingLinux Virtual Server(LVS). For the aggregation of database servers, MySQL provides internal replication mechanisms that implement amaster/slaverelationship between the original database (master) and its copies (slaves).[18]
Such high-availability setups may also improve theavailabilityof LAMP instances by providing various forms ofredundancy, making it possible for a certain number of components (separate servers) to experiencedowntimewithout interrupting the users of services provided as a whole. Such redundant setups may also handle hardware failures resulting indata losson individual servers in a way that prevents collectively stored data from actually becoming lost. Beside higher availability, such LAMP setups are capable of providing almost linear improvements in performance for services having the number of internal database read operations much higher than the number of write/update operations.[18]
|
https://en.wikipedia.org/wiki/List_of_AMP_packages
|
Variant objects, in the context of HTTP, are objects served by an origin content server in one of several forms of transmitted-data variation (i.e. uncompressed, compressed, different languages, etc.).
HTTP/1.1 (1997–1999)[1][2] introduces Content/Accept headers. These are used in HTTP requests and responses to state which variant the data is presented in.[citation needed]
Client:
Server:
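A typical exchange of these headers might look like the following (the values are illustrative):

```
Client:  GET /index.html HTTP/1.1
         Host: www.example.com
         Accept-Encoding: gzip, deflate

Server:  HTTP/1.1 200 OK
         Content-Type: text/html
         Content-Encoding: gzip
```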
Thisnetwork-relatedsoftwarearticle is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Variant_object
|
Virtual hostingis a method for hosting multipledomain names(with separate handling of each name) on a singleserver(or pool of servers).[1]This allows one server to share its resources, such as memory and processor cycles, without requiring all services provided to use the same host name. The term virtual hosting is usually used in reference toweb serversbut the principles do carry over to otherInternetservices.
One widely used application isshared web hosting. The price for shared web hosting is lower than for a dedicatedweb serverbecause many customers can be hosted on a single server. It is also very common for a single entity to want to use multiple names on the same machine so that the names can reflect services offered rather than where those services happen to be hosted.
There are two main types of virtual hosting, name-based and IP-based. Name-based virtual hosting uses the host name presented by the client. This saves IP addresses and the associated administrative overhead but the protocol being served must supply the host name at an appropriate point. In particular, there are significant difficulties using name-based virtual hosting withSSL/TLS. IP-based virtual hosting uses a separateIP addressfor each host name, and it can be performed with any protocol but requires a dedicated IP address per domain name served. Port-based virtual hosting is also possible in principle but is rarely used in practice because it is unfriendly to users.
Name-based and IP-based virtual hosting can be combined: a server may have multiple IP addresses and serve multiple names on some or all of those IP addresses. This technique can be useful when using SSL/TLS with wildcard certificates. For example, if a server operator had two certificates, one for *.example.com and one for *.example.net, the operator could serve foo.example.com and bar.example.com off the same IP address but would need a separate IP address for baz.example.net.
Name-based virtual hosts use multiple host names for the same IP address.
A technical prerequisite for name-based virtual hosts is a web browser with HTTP/1.1 support (commonplace today) that includes the target host name in the request. This allows a server hosting multiple sites behind one IP address to deliver the correct site's content. More specifically, it means setting the Host HTTP header, which is mandatory in HTTP/1.1.[2]
For instance, a server could be receiving requests for two domains, www.example.com and www.example.net, both of which resolve to the same IP address. For www.example.com, the server would send the HTML file from the directory /var/www/user/Joe/site/, while requests for www.example.net would make the server serve pages from /var/www/user/Mary/site/. Equally, two subdomains of the same domain may be hosted together; for instance, a blog server may host both blog1.example.com and blog2.example.com.
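The Host-header dispatch described above can be sketched as a simple lookup. A minimal sketch, reusing the host names and directories from the example (the default document root is an assumption, not from the text):

```python
# Map each virtual host name to its document root
# (hosts and paths follow the example above).
DOC_ROOTS = {
    "www.example.com": "/var/www/user/Joe/site/",
    "www.example.net": "/var/www/user/Mary/site/",
}

def document_root(host_header, default="/var/www/default/"):
    """Choose a document root from the HTTP/1.1 Host header.
    Any :port suffix is stripped and the name lowercased before lookup;
    unknown names fall through to a default site (an assumption)."""
    host = host_header.split(":")[0].lower()
    return DOC_ROOTS.get(host, default)
```

Note that a request carrying a bare IP address in the Host header falls through to the default site, which matches the fallback behavior described below for DNS failures.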
The biggest issue with name-based virtual hosting is that it is difficult to host multiple secure websites running SSL/TLS. Because the SSL/TLS handshake takes place before the expected host name is sent to the server, the server does not know which certificate to present in the handshake. It is possible for a single certificate to cover multiple names, either through the subjectAltName field or through wildcards, but the practical application of this approach is limited by administrative considerations and by the matching rules for wildcards. There is an extension to TLS called Server Name Indication (SNI) that presents the name at the start of the handshake to circumvent this issue, except for some older clients (in particular Internet Explorer on Windows XP, or older Android versions) that do not implement SNI.
Furthermore, if the Domain Name System (DNS) is not functioning properly, it is difficult to access a virtually hosted website even if the IP address is known. If the user tries to fall back to using the IP address to contact the system, as in http://10.23.45.67/, the web browser will send the IP address as the host name. Since the web server relies on the web browser client telling it what server name (vhost) to use, the server will respond with a default website, often not the site the user expects.
A workaround in this case is to add the IP address and host name to the client system's hosts file. Accessing the server with the domain name should work again. Users should be careful when doing this, however, as any changes to the true mapping between host name and IP address will be overridden by the local setting. This workaround is not really useful for an average web user, but may be of some use to a site administrator while fixing DNS records.
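The workaround might look like the following hosts-file entry, reusing the IP address and host name from the example above:

```
# /etc/hosts (on Windows: C:\Windows\System32\drivers\etc\hosts)
10.23.45.67    www.example.com
```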
When IP-based virtual hosting is used, each site (either a DNS host name or a group of DNS host names that act the same) points to a unique IP address. The web server is configured with multiple physical network interfaces, virtual network interfaces on the same physical interface, or multiple IP addresses on one interface.
The web server can either open separate listening sockets for each IP address, or it can listen on all interfaces with a single socket and obtain the IP address the TCP connection was received on after accepting the connection. Either way, it can use the IP address to determine which website to serve. The client is not involved in this process, and therefore (unlike with name-based virtual hosting) there are no compatibility issues.
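The single-socket variant described above can be sketched in a few lines: after accepting a connection, the server asks the socket which local address it arrived on and dispatches on that. The IP-to-site mapping here is hypothetical, as is the default site:

```python
import socket

# Hypothetical mapping from local IP address to the site it serves.
SITES_BY_IP = {
    "192.0.2.10": "site-a",
    "192.0.2.11": "site-b",
}

def site_for(conn, default="default-site"):
    """Select a site from the local address the TCP connection was
    accepted on, as reported by the socket's getsockname()."""
    local_ip = conn.getsockname()[0]
    return SITES_BY_IP.get(local_ip, default)
```

In a real server the dispatch would pick a document root or configuration block rather than a label; the lookup itself is the whole trick.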
The downside of this approach is that the server needs a different IP address for every web site. This increases administrative overhead (both assigning addresses to servers and justifying the use of those addresses to Internet registries) and contributes to IPv4 address exhaustion.
Virtual web hosting is often used on a large scale by companies whose business model is to provide low-cost website hosting for customers. The vast majority of web hosting service customer websites worldwide are hosted on shared servers, using virtual hosting technology.
Many companies use virtual servers for internal purposes, where there is a technological or administrative reason to operate several separate websites, such as a customer extranet, an employee extranet, an internal intranet, and intranets for different departments. If there are no security concerns in the website architectures, they can be merged into a single server using virtual hosting technology, which reduces management and administrative overhead and the number of separate servers required to support the business.
|
https://en.wikipedia.org/wiki/Virtual_hosting
|
A web hosting service is a type of Internet hosting service that hosts websites for clients, i.e. it offers the facilities required for them to create and maintain a site and makes it accessible on the World Wide Web. Companies providing web hosting services are sometimes called web hosts.
Typically, web hosting requires the following:
Until 1991, the Internet was restricted to use only "... for research and education in the sciences and engineering ..."[1][2] and was used for email, telnet, FTP and USENET traffic, but only a tiny number of web pages. The World Wide Web protocols had only just been written,[3] and not until the end of 1993 would there be a graphical web browser for Mac or Windows computers.[4] Even after there was some opening up of Internet access, the situation remained confused until 1995.[5]
To host a website on the Internet, an individual or company would need their own computer or server.[2] As not all companies had the budget or expertise to do this, web hosting services began to offer to host users' websites on their own servers, without the client needing to own the necessary infrastructure required to operate the website. The owners of the websites, also called webmasters, could create a website that would be hosted on the web hosting service's server and published to the web by the web hosting service.
As the number of users on the World Wide Web grew, so did the pressure on companies, both large and small, to have an online presence. By 1995, companies such as GeoCities, Angelfire and Tripod were offering free hosting.[6]
Static web page files can be uploaded via the File Transfer Protocol (FTP) or a web interface. The files are usually delivered to the Web "as is" or with minimal processing. Many Internet service providers (ISPs) offer this service free to subscribers. Individuals and organizations may also obtain web page hosting from alternative service providers.
Free web hosting is offered by various companies; it is sometimes supported by advertisements and is often limited in features when compared to paid hosting.
Single-page hosting is generally sufficient for personal web pages. Personal website hosting is typically free, advertisement-sponsored, or inexpensive. Business website hosting often has a higher cost depending upon the size and type of the site.
Commercial services that provide static page hosting include GitHub Pages, where the website version control is tracked using Git.
A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. ASP.NET, ColdFusion, Java EE, Perl/Plack, PHP or Ruby on Rails). These facilities allow customers to write or install scripts for applications like forums and content management. Web hosting packages often include a web content management system, so the end user does not have to worry about the more technical aspects. Secure Sockets Layer (SSL) is used for websites that wish to encrypt the transmitted data.
Internet hosting services can run web servers. The scope of web hosting services varies greatly.
Some specific types of hosting provided by web host service providers:
The host may also provide an interface or control panel for managing the web server and installing scripts, as well as other modules and service applications like e-mail. A web server that does not use a control panel for managing the hosting account is often referred to as a "headless" server. Some hosts specialize in certain software or services (e.g. e-commerce, blogs, etc.).
The availability of a website is measured by the percentage of a year in which the website is publicly accessible and reachable via the Internet. This is different from measuring the uptime of a system: uptime refers to the system itself being online, and does not account for whether the system can actually be reached, as in the event of a network outage. A hosting provider's Service Level Agreement (SLA) may include a certain amount of scheduled downtime per year in order to perform maintenance on the systems. This scheduled downtime is often excluded from the SLA timeframe and needs to be subtracted from the total time when availability is calculated. Depending on the wording of an SLA, if the availability of a system drops below that in the signed SLA, a hosting provider often will provide a partial refund for time lost. How downtime is determined varies from provider to provider, so reading the SLA is imperative.[10] Not all providers release uptime statistics.
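The calculation described above, with scheduled maintenance subtracted from the measurement window, can be sketched as follows; the figures in the comment are purely illustrative:

```python
def availability_percent(total_hours, unscheduled_downtime_hours,
                         scheduled_downtime_hours=0.0):
    """Availability over a period as a percentage, with scheduled
    maintenance excluded from the window, as many SLAs specify."""
    window = total_hours - scheduled_downtime_hours
    uptime = window - unscheduled_downtime_hours
    return 100.0 * uptime / window

# Illustrative: a year (8760 h) with 10 h of scheduled maintenance
# and 8.75 h of unscheduled downtime gives roughly 99.9% availability.
```

Whether scheduled downtime really is excluded, and how downtime is even measured, depends entirely on the SLA wording, which is why the text recommends reading it.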
Because web hosting services host websites belonging to their customers, online security is an important concern. When a customer agrees to use a web hosting service, they are relinquishing control of the security of their site to the company that is hosting the site. The level of security that a web hosting service offers is extremely important to a prospective customer and can be a major factor when considering which provider to choose.[11]
Web hosting servers can be attacked by malicious users in different ways, including uploading malware or malicious code onto a hosted website. These attacks may be carried out for different reasons, including stealing credit card data, launching a distributed denial-of-service (DDoS) attack or spamming.[12]
|
https://en.wikipedia.org/wiki/Web_hosting_service
|
A web container (also known as a servlet container;[1] compare "webcontainer"[2]) is the component of a web server that interacts with Jakarta Servlets. A web container is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet and ensuring that the URL requester has the correct access rights. A web container handles requests to servlets, Jakarta Server Pages (JSP) files, and other types of files that include server-side code. The web container creates servlet instances, loads and unloads servlets, creates and manages request and response objects, and performs other servlet-management tasks. A web container implements the web component contract of the Jakarta EE architecture, which specifies a runtime environment for additional web components, including security, concurrency, lifecycle management, transaction handling, deployment, and other services.
The following is a list of notable applications which implement the Jakarta Servlet specification from the Eclipse Foundation, divided depending on whether they are directly sold or not.
|
https://en.wikipedia.org/wiki/Web_container
|
In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource.[1] It improves privacy, security, and possibly performance in the process.
Instead of connecting directly to a server that can fulfill a request for a resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems.[2] A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server.
A proxy server may reside on the user's local computer, or at any point between the user's computer and destination servers on the Internet. A proxy server that passes unmodified requests and responses is usually called a gateway or sometimes a tunneling proxy. A forward proxy is an Internet-facing proxy used to retrieve data from a wide range of sources (in most cases, anywhere on the Internet). A reverse proxy is usually an internal-facing proxy used as a front end to control and protect access to a server on a private network. A reverse proxy commonly also performs tasks such as load balancing, authentication, decryption, and caching.[3]
An open proxy is a forwarding proxy server that is accessible by any Internet user. In 2008, network security expert Gordon Lyon estimated that "hundreds of thousands" of open proxies are operated on the Internet.[4]
A reverse proxy (or surrogate) is a proxy server that appears to clients to be an ordinary server. Reverse proxies send requests to one or more ordinary servers that handle the request. The response from the original server is returned as if it came directly from the proxy server, leaving the client with no knowledge of the original server.[5] Reverse proxies are installed in the vicinity of one or more web servers. All traffic coming from the Internet with a destination of one of the neighborhood's web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy", since the reverse proxy sits closer to the web server and serves only a restricted set of websites. There are several reasons for installing reverse proxy servers, such as the load-balancing, authentication, decryption, and caching roles mentioned above.
A forward proxy is a server that routes traffic between clients and another system, which is in most cases external to the network. This means it can regulate traffic according to preset policies, convert and mask client IP addresses, enforce security protocols and block unknown traffic. A forward proxy enhances security and policy enforcement within an internal network.[6] A reverse proxy, instead of protecting the client, is used to protect the servers. A reverse proxy accepts a request from a client, forwards that request to one of many other servers, and then returns the result from the server that actually processed the request to the client. Effectively, a reverse proxy acts as a gateway between clients and application servers, handling all the traffic routing whilst also protecting the identity of the server that physically processes the request.[7]
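One of the load-balancing roles mentioned above can be sketched as a round-robin backend selector; the backend addresses are hypothetical, and a real reverse proxy would combine this with health checks and connection handling:

```python
import itertools

class RoundRobinRouter:
    """Cycle through a fixed list of origin servers, as a reverse
    proxy might when spreading requests over several backends.
    A minimal sketch: no health checks, no weighting."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        """Return the backend that should receive the next request."""
        return next(self._cycle)
```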
A content-filtering web proxy server provides administrative control over the content that may be relayed in one or both directions through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to an acceptable use policy.
Content-filtering proxy servers often support user authentication to control web access. They also usually produce logs, either to give detailed information about the URLs accessed by specific users or to monitor bandwidth usage statistics. They may also communicate with daemon-based or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network.
Many workplaces, schools, and colleges restrict the web sites and online services that are accessible and available in their buildings. Governments also censor undesirable content. This is done either with a specialized proxy, called a content filter (both commercial and free products are available), or by using a cache-extension protocol such as ICAP, which allows plug-in extensions to an open caching architecture.
Websites commonly used by students to circumvent filters and access blocked content often include a proxy, from which the user can then access the websites that the filter is trying to block.
Requests may be filtered by several methods, such as URL or DNS blacklists, URL regex filtering, MIME filtering, or content keyword filtering. Blacklists are often provided and maintained by web-filtering companies, often grouped into categories (pornography, gambling, shopping, social networks, etc.).
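Two of the filtering methods just listed, a domain blacklist and URL regex filtering, can be sketched together; the blocked hosts and pattern here are hypothetical policy entries:

```python
import re
from urllib.parse import urlparse

# Hypothetical policy: a host blacklist plus one URL regex rule.
BLOCKED_HOSTS = {"ads.example", "casino.example"}
BLOCKED_PATTERNS = [re.compile(r"/gambling/", re.IGNORECASE)]

def request_allowed(url):
    """Return False if the URL's host is blacklisted or its path
    matches any blocked pattern; otherwise allow the request."""
    parts = urlparse(url)
    if (parts.hostname or "") in BLOCKED_HOSTS:
        return False
    return not any(p.search(parts.path) for p in BLOCKED_PATTERNS)
```

Real deployments layer many more checks (MIME types, keywords, category databases), but each reduces to a predicate applied before the proxy fetches the content.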
The proxy then fetches the content, assuming the requested URL is acceptable. At this point, a dynamic filter may be applied on the return path. For example, JPEG files could be blocked based on flesh-tone matches, or language filters could dynamically detect unwanted language. If the content is rejected, an HTTP fetch error may be returned to the requester.
Most web filtering companies use an internet-wide crawling robot that assesses the likelihood that content is a certain type. Manual labor is used to correct the resultant database based on complaints or known flaws in the content-matching algorithms.[8]
Some proxies scan outbound content, e.g. for data loss prevention, or scan content for malicious software.
Web filtering proxies are not able to peer inside secure-sockets HTTP transactions, assuming the chain of trust of SSL/TLS (Transport Layer Security) has not been tampered with. The SSL/TLS chain of trust relies on trusted root certificate authorities.
In a workplace setting where the client is managed by the organization, devices may be configured to trust a root certificate whose private key is known to the proxy. In such situations, proxy analysis of the contents of an SSL/TLS transaction becomes possible. The proxy is effectively operating a man-in-the-middle attack, allowed by the client's trust of a root certificate the proxy owns.
If the destination server filters content based on the origin of the request, the use of a proxy can circumvent this filter. For example, a server using IP-based geolocation to restrict its service to a certain country can be accessed through a proxy located in that country.[9]: 3
Web proxies are the most common means of bypassing government censorship, although no more than 3% of Internet users use any circumvention tools.[9]: 7
Some proxy service providers allow businesses access to their proxy network for rerouting traffic for business intelligence purposes.[10]
In some cases, users can circumvent proxies that filter using blacklists by using services designed to proxy information from a non-blacklisted location.[11]
Proxies can be installed in order to eavesdrop upon the data flow between client machines and the web. All content sent or accessed, including passwords submitted and cookies used, can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL.
By chaining proxies that do not reveal data about the original requester, it is possible to obfuscate activities from the eyes of the user's destination. However, more traces will be left on the intermediate hops, which could be used or offered up to trace the user's activities. If the policies and administrators of these other proxies are unknown, the user may fall victim to a false sense of security just because those details are out of sight and mind.
In what is more of an inconvenience than a risk, proxy users may find themselves being blocked from certain web sites, as numerous forums and web sites block IP addresses from proxies known to have spammed or trolled the site. Proxy bouncing can be used to maintain privacy.
A caching proxy server accelerates service requests by retrieving the content saved from a previous request made by the same client or even other clients.[12] Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and costs, while significantly increasing performance. Most ISPs and large businesses have a caching proxy. Caching proxies were the first kind of proxy server. Web proxies are commonly used to cache web pages from a web server.[13] Poorly implemented caching proxies can cause problems, such as an inability to use user authentication.[14]
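The core of a caching proxy is a cache consulted before the origin server. A toy sketch, deliberately ignoring the hard parts (expiry, Vary headers, cacheability rules); `origin_fetch` is an injected callable, an assumption that keeps the sketch free of network access:

```python
class CachingProxy:
    """Return a cached copy when one exists; otherwise fetch from
    the origin via the supplied callable and store the result."""
    def __init__(self, origin_fetch):
        self._origin = origin_fetch
        self._cache = {}
        self.hits = 0

    def get(self, url):
        if url in self._cache:
            self.hits += 1          # served locally: no upstream traffic
            return self._cache[url]
        body = self._origin(url)    # cache miss: go to the origin server
        self._cache[url] = body
        return body
```

The bandwidth saving the text describes is exactly the fraction of requests answered from `_cache` instead of the upstream link.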
A proxy that is designed to mitigate specific link-related issues or degradation is a performance-enhancing proxy (PEP). These are typically used to improve TCP performance in the presence of high round-trip times or high packet loss (such as wireless or mobile phone networks), or on highly asymmetric links featuring very different upload and download rates. PEPs can make more efficient use of the network, for example, by merging TCP ACKs (acknowledgements) or compressing data sent at the application layer.[15]
A translation proxy is a proxy server that is used to localize a website experience for different markets. Traffic from the global audience is routed through the translation proxy to the source website. As visitors browse the proxied site, requests go back to the source site where pages are rendered. The original language content in the response is replaced by the translated content as it passes back through the proxy. The translations used in a translation proxy can be either machine translation, human translation, or a combination of machine and human translation. Different translation proxy implementations have different capabilities. Some allow further customization of the source site for the local audiences such as excluding the source content or substituting the source content with the original local content.
An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. Anonymizers may be differentiated into several varieties. The destination server (the server that ultimately satisfies the web request) receives requests from the anonymizing proxy server and thus does not receive information about the end user's address. The requests are not anonymous to the anonymizing proxy server, however, and so a degree of trust is present between the proxy server and the user. Many proxy servers are funded through a continued advertising link to the user.
Access control: some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage by individuals. Some anonymizing proxy servers may forward data packets with header lines such as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the IP address of the client. Other anonymizing proxy servers, known as elite or high-anonymity proxies, make it appear that the proxy server is the client. A website could still suspect a proxy is being used if the client sends packets that include a cookie from a previous visit that did not use the high-anonymity proxy server. Clearing cookies, and possibly the cache, would solve this problem.
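The distinction drawn above between proxies that reveal the client, proxies that only reveal themselves, and elite proxies can be sketched as a classifier over the headers the proxy adds; the three-level taxonomy is the common informal one, and the header names follow the paragraph above:

```python
# Header names (normalized to underscores) from the text above.
REVEALS_CLIENT = {"X_FORWARDED_FOR", "HTTP_X_FORWARDED_FOR"}
REVEALS_PROXY = {"VIA", "HTTP_VIA", "FORWARDED", "HTTP_FORWARDED"}

def anonymity_level(headers):
    """Rough classification of a proxied request by which
    identifying headers the proxy has added to it."""
    names = {h.upper().replace("-", "_") for h in headers}
    if names & REVEALS_CLIENT:
        return "transparent"   # client IP exposed downstream
    if names & REVEALS_PROXY:
        return "anonymous"     # proxy visible, client hidden
    return "elite"             # indistinguishable from a direct client
```

As the text notes, even an "elite" result can be undone by side channels such as a cookie carried over from an unproxied visit.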
Advertisers use proxy servers for validating, checking and quality assurance of geotargeted ads. A geotargeting ad server checks the request source IP address and uses a geo-IP database to determine the geographic source of requests.[16] Using a proxy server that is physically located inside a specific country or city gives advertisers the ability to test geotargeted ads.
A proxy can keep the internal network structure of a company secret by using network address translation, which can help the security of the internal network.[17] This makes requests from machines and users on the local network anonymous. Proxies can also be combined with firewalls.
An incorrectly configured proxy can provide access to a network otherwise isolated from the Internet.[4]
Proxies allow web sites to make web requests to externally hosted resources (e.g. images, music files, etc.) when cross-domain restrictions prohibit the web site from linking directly to the outside domains. Proxies also allow the browser to make web requests to externally hosted content on behalf of a website when cross-domain restrictions (in place to protect websites from the likes of data theft) prohibit the browser from directly accessing the outside domains.
Secondary market brokers use web proxy servers to circumvent restrictions on online purchases of limited products such as limited sneakers[18]or tickets.
Web proxies forward HTTP requests. The request from the client is the same as a regular HTTP request except that the full URL is passed, instead of just the path.[19]
This request is sent to the proxy server; the proxy makes the request specified and returns the response.
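The difference can be illustrated by parsing a proxy-style request line, in which the target is an absolute URL rather than a bare path; a minimal sketch using the standard library:

```python
from urllib.parse import urlsplit

def parse_proxy_request_line(line):
    """Split a proxy-style request line such as
    'GET http://example.com/index.html HTTP/1.1' into
    (method, host, path, version). An origin-form request
    would carry only '/index.html' as its target."""
    method, target, version = line.split()
    parts = urlsplit(target)
    path = parts.path or "/"   # an empty path means the site root
    return method, parts.hostname, path, version
```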
Some web proxies allow the HTTP CONNECT method to set up forwarding of arbitrary data through the connection; a common policy is to only forward port 443 to allow HTTPS traffic.
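The port-443-only policy just mentioned amounts to a small check on the CONNECT target before any tunnel is opened; a sketch of that check, assuming the 'host:port' target form used by CONNECT:

```python
def connect_allowed(target, allowed_ports=frozenset({443})):
    """Policy check for an HTTP CONNECT target of the form
    'host:port'; by default only port 443 (HTTPS) is forwarded."""
    host, sep, port = target.rpartition(":")
    if not sep or not host or not port.isdigit():
        return False           # malformed target: refuse the tunnel
    return int(port) in allowed_ports
```

Restricting CONNECT this way prevents the proxy from being abused as a general-purpose TCP relay (e.g. for SMTP spam on port 25).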
Examples of web proxy servers include Apache (with mod_proxy or Traffic Server), HAProxy, IIS configured as proxy (e.g., with Application Request Routing), Nginx, Privoxy, Squid, Varnish (reverse proxy only), WinGate, Ziproxy, Tinyproxy, RabbIT and Polipo.
For clients, the problem of complex or multiple proxy servers is solved by a client-server Proxy auto-config protocol (PAC file).
SOCKS also forwards arbitrary data after a connection phase, and is similar to HTTP CONNECT in web proxies.
Also known as an intercepting proxy, inline proxy, or forced proxy, a transparent proxy intercepts normal application layer communication without requiring any special client configuration. Clients need not be aware of the existence of the proxy. A transparent proxy is normally located between the client and the Internet, with the proxy performing some of the functions of a gateway or router.[20]
RFC 2616 (Hypertext Transfer Protocol – HTTP/1.1) offers standard definitions:
"A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification". "A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering".
TCP Intercept is a traffic-filtering security feature that protects TCP servers from TCP SYN flood attacks, which are a type of denial-of-service attack. TCP Intercept is available for IP traffic only.
In 2009 a security flaw in the way that transparent proxies operate was published by Robert Auger,[21]and the Computer Emergency Response Team issued an advisory listing dozens of affected transparent and intercepting proxy servers.[22]
Intercepting proxies are commonly used in businesses to enforce acceptable use policies and to ease administrative overhead, since no client browser configuration is required. This second reason, however, is mitigated by features such as Active Directory group policy, or DHCP and automatic proxy detection.
Intercepting proxies are also commonly used by ISPs in some countries to save upstream bandwidth and improve customer response times by caching. This is more common in countries where bandwidth is more limited (e.g. island nations) or must be paid for.
The diversion or interception of a TCP connection creates several issues. First, the original destination IP and port must somehow be communicated to the proxy. This is not always possible (e.g., where the gateway and proxy reside on different hosts). There is a class of cross-site attacks that depend on certain behaviors of intercepting proxies that do not check or have access to information about the original (intercepted) destination. This problem may be resolved by using an integrated packet-level and application-level appliance or software which is then able to communicate this information between the packet handler and the proxy.
Intercepting also creates problems for HTTP authentication, especially connection-oriented authentication such as NTLM, as the client browser believes it is talking to a server rather than a proxy. This can cause problems where an intercepting proxy requires authentication, and the user then connects to a site that also requires authentication.
Finally, intercepting connections can cause problems for HTTP caches, as some requests and responses become uncacheable by a shared cache.
In integrated firewall/proxy servers where the router/firewall is on the same host as the proxy, communicating original destination information can be done by any method, for example Microsoft TMG or WinGate.
Interception can also be performed using Cisco's WCCP (Web Cache Communication Protocol). This proprietary protocol resides on the router and is configured from the cache, allowing the cache to determine which ports and traffic are sent to it via transparent redirection from the router. This redirection can occur in one of two ways: GRE tunneling (OSI layer 3) or MAC rewrites (OSI layer 2).
Once traffic reaches the proxy machine itself, interception is commonly performed with NAT (Network Address Translation). Such setups are invisible to the client browser, but leave the proxy visible to the web server and other devices on the internet side of the proxy. Recent Linux and some BSD releases provide TPROXY (transparent proxy) which performs IP-level (OSI Layer 3) transparent interception and spoofing of outbound traffic, hiding the proxy IP address from other network devices.
Several methods may be used to detect the presence of an intercepting proxy server, such as comparing the client's external IP address with the address seen by an external web server, or examining the HTTP headers received by a server.
A CGI web proxy accepts target URLs using a web form in the user's browser window, processes the request, and returns the results to the user's browser. Consequently, it can be used on a device or network that does not allow "true" proxy settings to be changed. The first recorded CGI proxy, named "rover" at the time but renamed in 1998 to "CGIProxy",[25] was developed by American computer scientist James Marshall in early 1996 for an article in "Unix Review" by Rich Morin.[26]
The majority of CGI proxies are powered by one of CGIProxy (written in the Perl language), Glype (written in the PHP language), or PHProxy (written in the PHP language). As of April 2016, CGIProxy has received about two million downloads, Glype has received almost a million downloads,[27] whilst PHProxy still receives hundreds of downloads per week.[28] Despite waning in popularity[29] due to VPNs and other privacy methods, as of September 2021 there were still a few hundred CGI proxies online.[30]
Some CGI proxies were set up for purposes such as making websites more accessible to disabled people, but have since been shut down due to excessive traffic, usually caused by a third party advertising the service as a means to bypass local filtering. Since many of these users do not care about the collateral damage they are causing, it became necessary for organizations to hide their proxies, disclosing the URLs only to those who take the trouble to contact the organization and demonstrate a genuine need.[31]
A suffix proxy allows a user to access web content by appending the name of the proxy server to the URL of the requested content (e.g. "en.wikipedia.org.SuffixProxy.com"). Suffix proxy servers are easier to use than regular proxy servers, but they do not offer high levels of anonymity, and their primary use is for bypassing web filters. However, this is rarely used due to more advanced web filters.
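Recovering the real target from a suffix-proxied host name is a matter of stripping the appended proxy domain; a sketch, with the proxy domain as a placeholder rather than a real service:

```python
def suffix_proxy_target(request_host, proxy_domain="suffixproxy.example"):
    """Recover the real target host from a suffix-proxied host name,
    e.g. 'en.wikipedia.org.suffixproxy.example' -> 'en.wikipedia.org'.
    Returns None if the host does not carry the proxy suffix."""
    suffix = "." + proxy_domain.lower()
    host = request_host.lower()
    if host.endswith(suffix):
        return host[: -len(suffix)]
    return None
```

This same transparency of the scheme (the target is visible inside the host name) is why suffix proxies are easy for web filters to detect.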
Tor is a system intended to provide online anonymity.[32] Tor client software routes Internet traffic through a worldwide volunteer network of servers to conceal a user's computer location or usage from someone conducting network surveillance or traffic analysis. Using Tor makes tracing Internet activity more difficult,[32] and is intended to protect users' personal freedom and their online privacy.
"Onion routing" refers to the layered nature of the encryption service: The original data are encrypted and re-encrypted multiple times, then sent through successive Tor relays, each one of which decrypts a "layer" of encryption before passing the data on to the next relay and ultimately the destination. This reduces the possibility of the original data being unscrambled or understood in transit.[33]
The I2P anonymous network ('I2P') is a proxy network aiming at online anonymity. It implements garlic routing, which is an enhancement of Tor's onion routing. I2P is fully distributed and works by encrypting all communications in various layers and relaying them through a network of routers run by volunteers in various locations. By keeping the source of the information hidden, I2P offers censorship resistance. The goals of I2P are to protect users' personal freedom, privacy, and ability to conduct confidential business.
Each user of I2P runs an I2P router on their computer (node). The I2P router takes care of finding other peers and building anonymizing tunnels through them. I2P provides proxies for all protocols (HTTP, IRC, SOCKS, ...).
The proxy concept refers to a layer-7 application in the OSI reference model. Network address translation (NAT) is similar to a proxy but operates in layer 3.
In the client configuration of layer-3 NAT, configuring the gateway is sufficient. However, for the client configuration of a layer-7 proxy, the destination of the packets that the client generates must always be the proxy server (layer 7); the proxy server then reads each packet and finds out the true destination.
Because NAT operates at layer 3, it is less resource-intensive than the layer-7 proxy, but also less flexible. Comparing these two technologies introduces the term 'transparent firewall'. A transparent firewall uses the layer-7 proxy's advantages without the knowledge of the client. The client presumes that the gateway is a NAT in layer 3 and has no idea about the inside of the packet, but through this method the layer-3 packets are sent to the layer-7 proxy for investigation.[citation needed]
ADNSproxy server takes DNS queries from a (usually local) network and forwards them to an Internet Domain Name Server. It may also cache DNS records.
Some client programs "SOCKS-ify" requests,[34] which allows adaptation of any networked software to connect to external networks via certain types of proxy servers (mostly SOCKS).
A residential proxy is an intermediary that uses a real IP address provided by an Internet Service Provider (ISP) with physical devices such as mobile phones and computers of end-users. Instead of connecting directly to a server, residential proxy users connect to the target through residential IP addresses. The target then identifies them as organic internet users, and tracking tools cannot identify the real location of the user.[35] Any residential proxy can send any number of concurrent requests, and IP addresses are directly related to a specific region.[36] Unlike regular residential proxies, which hide the user's real IP address behind another IP address, rotating residential proxies, also known as backconnect proxies, conceal the user's real IP address behind a pool of proxies. These proxies switch between themselves at every session or at regular intervals.[37]
Despite providers' assertions that the proxy hosts participate voluntarily, numerous proxies are operated on potentially compromised hosts, including Internet of things devices. By cross-referencing the hosts, researchers have identified and analyzed logs classified as potentially unwanted programs and exposed a range of unauthorized activities conducted by RESIP hosts. These activities encompassed illegal promotion, fast fluxing, phishing, hosting malware, and more.[38]
|
https://en.wikipedia.org/wiki/Web_proxy
|
An RDP shop is a website where access to hacked computers is sold to cybercriminals.
The computers may be acquired by scanning the web for open Remote Desktop Protocol connections and brute-forcing passwords.[1] High-value ransomware targets, such as airports, are sometimes available.[2] Access to a compromised machine retails from $3 to $19, depending on system and network metrics gathered automatically through a standardised back door.[3][4]
Many such shops exist on the dark web.
Ukrainian sites such as xDedic[3] do not sell access to machines within the former Soviet nations.[5]
|
https://en.wikipedia.org/wiki/RDP_shop
|
Unix-like operating systems identify a user by a value called a user identifier, often abbreviated to user ID or UID. The UID, along with the group identifier (GID) and other access control criteria, is used to determine which system resources a user can access. The password file maps textual user names to UIDs. UIDs are stored in the inodes of the Unix file system, in running processes, in tar archives, and in the now-obsolete Network Information Service. In POSIX-compliant environments, the shell command id gives the current user's UID, as well as more information such as the user name, primary user group and group identifier (GID).
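As a minimal illustration, the UID and its mapping back to a user name through the password database can be queried from Python's standard os and pwd modules (a sketch assuming a Unix host):

```python
import os
import pwd

# Numeric user ID of the current process, as reported by getuid().
uid = os.getuid()

# Map the UID back to a textual user name via the password database,
# the same lookup that the id command performs.
entry = pwd.getpwuid(uid)

print(uid, entry.pw_name, entry.pw_gid)
```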
The POSIX standard introduced three different UID fields into the process descriptor table, to allow privileged processes to take on different roles dynamically:
The effective UID (euid) of a process is used for most access checks. It is also used as the owner of files created by that process. The effective GID (egid) of a process also affects access control and may affect file creation, depending on the semantics of the specific kernel implementation in use and possibly the mount options used. According to BSD Unix semantics, the group ownership given to a newly created file is unconditionally inherited from the group ownership of the directory in which it is created. According to AT&T UNIX System V semantics (also adopted by Linux variants), a newly created file is normally given the group ownership specified by the egid of the process that creates the file. Most filesystems implement a method to select whether BSD or AT&T semantics should be used regarding group ownership of a newly created file; BSD semantics are selected for specific directories when the S_ISGID (s-gid) permission is set.[1]
Linux also has a file system user ID (fsuid) which is used explicitly for access control to the file system. It matches the euid unless explicitly set otherwise. It may be root's user ID only if ruid, suid, or euid is root. Whenever the euid is changed, the change is propagated to the fsuid.
The intent of fsuid is to permit programs (e.g., the NFS server) to limit themselves to the file system rights of some given uid without giving that uid permission to send them signals. Since kernel 2.0, the existence of fsuid is no longer necessary because Linux adheres to SUSv3 rules for sending signals, but fsuid remains for compatibility reasons.[2]
The saved user ID (suid) is used when a program running with elevated privileges needs to do some unprivileged work temporarily; changing the euid from a privileged value (typically 0) to some unprivileged value causes the privileged value to be stored in the suid. Later, a program's euid can be set back to the value stored in the suid, so that elevated privileges can be restored; an unprivileged process may set its euid to one of only three values: the value of the ruid, the value of the suid, or the value of the euid.
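On systems that expose getresuid() (e.g. Linux), the three UID fields can be read directly; a small Python sketch, assuming an ordinary unprivileged process in which all three values coincide:

```python
import os

# Real, effective and saved UIDs of the current process.
ruid, euid, suid = os.getresuid()

# In a process that has not changed identity, the three fields match.
print(ruid, euid, suid)

# An unprivileged process may set its euid only to ruid, suid, or euid;
# setting it to its current value is therefore always permitted.
os.seteuid(euid)
```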
The real UID (ruid) and real GID (rgid) identify the real owner of the process and affect the permissions for sending signals. A process without superuser privileges may signal another process only if the sender's ruid or euid matches the receiver's ruid or suid. Because a child process inherits its credentials from its parent, a child and parent may signal each other.
POSIX requires the UID to be an integer type. Most Unix-like operating systems represent the UID as an unsigned integer. The size of UID values varies amongst different systems; some UNIX operating systems[which?] used 15-bit values, allowing values up to 32767,[citation needed] while others such as Linux (before version 2.4) supported 16-bit UIDs, making 65536 unique IDs possible. The majority of modern Unix-like systems (e.g., Solaris 2.0 in 1990, Linux 2.4 in 2001) have switched to 32-bit UIDs, allowing 4,294,967,296 (2^32) unique IDs.
The Linux Standard Base Core Specification specifies that UID values in the range 0 to 99 should be statically allocated by the system and shall not be created by applications, while UIDs from 100 to 499 should be reserved for dynamic allocation by system administrators and post-install scripts.[3]
Debian Linux not only reserves the range 100–999 for dynamically allocated system users and groups, but also centrally and statically allocates users and groups in the range 60000–64999 and further reserves the range 65000–65533.[4]
Systemd defines a number of special UID ranges, including:[5]
On FreeBSD, porters who need a UID for their package can pick a free one from the range 50 to 999 and then register the static allocation.[6][7]
Some POSIX systems allocate UIDs for new users starting from 500 (macOS, Red Hat Enterprise Linux up to version 6), others start at 1000 (Red Hat Enterprise Linux since version 7,[8] openSUSE, Debian[4]). On many Linux systems, these ranges are specified in /etc/login.defs, for useradd and similar tools.
Central UID allocations in enterprise networks (e.g., via LDAP and NFS servers) may limit themselves to using only UID numbers well above 1000, and outside the range 60000–65535, to avoid potential conflicts with UIDs locally allocated on client computers. When new users are created locally, the local system is supposed to check for and avoid conflicts with UIDs already existing in the NSS databases.[9]
OS-level virtualization can remap user identifiers, e.g. using Linux namespaces, and therefore needs to allocate ranges into which remapped UIDs and GIDs are mapped:
The systemd authors recommend that OS-level virtualization systems allocate 65536 (2^16) UIDs per container and map them by adding an integer multiple of 2^16.[5]
NFSv4 was intended to help avoid numeric identifier collisions by identifying users (and groups) in protocol packets using textual "user@domain" names rather than integer numbers. However, as long as operating-system kernels and local file systems continue to use integer user identifiers, this comes at the expense of additional translation steps (using idmap daemon processes), which can introduce additional failure points if local UID mapping mechanisms or databases are configured incorrectly, lost, or out of sync. The "@domain" part of the user name could be used to indicate which authority allocated a particular name, for example in the form of
But in practice many existing implementations only allow setting the NFSv4 domain to a fixed value, thereby rendering it useless.
|
https://en.wikipedia.org/wiki/User_identifier
|
In Unix-like systems, multiple users can be put into groups. POSIX and conventional Unix file system permissions are organized into three classes: user, group, and others. The use of groups allows additional abilities to be delegated in an organized fashion, such as access to disks, printers, and other peripherals. This method, among others, also enables the superuser to delegate some administrative tasks to normal users, similar to the Administrators group on Microsoft Windows NT and its derivatives.
A group identifier, often abbreviated to GID, is a numeric value used to represent a specific group.[1] The range of values for a GID varies amongst different systems; at the very least, a GID can be between 0 and 32,767, with one restriction: the login group for the superuser must have GID 0. This numeric value is used to refer to groups in the /etc/passwd and /etc/group files or their equivalents. Shadow password files and the Network Information Service also refer to numeric GIDs. The group identifier is a necessary component of Unix file systems and processes.
In Unix systems, every user must be a member of at least one group, the primary group, which is identified by the numeric GID of the user's entry in the passwd database, which can be viewed with the command getent passwd (usually stored in /etc/passwd or LDAP). This group is referred to as the primary group ID. A user may be listed as a member of additional groups in the relevant entries in the group database, which can be viewed with getent group (usually stored in /etc/group or LDAP); the IDs of these groups are referred to as supplementary group IDs.
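The primary and supplementary group IDs of the current process can be inspected with Python's standard os and grp modules (a sketch assuming the primary GID is present in the group database):

```python
import os
import grp

# Primary group: the GID from the user's passwd entry.
primary_gid = os.getgid()

# Supplementary groups the process belongs to.
supplementary = os.getgroups()

# Resolve the primary GID to a group name via the group database,
# the same data that getent group reads.
print(primary_gid, grp.getgrgid(primary_gid).gr_name, supplementary)
```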
Unix processes have an effective (EUID, EGID), a real (UID, GID) and a saved (SUID, SGID) ID. Normally these are identical, but in setuid and setgid processes they are different.
Originally, a signed 16-bit integer was used. Since the sign was not necessary – negative numbers do not make valid group IDs – an unsigned integer is now used instead, allowing group IDs between 0 and 65,535. Modern operating systems usually use unsigned 32-bit integers, which allow for group IDs between 0 and 4,294,967,295.
Many Linux systems reserve the GID range 0 to 99 for statically allocated groups, and either 100–499 or 100–999 for groups dynamically allocated by the system in post-installation scripts. These ranges are often specified in /etc/login.defs, for useradd, groupadd and similar tools.
On FreeBSD, porters who need a GID for their package can pick a free one from the range 50 to 999 and then register this static allocation in ports/GIDs.[2]
Many system administrators also allocate for each user a personal primary group that has the same name as the user's login name, and often the same numeric GID as the user's UID. Such personal groups have no other members and make collaboration with other users in shared directories easier, by allowing users to habitually work with a umask of 0002. This way, newly created files have write permission enabled for group members by default, which normally grants write access only to the file's owner, since only the owner is a member of the personal group. However, if a file is created in a shared directory that belongs to another group and has the setgid bit set, then the created file automatically becomes writable by members of that directory's group as well.
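The effect of the umask on a newly created file's mode can be shown arithmetically; a small Python sketch using the 0002 value from the text (the kernel clears every bit that is set in the umask):

```python
import os

# Read the current umask; os.umask() returns the previous value, so
# restore it immediately to leave the process state unchanged.
old = os.umask(0o002)
os.umask(old)

# Mode typically requested when creating a regular file.
requested = 0o666

# With a umask of 0002, only the others-write bit is cleared,
# so group members keep write permission.
effective = requested & ~0o002

print(oct(effective))
```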
On many Linux systems, the USERGROUPS_ENAB variable in /etc/login.defs controls whether commands like useradd or userdel automatically add or delete an associated personal group.
|
https://en.wikipedia.org/wiki/Group_identifier
|
In computing, the process identifier (a.k.a. process ID or PID) is a number used by most operating system kernels, such as those of Unix, macOS and Windows, to uniquely identify an active process. This number may be used as a parameter in various function calls, allowing processes to be manipulated, such as adjusting the process's priority or killing it altogether.
In Unix-like operating systems, new processes are created by the fork() system call. The PID is returned to the parent process, enabling it to refer to the child in further function calls. The parent may, for example, wait for the child to terminate with the waitpid() function, or terminate it with kill().
There are two tasks with specially distinguished process IDs: PID 0 is used for swapper or sched, which is part of the kernel and runs on a CPU core whenever that core has nothing else to do.[1] Linux also calls the threads of this process idle tasks.[2] In some APIs, PID 0 is also used as a special value that always refers to the calling thread, process, or process group.[3][4] Process ID 1 is usually the init process, primarily responsible for starting and shutting down the system. Originally, process ID 1 was not specifically reserved for init by any technical measures: it simply had this ID as a natural consequence of being the first process invoked by the kernel. More recent Unix systems typically have additional kernel components visible as 'processes', in which case PID 1 is actively reserved for the init process to maintain consistency with older systems.
Process IDs are usually allocated on a sequential basis,[5] beginning at 0 and rising to a maximum value which varies from system to system. Once this limit is reached, allocation restarts at 300 and again increases. In macOS and HP-UX, allocation restarts at 100.[6] However, for this and subsequent passes, any PIDs still assigned to processes are skipped. Some consider this a potential security vulnerability in that it allows information about the system to be extracted, or messages to be covertly passed between processes. As such, implementations that are particularly concerned about security may choose a different method of PID assignment.[7] On some systems, like MPE/iX, the lowest available PID is used, sometimes in an effort to minimize the number of process information kernel pages in memory.
The current process ID is provided by a getpid() system call,[8] or as the variable $$ in the shell. The process ID of a parent process is obtainable by a getppid() system call.[9]
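Both calls have direct equivalents in Python's standard os module; a minimal sketch:

```python
import os

# PID of this process and of the process that spawned it,
# the same values getpid() and getppid() return in C.
pid = os.getpid()
ppid = os.getppid()

print(pid, ppid)
```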
On Linux, the maximum process ID is given by the pseudo-file /proc/sys/kernel/pid_max.[10]
Some processes, for example the moc music player and the MySQL daemon, write their PID to a documented file location, to allow other processes to look it up.[citation needed]
On the Windows family of operating systems, one can get the current process's ID using the GetCurrentProcessId() function of the Windows API,[11] and the ID of other processes using GetProcessId().[12] Internally, the process ID is called a client ID, and is allocated from the same namespace as thread IDs, so these two never overlap. The System Idle Process is given process ID 0. The System Process is given process ID 8 on Windows 2000 and 4 on Windows XP and Windows Server 2003.[13] On the Windows NT family of operating systems, process and thread identifiers are all multiples of 4, but this is not part of the specification.[14]
|
https://en.wikipedia.org/wiki/Process_identifier
|
chmod is a shell command for changing access permissions and special mode flags of files (including special files such as directories). The name is short for change mode, where mode refers to the permissions and flags collectively.[1][2]
The command originated in AT&T Unix version 1 and was exclusive to Unix and Unix-like operating systems until it was ported to other operating systems such as Windows (in UnxUtils)[3] and IBM i.[4]
In Unix and Unix-like operating systems, a system call with the same name as the command, chmod(), provides access to the underlying access control data. The command exposes the capabilities of the system call to a shell user.
As the need for enhanced file-system permissions grew, access-control lists[5] were added to many file systems to augment the modes controlled via chmod.
The implementation of chmod bundled in GNU coreutils was written by David MacKenzie and Jim Meyering.[6]
Although the syntax of the command varies somewhat by implementation, it generally accepts either a single octal value (which specifies all the mode bits on each file), or a comma-delimited list of symbolic specifiers (which describes how to change the existing mode bits of each file). The remaining arguments are a list of paths to files to be modified.[7]
Changing permissions is only allowed for the superuser (root) and the owner of a file.
If a symbolic link is specified, the target of the link has its mode bits adjusted. Permissions directly associated with a symbolic link's file system entry are typically not used.
Optional command-line options include:
Given a numeric permissions argument, the chmod command treats it as an octal number and replaces all the mode bits for each file. (Although 4 digits are specified, leading 0 digits can be elided.)[8]
Why octal rather than decimal?[9]
There are twelve standard mode bits, comprising 3 special bits (setuid, setgid, and sticky) and 3 permission groups (controlling access by user, group, and other) of 3 bits each (read, write, and exec/scan); each permission bit grants access if set (1) or denies access if clear (0).
As an octal digit represents a 3-bit value, the twelve mode bits can be represented as four octal digits. chmod accepts up to four digits and uses 0 for left digits not specified (as is normal for numeric representation). In practice, 3 digits are commonly specified, since the special modes are rarely used and the user class is usually specified.
In the context of an octal digit, each operation bit represents a numeric value: read: 4, write: 2 and execute: 1. The following table relates octal digit values to a class's operations value.
The command stat can report a file's permissions as octal. For example:
The reported value, 754, indicates the following permissions:
A code permits execution if and only if it isodd(i.e. 1, 3, 5, or 7). A code permits read if and only if it is greater than or equal to 4 (i.e. 4, 5, 6, or 7). A code permits write if and only if it is 2, 3, 6, or 7.
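Those three rules fall directly out of the bit values; a small Python sketch decoding one octal digit:

```python
# Decode one octal permission digit into its read/write/execute bits.
def decode(digit):
    return {
        "read": bool(digit & 4),   # codes 4, 5, 6, 7
        "write": bool(digit & 2),  # codes 2, 3, 6, 7
        "exec": bool(digit & 1),   # odd codes 1, 3, 5, 7
    }

# The digits of the mode 754 discussed above.
print([decode(d) for d in (7, 5, 4)])
```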
The chmod command accepts symbolic notation that specifies how to modify the existing permissions.[10] The command accepts a comma-separated list of specifiers of the form: [classes][+|-|=][operations]
Classes map permissions to users. A change specifier can select one class by including its symbol, or multiple classes by including each class's symbol with no delimiter; if no class is specified, all classes are selected, but bits set in the umask are not affected.[11] Class specifiers include:
As ownership is key to access control, and since the symbolic specification uses the abbreviation o, some incorrectly think that it means owner, when, in fact, it is short for others.
The change operators include:
Operations can be specified as follows:
Most chmod implementations support the specification of the special modes in octal, but some do not, which requires using the symbolic notation.
The ls command can report file permissions in a symbolic notation that is similar to the notation used with chmod. ls -l reports permissions in a notation that consists of 10 letters. The first indicates the type of the file system entry, such as a dash for a regular file and 'd' for a directory. Following that are three sets of three letters that indicate read, write and execute permissions for the user, group and others classes. Each position is either a dash, to indicate lack of permission, or the single-letter abbreviation of the permission, to indicate that it is granted. For example:
The permission specifier -rwxr-xr-- starts with a dash, which indicates that findPhoneNumbers.sh is a regular file, not a directory. The next three letters, rwx, indicate that the file can be read, written, and executed by the owning user, dgerman. The next three letters, r-x, indicate that the file can be read and executed by members of the staff group. And the last three letters, r--, indicate that the file is read-only for other users.
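The same 10-character string can be produced from a numeric mode with Python's standard stat module, which implements exactly this notation:

```python
import stat

# S_IFREG marks a regular file; 0o754 is the permission value from
# the example above.
mode = stat.S_IFREG | 0o754

# filemode() renders the mode the way ls -l does.
print(stat.filemode(mode))
```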
Add write permission to the group class of a directory, allowing users in the same group to add files:
Remove write permission for all classes, preventing anyone from writing to the file:
Set the permissions for the user and group classes to read and execute only (no write permission), preventing anyone from adding files:
Enable write for the user class while making the file read-only for group and others:
To recursively set access for the directorydocs/and its contained files:
chmod -R u+w docs/
To set user and group for read and write only and set others for read only:
chmod 664 file
To set user for read, write, and execute only and group and others for read only:
chmod 744 file
To set the sticky bit in addition to user, group and others permissions:
chmod 1755 file
To set UID in addition to user, group and others permissions:
chmod 4755 file
To set GID in addition to user, group and others permissions:
chmod 2755 file
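The octal forms above can also be applied programmatically; a short Python sketch using os.chmod on a scratch file:

```python
import os
import stat
import tempfile

# Create a scratch file to experiment on.
fd, path = tempfile.mkstemp()
os.close(fd)

# user/group read-write, others read-only (the 664 example above).
os.chmod(path, 0o664)
assert stat.S_IMODE(os.stat(path).st_mode) == 0o664

# Sticky bit plus rwxr-xr-x (the 1755 example above).
os.chmod(path, 0o1755)
mode = stat.S_IMODE(os.stat(path).st_mode)

os.remove(path)
print(oct(mode))
```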
|
https://en.wikipedia.org/wiki/Chmod
|
Polkit (formerly PolicyKit) is a component for controlling system-wide privileges in Unix-like operating systems. It provides an organized way for non-privileged processes to communicate with privileged ones. Polkit allows a level of control of centralized system policy. It is developed and maintained by David Zeuthen from Red Hat and hosted by the freedesktop.org project. It is published as free software under the terms of version 2 of the GNU Lesser General Public License.[3]
Since version 0.105, released in April 2012,[4][5] the name of the project was changed from PolicyKit to polkit to emphasize that the system component was rewritten[6] and that the API had changed, breaking backward compatibility.[7][dubious–discuss]
Fedora became the first distribution to include PolicyKit, and it has since been used in other distributions, including Ubuntu since version 8.04 and openSUSE since version 10.3. Some distributions, like Fedora,[8] have already switched to the rewritten polkit.
It is also possible to use polkit to execute commands with elevated privileges using the command pkexec, followed by the command intended to be executed (with root permission).[9] However, it may be preferable to use sudo, as that command provides more flexibility and security, in addition to being easier to configure.[10]
The polkitd daemon implements Polkit functionality.[11]
A memory corruption vulnerability, PwnKit (CVE-2021-4034[12]), discovered in the pkexec command (installed on all major Linux distributions), was announced on January 25, 2022.[13][14] The vulnerability dates back to the original distribution from 2009. The vulnerability received a CVSS score of 7.8 ("High severity"), reflecting serious factors involved in a possible exploit: unprivileged users can gain full root privileges, regardless of the underlying machine architecture or whether the polkit daemon is running or not.
|
https://en.wikipedia.org/wiki/PolicyKit
|
Unix security refers to the means of securing a Unix or Unix-like operating system.
A core security feature in these systems is the file system permissions. All files in a typical Unix filesystem have permissions set enabling different access to a file. Unix permissions permit different users to access a file with different privileges (e.g., reading, writing, execution). Like users, different user groups have different permissions on a file.
Many Unix implementations add an additional layer of security by requiring that a user be a member of the wheel user privileges group in order to access the su command.[1]
Most Unix and Unix-like systems have an account or group which enables a user to exert complete control over the system, often known as a root account. If access to this account is gained by an unwanted user, this results in a complete breach of the system. A root account, however, is necessary for administrative purposes, and for the above security reasons it is seldom used for day-to-day purposes (the sudo program is more commonly used), so usage of the root account can be more closely monitored.[citation needed]
Selecting strong passwords and guarding them properly are important for Unix security.[citation needed]
On many UNIX systems, user and password information, if stored locally, can be found in the /etc/passwd and /etc/shadow file pair.
Operating systems, like all software, may contain bugs in need of fixing or may be enhanced with the addition of new features; many UNIX systems come with a package manager for this. Patching the operating system in a secure manner requires that the software come from a trustworthy source and not have been altered since it was packaged. Common methods for verifying that operating system patches have not been altered include the use of a digital signature of a cryptographic hash, such as a SHA-256 based checksum, or the use of read-only media.[citation needed]
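A checksum comparison of that kind is straightforward; a minimal Python sketch using hashlib (the patch bytes here are made up for illustration):

```python
import hashlib

# Compare downloaded bytes against a published SHA-256 checksum.
def verify(data, expected_hex):
    return hashlib.sha256(data).hexdigest() == expected_hex

patch = b"example patch contents\n"            # stand-in for real data
published = hashlib.sha256(patch).hexdigest()  # vendor-published digest

print(verify(patch, published), verify(b"tampered\n", published))
```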
There are viruses and worms that target Unix-like operating systems. In fact, the first computer worm, the Morris worm, targeted Unix systems.
There are virus scanners for UNIX-like systems from multiple vendors.
A network firewall protects systems and networks from network threats which exist on the opposite side of the firewall. Firewalls can block access to strictly internal services, block unwanted users, and in some cases filter network traffic by content.[citation needed]
iptables is the current user interface for interacting with the Linux kernel's netfilter functionality. It replaced ipchains. Other Unix-like operating systems may provide their own native functionality, and other open source firewall products exist.
|
https://en.wikipedia.org/wiki/Unix_security
|
The Common Weakness Enumeration (CWE) is a category system for hardware and software weaknesses and vulnerabilities. It is sustained by a community project with the goals of understanding flaws in software and hardware and creating automated tools that can be used to identify, fix, and prevent those flaws.[1] The project is sponsored by the office of the U.S. Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA), is operated by The MITRE Corporation,[2] and is supported by US-CERT and the National Cyber Security Division of the U.S. Department of Homeland Security.[3][4]
The first release of the list and associated classification taxonomy was in 2006.[5]Version 4.15 of the CWE standard was released in July 2024.[6]
CWE has over 600 categories, including classes for buffer overflows, path/directory tree traversal errors, race conditions, cross-site scripting, hard-coded passwords, and insecure random numbers.[7]
The Common Weakness Enumeration (CWE) Compatibility program allows a service or a product to be reviewed and registered as officially "CWE-Compatible" and "CWE-Effective". The program assists organizations in selecting the right software tools and learning about possible weaknesses and their possible impact.
In order to obtain CWE-Compatible status, a product or a service must meet 4 out of the 6 requirements, shown below:
As of September 2019, 56 organizations develop and maintain products and services that have achieved CWE-Compatible status.[9]
Some researchers think that ambiguities in CWE can be avoided or reduced.[10]
As of April 16, 2024, the CWE Compatibility Program has been discontinued.[11]
|
https://en.wikipedia.org/wiki/Common_Weakness_Enumeration
|
Software composition analysis (SCA) is a practice in the fields of information technology and software engineering of analyzing custom-built software applications to detect embedded open-source software components and determine whether they are up to date, contain security flaws, or have licensing requirements.[1]
It is a common software engineering practice to develop software by using different components.[2] Using software components segments the complexity of larger elements into smaller pieces of code and increases flexibility by enabling easier reuse of components to address new requirements.[3] The practice has expanded widely since the late 1990s with the popularization of open-source software (OSS), helping to speed up the software development process and reduce time to market.[4]
However, using open-source software introduces many risks for the software applications being developed. These risks can be organized into 5 categories:[5]
Shortly after the foundation of the Open Source Initiative in February 1998,[6] the risks associated with OSS were raised,[7] and organizations tried to manage them using spreadsheets and documents to track all the open-source components used by their developers.[8]
For organizations using open-source components extensively, there was a need to help automate the analysis and management of open source risk. This resulted in a new category of software products called Software Composition Analysis (SCA) which helps organizations manage open source risk.
SCA strives to detect all the third-party components in use within a software application, to help reduce risks associated with security vulnerabilities, IP licensing requirements, and obsolescence of the components being used.
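That detection-and-matching idea can be sketched in a few lines of Python; the advisory data and package names here are entirely hypothetical:

```python
# Hypothetical advisory database: package -> vulnerable versions.
ADVISORIES = {
    "examplelib": {"1.0", "1.1"},
}

def scan(manifest):
    """Flag declared components with known-vulnerable versions.

    manifest: lines of the form 'name==version', as in a Python
    requirements file.
    """
    findings = []
    for line in manifest.splitlines():
        name, _, version = line.partition("==")
        if version in ADVISORIES.get(name, set()):
            findings.append((name, version))
    return findings

print(scan("examplelib==1.0\notherlib==2.3"))
```

Real SCA products additionally fingerprint binaries and transitive dependencies rather than relying on declared manifests alone.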
SCA products typically work as follows:[9]
As SCA impacts different functions in organizations, different teams may use the data depending on the organization's size and structure. The IT department will often use SCA for implementing and operationalizing the technology, with common stakeholders including the Chief Information Officer (CIO), the Chief Technology Officer (CTO), and Chief Enterprise Architects (EA).[12] Security and license data are often used by roles such as the Chief Information Security Officer (CISO) for security risks, and the Chief IP/Compliance Officer for intellectual property risk management.[13]
Depending on the SCA product's capabilities, it can be implemented directly within the Integrated Development Environment (IDE) of a developer who uses and integrates OSS components, or it can be implemented as a dedicated step in the software quality control process.[14][15]
SCA products, and particularly their capacity to generate an SBOM, are required in some countries, such as the United States, to enforce the security of software delivered to a government agency by a vendor.[16]
Another common use case for SCA is technology due diligence. Prior to a merger and acquisition (M&A) transaction, advisory firms review the risks associated with the software of the target firm.[17]
The automatic nature of SCA products is their primary strength: developers do not have to do extra manual work when using and integrating OSS components.[18] The automation also applies to indirect references to other OSS components within code and artifacts.[19]
Conversely, some key weaknesses of current SCA products may include:
|
https://en.wikipedia.org/wiki/Software_composition_analysis
|
Static application security testing (SAST) is used to secure software by reviewing its source code to identify sources of vulnerabilities. Although the process of checking programs by reading their code (now known as static program analysis) has existed as long as computers have, the technique spread to security in the late 1990s, with the first public discussion of SQL injection in 1998, as Web applications integrated new technologies like JavaScript and Flash.
Unlike dynamic application security testing (DAST) tools, which perform black-box testing of application functionality, SAST tools focus on the code content of the application: white-box testing.
A SAST tool scans the source code of an application and its components to identify potential security vulnerabilities in the software and its architecture.
Static analysis tools can detect an estimated 50% of existing security vulnerabilities.[1]
In the software development life cycle (SDLC), SAST is performed early in the development process at code level, and also when all pieces of code and components are put together in a consistent testing environment. SAST is also used for software quality assurance,[2] even if the many resulting false positives impede its adoption by developers.[3]
SAST tools are integrated into the development process to help development teams, whose primary focus is developing and delivering software that meets the requested specifications.[4] SAST tools, like other security tools, aim to reduce the risk that applications will suffer downtime or that private information stored in them will be compromised.
For 2018, the Privacy Rights Clearinghouse database[5] shows that more than 612 million records were compromised by hacking.
Application security testing covers three techniques applied to applications before their release: static application security testing (SAST), dynamic application security testing (DAST), and interactive application security testing (IAST), a combination of the two.[6]
Static analysis tools examine the text of a program syntactically. They look for a fixed set of patterns or rules in the source code. Theoretically, they can also examine a compiled form of the software. This technique relies on instrumentation of the code to map compiled components to source code components in order to identify issues.
Static analysis can be done manually as a code review or auditing of the code for different purposes, including security, but it is time-consuming.[7]
The precision of a SAST tool is determined by its scope of analysis and the specific techniques used to identify vulnerabilities. Different levels of analysis include:
The scope of the analysis determines its accuracy and its capacity to detect vulnerabilities using contextual information.[8] SAST tools, unlike DAST, give developers real-time feedback and help them fix flaws before the code moves to the next stage.
At the function level, a common technique is the construction of an abstract syntax tree to trace the flow of data within the function.[9]
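As an illustration of this AST-based approach, the short sketch below uses Python's built-in `ast` module to flag calls to `eval()`, a classic code-injection risk. Real SAST tools combine many such rules with data-flow and taint analysis; the single rule shown here is only a toy example.

```python
import ast

def find_eval_calls(source):
    """Return (line, column) positions of every call to eval() in the source."""
    tree = ast.parse(source)
    findings = []
    for node in ast.walk(tree):
        # Match direct calls to the bare name 'eval'.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append((node.lineno, node.col_offset))
    return findings

code = """
def run(user_input):
    return eval(user_input)  # dangerous: untrusted input
"""
print(find_eval_calls(code))  # one finding, on line 3 of the snippet
```

Because the analysis works on the syntax tree rather than raw text, it is not fooled by whitespace or comments, but it also cannot see aliases such as `f = eval`; resolving those requires the data-flow techniques described above.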
Since the late 1990s, the need to adapt to business challenges has transformed software development through componentization,[10] enforced by processes and the organization of development teams.[11] Following the flow of data between all the components of an application or group of applications allows validation that required calls are made to dedicated sanitization procedures and that tainted data is handled appropriately in specific pieces of code.[12][13]
The rise of web applications entailed testing them: Verizon's 2016 Data Breach Investigations Report found that 40% of all data breaches exploited web application vulnerabilities.[14] As well as external security validations, there is a growing focus on internal threats. The Clearswift Insider Threat Index (CITI) reported that 92% of respondents to a 2015 survey had experienced IT or security incidents in the previous 12 months, and that 74% of these breaches originated with insiders.[15][16] Lee Hadlington categorized internal threats into three categories: malicious, accidental, and unintentional. The explosive growth of mobile applications requires securing applications earlier in the development process to reduce malicious code development.[17]
The earlier a vulnerability is fixed in the SDLC, the cheaper it is to fix: costs to fix in development are 10 times lower than in testing, and 100 times lower than in production.[18] SAST tools run automatically, either at the code level or the application level, and do not require interaction. When integrated into a CI/CD pipeline, SAST tools can automatically stop the integration process if critical vulnerabilities are identified.[19]
Because the tool scans the entire source code, it can cover 100% of it, while dynamic application security testing covers only the executed behavior, possibly missing parts of the application[6] or insecure settings in configuration files.
SAST tools can offer extended functionality such as quality and architectural testing. There is a direct correlation between quality and security: poor-quality software is also poorly secured software.[20]
Even though developers are positive about the usage of SAST tools, there are various challenges to their adoption.[4] The usability of the output generated by these tools affects how much developers can make use of them. Research shows that despite the long output generated by these tools, it may lack usability.[21]
With Agile Processes in software development, early integration of SAST generates many bugs, as developers using this framework focus first on features and delivery.[22]
Scanning many lines of code with SAST tools may result in hundreds or thousands of vulnerability warnings for a single application. It can generate many false-positives, increasing investigation time and reducing trust in such tools. This is particularly the case when the context of the vulnerability cannot be caught by the tool.[3]
|
https://en.wikipedia.org/wiki/Static_application_security_testing
|
The European Union Agency for Cybersecurity[1] – self-designated ENISA, from the abbreviation of its original name – is an agency of the European Union. It has been fully operational since 1 September 2005. The agency is located in Athens, Greece, and has offices in Brussels, Belgium, and Heraklion, Greece.
ENISA was created in 2004 by EU Regulation No 460/2004[2] under the name European Network and Information Security Agency. ENISA's current mandate is set by EU Regulation No 2019/881[3] of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification, repealing EU Regulation No 526/2013 (the Cybersecurity Act).
ENISA is the Union's agency dedicated to achieving a high common level of cybersecurity across Europe. Established in 2004 and strengthened by the EU Cybersecurity Act, the agency contributes to EU cyber policy, enhances the trustworthiness of ICT products, services, and processes with cybersecurity certification schemes, cooperates with member states and EU bodies, and helps Europe prepare for the cyber challenges of tomorrow. Through knowledge sharing, capacity building, and awareness raising, the agency works with its key stakeholders to strengthen trust in the connected economy, boost the resilience of the Union's infrastructure, and, ultimately, keep Europe's society and citizens digitally secure.
ENISA is managed by the executive director and supported by a staff composed of experts representing stakeholders such as the information and communication technologies industry, consumer groups and academic experts. The agency is overseen by the executive board and the management board, which are composed of representatives from the EU member states, the EU Commission and other stakeholders.
Set up in 2004 as an informal point of reference to the member states, the National Liaison Officers Network became a statutory body of ENISA on 27 June 2019. The network facilitates the exchange of information between ENISA and the member states. The agency is also assisted by the Advisory Group, which is composed of "nominated members" and members appointed "ad personam", 33 members in total from across Europe.[4] The Advisory Group focuses on issues relevant to stakeholders and brings them to the attention of ENISA.
In order to carry out its tasks, the agency had a budget of nearly €17 million for the year 2019[5]and 109 statutory staff members. In addition, the agency employs a number of other employees including seconded national experts, trainees and interim agents. There are plans for additional experts to be integrated into the agency following the entering into force of Regulation 2019/881.[3]
In 2007, European Commissioner Viviane Reding proposed that ENISA be folded into a new European Electronic Communications Market Authority (EECMA).[6] By 2010, Commissioner Neelie Kroes signalled that the European Commission wanted a reinforced agency. The agency's mandate was extended up to 2012, with an annual budget of €8 million, under the leadership of Dr. Udo Helmbrecht. The last extension of ENISA's mandate before it became permanent was made by EU Regulation 526/2013 of the European Parliament and of the Council of 21 May 2013, repealing Regulation (EC) 460/2004. As of 27 June 2019, ENISA has been established for an indefinite period.
ENISA headquarters, including its administration and support functions, were originally based in Heraklion, Greece. The choice of a rather remote site was contentious from the outset, particularly since Greece held the EU Council presidency when the agency's mandate was being negotiated.[7] In addition, the agency has had a liaison office in Athens since October 2009. In 2013, it moved one-third of its then sixty staff from Crete to Athens.[8] In 2016, the Committee on Budgets backed ENISA's bid to shut down the Heraklion office.[9] Since 2019, ENISA has had two offices in Greece: its headquarters in Athens and a second office in Heraklion. In June 2021, the European Commission consented to the establishment of an ENISA office in Brussels.[10]
In 2019, the agency launched the "EU's Cybersecurity East Project", intended to strengthen cybersecurity in the member states of the EU's Eastern Partnership. On 4 October 2022, the agency hosted a cybersecurity summit with the member states of the EU's Eastern Partnership in Athens. Representatives from Armenia, Azerbaijan, Georgia, Moldova, and Ukraine discussed legal frameworks, best practices, and increasing cooperation with the EU.[11]
Since 2022, the agency has held a cybersecurity competition known as the International Cybersecurity Challenge.[12] The agency has also been involved in organizing the European Cybersecurity Challenge.
|
https://en.wikipedia.org/wiki/European_Union_Agency_for_Cybersecurity#European_Vulnerability_Database
|
Archive Team is a group dedicated to digital preservation and web archiving that was co-founded by Jason Scott in 2009.[1][2]
Its primary focus is the copying and preservation of content housed by at-risk online services. Some of its projects include the partial and complete preservation of services such as GeoCities,[3][4] Yahoo! Video, Google Video, Friendster, FortuneCity,[a] TwitPic,[5] SoundCloud,[6] and the "Aaron Swartz Memorial JSTOR Liberator".[7] Archive Team also archives URL shortener services[8] and wikis[9] on a regular basis.
According to Jason Scott, "Archive Team was started out of anger and a feeling of powerlessness, this feeling that we were letting companies decide for us what was going to survive and what was going to die."[10] Scott continues, "it's not our job to figure out what's valuable, to figure out what's meaningful. We work by three virtues: rage, paranoia, and kleptomania."[11]
Archive Team is composed of a loose community of independent contributors.[12][13][14] Their archival process makes use of a "Warrior", a virtual machine environment. Individuals run the Warrior on their desktops to download content without requiring technical expertise. Tasks are allocated by a centrally managed Tracker that networks with the Warriors and allocates items to them. The Tracker also monitors user upload activity and displays a leaderboard.[15]
There are several projects currently running.
As of 12 December 2024, the largest project on Archive Team is URLs, with over 10 petabytes archived.[28][b]
|
https://en.wikipedia.org/wiki/Archive_Team
|
The dead Internet theory is a conspiracy theory asserting that, due to a coordinated and intentional effort, the Internet now consists mainly of bot activity and automatically generated content manipulated by algorithmic curation to control the population and minimize organic human activity.[1][2][3][4][5] Proponents of the theory believe these social bots were created intentionally to help manipulate algorithms and boost search results in order to manipulate consumers.[6][7] Some proponents accuse government agencies of using bots to manipulate public perception.[2][6] The date given for this "death" is generally around 2016 or 2017.[2][8][9] The theory has gained traction because many of the observed phenomena are quantifiable, such as increased bot traffic, but the literature on the subject does not support the full theory.[2][4][10]
The dead Internet theory's exact origin is difficult to pinpoint. In 2021, a post titled "Dead Internet Theory: Most Of The Internet Is Fake" was published on the forum Agora Road's Macintosh Cafe esoteric board by a user named "IlluminatiPirate",[11] claiming to build on previous posts from the same board and from Wizardchan,[2] and marking the term's spread beyond these initial imageboards.[2][12] The conspiracy theory has entered public culture through widespread coverage and has been discussed on various high-profile YouTube channels.[2] It gained more mainstream attention with an article in The Atlantic titled "Maybe You Missed It, but the Internet 'Died' Five Years Ago".[2] This article has been widely cited by other articles on the topic.[13][12]
The dead Internet theory has two main components: that organic human activity on the web has been displaced by bots and algorithmically curated search results, and that state actors are doing this in a coordinated effort to manipulate the human population.[3][14][15] The first part of the theory, that bots create much of the content on the internet and perhaps contribute more than organic human content, has been a concern for a while, with the original post by "IlluminatiPirate" citing the article "How Much of the Internet Is Fake? Turns Out, a Lot of It, Actually" in New York magazine.[2][16][14] The theory goes on to claim that Google and other search engines are censoring the Web by filtering out undesirable content, limiting what is indexed and presented in search results.[3] While Google may suggest that there are millions of search results for a query, the results available to a user do not reflect that.[3] This problem is exacerbated by link rot, which occurs when content at a website becomes unavailable and all links to it on other sites break.[3] This has led to the idea that Google is a Potemkin village, and that the searchable Web is much smaller than we are led to believe.[3] The dead Internet theory suggests that this is part of a conspiracy to limit users to curated, and potentially artificial, content online.
The second half of the dead Internet theory builds on this observable phenomenon by proposing that the U.S. government, corporations, or other actors are intentionally limiting users to curated, and potentially artificial, AI-generated content in order to manipulate the human population for a variety of reasons.[2][14][15][3] In the original post, the idea that bots have displaced human content is described as the "setup", with the "thesis" of the theory itself focusing on the United States government being responsible for this, stating: "The U.S. government is engaging in an artificial intelligence-powered gaslighting of the entire world population."[2][6]
Caroline Busta, founder of the media platform New Models, was quoted in a 2021 article in The Atlantic calling much of the dead Internet theory a "paranoid fantasy", even if there are legitimate criticisms involving bot traffic and the integrity of the internet, though she said she agrees with the "overarching idea".[2] In an article in The New Atlantis, Robert Mariani called the theory a mix between a genuine conspiracy theory and a creepypasta.[6]
In 2024, the dead Internet theory was sometimes used to refer to the observable increase in content generated via large language models (LLMs) such as ChatGPT appearing in popular Internet spaces, without mention of the full theory.[1][17][18][19] A 2025 article by Thomas Sommerer explores this portion of the theory, calling the displacement of human-generated content by artificial content "an inevitable event".[18] Sommerer states that the dead Internet theory is not scientific in nature but reflects the public perception of the Internet.[18] Another article, in the Journal of Cancer Education, discussed the impact of the perception of the dead Internet theory in online cancer support forums, specifically focusing on the psychological impact on patients who find that support is coming from an LLM and not a genuine human.[19] The article also discussed the problems that could emerge in LLM training data from using AI-generated content to train the models.[19]
Generative pre-trained transformers (GPTs) are a class of large language models (LLMs) that employ artificial neural networks to produce human-like content.[20][21] The first of these to be well known was developed by OpenAI.[22] These models have created significant controversy. For example, Timothy Shoup of the Copenhagen Institute for Futures Studies said in 2022, "in the scenario where GPT-3 'gets loose', the internet would be completely unrecognizable".[23] He predicted that in such a scenario, 99% to 99.9% of content online might be AI-generated by 2025 to 2030.[23] These predictions have been used as evidence for the dead internet theory.[13]
In 2024, Google reported that its search results were being inundated with websites that "feel like they were created for search engines instead of people".[24] In correspondence with Gizmodo, a Google spokesperson acknowledged the role of generative AI in the rapid proliferation of such content and that it could displace more valuable human-made alternatives.[25] Bots using LLMs are anticipated to increase the amount of spam, and run the risk of creating a situation where bots interacting with each other create "self-replicating prompts" that result in loops only human users could disrupt.[5]
ChatGPT is an AI chatbot whose late-2022 release to the general public led journalists to call the dead internet theory potentially more realistic than before.[8][26] Before ChatGPT's release, the dead internet theory mostly emphasized government organizations, corporations, and tech-literate individuals; ChatGPT gives the average internet user access to large language models.[8][26] This technology caused concern that the Internet would become filled with AI-created content that would drown out organic human content.[8][26][27][5][28]
In 2016, the security firm Imperva released a report on bot traffic and found that automated programs were responsible for 52% of web traffic.[29][30] This report has been used as evidence in reports on the dead Internet theory.[2] Imperva's report for 2023 found that 49.6% of internet traffic was automated, a 2% rise over 2022 which was partly attributed to artificial intelligence models scraping the web for training content.[31]
In 2024, AI-generated images on Facebook, referred to as "AI slop", began going viral.[35][36] Subjects of these AI-generated images included various iterations of Jesus "meshed in various forms" with shrimp, flight attendants, and black children next to artwork they supposedly created. Many of these iterations have hundreds or even thousands of AI comments that say "Amen".[37][38] These images have been referred to as an example of why the Internet feels "dead".[39] Sommerer discussed Shrimp Jesus in detail in his article as a symbol of the shift in the Internet, stating:
"Just as Jesus was supposedly the messenger for God, Shrimp Jesus is the messenger for the fatal system maneuvered ourselves into. Decoupled, proliferated, and in a state of exponential metastasis."[18]
Facebook includes an option to provide AI-generated responses to group posts. Such responses appear if a user explicitly tags @MetaAI in a post, or if the post includes a question and no other users have responded to it within an hour.[40]
In January 2025, interest in the theory renewed following statements from Meta about its plans to introduce new AI-powered autonomous accounts.[41] Connor Hayes, vice-president of product for generative AI at Meta, stated, "We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do...They'll have bios and profile pictures and be able to generate and share content powered by AI on the platform."[42]
In the past, the Reddit website allowed free access to its API and data, which allowed users to employ third-party moderation apps and train AI in human interaction.[27] In 2023, the company moved to charge for access to its user dataset. Companies training AI are expected to continue to use this data for training future AI.[citation needed] As LLMs such as ChatGPT become available to the general public, they are increasingly being employed on Reddit by users and bot accounts.[27] Professor Toby Walsh, a computer scientist at the University of New South Wales, said in an interview with Business Insider that training the next generation of AI on content created by previous generations could cause the content to suffer.[27] University of South Florida professor John Licato compared this situation of AI-generated web content flooding Reddit to the dead Internet theory.[27]
Since 2020, several Twitter accounts have posted tweets starting with the phrase "I hate texting" followed by an alternative activity, such as "i hate texting i just want to hold ur hand", or "i hate texting just come live with me".[2] These posts received tens of thousands of likes, many of them suspected to be from bot accounts. Proponents of the dead internet theory have used these accounts as an example.[2][12]
The proportion of Twitter accounts run by bots became a major issue during Elon Musk's acquisition of the company.[44][45][46][47] Musk disputed Twitter's claim that fewer than 5% of its monetizable daily active users (mDAU) were bots.[44][48] Musk commissioned the company Cyabra to estimate what percentage of Twitter accounts were bots, with one study estimating 13.7% and another estimating 11%.[44] CounterAction, another firm commissioned by Musk, estimated that 5.3% of accounts were bots.[49] Some bot accounts provide services, such as one noted bot that can provide stock prices when asked, while others troll, spread misinformation, or try to scam users.[48] Believers in the dead Internet theory have pointed to this incident as evidence.[50]
In 2024, TikTok began discussing offering the use of virtual influencers to advertising agencies.[15] In a 2024 article in Fast Company, journalist Michael Grothaus linked this and other AI-generated content on social media to the dead Internet theory, referring to the content as "AI-slime".[15]
On YouTube, there is an online market for fake views to boost a video's credibility and reach broader audiences.[51] At one point, fake views were so prevalent that some engineers were concerned YouTube's algorithm for detecting them would begin to treat the fake views as default and start misclassifying real ones.[51][2] YouTube engineers coined the term "the Inversion" to describe this phenomenon.[51][16][28] YouTube bots and the fear of "the Inversion" were cited as support for the dead Internet theory in a thread on the internet forum Melonland.[2]
SocialAI, an app created on September 18, 2024, by Michael Sayman, was built for the sole purpose of chatting with AI bots, without human interaction.[52] An article on the Ars Technica website linked SocialAI to the dead Internet theory.[52][53]
The dead internet theory has been discussed among users of the social media platform Twitter. Users have noted that bot activity has affected their experience.[2] Numerous YouTube channels and online communities, including the Linus Tech Tips forums and the Joe Rogan subreddit, have covered the dead Internet theory, helping to advance the idea into mainstream discourse.[2] There has also been discussion and memes about the topic on the app TikTok, as AI-generated content has become more mainstream.[attribution needed]
|
https://en.wikipedia.org/wiki/Dead_Internet_theory
|
In library and archival science, digital preservation is a formal process to ensure that digital information of continuing value remains accessible and usable in the long term.[1] It involves planning, resource allocation, and application of preservation methods and technologies,[2] and combines policies, strategies, and actions to ensure access to reformatted and "born-digital" content, regardless of the challenges of media failure and technological change. The goal of digital preservation is the accurate rendering of authenticated content over time.[3]
The Association for Library Collections and Technical Services Preservation and Reformatting Section of the American Library Association defined digital preservation as a combination of "policies, strategies and actions that ensure access to digital content over time."[4] According to Harrod's Librarian Glossary, digital preservation is the method of keeping digital material alive so that it remains usable as technological advances render original hardware and software specifications obsolete.[5]
The necessity for digital preservation mainly arises from the relatively short lifespan of digital media. Widely used hard drives can become unusable in a few years for a variety of reasons, such as damaged spindle motors, and flash memory (found on SSDs, phones, USB flash drives, and in memory cards such as SD, microSD, and CompactFlash cards) can start to lose data around a year after its last use, depending on its storage temperature and how much data has been written to it during its lifetime.[citation needed] Currently, archival disc-based media is available, but it is only designed to last for 50 years, and it is a proprietary format sold by just two Japanese companies, Sony and Panasonic. M-DISC is a DVD-based format that claims to retain data for 1,000 years, but writing to it requires special optical disc drives and reading it requires increasingly uncommon optical disc drives; in addition, the company behind the format went bankrupt. Data stored on LTO tapes requires periodic migration, as older tapes cannot be read by newer LTO tape drives. RAID arrays can be used to protect against the failure of single hard drives, although care must be taken not to mix the drives of one array with those of another.
Archival appraisal (or, alternatively, selection[6]) refers to the process of identifying records and other materials to be preserved by determining their permanent value. Several factors are usually considered when making this decision.[7] It is a difficult and critical process because the remaining selected records will shape researchers' understanding of that body of records, or fonds. Appraisal is identified as A4.2 within the Chain of Preservation (COP) model[8] created by the InterPARES 2 project.[9] Archival appraisal is not the same as monetary appraisal, which determines fair market value.
Archival appraisal may be performed once or at various stages of acquisition and processing. Macro appraisal,[10] a functional analysis of records at a high level, may be performed even before the records have been acquired to determine which records to acquire. More detailed, iterative appraisal may be performed while the records are being processed.
Appraisal is performed on all archival materials, not just digital. It has been proposed that, in the digital context, it might be desirable to retain more records than have traditionally been retained after appraisal of analog records, primarily due to a combination of the declining cost of storage and the availability of sophisticated discovery tools which will allow researchers to find value in records of low information density.[11][12]In the analog context, these records may have been discarded or only a representative sample kept. However, the selection, appraisal, and prioritization of materials must be carefully considered in relation to the ability of an organization to responsibly manage the totality of these materials.
Often libraries, and to a lesser extent archives, are offered the same materials in several different digital or analog formats. They prefer to select the format that they feel has the greatest potential for long-term preservation of the content. The Library of Congress has created a set of recommended formats for long-term preservation.[13] These would be used, for example, if the Library were offered items for copyright deposit directly from a publisher.
In digital preservation and collection management, discovery and identification of objects is aided by the use of assigned identifiers and accurate descriptive metadata. An identifier is a unique label that is used to reference an object or record, usually manifested as a number or string of numbers and letters. As a crucial element of metadata to be included in a database record or inventory, it is used in tandem with other descriptive metadata to differentiate objects and their various instantiations.[14]
Descriptive metadata refers to information about an object's content, such as title, creator, subject, and date.[14] Determination of the elements used to describe an object is facilitated by the use of a metadata schema. Extensive descriptive metadata about a digital object helps to minimize the risk of the object becoming inaccessible.[15]
Another common type of file identification is the filename. Implementing a file naming protocol is essential to maintaining consistency and efficient discovery and retrieval of objects in a collection, and is especially applicable during digitization of analog media. Using a file naming convention, such as the 8.3 filename or the Warez standard naming, will ensure compatibility with other systems and facilitate migration of data; deciding between descriptive (containing descriptive words and numbers) and non-descriptive (often randomly generated numbers) file names is generally determined by the size and scope of a given collection.[16] However, filenames are not good for semantic identification, because they are non-permanent labels for a specific location on a system and can be modified without affecting the bit-level profile of a digital file.
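As a toy illustration of such a convention (the scheme below is hypothetical, not a published standard), a digitization project might derive sortable, zero-padded filenames from a collection identifier, item number, and page number:

```python
def make_filename(collection: str, item: int, page: int, ext: str = "tif") -> str:
    """Build a descriptive, sortable filename for a digitized page.

    Zero-padding keeps files in numeric order when sorted lexicographically,
    e.g. item 7 sorts before item 70. The field widths are illustrative.
    """
    return f"{collection}_{item:04d}_{page:03d}.{ext}"

print(make_filename("mss2024", 7, 12))  # mss2024_0007_012.tif
```

Zero-padding is the key design choice here: without it, plain string sorting would place `item 10` before `item 2`.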
The cornerstone of digital preservation, "data integrity" refers to the assurance that the data is "complete and unaltered in all essential respects"; a program designed to maintain integrity aims to "ensure data is recorded exactly as intended, and upon later retrieval, ensure the data is the same as it was when it was originally recorded".[17]
Unintentional changes to data are to be avoided, and responsible strategies should be put in place to detect unintentional changes and react as appropriate. However, digital preservation efforts may necessitate modifications to content or metadata through responsibly-developed procedures and well-documented policies. Organizations or individuals may choose to retain original, integrity-checked versions of content and/or modified versions with appropriate preservation metadata. Data integrity practices also apply to modified versions, as their state of capture must be maintained and kept resistant to unintentional modification.
The integrity of a record can be preserved through bit-level preservation, fixity checking, and capturing a full audit trail of all preservation actions performed on the record. These strategies can ensure protection against unauthorised or accidental alteration.[18]
File fixity is the property of a digital file being fixed, or unchanged. File fixity checking is the process of validating that a file has not changed or been altered from a previous state.[19] This effort is often enabled by the creation, validation, and management of checksums.
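As an illustration, checksum-based fixity checking can be sketched in a few lines of Python (function names are illustrative; real repositories typically also record the algorithm and the date of each check as preservation metadata):

```python
import hashlib
from pathlib import Path

def file_checksum(path, algorithm="sha256"):
    """Compute a checksum for a file, reading in chunks to bound memory use."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_fixity(path, recorded_checksum, algorithm="sha256"):
    """Fixity check: does the file still match its previously recorded checksum?"""
    return file_checksum(path, algorithm) == recorded_checksum
```

Any change to the bitstream, however small, produces a different checksum and so fails the check.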
While checksums are the primary mechanism for monitoring fixity at the individual file level, an important additional consideration for monitoring fixity is file attendance. Whereas checksums identify if a file has changed, file attendance identifies if a file in a designated collection is newly created, deleted, or moved. Tracking and reporting on file attendance is a fundamental component of digital collection management and fixity.
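File attendance monitoring can likewise be sketched as a comparison between a stored manifest of expected paths and a fresh snapshot of the collection (a hedged sketch; all names are illustrative):

```python
import os

def snapshot(root):
    """Record the set of relative file paths currently in a collection."""
    return {
        os.path.relpath(os.path.join(dirpath, name), root)
        for dirpath, _, names in os.walk(root)
        for name in names
    }

def attendance_report(expected, current):
    """Compare a stored manifest of paths against the current snapshot."""
    return {
        "missing": sorted(expected - current),    # deleted or moved away
        "unexpected": sorted(current - expected), # newly created or moved in
    }
```

Run together with checksum verification, this answers both questions raised above: has any file changed, and is every file still in attendance?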
Characterization of digital materials is the identification and description of what a file is and of its defining technical characteristics,[20] often captured by technical metadata, which records technical attributes such as the creation or production environment.[21]
Digital sustainability encompasses a range of issues and concerns that contribute to the longevity of digital information.[22] Unlike traditional temporary strategies and more permanent solutions, digital sustainability implies a more active and continuous process. Digital sustainability concentrates less on the solution and technology and more on building an infrastructure and approach that is flexible, with an emphasis on interoperability, continued maintenance and continuous development.[23] Digital sustainability incorporates activities in the present that will facilitate access and availability in the future.[24][25] The ongoing maintenance necessary to digital preservation is analogous to the successful, centuries-old, community upkeep of the Uffington White Horse (according to Stuart M. Shieber) or the Ise Grand Shrine (according to Jeffrey Schnapp).[26][27]
Renderability refers to the continued ability to use and access a digital object while maintaining its inherent significant properties.[28]
Physical media obsolescence can occur when access to digital content requires external dependencies that are no longer manufactured, maintained, or supported. External dependencies can refer to hardware, software, or physical carriers. For example, DLT tape was once used for backups and data preservation, but has since fallen out of use.
File format obsolescence can occur when adoption of new encoding formats supersedes use of existing formats, or when associated presentation tools are no longer readily available.[29]
While the use of file formats will vary among archival institutions given their capabilities, there is documented acceptance among the field that chosen file formats should be "open, standard, non-proprietary, and well-established" to enable long-term archival use.[30]Factors that should enter consideration when selecting sustainable file formats include disclosure, adoption, transparency, self-documentation, external dependencies, impact of patents, and technical protection mechanisms.[31]Other considerations for selecting sustainable file formats include "format longevity and maturity, adaptation in relevant professional communities, incorporated information standards, and long-term accessibility of any required viewing software".[30]For example, theSmithsonian Institution Archivesconsiders uncompressedTIFFsto be "a good preservation format for born-digital and digitized still images because of its maturity, wide adaptation in various communities, and thorough documentation".[30]
Formats proprietary to one software vendor are more likely to be affected by format obsolescence. Well-used standards such as Unicode and JPEG are more likely to be readable in the future.
Significant properties refer to the "essential attributes of a digital object which affect its appearance, behavior, quality and usability" and which "must be preserved over time for the digital object to remain accessible and meaningful."[32]
"Proper understanding of the significant properties of digital objects is critical to establish best practice approaches to digital preservation. It assists appraisal and selection, processes in which choices are made about which significant properties of digital objects are worth preserving; it helps the development of preservation metadata, the assessment of different preservation strategies and informs future work on developing common standards across the preservation community."[33]
Whether analog or digital, archives strive to maintain records as trustworthy representations of what was originally received. Authenticity has been defined as ". . . the trustworthiness of a record as a record; i.e., the quality of a record that is what it purports to be and that is free from tampering or corruption".[34]Authenticity should not be confused with accuracy;[35]an inaccurate record may be acquired by an archives and have its authenticity preserved. The content and meaning of that inaccurate record will remain unchanged.
A combination of policies, security procedures, and documentation can be used to ensure and provide evidence that the meaning of the records has not been altered while in the archives' custody.
Digital preservation efforts are largely to enable decision-making in the future. Should an archive or library choose a particular strategy to enact, the content and associated metadata must persist to allow for actions to be taken or not taken at the discretion of the controlling party.
Preservation metadata is a key enabler for digital preservation, and includes technical information for digital objects, information about a digital object's components and its computing environment, as well as information that documents the preservation process and underlying rights basis. It allows organizations or individuals to understand the chain of custody. Preservation Metadata: Implementation Strategies (PREMIS) is the de facto standard that defines the implementable, core preservation metadata needed by most repositories and institutions. It includes guidelines and recommendations for its usage, and has developed shared community vocabularies.[36][37]
The challenges of long-term preservation of digital information have been recognized by the archival community for years.[38] In December 1994, the Research Libraries Group (RLG) and the Commission on Preservation and Access (CPA) formed a Task Force on Archiving of Digital Information with the main purpose of investigating what needed to be done to ensure long-term preservation and continued access to digital records. The final report published by the Task Force (Garrett, J. and Waters, D., ed. (1996). "Preserving digital information: Report of the task force on archiving of digital information."[39]) became a fundamental document in the field of digital preservation that helped set out key concepts, requirements, and challenges.[38][40]
The Task Force proposed development of a national system of digital archives that would take responsibility for long-term storage and access to digital information; introduced the concept of trusted digital repositories and defined their roles and responsibilities; identified five features of digital information integrity (content, fixity, reference, provenance, and context) that were subsequently incorporated into a definition of Preservation Description Information in the Open Archival Information System Reference Model; and defined migration as a crucial function of digital archives. The concepts and recommendations outlined in the report laid a foundation for subsequent research and digital preservation initiatives.[41][42]
To standardize digital preservation practice and provide a set of recommendations for preservation program implementation, the Reference Model for an Open Archival Information System (OAIS) was developed, and published in 2012. OAIS is concerned with all technical aspects of a digital object's life cycle: ingest, archival storage, data management, administration, access and preservation planning.[43] The model also addresses metadata issues and recommends that five types of metadata be attached to a digital object: reference (identification) information, provenance (including preservation history), context, fixity (authenticity indicators), and representation (formatting, file structure, and what "imparts meaning to an object's bitstream").[44]
In March 2000, the Research Libraries Group (RLG) and the Online Computer Library Center (OCLC) began a collaboration to establish attributes of a digital repository for research organizations, building on and incorporating the emerging international standard of the Reference Model for an Open Archival Information System (OAIS). In 2002, they published "Trusted Digital Repositories: Attributes and Responsibilities." In that document a "Trusted Digital Repository" (TDR) is defined as "one whose mission is to provide reliable, long-term access to managed digital resources to its designated community, now and in the future." The TDR must include the following seven attributes: compliance with the reference model for an Open Archival Information System (OAIS), administrative responsibility, organizational viability, financial sustainability, technological and procedural suitability, system security, procedural accountability. The Trusted Digital Repository Model outlines relationships among these attributes. The report also recommended the collaborative development of digital repository certifications, models for cooperative networks, and sharing of research and information on digital preservation with regard to intellectual property rights.[45]
In 2004 Henry M. Gladney proposed another approach to digital object preservation that called for the creation of "Trustworthy Digital Objects" (TDOs). TDOs are digital objects that can speak to their own authenticity since they incorporate a record maintaining their use and change history, which allows the future users to verify that the contents of the object are valid.[46]
International Research on Permanent Authentic Records in Electronic Systems (InterPARES) is a collaborative research initiative led by the University of British Columbia that is focused on addressing issues of long-term preservation of authentic digital records. The research is being conducted by focus groups from various institutions in North America, Europe, Asia, and Australia, with an objective of developing theories and methodologies that provide the basis for strategies, standards, policies, and procedures necessary to ensure the trustworthiness, reliability, and accuracy of digital records over time.[47]
Under the direction of archival science professor Luciana Duranti, the project began in 1999 with the first phase, InterPARES 1, which ran to 2001 and focused on establishing requirements for authenticity of inactive records generated and maintained in large databases and document management systems created by government agencies.[48] InterPARES 2 (2002–2007) concentrated on issues of reliability, accuracy and authenticity of records throughout their whole life cycle, and examined records produced in dynamic environments in the course of artistic, scientific and online government activities.[49] The third five-year phase (InterPARES 3) was initiated in 2007. Its goal is to utilize theoretical and methodological knowledge generated by InterPARES and other preservation research projects for developing guidelines, action plans, and training programs on long-term preservation of authentic records for small and medium-sized archival organizations.[50]
Society's heritage has been presented on many different materials, including stone, vellum, bamboo, silk, and paper. Now a large quantity of information exists in digital forms, including emails, blogs, social networking websites, national elections websites, web photo albums, and sites which change their content over time.[51]With digital media it is easier to create content and keep it up-to-date, but at the same time there are many challenges in the preservation of this content, both technical and economic.
Unlike traditional analog objects such as books or photographs where the user has unmediated access to the content, a digital object always needs a software environment to render it. These environments keep evolving and changing at a rapid pace, threatening the continuity of access to the content.[52] Physical storage media, data formats, hardware, and software all become obsolete over time, posing significant threats to the survival of the content.[3] This process can be referred to as digital obsolescence.
In the case of born-digital content (e.g., institutional archives, websites, electronic audio and video content, born-digital photography and art, research data sets, observational data), the enormous and growing quantity of content presents significant scaling issues to digital preservation efforts. Rapidly changing technologies can hinder digital preservationists' work and techniques due to outdated and antiquated machines or technology. This has become a common problem and a constant worry for digital archivists: how to prepare for the future.
Digital content can also present challenges to preservation because of its complex and dynamic nature, e.g., interactive Web pages,[53] virtual reality and gaming environments,[54] learning objects, and social media sites.[55] In many cases of emergent technological advances there are substantial difficulties in maintaining the authenticity, fixity, and integrity of objects over time, deriving from the fundamental issue of limited experience with a particular digital storage medium. While particular technologies may prove to be more robust in terms of storage capacity, there are issues in securing a framework of measures to ensure that the object remains fixed while in stewardship.[2][56]
For the preservation of software as digital content, a specific challenge is the typical non-availability of the source code, as commercial software is normally distributed only in compiled binary form. Without the source code, an adaptation (porting) to modern computing hardware or operating systems is most often impossible, so the original hardware and software context needs to be emulated. Another potential challenge for software preservation is copyright, which often prohibits the bypassing of copy protection mechanisms (Digital Millennium Copyright Act) when software has become an orphaned work (abandonware). An exemption from the United States Digital Millennium Copyright Act permitting the bypassing of copy protection was approved in 2003 for a period of 3 years for the Internet Archive, which created an archive of "vintage software" as a way to preserve it.[57][58] The exemption was renewed in 2006, and as of 27 October 2009[update], has been indefinitely extended pending further rulemakings[59] "for the purpose of preservation or archival reproduction of published digital works by a library or archive".[60] The GitHub Archive Program has stored all of GitHub's open source code in a secure vault in Svalbard, on the frozen Norwegian island of Spitsbergen, as part of the Arctic World Archive, with the code stored as QR codes.[61]
Another challenge surrounding preservation of digital content resides in the issue of scale. The amount of digital information being created along with the "proliferation of format types"[2] makes creating trusted digital repositories with adequate and sustainable resources a challenge. The Web is only one example of what might be considered the "data deluge".[2] For example, the Library of Congress amassed 170 billion tweets between 2006 and 2010, totaling 133.2 terabytes,[62][63] and each tweet is composed of 50 fields of metadata.[64]
The economic challenges of digital preservation are also great. Preservation programs require significant up front investment to create, along with ongoing costs for data ingest, data management, data storage, and staffing. One of the key strategic challenges to such programs is the fact that, while they require significant current and ongoing funding, their benefits accrue largely to future generations.[65]
The various levels of security may be represented as three layers: the "hot" (accessible online repositories) and "warm" (e.g. the Internet Archive) layers both have the weakness of being founded upon electronics; both would be wiped out in a repeat of the powerful 19th-century geomagnetic storm known as the "Carrington Event". The Arctic World Archive, stored on specially developed film coated with silver halide, with a lifespan of 500+ years, represents a more secure snapshot of data, with archiving intended at five-year intervals.[61]
In 2006, the Online Computer Library Center developed a four-point strategy for the long-term preservation of digital objects that consisted of:
There are several additional strategies that individuals and organizations may use to actively combat the loss of digital information.
Refreshing is the transfer of data between two types of the same storage medium so there are no bit rot changes or alteration of data.[44] For example, transferring census data from an old preservation CD to a new one. This strategy may need to be combined with migration when the software or hardware required to read the data is no longer available or is unable to understand the format of the data. Refreshing will likely always be necessary due to the deterioration of physical media.
Migration is the transferring of data to newer system environments (Garrett et al., 1996). This may include conversion of resources from one file format to another (e.g., conversion of Microsoft Word to PDF or OpenDocument) or from one operating system to another (e.g., Windows to Linux) so the resource remains fully accessible and functional. Two significant problems face migration as a plausible method of digital preservation in the long term. Because digital objects are subject to a state of near-continuous change, migration may cause problems in relation to authenticity, and migration has proven to be time-consuming and expensive for "large collections of heterogeneous objects, which would need constant monitoring and intervention".[2] Migration can be a very useful strategy for preserving data stored on external storage media (e.g. CDs, USB flash drives, and 3.5" floppy disks). These types of devices are generally not recommended for long-term use, and the data can become inaccessible due to media and hardware obsolescence or degradation.[67]
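A migration workflow can be sketched as a registry of conversion paths between formats. The example below uses a CSV-to-JSON conversion as a runnable stand-in for heavier conversions such as Word to PDF, which require external tools; the registry layout and all names are illustrative, not a real preservation API:

```python
import csv
import io
import json

def csv_to_json(text: str) -> str:
    """Stand-in converter: tabular CSV text to a JSON array of records."""
    rows = list(csv.DictReader(io.StringIO(text)))
    return json.dumps(rows, indent=2)

# Registry of known (source format, target format) migration paths.
CONVERTERS = {("text/csv", "application/json"): csv_to_json}

def migrate(content: str, source_fmt: str, target_fmt: str) -> str:
    """Apply a registered conversion, failing loudly if none exists."""
    try:
        converter = CONVERTERS[(source_fmt, target_fmt)]
    except KeyError:
        raise ValueError(f"no migration path from {source_fmt} to {target_fmt}")
    return converter(content)
```

The explicit registry mirrors the monitoring concern raised above: each supported path must be maintained, and unsupported objects are flagged rather than silently left behind.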
Creating duplicate copies of data on one or more systems is called replication. Data that exists as a single copy in only one location is highly vulnerable to software or hardware failure, intentional or accidental alteration, and environmental catastrophes like fire, flooding, etc. Digital data is more likely to survive if it is replicated in several locations. Replicated data may introduce difficulties in refreshing, migration, versioning, and access control, since the data is located in multiple places.
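Replication can be sketched as copying a master file into several independent locations; in practice these would live on different systems or sites, but the directory names and function here are only illustrative:

```python
import shutil
from pathlib import Path

def replicate(source, replica_dirs):
    """Place an identical copy of `source` in each replica location.

    Returns the list of replica paths; the copies are byte-identical,
    so any surviving copy can later stand in for a lost one.
    """
    source = Path(source)
    replicas = []
    for directory in replica_dirs:
        directory = Path(directory)
        directory.mkdir(parents=True, exist_ok=True)
        target = directory / source.name
        shutil.copyfile(source, target)
        replicas.append(target)
    return replicas
```

Because digital copies are bit-exact, each replica can be fixity-checked against the same recorded checksum as the master.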
Understanding digital preservation means comprehending how digital information is produced and reproduced. Because digital information (e.g., a file) can be exactly replicated down to the bit level, it is possible to create identical copies of data. Exact duplicates allow archives and libraries to manage, store, and provide access to identical copies of data across multiple systems and/or environments.
Emulation is the replicating of the functionality of an obsolete system. According to van der Hoeven, "Emulation does not focus on the digital object, but on the hard- and software environment in which the object is rendered. It aims at (re)creating the environment in which the digital object was originally created."[68] Examples include replicating or imitating another operating system,[69] such as emulating an Atari 2600 on a Windows system or WordPerfect 1.0 on a Macintosh. Emulators may be built for applications, operating systems, or hardware platforms. Emulation has been a popular strategy for retaining the functionality of old video game systems, such as with the MAME project. The feasibility of emulation as a catch-all solution has been debated in the academic community. (Granger, 2000)
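The underlying idea, software reproducing the behavior of another machine one instruction at a time, can be illustrated with a deliberately tiny sketch: an interpreter for an invented two-instruction machine. Real emulators such as those for the Atari 2600 do the same for a full CPU and its peripherals; the instruction set below is made up purely for illustration:

```python
def run(program):
    """Execute (opcode, operand) pairs on a toy single-accumulator machine."""
    acc = 0  # the emulated machine's only register
    for op, operand in program:
        if op == "ADD":
            acc += operand
        elif op == "MUL":
            acc *= operand
        else:
            raise ValueError(f"unknown opcode {op}")
    return acc
```

Preserved software for the toy machine would remain runnable on any future platform that can host this interpreter, which is precisely the preservation argument for emulation.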
Raymond A. Lorie has suggested a Universal Virtual Computer (UVC) could be used to run any software in the future on a yet unknown platform.[70] The UVC strategy uses a combination of emulation and migration. The UVC strategy has not yet been widely adopted by the digital preservation community.
Jeff Rothenberg, a major proponent of emulation for digital preservation in libraries, working in partnership with the Koninklijke Bibliotheek and the Nationaal Archief of the Netherlands, developed a software program called Dioscuri, a modular emulator that succeeds in running MS-DOS, WordPerfect 5.1, DOS games, and more.[71]
Another example of emulation as a form of digital preservation can be seen in Emory University's handling of Salman Rushdie's papers. Rushdie donated an outdated computer to the Emory University library, which was so old that the library was unable to extract the papers from the hard drive. In order to procure the papers, the library emulated the old software system and was able to take the papers off his old computer.[72]
This method maintains that preserved objects should be self-describing, virtually "linking content with all of the information required for it to be deciphered and understood".[2] The files associated with the digital object would have details of how to interpret that object by using "logical structures called 'containers' or 'wrappers' to provide a relationship between all information components"[73] that could be used in future development of emulators, viewers or converters through machine-readable specifications.[74] The method of encapsulation is usually applied to collections that will go unused for long periods of time.[74]
Developed by the San Diego Supercomputer Center and funded by the National Archives and Records Administration, this method requires the development of comprehensive and extensive infrastructure that enables "the preservation of the organisation of collection as well as the objects that make up that collection, maintained in a platform independent form".[2] A persistent archive includes both the data constituting the digital object and the context that defines the provenance, authenticity, and structure of the digital entities.[75] This allows for the replacement of hardware or software components with minimal effect on the preservation system. This method can be based on virtual data grids and resembles the OAIS Information Model (specifically the Archival Information Package).
Metadata is data about a digital file that includes information on creation, access rights, restrictions, preservation history, and rights management.[76] Metadata attached to digital files may be affected by file format obsolescence. ASCII is considered to be the most durable format for metadata[77] because it is widespread, backwards compatible when used with Unicode, and uses human-readable characters rather than numeric codes. ASCII retains the information itself, but not the structure in which it is presented. For higher functionality, SGML or XML should be used. Both markup languages are stored in ASCII format, but contain tags that denote structure and format.
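As an illustration of structured, ASCII-storable metadata, the sketch below builds a small XML record with Python's standard library. The element names are illustrative only, not a formal schema such as PREMIS or Dublin Core:

```python
import xml.etree.ElementTree as ET

def make_record(identifier, title, checksum):
    """Serialize a minimal descriptive/preservation metadata record as XML."""
    record = ET.Element("record")
    ET.SubElement(record, "identifier").text = identifier
    ET.SubElement(record, "title").text = title
    # Fixity information travels with the record; the algorithm is an attribute.
    fixity = ET.SubElement(record, "fixity", algorithm="sha256")
    fixity.text = checksum
    return ET.tostring(record, encoding="unicode")
```

The tags carry the structure that bare ASCII text lacks, while the serialized output remains plain, human-readable text.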
A few of the major frameworks for digital preservation repository assessment and certification are described below. A more detailed list is maintained by the U.S. Center for Research Libraries.[78]
In 2007, CRL/OCLC published Trustworthy Repositories Audit & Certification: Criteria & Checklist (TRAC), a document allowing digital repositories to assess their capability to reliably store, migrate, and provide access to digital content. TRAC is based upon existing standards and best practices for trustworthy digital repositories and incorporates a set of 84 audit and certification criteria arranged in three sections: Organizational Infrastructure; Digital Object Management; and Technologies, Technical Infrastructure, and Security.[79]
TRAC "provides tools for the audit, assessment, and potential certification of digital repositories, establishes the documentation requirements required for audit, delineates a process for certification, and establishes appropriate methodologies for determining the soundness and sustainability of digital repositories".[80]
Digital Repository Audit Method Based On Risk Assessment (DRAMBORA), introduced by the Digital Curation Centre (DCC) and DigitalPreservationEurope (DPE) in 2007, offers a methodology and a toolkit for digital repository risk assessment.[81] The tool enables repositories to either conduct the assessment in-house (self-assessment) or to outsource the process.
The DRAMBORA process is arranged in six stages and concentrates on the definition of mandate, characterization of asset base, identification of risks and the assessment of likelihood and potential impact of risks on the repository. The auditor is required to describe and document the repository's role, objectives, policies, activities and assets, in order to identify and assess the risks associated with these activities and assets and define appropriate measures to manage them.[82]
The European Framework for Audit and Certification of Digital Repositories was defined in a memorandum of understanding signed in July 2010 between the Consultative Committee for Space Data Systems (CCSDS), the Data Seal of Approval (DSA) Board, and the German Institute for Standardization (DIN) "Trustworthy Archives – Certification" Working Group.
The framework is intended to help organizations in obtaining appropriate certification as a trusted digital repository and establishes three increasingly demanding levels of assessment:
A German initiative, nestor (the Network of Expertise in Long-Term Storage of Digital Resources), sponsored by the German Ministry of Education and Research, developed a catalogue of criteria for trusted digital repositories in 2004. In 2008 the second version of the document was published. The catalogue, aiming primarily at German cultural heritage and higher education institutions, establishes guidelines for planning, implementing, and self-evaluation of trustworthy long-term digital repositories.[84]
The nestor catalogue of criteria conforms to the OAIS reference model terminology and consists of three sections covering topics related to Organizational Framework, Object Management, and Infrastructure and Security.[85]
In 2002 the Preservation and Long-term Access through Networked Services (PLANETS) project, part of EU Framework Programme 6 for Research and Technological Development, addressed core digital preservation challenges. The primary goal for Planets was to build practical services and tools to help ensure long-term access to digital cultural and scientific assets. The Open Planets project ended May 31, 2010.[86] The outputs of the project are now sustained by the follow-on organisation, the Open Planets Foundation.[86][87] On October 7, 2014 the Open Planets Foundation announced that it would be renamed the Open Preservation Foundation to align with the organization's current direction.[88]
Planning Tool for Trusted Electronic Repositories (PLATTER) is a tool released by DigitalPreservationEurope (DPE) to help digital repositories in identifying their self-defined goals and priorities in order to gain trust from the stakeholders.[89]
PLATTER is intended to be used as a complementary tool to DRAMBORA, NESTOR, and TRAC. It is based on ten core principles for trusted repositories and defines nine Strategic Objective Plans, covering such areas as acquisition, preservation and dissemination of content, finance, staffing, succession planning, technical infrastructure, data and metadata specifications, and disaster planning. The tool enables repositories to develop and maintain documentation required for an audit.[90]
A system for the "audit and certification of trustworthy digital repositories" was developed by the Consultative Committee for Space Data Systems (CCSDS) and published as ISO standard 16363 on 15 February 2012.[91] Extending the OAIS reference model, and based largely on the TRAC checklist, the standard was designed for all types of digital repositories. It provides a detailed specification of criteria against which the trustworthiness of a digital repository can be evaluated.[92]
The CCSDS Repository Audit and Certification Working Group also developed and submitted a second standard, defining operational requirements for organizations intending to provide repository auditing and certification as specified in ISO 16363.[93]This standard was published as ISO 16919 – "requirements for bodies providing audit and certification of candidate trustworthy digital repositories" – on 1 November 2014.[94]
Although preservation strategies vary for different types of materials and between institutions, adhering to nationally and internationally recognized standards and practices is a crucial part of digital preservation activities. Best or recommended practices define strategies and procedures that may help organizations to implement existing standards or provide guidance in areas where no formal standards have been developed.[95]
Best practices in digital preservation continue to evolve and may encompass processes that are performed on content prior to or at the point of ingest into a digital repository as well as processes performed on preserved files post-ingest over time. Best practices may also apply to the process of digitizing analog material and may include the creation of specialized metadata (such as technical, administrative and rights metadata) in addition to standard descriptive metadata. The preservation of born-digital content may include format transformations to facilitate long-term preservation or to provide better access.[96]
No one institution can afford to develop all of the software tools needed to ensure the accessibility of digital materials over the long term. This raises the problem of maintaining a repository of shared tools. The Library of Congress maintained such a repository for years,[97] until that role was assumed by the Community Owned Digital Preservation Tool Registry.[98]
Various best practices and guidelines for digital audio preservation have been developed, including:
The Audio Engineering Society (AES) also issues a variety of standards and guidelines relating to the creation of archival audio content and technical metadata.[104]
The term "moving images" includes analog film and video and their born-digital forms: digital video, digital motion picture materials, and digital cinema. As analog videotape and film become obsolete, digitization has become a key preservation strategy, although many archives do continue to perform photochemical preservation of film stock.[105][106]
"Digital preservation" has a double meaning for audiovisual collections: analog originals are preserved through digital reformatting, with the resulting digital files preserved; and born-digital content is collected, most often in proprietary formats that pose problems for future digital preservation.
There is currently no broadly accepted standard target digital preservation format for analog moving images.[107] The complexity of digital video, as well as the varying needs and capabilities of archival institutions, are reasons why no "one-size-fits-all" long-term preservation format standard exists for digital video, as there is for other types of digital records (e.g., word processing converted to PDF/A, or TIFF for images).[108][109]
Library and archival institutions, such as the Library of Congress and New York University, have made significant efforts to preserve moving images; however, "a national movement to preserve video has not yet materialized".[110] The preservation of audiovisual materials "requires much more than merely putting objects in cold storage":[110] moving image media must be projected and played, moved and shown, and "born-digital materials require a similar approach".[110]
The following resources offer information on analog to digital reformatting and preserving born-digital audiovisual content.
Moving images require a codec for the decoding process; therefore, determining a codec is essential to digital preservation.[116][117] In "A Primer on Codecs for Moving Image and Sound Archives: 10 Recommendations for Codec Selection and Management", written by Chris Lacinak and published by AudioVisual Preservation Solutions, Lacinak stresses the importance of archivists choosing the correct codec, as this can "impact the ability to preserve the digital object".[117][116] The codec selection process is therefore critical, "whether dealing with born-digital content, reformatting older content, or converting analog materials".[117][116] Lacinak's ten recommendations for codec selection and management are: adoption, disclosure, transparency, external dependencies, documentation and metadata, pre-planning, maintenance, obsolescence monitoring, maintenance of the original, and avoidance of unnecessary transcoding or re-encoding.[117][116] To date there is no consensus among the archival community as to which standard codec should be used for the digitization of analog video and the long-term preservation of digital video, nor is there a single "right" codec for a digital object; each archival institution must "make the decision as part of an overall preservation strategy".[117][118][109][116]
A digital container format or wrapper is also required for moving images and, like the codec, must be chosen carefully.[118] According to an international survey conducted in 2010 of over 50 institutions involved with film and video reformatting, "the three main choices for preservation products were AVI, QuickTime (.MOV) or MXF (Material Exchange Format)".[119] These are just a few examples of containers. The National Archives and Records Administration (NARA) has chosen the AVI wrapper as its standard container format for several reasons, including that AVI files are compatible with numerous open source tools such as VLC.[119]
Uncertainty about which formats will or will not become obsolete or become the future standard makes it difficult to commit to one codec and one container.[109] Choosing a format should "be a trade off for which the best quality requirements and long-term sustainability are ensured".[109]
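One way an archive might triage files by wrapper is to inspect their leading "magic" bytes. A real workflow would rely on a characterization tool or format registry rather than hand-rolled checks; this sketch (function name and return strings invented for illustration) tests only the three container signatures mentioned above:

```python
def identify_container(header: bytes) -> str:
    """Guess a moving-image container format from a file's leading bytes."""
    # AVI: a RIFF chunk whose form type is "AVI ".
    if header[:4] == b"RIFF" and header[8:12] == b"AVI ":
        return "AVI"
    # QuickTime/MP4: an 'ftyp' box follows the 4-byte size field.
    if header[4:8] == b"ftyp":
        return "QuickTime/MP4 family"
    # MXF: files open with a SMPTE universal label (partition pack key).
    if header[:4] == bytes([0x06, 0x0E, 0x2B, 0x34]):
        return "MXF"
    return "unknown"
```

In practice an archive would read the first few dozen bytes of each file and route unrecognized formats to manual review.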
By considering the following steps, content creators and archivists can ensure better accessibility and preservation of moving images in the long term:
Email poses special challenges for preservation: email client software varies widely; there is no common structure for email messages; email often communicates sensitive information; individual email accounts may contain business and personal messages intermingled; and email may include attached documents in a variety of file formats. Email messages can also carry viruses or spam content. While email transmission is standardized, there is no formal standard for the long-term preservation of email messages.[121]
Approaches to preserving email may vary according to the purpose for which it is being preserved. For businesses and government entities, email preservation may be driven by the need to meet retention and supervision requirements for regulatory compliance and to allow for legal discovery. (Additional information about email archiving approaches for business and institutional purposes may be found under the separate article, Email archiving.) For research libraries and archives, the preservation of email that is part of born-digital or hybrid archival collections has as its goal ensuring its long-term availability as part of the historical and cultural record.[122]
Several projects developing tools and methodologies for email preservation have been conducted, based on various preservation strategies: normalizing email into XML format, migrating email to a new version of the software, and emulating email environments. Examples include Memories Using Email (MUSE), the Collaborative Electronic Records Project (CERP), E-Mail Collection And Preservation (EMCAP), PeDALS Email Extractor Software (PeDALS), and the XML Electronic Normalizing of Archives tool (XENA).
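The XML-normalization strategy used by projects such as CERP and XENA can be illustrated with a toy sketch. The real projects define much richer schemas; the element names below are invented for the example, which only lifts a few headers and a plain-text body into XML using the Python standard library:

```python
from email import message_from_string
import xml.etree.ElementTree as ET

def email_to_xml(raw: str) -> str:
    """Normalize a raw RFC 5322 message into a simple XML record."""
    msg = message_from_string(raw)
    root = ET.Element("message")
    # Copy a handful of common headers into child elements.
    for header in ("From", "To", "Subject", "Date"):
        value = msg.get(header)
        if value is not None:
            ET.SubElement(root, header.lower()).text = value
    # For simplicity, only single-part plain-text bodies are handled here.
    body = msg.get_payload() if not msg.is_multipart() else ""
    ET.SubElement(root, "body").text = body
    return ET.tostring(root, encoding="unicode")
```

A production tool would also have to handle multipart messages, character encodings, and attachments, which is where most of the real complexity in email preservation lies.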
Some best practices and guidelines for email preservation can be found in the following resources:
In 2007 the Keeping Emulation Environments Portable (KEEP) project, part of the EU Seventh Framework Programme for Research and Technological Development (FP7), developed tools and methodologies to keep digital software objects available in their original context. Digital software objects such as video games may be lost because of digital obsolescence and the non-availability of required legacy hardware or operating system software; such software is referred to as abandonware. Because the source code is often no longer available,[54] emulation is the only preservation option. KEEP provided an emulation framework to help with the creation of such emulators. KEEP was developed by Vincent Joguin, first launched in February 2009, and coordinated by Elisabeth Freyre of the French National Library.[125]
A community project, MAME, aims to emulate any historic computer game, including arcade games, console games and the like, at a hardware level, for future archiving.
In January 2012 the POCOS project funded by JISC organised a workshop on the preservation of gaming environments and virtual worlds.[126]
There are many things consumers and artists can do themselves to help care for their collections at home.
The Library of Congress also hosts a list for the self-preserver, which points to programs and guidelines from other institutions that help the user preserve social media, email, and general formats (such as caring for CDs).[128] Some of the programs listed include:
In 2020, researchers reported in a preprint that they found "176 Open Access journals that, through lack of comprehensive and open archives, vanished from the Web between 2000-2019, spanning all major research disciplines and geographic regions of the world", and that in 2019 only about a third of the 14,068 DOAJ-indexed journals ensured the long-term preservation of their content.[130][131][132] Some scientific research output is not located on the journal's website but on other sites, such as source-code repositories like GitLab. The Internet Archive has archived many – but not all – of the lost academic publications and makes them available on the Web.[133] According to an analysis by the Internet Archive, "18 per cent of all open access articles since 1945, over three million, are not independently archived by us or another preservation organization, other than the publishers themselves".[133] Sci-Hub does academic archiving outside the bounds of contemporary copyright law and also provides access to academic works that do not have an open access license.[133]
"The creation of a 3D model of a historical building needs a lot of effort."[134] Recent advances in technology have led to the development of 3-D rendered buildings in virtual space. Traditionally, buildings in video games had to be rendered via code, and many game studios have produced highly detailed renderings (see Assassin's Creed). But because most preservationists are not highly capable teams of professional coders, universities have begun developing methods based on 3-D laser scanning. Such work was attempted by the National Taiwan University of Science and Technology in 2009. Their goal was "to build as-built 3D computer models of a historical building, the Don Nan-Kuan House, to fulfill the need of digital preservation."[135] With considerable success, they were able to scan the Don Nan-Kuan House with bulky 10 kg (22 lb) cameras, with only minor touch-ups where the scanners were not detailed enough. More recently, in 2018 in Calw, Germany, a team scanned the historic Church of St. Peter and Paul, collecting data via laser scanning and photogrammetry. "The current church's tower is about 64 m high, and its architectonic style is neo-gothic of the late nineteenth century. This church counts with a main nave, a chorus and two lateral naves in each side with tribunes in height. The church shows a rich history, which is visible in the different elements and architectonic styles used. Two small windows between the choir and the tower are the oldest parts preserved, which date to thirteenth century. The church was reconstructed and extended during the sixteenth (expansion of the nave) and seventeenth centuries (construction of tribunes), after the destruction caused by the Thirty Years' War (1618-1648). However, the church was again burned by the French Army under General Mélac at the end of the seventeenth century. The current organ and pulpit are preserved from this time. 
In the late nineteenth century, the church was rebuilt and the old dome Welsch was replaced by the current neo-gothic tower. Other works from this period are the upper section of the pulpit, the choir seats and the organ case. The stained-glass windows of the choir are from the late nineteenth and early twentieth centuries, while some of the nave's windows are from middle of the twentieth century. Second World War having ended, some neo-gothic elements were replaced by pure gothic ones, such as the altar of the church, and some drawings on the walls and ceilings."[136] With this much architectural variance, the building presented both a challenge and an opportunity to combine different technologies in a large space with the goal of a high-resolution model. The results were good and are available to view online.
The Digital Preservation Outreach and Education (DPOE) program, part of the Library of Congress, serves to foster preservation of digital content through a collaborative network of instructors and collection management professionals working in cultural heritage institutions. Composed of Library of Congress staff, the National Trainer Network, the DPOE Steering Committee, and a community of Digital Preservation Education Advocates, as of 2013 the DPOE had 24 working trainers across the six regions of the United States.[137] In 2010 the DPOE conducted an assessment, reaching out to archivists, librarians, and other information professionals around the country. A working group of DPOE instructors then developed a curriculum[138] based on the assessment results and on similar digital preservation curricula designed by other training programs, such as LYRASIS, Educopia Institute, MetaArchive Cooperative, University of North Carolina, DigCCurr (Digital Curation Curriculum) and the Cornell University–ICPSR Digital Preservation Management Workshops. The resulting core principles are also modeled on the principles outlined in "A Framework of Guidance for Building Good Digital Collections" by the National Information Standards Organization (NISO).[139]
In Europe, Humboldt-Universität zu Berlin and King's College London offer a joint program in Digital Curation that emphasizes both digital humanities and the technologies necessary for long-term curation. The MSc in Information Management and Preservation (Digital) offered by the HATII at the University of Glasgow has been running since 2005 and is the pioneering program in the field.
A number of open source products have been developed to assist with digital preservation, including Archivematica, DSpace, Fedora Commons, OPUS, SobekCM and EPrints. The commercial sector also offers digital preservation software tools, such as Ex Libris Ltd.'s Rosetta; Preservica's Cloud, Standard and Enterprise Editions; CONTENTdm; Digital Commons; Equella; intraLibrary; Open Repository; and Vital.[140]
Many research libraries and archives have begun or are about to begin large-scale digital preservation initiatives (LSDIs). The main players in LSDIs are cultural institutions, commercial companies such as Google and Microsoft, and non-profit groups including the Open Content Alliance (OCA), the Million Book Project (MBP), and HathiTrust. The primary motivation of these groups is to expand access to scholarly resources.
Approximately 30 cultural entities, including the 12-member Committee on Institutional Cooperation (CIC), have signed digitization agreements with either Google or Microsoft. Several of these cultural entities are participating in the Open Content Alliance and the Million Book Project. Some libraries are involved in only one initiative, while others have diversified their digitization strategies through participation in multiple initiatives. The three main reasons for library participation in LSDIs are access, preservation, and research and development. It is hoped that digital preservation will ensure that library materials remain accessible for future generations. Libraries have a responsibility to guarantee perpetual access for their materials and a commitment to archive their digital materials. Libraries plan to use digitized copies as backups for works in case they go out of print, deteriorate, or are lost or damaged.
The Arctic World Archive is a facility for data preservation of historical and cultural data from several countries, including open source code.[61]
|
https://en.wikipedia.org/wiki/Digital_preservation
|
An infodemic is a rapid and far-reaching spread of both accurate and inaccurate information about certain issues.[1][2][3] The word is a portmanteau of information and epidemic and is used as a metaphor to describe how misinformation and disinformation can spread like a virus from person to person and affect people like a disease.[4][5] The term, originally coined in 2003 by David Rothkopf, rose to prominence in 2020 during the COVID-19 pandemic.[4]
In his 11 May 2003 article in the Washington Post—also published in Newsday, The Record, the Oakland Tribune, and the China Daily—foreign policy expert David Rothkopf referred to the information epidemic, or "infodemic", in the context of the 2002–2004 SARS outbreak.[6][7][8][9][10] The outbreak of SARS, which was caused by severe acute respiratory syndrome coronavirus 1, began in a remote region of Guangdong, China, in November 2002. By the time the outbreak ended in May 2003, it had reached 30 countries, with over 8,000 confirmed cases and 774 deaths.
Rothkopf, who was, at that time, a member of the advisory committee's board of directors at the Johns Hopkins Bloomberg School of Public Health's Johns Hopkins Center for Health Security, which provides policy recommendations to the United States government and the World Health Organization,[11] said that the infodemic was the second of two concurrent epidemics.[6] Rothkopf described how the "information epidemic" transformed SARS from a regional health crisis into a "debacle" that spread globally with both economic and social repercussions.[8] He said this infodemic "was not the rapid spread of simple news via the media, nor is it simply the rumor mill on steroids. Rather, as with SARS, it is a complex phenomenon caused by the interaction of mainstream media, specialist media and internet sites, and 'informal' media, which is to say wireless phones, text messaging, pagers, faxes, and e-mail, all transmitting some combination of fact, rumor, interpretation, and propaganda."[6] Rothkopf, citing the State Department, said that 2002 was the "year of the most heightened state of terrorism panic in our history" even though terrorism globally had decreased to its "lowest level since 1969".[6] His company, the Washington, DC–based strategic intelligence and analysis firm Intellibridge, which he had founded in 1999, tracked the January 2003 Chinese reports on the outbreak. On 9 February 2003, Intellibridge provided its analysis to the U.S. defense community, and then posted the information on ProMED, a Federation of American Scientists web site.[6]
The general public did not learn of the outbreak until 23 February 2003, when an elderly woman who had recently returned from Hong Kong died of SARS in her home in Toronto, Canada. Her son, who spread the disease in a Toronto hospital, also died.[12] With the first death in North America, the Western media began to cover the outbreak. Rothkopf said that if more had been done earlier to manage both the disease and information about SARS, there might not have been a worldwide panic. The infodemic spread globally, far beyond the countries that had SARS victims, and "set off a chain reaction of economic and social consequences".[6] It also made it harder for health organizations to control the SARS epidemic as panic spread online.[10]
In his 15 December 2002 article "Infodemiology: The epidemiology of (mis)information" in The American Journal of Medicine,[13] health researcher Gunther Eysenbach coined the term infodemiologist[14] and later used the term to refer to attempts at digital disease detection.[15][14]
Use of the term infodemic increased rapidly during the COVID-19 pandemic. A study found that from 2010 to 2020 there were 61 articles mentioning the word infodemic, while between 2020 and 2021 there were 14,301 published stories using the term.[4] The United Nations and the World Health Organization began using the term infodemic during the COVID-19 pandemic as early as 2 February 2020.[1][16] The related term disinfodemic (referring to COVID-19 disinformation campaigns) has been used by UNESCO.[17] By the time the Journal of Medical Internet Research published its June 2020 issue featuring the WHO's framework for managing the infodemic related to the COVID-19 pandemic, the WHO and public health agencies had acknowledged infodemiology as an "emerging scientific field" of critical importance during a pandemic.[14] By 2021, the WHO had published a number of resources clarifying the infodemic.[18]
A Royal Society and British Academy joint report published in October 2020 said of infodemics: "COVID-19 vaccine deployment faces an infodemic with misinformation often filling the knowledge void, characterised by: (1) distrust of science and selective use of expert authority, (2) distrust in pharmaceutical companies and government, (3) straightforward explanations, (4) use of emotion; and, (5) echo chambers". To combat the ill and "inoculate the public", it endorsed the Singaporean POFMA legislation, which criminalises misinformation.[19][20] The Aspen Institute had started its misinformation project even before the pandemic.[21]
A blue-ribbon working group on infodemics, convened by the Forum on Information and Democracy, produced a report in November 2020 highlighting 250 recommendations to protect democracies, human rights, and health.[22]
The Merriam-Webster Dictionary tracked the word's renewed usage during the COVID-19 pandemic.[23]
In his 11 May 2003 article in the Post, Rothkopf wrote that the information epidemic, or "infodemic", was a combination of "[a] few facts, mixed with fear, speculation, and rumor, amplified and relayed swiftly worldwide by modern information technologies."[8]
On 2 February 2020, the World Health Organization defined infodemic as "an over-abundance of information—some accurate and some not—that makes it hard for people to find trustworthy sources and reliable guidance when they need it."[1] A 21 February 2021 WHO publication said that "[a]n infodemic is too much information including false or misleading information in digital and physical environments during a disease outbreak."[18]
Eysenbach described infodemiology as the study of "the determinants and distribution of health information and misinformation".[13]
As COVID-19 swept across the globe, information about how to stay safe and how to identify symptoms became vital. However, especially in the first phases of the pandemic, the amount of false, unvalidated, and partially true information in the media was enormous. Even seemingly reliable government sources did not always follow best practices in disseminating data about COVID-19, with many potentially misleading maps published on official websites.[2][24][25] The inappropriate use of maps on these websites may have contributed to political polarization in response to COVID-19 epidemiological control measures.[26] There was also a proliferation of systematic reviews of COVID-19-related evidence, not all of which were robustly conducted.[27] Researchers have pointed out a few primary challenges of communicating with the public about COVID-19. First, social media platforms that prioritize engagement over accuracy and allow fringe opinions to thrive without correction create an information ecology that is difficult to understand.[25] Second, as fast-moving science and politics intertwine during the pandemic, decisions related to combatting misinformation are complicated by a volatile political environment and frequently changing scientific information.[28] A U.S.-based survey found that during March and April 2020, higher news consumption about COVID-19, especially through social media, was associated with lower levels of knowledge and more fake news beliefs.[29] However, preliminary research published in fall 2021 suggested that visual information (e.g., infographics) about science and scientists, designed to address trust, might be able to mitigate belief in misinformation about COVID-19.[30]
Researchers have been seeking tools to combat infodemics. Gunther Eysenbach identifies four pillars of infodemic management: (1) information monitoring (infoveillance); (2) building eHealth literacy and science literacy capacity; (3) encouraging knowledge refinement and quality improvement processes such as fact-checking and peer review; and (4) accurate and timely knowledge translation, which minimizes distorting factors such as political or commercial influences.[14] Scholars also advocate for tech platforms to police their content more effectively and to empower individuals to make better decisions on their own to promote the emergence of truth. Social media companies may offer a variety of cues to help people judge whether a message is legitimate. For example, Facebook might, in addition to showing how many "likes" a post has received, show a count of "dislikes" to offer a more symmetric view of opinions.[31]
Research on information dissemination during the COVID-19 pandemic identified issues with the standardization and presentation of related information on official U.S. government sources, specifically state and federal government COVID-19 dashboards.[2][24] When the most authoritative sources of information do not present data accurately, bad conclusions are inevitable. The research suggested that official sources of information take steps to ensure that the way data are collected, analyzed, and presented meets the highest standards and adheres to all conventions.[24] Standards for government agencies' web maps should be developed, widely published, and adhered to.[2] Web-based maps and dashboards, if properly employed, are suggested as possible ways to combat infodemics in the future.[2]
However, scholars emphasize that traditionally proposed ways to combat misinformation tend to rely on the assumption that if people encounter the correct information about an issue, they will make rational decisions based on the best scientific information available.[28]Research shows that this is often not the case and that people do not act in the best interest of scientific fact for reasons including "cognitive preferences for old habits, forgetfulness, small inconveniences in the moment, preferences for the path of least resistance, and motivated reasoning."[32]Thus, combatting misinformation should rely on a more nuanced analysis of both the content of the misinformation, as well as the socio-political environment in which it was disseminated.
Financial Times journalist Siddharth Venkataramakrishnan said in his 20 August 2021 article that casting the spread of misinformation and disinformation in terms of disease risks oversimplifying the problem, and that "unlike the status of being healthy or infected by an actual disease, what constitutes accurate information is also subject to change." Venkataramakrishnan also pointed out that the focus of the infodemic has often been on "conspiracy theorists and snake-oil salesmen", largely ignoring the at-times problematic actions and confusing messaging of governments and public health bodies throughout the pandemic.[33]
Communication scholars Felix Simon and Chico Camargo at Oxford University said in their 20 July 2021 New Media & Society article that infodemic as a metaphor "can be misleading, as it conflates multiple forms of social behaviour, oversimplifies a complex situation and helps constitute a phenomenon for which concrete evidence remains patchy." Pointing out that the infodemic as a concept is "journalistically powerful, intuitively satisfying, and in strong resonance with personal experiences and intuition", Simon and Camargo argue that empirical evidence for many of the claims surrounding the term is lacking. Instead of a genuine phenomenon they see the infodemic as "a territorial claim for those who want to apply their skills, a signal to others that they are working in this area, or a framing device to tie one's work to larger debates".[4] Along the same lines, Krause, Freiling, and Scheufele warn of difficulties related to creating "an infodemic about the infodemic" and that research surrounding the term warrants clarification and acknowledgment of uncertainties related to its novelty and impact.[34]
|
https://en.wikipedia.org/wiki/Infodemic
|
Lost media is any piece of media thought to no longer exist in any format, or for which no copies can be located. The term primarily encompasses visual, audio, or audiovisual media such as films, television, radio broadcasts, music,[2] and video games.[3][4]
Many television and radio broadcast masters, recorded onto magnetic tape, may be lost due to the industry practice of wiping. Motion picture studios also often destroyed their original nitrate film elements, as film and broadcast material was often considered ephemeral and of little historical worth once it had made its revenue. Some media considered lost may exist in studio or public archives, but may not be available to most people due to copyright or donor-restriction rules.[5] Because of the unstable nature of every format, films, tapes, phonograph records, optical discs such as CDs and DVDs, and digital data stored on hard drives all naturally degrade over time, especially if not kept in correct storage conditions.
Preservation efforts attempt to avoid the loss of works; this is usually done by storing them in archives.
A large portion of silent films made in the United States are now considered lost.[6] A 2013 report by the United States Library of Congress estimates that 70 percent of silent films made in the United States have been completely lost.[7]
Most lost television broadcasts are early television programs which cannot be accounted for in studio archives or in personal archives. A majority of lost television broadcasts are lost due to deliberate destruction (such as a technique used in the early days of television called wiping) or neglect.[8][9]
The Library of Congress estimates that a large portion of the earliest musical recordings, from the late 19th century to the early 20th century, have been lost. For example, only two percent of the over 3,000 wax cylinders produced by the North American Phonograph Company between 1889 and 1894 are part of the National Recording Preservation Board's sound recording library as of 2024.[10]
A concept related to lost music is "lostwave", a term coined on the Internet for extant recordings of music about whose authors or origin little to no information exists. Some examples of lostwave, such as "Subways of Your Mind" and "Ulterior Motives", both eventually identified in 2024, have been the subjects of online crowdsourced research (since the late 2000s for the former and since 2021 for the latter).[11][12][13]
Video games, including digital downloads, often fade from existence when digital game stores close, as demonstrated by the Wii Shop Channel, the V Cast Network, and the Nintendo eShop on the Wii U and Nintendo 3DS. P.T., a playable teaser for the unreleased Silent Hill game Silent Hills, could no longer be redownloaded within a year of its removal from the PlayStation Network.[14] The Wii U and Nintendo 3DS digital download games Dodge Club Party and Dodge Club Pocket were removed from the Nintendo eShop in 2019 and 2022 and became publicly unavailable for reasons beyond Nintendo's control.[15]
According to the Video Game History Foundation, 87% of American video games released before 2010 are out of print and cannot be acquired outside of the grey market or piracy. Many of these titles are in danger of becoming lost, or already are.[16] Some video game enthusiasts argue that, out of respect for both the original designers and the fans of a game, video game publishers have a duty to make sure that the game remains accessible.[17] Some enthusiasts believe that when publishers do not, consumers are justified in pirating the game, as they are left with no alternative in the absence of proper methods of purchase that would benefit the publishers or creators. In other words, they claim that piracy does no harm in that context: if a publisher wants to benefit from the sale of a game, it needs to ensure the game remains available for sale.
Video game preservationists, including both organizations such as the Video Game History Foundation and hobbyists, seek to preserve video game history that would otherwise have been lost to time due to a variety of factors, such as degrading storage media, digital game stores closing, or games becoming unavailable because of licensing or financial issues. Their motivations are that the games hold cultural and historical value,[18] can be educational material for the future (such as learning to code by imitating a classic game from scratch, or learning about past peoples' lives[18]), or simply hold emotional value through nostalgia.
Data stored in electronic computers risks being lost if it is not periodically migrated into more recent file formats. This happens because, as new computer systems are developed and new technologies are built, now-obsolete systems may break down over time, leaving the data inside inaccessible.[19] Electronic data preservation is further complicated by the fact that, unless an emulator capable of decoding the data for a given computer system is available at the time of preservation, the original data may become inaccessible as the original hardware breaks down, since decoding may depend on that hardware,[20] although in some cases the original data may be recoverable through lengthy reverse engineering aimed at understanding the original computer system well enough to decode the data.[21]
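The migration strategy described above, copying data forward into current formats before the old ones become unreadable, can be sketched with a trivial example. CSV-to-JSON here stands in for any legacy-to-current conversion (the function names are invented for illustration); the essential step is verifying the migrated copy against the source:

```python
import csv
import io
import json

def migrate_csv_to_json(csv_text: str) -> str:
    """Migrate tabular records from CSV into JSON, a more self-describing format."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

def verify_migration(csv_text: str, json_text: str) -> bool:
    """Confirm no records were lost or altered during migration."""
    original = list(csv.DictReader(io.StringIO(csv_text)))
    migrated = json.loads(json_text)
    return original == migrated
```

Real migrations must also handle lossy conversions, where verification means comparing significant properties of the records rather than byte-for-byte equality.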
To mitigate the loss of their data, the Arctic World Archive has been chosen as the location for the preservation of the code in public repositories on GitHub.[22] The Arctic World Archive also stores a wide range of data of interest to multiple companies, institutions, and governments, including the constitutions of Brazil and Norway.[23]
Media released on the internet, such as file sharing, livestreams, and blog posts, are especially vulnerable to being lost due to a lack of archiving and various issues, such as:
Media released solely to streaming services in lieu of physical release are also vulnerable to being rendered legally inaccessible by the service's parent company, leaving consumers no legal way to purchase or stream the media and rendering it lost media by legal means. The Electronic Frontier Foundation describes this phenomenon as "a whole new kind of lost media [that's] only going to be preserved by those individuals who did the work to make and save copies of it, often risking draconian legal liability, regardless of how the studio feels about that work".[24][25][26]
|
https://en.wikipedia.org/wiki/Lost_media
|
This is a list of performance analysis tools for use in software development.
The following tools work based on log files that can be generated from various systems.
The following tools work for multiple languages or binaries.
Supports multi-threaded and multi-process applications, such as those with MPI or OpenMP parallelism, and scales to very high node counts.
GUI-based code profiler; does only basic timer-based profiling on Intel processors. Based on OProfile.
Groups of events are monitored by selecting specific instruments, such as File Activity, Memory Allocations, Time Profiler, and GPU activity. For the system-wide impact of the executable, System Trace, System Usage, Network Usage, and Energy Log are useful.
(formerly VTune Amplifier)
|
https://en.wikipedia.org/wiki/List_of_performance_analysis_tools
|
A debugger is a computer program used to test and debug other programs (the "target" programs). Common features of debuggers include the ability to run or halt the target program using breakpoints, step through code line by line, and display or modify the contents of memory, CPU registers, and stack frames.
The code to be examined might alternatively be running on an instruction set simulator (ISS), a technique that allows great power in its ability to halt when specific conditions are encountered, but which will typically be somewhat slower than executing the code directly on the appropriate (or the same) processor. Some debuggers offer two modes of operation, full or partial simulation, to limit this impact.
An exception occurs when the program cannot normally continue because of a programming bug or invalid data. For example, the program might have tried to use an instruction not available on the current version of the CPU, or attempted to access unavailable or protected memory. When the program "traps" or reaches a preset condition, the debugger typically shows the location in the original code if it is a source-level debugger or symbolic debugger, commonly now seen in integrated development environments. If it is a low-level debugger or a machine-language debugger, it shows the line in the disassembly (unless it also has online access to the original source code and can display the appropriate section of code from the assembly or compilation).
Typically, debuggers offer a query processor, a symbol resolver, an expression interpreter, and a debug support interface at the top level.[1] Debuggers also offer more sophisticated functions such as running a program step by step (single-stepping or program animation), stopping (breaking) the program to examine its current state at some event or specified instruction by means of a breakpoint, and tracking the values of variables.[2] Some debuggers can modify the program state while it is running. It may also be possible to continue execution at a different location in the program to bypass a crash or logical error.
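Single-stepping and breakpoints are typically built on a tracing hook exposed by the runtime or operating system; CPython's pdb, for instance, sits on top of such a hook. A minimal sketch (not a full debugger) using Python's sys.settrace, where the traced function and the breakpoint behavior are illustrative choices:

```python
import sys

BREAK_LINE = None      # set to a line number to "break" there
executed = []          # (function name, line number) for each single-step

def tracer(frame, event, arg):
    # Called by the interpreter on every function call; returning a trace
    # function from the "call" event enables per-line (single-step) events.
    if event == "line":
        executed.append((frame.f_code.co_name, frame.f_lineno))
        if frame.f_lineno == BREAK_LINE:
            # A real debugger would pause here and let the user inspect
            # frame.f_locals, walk the stack, or modify variables.
            print("break at line", frame.f_lineno, frame.f_locals)
    return tracer

def target(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)   # install the hook; new frames are traced from now on
result = target(3)
sys.settrace(None)     # uninstall

print(result)                # 3 (0 + 1 + 2)
print(len(executed) > 0)     # True: every line of target() produced a step
```

The same hook is what lets a debugger track variable values: each "line" event hands it the live frame, so inspecting or changing `frame.f_locals` at that point is exactly the "modify the program state while it is running" capability described above.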
The same functionality that makes a debugger useful for correcting bugs allows it to be used as a software cracking tool to evade copy protection, digital rights management, and other software protection features. It is often also useful as a general verification tool, for fault coverage, and as a performance analyzer, especially if instruction path lengths are shown.[3] Early microcomputers with disk-based storage often benefited from the ability to diagnose and recover corrupted directory or registry data records, to "undelete" files marked as deleted, or to crack file password protection.
Most mainstream debugging engines, such as gdb and dbx, provide console-based command-line interfaces. Debugger front-ends are popular extensions to debugger engines that provide IDE integration, program animation, and visualization features.
Record and replay debugging,[4] also known as "software flight recording" or "program execution recording", captures application state changes and stores them to disk as each instruction in a program executes. The recording can then be replayed over and over, and interactively debugged, to diagnose and resolve defects. Record and replay debugging is very useful for remote debugging and for resolving intermittent, non-deterministic, and other hard-to-reproduce defects.
Some debuggers include a feature called "reverse debugging", also known as "historical debugging" or "backwards debugging", which makes it possible to step a program's execution backwards in time. Microsoft Visual Studio (2010 Ultimate, 2012 Ultimate, 2013 Ultimate, and 2015 Enterprise editions) offers IntelliTrace reverse debugging for C#, Visual Basic .NET, and some other languages, but not C++. Reverse debuggers also exist for C, C++, Java, Python, Perl, and other languages; some are open source and some are proprietary commercial software. Some reverse debuggers slow down the target by orders of magnitude, but the best cause a slowdown of 2× or less. Reverse debugging is very useful for certain types of problems, but is not yet commonly used.[5]
In addition to the features of reverse debuggers, time travel debugging also allows users to interact with the program, changing the history if desired, and to watch how the program responds.
Some debuggers operate on a single specific language, while others can handle multiple languages transparently. For example, if the main target program is written in COBOL but calls assembly language subroutines and PL/1 subroutines, the debugger may have to dynamically switch modes to accommodate the changes in language as they occur.
Some debuggers also incorporate memory protection to avoid storage violations such as buffer overflow. This may be extremely important in transaction processing environments where memory is dynamically allocated from memory 'pools' on a task-by-task basis.
Most modern microprocessors have at least one of these features in their CPU design to make debugging easier:
Some of the most capable and popular debuggers implement only a simple command-line interface (CLI), often to maximize portability and minimize resource consumption. Developers typically consider debugging via a graphical user interface (GUI) easier and more productive.[citation needed] This is the reason for visual front-ends, which allow users to monitor and control subservient CLI-only debuggers via a graphical user interface. Some GUI debugger front-ends are designed to be compatible with a variety of CLI-only debuggers, while others are targeted at one specific debugger.
Debuggers are often used to crack or pirate software, which is usually illegal even when done non-maliciously. Crackmes are programs specifically designed to be cracked or debugged; they allow those with debuggers to practice their debugging skills without getting into legal trouble.
Some widely used debuggers are:
Earlier minicomputer debuggers include:
Mainframe debuggers include:
|
https://en.wikipedia.org/wiki/Debugger
|
Software engineering is a branch of both computer science and engineering focused on designing, developing, testing, and maintaining software applications. It involves applying engineering principles and computer programming expertise to develop software systems that meet user needs.[1][2][3][4]
The terms programmer and coder overlap with software engineer, but they imply only the construction aspect of a typical software engineer's workload.[5]
A software engineer applies a software development process,[1][6] which involves defining, implementing, testing, managing, and maintaining software systems, as well as developing the software development process itself.
Beginning in the 1960s, software engineering was recognized as a separate field of engineering.
The development of software engineering was seen as a struggle. Problems included software that was over budget, missed deadlines, required extensive debugging and maintenance, failed to meet the needs of consumers, or was never completed.
In 1968, NATO held the first software engineering conference, where issues related to software were addressed and guidelines and best practices for the development of software were established.[7]
The origins of the term software engineering have been attributed to various sources. The term appeared in a list of services offered by companies in the June 1965 issue of Computers and Automation[8] and was used more formally in the August 1966 issue of Communications of the ACM (Volume 9, number 8) in "President's Letter to the ACM Membership" by Anthony A. Oettinger.[9][10][11] It is also associated with the title of a NATO conference in 1968 by Professor Friedrich L. Bauer.[12] Margaret Hamilton described the discipline of "software engineering" during the Apollo missions to give what they were doing legitimacy.[13] At the time, there was perceived to be a "software crisis".[14][15][16] The 40th International Conference on Software Engineering (ICSE 2018) celebrated 50 years of "software engineering" with plenary keynotes by Frederick Brooks[17] and Margaret Hamilton.[18]
In 1984, the Software Engineering Institute (SEI) was established as a federally funded research and development center headquartered on the campus of Carnegie Mellon University in Pittsburgh, Pennsylvania, United States.[19] Watts Humphrey founded the SEI Software Process Program, aimed at understanding and managing the software engineering process.[19] The Process Maturity Levels it introduced became the Capability Maturity Model Integration for Development (CMMI-DEV), which defines how the US Government evaluates the abilities of a software development team.
Modern, generally accepted best practices for software engineering have been collected by the ISO/IEC JTC 1/SC 7 subcommittee and published as the Software Engineering Body of Knowledge (SWEBOK).[6] Software engineering is considered one of the major computing disciplines.[20]
Notable definitions of software engineering include:
The term has also been used less formally:
Individual commentators have disagreed sharply on how to define software engineering or its legitimacy as an engineering discipline. David Parnas has said that software engineering is, in fact, a form of engineering.[30][31] Steve McConnell has said that it is not, but that it should be.[32] Donald Knuth has said that programming is an art and a science.[33] Edsger W. Dijkstra claimed that the terms software engineering and software engineer have been misused in the United States.[34]
Requirements engineering is about the elicitation, analysis, specification, and validation of requirements for software. Software requirements can be functional, non-functional, or domain requirements.
Functional requirements describe expected behaviors (i.e. outputs). Non-functional requirements specify issues like portability, security, maintainability, reliability, scalability, performance, reusability, and flexibility. They are classified into the following types: interface constraints, performance constraints (such as response time, security, storage space, etc.), operating constraints, life cycle constraints (maintainability, portability, etc.), and economic constraints. Knowledge of how the system or software works is needed when specifying non-functional requirements. Domain requirements have to do with the characteristics of a certain category or domain of projects.[35]
Software design is the process of making high-level plans for the software. Design is sometimes divided into levels:
Software construction typically involves programming (a.k.a. coding), unit testing, integration testing, and debugging so as to implement the design.[1][6] "Software testing is related to, but different from, ... debugging".[6] Testing during this phase is generally performed by the programmer, with the purpose of verifying that the code behaves as designed and of knowing when the code is ready for the next level of testing.[citation needed]
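A unit test of the kind written during construction exercises one unit in isolation against its design. A minimal sketch using Python's standard unittest module, where the function under test and its contract are invented for illustration:

```python
import unittest

def normalize_version(tag):
    """Strip a leading 'v' and split a version tag into integer parts."""
    if tag.startswith("v"):
        tag = tag[1:]
    return tuple(int(part) for part in tag.split("."))

class NormalizeVersionTest(unittest.TestCase):
    def test_strips_leading_v(self):
        self.assertEqual(normalize_version("v1.2.3"), (1, 2, 3))

    def test_plain_tag(self):
        self.assertEqual(normalize_version("10.0"), (10, 0))

    def test_rejects_garbage(self):
        # The unit's (assumed) contract: non-numeric parts are an error,
        # not something to guess at.
        with self.assertRaises(ValueError):
            normalize_version("v1.beta")

if __name__ == "__main__":
    unittest.main(exit=False, verbosity=0)
```

Each test pins down one piece of the design, which is what lets the programmer "know when the code is ready for the next level of testing": a green run means the unit's stated behavior holds for these cases.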
Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the software under test.[1][6]
When described separately from construction, testing is typically performed by test engineers or quality assurance instead of the programmers who wrote the code. It is performed at the system level and is considered an aspect of software quality.
Program analysis is the process of analyzing computer programs with respect to an aspect such as performance, robustness, or security.
Software maintenance refers to supporting the software after release. It may include, but is not limited to: error correction, optimization, deletion of unused and discarded features, and enhancement of existing features.[1][6]
Usually, maintenance takes up 40% to 80% of project cost.[37]
Knowledge of computer programming is a prerequisite for becoming a software engineer. In 2004, the IEEE Computer Society produced the SWEBOK, which has been published as ISO/IEC Technical Report 19759:2005, describing the body of knowledge that they recommend be mastered by a graduate software engineer with four years of experience.[38] Many software engineers enter the profession by obtaining a university degree or training at a vocational school. One standard international curriculum for undergraduate software engineering degrees was defined by the Joint Task Force on Computing Curricula of the IEEE Computer Society and the Association for Computing Machinery, and updated in 2014.[20] A number of universities have software engineering degree programs; as of 2010[update], there were 244 campus Bachelor of Software Engineering programs, 70 online programs, 230 masters-level programs, 41 doctorate-level programs, and 69 certificate-level programs in the United States.
In addition to university education, many companies sponsor internships for students wishing to pursue careers in information technology. These internships can introduce the student to real-world tasks that typical software engineers encounter every day. Similar experience can be gained through military service in software engineering.
Half of all practitioners today have degrees in computer science, information systems, or information technology.[citation needed] A small but growing number of practitioners have software engineering degrees. In 1987, the Department of Computing at Imperial College London introduced the first three-year software engineering bachelor's degree in the world; in the following year, the University of Sheffield established a similar program.[39] In 1996, the Rochester Institute of Technology established the first software engineering bachelor's degree program in the United States; however, it did not obtain ABET accreditation until 2003, the same year as Rice University, Clarkson University, Milwaukee School of Engineering, and Mississippi State University.[40] In 1997, PSG College of Technology in Coimbatore, India, was the first to start a five-year integrated Master of Science degree in software engineering.[citation needed]
Since then, software engineering undergraduate degrees have been established at many universities. A standard international curriculum for undergraduate software engineering degrees, SE2004, was defined by a steering committee between 2001 and 2004 with funding from the Association for Computing Machinery and the IEEE Computer Society. As of 2004[update], about 50 universities in the U.S. offer software engineering degrees, which teach both computer science and engineering principles and practices. The first software engineering master's degree was established at Seattle University in 1979. Since then, graduate software engineering degrees have been made available from many more universities. Likewise in Canada, the Canadian Engineering Accreditation Board (CEAB) of the Canadian Council of Professional Engineers has recognized several software engineering programs.
In 1998, the US Naval Postgraduate School (NPS) established the first doctorate program in software engineering in the world.[citation needed] Additionally, many online advanced degrees in software engineering have appeared, such as the Master of Science in Software Engineering (MSE) degree offered through the Computer Science and Engineering Department at California State University, Fullerton. Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers.[41] ETS (École de technologie supérieure) University and UQAM (Université du Québec à Montréal) were mandated by IEEE to develop the Software Engineering Body of Knowledge (SWEBOK), which has become an ISO standard describing the body of knowledge covered by a software engineer.[6]
Legal requirements for the licensing or certification of professional software engineers vary around the world. In the UK, there is no licensing or legal requirement to assume or use the job title Software Engineer. In some areas of Canada, such as Alberta, British Columbia, Ontario,[42] and Quebec, software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation. In Europe, software engineers can obtain the European Engineer (EUR ING) professional title. Software engineers can also become professionally qualified as a Chartered Engineer through the British Computer Society.
In the United States, the NCEES began offering a Professional Engineer exam for software engineering in 2013, thereby allowing software engineers to be licensed and recognized.[43] NCEES ended the exam after April 2019 due to lack of participation.[44] Mandatory licensing is still largely debated and perceived as controversial.[45][46]
The IEEE Computer Society and the ACM, the two main US-based professional organizations of software engineering, publish guides to the profession of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge – 2004 Version, or SWEBOK, defines the field and describes the knowledge the IEEE expects a practicing software engineer to have. The most current version is SWEBOK v4.[6] The IEEE also promulgates a "Software Engineering Code of Ethics".[47]
There are an estimated 26.9 million professional software engineers in the world as of 2022, up from 21 million in 2016.[48][49]
Many software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process. Other organizations require software engineers to do many or all of them. In large projects, people may specialize in only one role. In small projects, people may fill several or all roles at the same time. Many companies hire interns, often university or college students during a summer break, or externships. Specializations include analysts, architects, developers, testers, technical support, middleware analysts, project managers, software product managers, educators, and researchers.
Most software engineers and programmers work 40 hours a week, but about 15 percent of software engineers and 11 percent of programmers worked more than 50 hours a week in 2008.[50] Potential injuries in these occupations are possible because, like other workers who spend long periods sitting in front of a computer terminal typing at a keyboard, engineers and programmers are susceptible to eyestrain, back discomfort, thrombosis, obesity, and hand and wrist problems such as carpal tunnel syndrome.[51]
The U.S. Bureau of Labor Statistics (BLS) counted 1,365,500 software developers holding jobs in the U.S. in 2018.[52] Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and many software engineers hold computer science degrees.[53] The BLS estimates that from 2023 to 2033 computer software engineering jobs will increase by 17%.[54] This is down from the 2022 to 2032 BLS estimate of 25%,[54][55] and further down from the 30% estimate for 2010 to 2020.[56] Due to this trend, job growth may not be as fast as during the last decade, as jobs that would have gone to computer software engineers in the United States may instead be outsourced to computer software engineers in countries such as India.[57][50] In addition, the BLS Occupational Outlook for computer programmers predicts a decline of 7 percent from 2016 to 2026, of 9 percent from 2019 to 2029, of 10 percent from 2021 to 2031,[57] and of 11 percent from 2022 to 2032.[57] Since computer programming can be done from anywhere in the world, companies sometimes hire programmers in countries where wages are lower.[57][58][59] Furthermore, the ratio of women in many software fields has been declining over the years as compared to other engineering fields.[60] There is also the concern that recent advances in artificial intelligence might impact the demand for future generations of software engineers.[61][62][63][64][65][66][67] However, this trend may change or slow in the future as many current software engineers in the U.S. market leave the profession or age out of the market in the next few decades.[57]
The Software Engineering Institute offers certifications on specific topics like security, process improvement, and software architecture.[68] IBM, Microsoft, and other companies also sponsor their own certification examinations. Many IT certification programs are oriented toward specific technologies and managed by the vendors of these technologies.[69] These certification programs are tailored to the institutions that would employ people who use these technologies.
Broader certification of general software engineering skills is available through various professional societies. As of 2006[update], the IEEE had certified over 575 software professionals as a Certified Software Development Professional (CSDP).[70] In 2008 they added an entry-level certification known as the Certified Software Development Associate (CSDA).[71] The ACM had a professional certification program in the early 1980s,[citation needed] which was discontinued due to lack of interest. The ACM and the IEEE Computer Society together examined the possibility of licensing software engineers as Professional Engineers in the 1990s, but eventually decided that such licensing was inappropriate for the professional industrial practice of software engineering.[45] John C. Knight and Nancy G. Leveson presented a more balanced analysis of the licensing issue in 2002.[46]
In the U.K., the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified members (MBCS). Software engineers may be eligible for membership of the British Computer Society or the Institution of Engineering and Technology and so qualify to be considered for Chartered Engineer status through either of those institutions. In Canada, the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP).[72] In Ontario, Canada, software engineers who graduate from a Canadian Engineering Accreditation Board (CEAB) accredited program, successfully complete PEO's (Professional Engineers Ontario) Professional Practice Examination (PPE), and have at least 48 months of acceptable engineering experience are eligible to be licensed through Professional Engineers Ontario and can become Professional Engineers (P.Eng).[73] The PEO does not recognize any online or distance education, however, and does not consider computer science programs to be equivalent to software engineering programs despite the tremendous overlap between the two. This has sparked controversy and a certification war, and has kept the number of P.Eng holders for the profession exceptionally low. The vast majority of working professionals in the field hold a degree in CS, not SE; given the difficult certification path for holders of non-SE degrees, most never bother to pursue the license.
The initial impact of outsourcing, and the relatively lower cost of international human resources in developing countries, led to a massive migration of software development activities from corporations in North America and Europe to India and, later, China, Russia, and other developing countries. This approach had some flaws, mainly the distance and time zone differences that prevented human interaction between clients and developers, and the massive job transfer, which had a negative impact on many aspects of the software engineering profession. For example, some students in the developed world avoid education related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers.[74] Although statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected.[75] Nevertheless, the ability to smartly leverage offshore and near-shore resources via the follow-the-sun workflow has improved the overall operational capability of many organizations.[76] When North Americans leave work, Asians are just arriving to work; when Asians are leaving work, Europeans arrive to work. This provides a continuous ability to have human oversight of business-critical processes 24 hours per day, without paying overtime compensation or disrupting sleep patterns, a key human resource.
While global outsourcing has several advantages, global, and generally distributed, development can run into serious difficulties resulting from the distance between developers. This is due to the key elements of this type of distance, which have been identified as geographical, temporal, cultural, and communication-related (the last of which includes the use of different languages and dialects of English in different locations).[77] Research has been carried out in the area of global software development over the last 15 years, and an extensive body of relevant work has been published highlighting the benefits and problems associated with this complex activity. As with other aspects of software engineering, research is ongoing in this and related areas.
There are various prizes in the field of software engineering:
Some call for licensing, certification and codified bodies of knowledge as mechanisms for spreading the engineering knowledge and maturing the field.[81]
Some claim that the concept of software engineering is so new that it is rarely understood, and it is widely misinterpreted, including in software engineering textbooks, papers, and among the communities of programmers and crafters.[82]
Some claim that a core issue with software engineering is that its approaches are not empirical enough because a real-world validation of approaches is usually absent, or very limited and hence software engineering is often misinterpreted as feasible only in a "theoretical environment."[82]
Edsger Dijkstra, a founder of many of the concepts in software development today, rejected the idea of "software engineering" up until his death in 2002, arguing that those terms were poor analogies for what he called the "radical novelty" of computer science:
A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot."[83]
|
https://en.wikipedia.org/wiki/Software_engineering
|
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks.[1][2] It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process, with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process.
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices.[3][4] In 1206, the Arab engineer Al-Jazari invented a programmable drum machine in which a musical mechanical automaton could be made to play different rhythms and drum patterns via pegs and cams.[5][6] In 1801, the Jacquard loom could produce entirely different weaves by changing the "program", a series of pasteboard cards with holes punched in them.
Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.[7]
The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine.[8] The algorithm, which was conveyed through notes on a translation of Luigi Federico Menabrea's paper on the Analytical Engine, was mainly conceived by Lovelace, as can be discerned through her correspondence with Babbage. However, Charles Babbage himself had written a program for the Analytical Engine in 1837.[9][10] Lovelace was also the first to see a broader application for the Analytical Engine beyond mathematical calculations.
In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form.[11] Later, a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604 were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.[12]
Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.
High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware.
The first compiler-related tool, the A-0 System, was developed in 1952[13] by Grace Hopper, who also coined the term "compiler".[14][15] FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957,[16] and many other languages were soon developed—in particular, COBOL, aimed at commercial data processing, and Lisp, for computer research.
These compiled languages allow the programmer to write programs in terms that are syntactically richer and more capable of abstracting the code, making it easy to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier[16] by allowing programmers to specify calculations by entering a formula using infix notation.
Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.
Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important:[17][18]
Using automated tests and fitness functions can help to maintain some of the aforementioned attributes.[20]
In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability, and most importantly maintainability.
Readability is important because programmers spend the majority of their time reading, trying to understand, reusing, and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.[21]
Following a consistent programming style often helps readability. However, readability is more than just programming style. Many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability.[22] Some of these factors include:
The presentation aspects of this (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.
Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like code refactoring can enhance readability.
The academic field and the engineering practice of computer programming are concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using Big O notation, which expresses resource use—such as execution time or memory consumption—in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
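As an illustration of why these orders matter, the following C sketch (function names are invented for this example) contrasts linear search, which is O(n), with binary search on sorted data, which is O(log n)—roughly 1,000,000 versus about 20 comparisons for a million elements.

```c
#include <assert.h>
#include <stddef.h>

/* O(n): examine every element until the key is found. */
static long linear_search(const int *a, size_t n, int key) {
    for (size_t i = 0; i < n; i++)
        if (a[i] == key)
            return (long)i;
    return -1;
}

/* O(log n): halve the sorted search range on every step. */
static long binary_search(const int *a, size_t n, int key) {
    size_t lo = 0, hi = n;                /* half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;  /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return (long)mid;
        if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return -1;
}
```

Choosing between the two is exactly the kind of complexity-driven decision described above: binary search wins only when the data is already sorted, so the cost of sorting must be weighed against the number of lookups.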
The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). There exist many different approaches for each of those tasks. One approach popular for requirements analysis is use case analysis. Many programmers use forms of Agile software development, where the various stages of formal software development are more integrated together into short cycles that take a few weeks rather than years. There are many approaches to the software development process.
Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both OOAD and MDA.
A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).
Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic programming languages.
It is very difficult to determine what the most popular modern programming languages are. Methods of measuring programming language popularity include counting the number of job advertisements that mention the language,[23] the number of books sold and courses teaching the language (this overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (this underestimates the number of users of business languages such as COBOL).
Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers,[24] often on large mainframe computers; Fortran in engineering applications; scripting languages in Web development; and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language with new functionality added (for example, C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result loses efficiency and the ability for low-level manipulation).
Debugging is a very important task in the software development process, since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example with parallel processes or some unusual software bugs. A specific user environment and usage history can also make it difficult to reproduce the problem.
After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler makes it crash when parsing some large source file, a simplified test case containing only a few lines from the original source file can be sufficient to reproduce the same crash. Trial-and-error/divide-and-conquer is needed: the programmer will try to remove some parts of the original test case and check if the problem still exists. When debugging a problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check whether the remaining actions are sufficient for the bug to appear. Scripting and breakpointing are also part of this process.
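The divide-and-conquer reduction described above can be sketched in C (a toy version, not a full delta-debugging implementation; the predicate triggers_bug is a stand-in for actually re-running the crashing program on a candidate input, and inputs are assumed shorter than 256 characters):

```c
#include <assert.h>
#include <string.h>

/* Stand-in for re-running the program: pretend the crash occurs
 * whenever the input contains an 'X'. */
static int triggers_bug(const char *input) {
    return strchr(input, 'X') != NULL;
}

/* Repeatedly try dropping half of the input while the failure still
 * reproduces, shrinking the test case in place. */
static void reduce(char *buf) {
    size_t len = strlen(buf);
    while (len > 1) {
        size_t half = len / 2;
        char saved[256];
        strcpy(saved, buf);
        /* Try keeping only the second half of the input. */
        memmove(buf, buf + half, len - half + 1);
        if (triggers_bug(buf)) { len = strlen(buf); continue; }
        /* Restore, then try keeping only the first half. */
        strcpy(buf, saved);
        buf[half] = '\0';
        if (triggers_bug(buf)) { len = half; continue; }
        /* Neither half alone reproduces the failure; stop here (a real
         * reducer would go on to try finer-grained subsets). */
        strcpy(buf, saved);
        break;
    }
}
```

Starting from "abcXdef", this sketch shrinks the failing input down to the single character that provokes the hypothetical crash.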
Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors, such as Emacs, allow GDB to be invoked through them, to provide a visual environment.
Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones.
Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones.
Allen Downey, in his book How To Think Like A Computer Scientist, writes:
Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
Learning to program has a long history related to professional standards and practices, academic initiatives and curriculum, and commercial books and materials for students, self-taught learners, hobbyists, and others who desire to create or customize software for personal use. Since the 1960s, learning to program has taken on the characteristics of a popular movement, with the rise of academic disciplines, inspirational leaders, collective identities, and strategies to grow the movement and institutionalize change.[26] Through these social ideals and educational agendas, learning to code has become important not just for scientists and engineers, but for millions of citizens who have come to believe that creating software is beneficial to society and its members.
In 1957, there were approximately 15,000 computer programmers employed in the U.S., a figure that accounts for 80% of the world's active developers. In 2014, there were approximately 18.5 million programmers in the world, of whom 11 million can be considered professionals and 7.5 million students or hobbyists.[27] Before the rise of the commercial Internet in the mid-1990s, most programmers learned about software construction through books, magazines, user groups, and informal instruction methods, with academic coursework and corporate training playing important roles for professional workers.[28]
The first book containing specific instructions about how to program a computer may have been Maurice Wilkes, David Wheeler, and Stanley Gill's Preparation of Programs for an Electronic Digital Computer (1951). The book offered a selection of common subroutines for handling basic operations on the EDSAC, one of the world's first stored-program computers.
When high-level languages arrived, they were introduced by numerous books and materials that explained language keywords, managing program flow, working with data, and other concepts. These languages included FLOW-MATIC, COBOL, FORTRAN, ALGOL, Pascal, BASIC, and C. An example of an early programming primer from these years is Marshal H. Wrubel's A Primer of Programming for Digital Computers (1959), which included step-by-step instructions for filling out coding sheets, creating punched cards, and using the keywords in IBM's early FORTRAN system.[29] Daniel McCracken's A Guide to FORTRAN Programming (1961) presented FORTRAN to a larger audience, including students and office workers.
In 1961, Alan Perlis suggested that all university freshmen at Carnegie Technical Institute take a course in computer programming.[30] His advice was published in the popular technical journal Computers and Automation, which became a regular source of information for professional programmers.
Programmers soon had a range of learning texts at their disposal. Programmer's references listed keywords and functions related to a language, often in alphabetical order, as well as technical information about compilers and related systems. An early example was IBM's Programmers' Reference Manual: the FORTRAN Automatic Coding System for the IBM 704 EDPM (1956).
Over time, the genre of programmer's guides emerged, which presented the features of a language in tutorial or step-by-step format. Many early primers started with a program known as "Hello, World!", which presented the shortest program a developer could create in a given system. Programmer's guides then went on to discuss core topics like declaring variables, data types, formulas, flow control, user-defined functions, manipulating data, and other topics.
Early and influential programmer's guides included John G. Kemeny and Thomas E. Kurtz's BASIC Programming (1967), Kathleen Jensen and Niklaus Wirth's The Pascal User Manual and Report (1971), and Brian Kernighan and Dennis Ritchie's The C Programming Language (1978). Similar books for popular audiences (but with a much lighter tone) included Bob Albrecht's My Computer Loves Me When I Speak BASIC (1972), Al Kelley and Ira Pohl's A Book on C (1984), and Dan Gookin's C for Dummies (1994).
Beyond language-specific primers, there were numerous books and academic journals that introduced professional programming practices. Many were designed for university courses in computer science, software engineering, or related disciplines. Donald Knuth's The Art of Computer Programming (1968 and later) presented hundreds of computational algorithms and their analysis. The Elements of Programming Style (1974), by Brian W. Kernighan and P. J. Plauger, concerned itself with programming style, the idea that programs should be written not only to satisfy the compiler but human readers. Jon Bentley's Programming Pearls (1986) offered practical advice about the art and craft of programming in professional and academic contexts. Texts specifically designed for students included Doug Cooper and Michael Clancy's Oh! Pascal! (1982), Alfred Aho's Data Structures and Algorithms (1983), and Daniel Watt's Learning with Logo (1983).
As personal computers became mass-market products, thousands of trade books and magazines sought to teach professional, hobbyist, and casual users to write computer programs. A sample of these learning resources includes BASIC Computer Games, Microcomputer Edition (1978), by David Ahl; Programming the Z80 (1979), by Rodnay Zaks; Programmer's CP/M Handbook (1983), by Andy Johnson-Laird; C Primer Plus (1984), by Mitchell Waite and The Waite Group; The Peter Norton Programmer's Guide to the IBM PC (1985), by Peter Norton; Advanced MS-DOS (1986), by Ray Duncan; Learn BASIC Now (1989), by Michael Halvorson and David Rygmyr; Programming Windows (1992 and later), by Charles Petzold; Code Complete: A Practical Handbook for Software Construction (1993), by Steve McConnell; and Tricks of the Game-Programming Gurus (1994), by André LaMothe.
The PC software industry spurred the creation of numerous book publishers that offered programming primers and tutorials, as well as books for advanced software developers.[31] These publishers included Addison-Wesley, IDG, Macmillan Inc., McGraw-Hill, Microsoft Press, O'Reilly Media, Prentice Hall, Sybex, Ventana Press, Waite Group Press, Wiley, Wrox Press, and Ziff-Davis.
Computer magazines and journals also provided learning content for professional and hobbyist programmers. A partial list of these resources includes Amiga World, Byte, Communications of the ACM, Computer, Compute!, Computer Language, Computers and Electronics, Dr. Dobb's Journal, IEEE Software, Macworld, PC Magazine, PC/Computing, and UnixWorld.
Between 2000 and 2010, computer book and magazine publishers declined significantly as providers of programming instruction, as programmers moved to Internet resources to expand their access to information. This shift brought forward new digital products and mechanisms to learn programming skills. During the transition, digital books from publishers transferred information that had traditionally been delivered in print to new and expanding audiences.[32]
Important Internet resources for learning to code included blogs, wikis, videos, online databases, subscription sites, and custom websites focused on coding skills. New commercial resources included YouTube videos, Lynda.com tutorials (later LinkedIn Learning), Khan Academy, Codecademy, GitHub, W3Schools, and numerous coding bootcamps.
Most software development systems and game engines included rich online help resources, including integrated development environments (IDEs), context-sensitive help, APIs, and other digital resources. Commercial software development kits (SDKs) also provided a collection of software development tools and documentation in one installable package.
Commercial and non-profit organizations published learning websites for developers, created blogs, and established newsfeeds and social media resources about programming. Corporations like Apple, Microsoft, Oracle, Google, and Amazon built corporate websites providing support for programmers, including resources like the Microsoft Developer Network (MSDN). Contemporary movements like Hour of Code (Code.org) show how learning to program has become associated with digital learning strategies, education agendas, and corporate philanthropy.
Computer programmers are those who write computer software. Their jobs usually involve:
Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language.[33][34]
|
https://en.wikipedia.org/wiki/Computer_programming
|
In computing, a core dump,[a] memory dump, crash dump, storage dump, system dump, or ABEND dump[1] consists of the recorded state of the working memory of a computer program at a specific time, generally when the program has crashed or otherwise terminated abnormally.[2] In practice, other key pieces of program state are usually dumped at the same time, including the processor registers, which may include the program counter and stack pointer, memory management information, and other processor and operating system flags and information. A snapshot dump (or snap dump) is a memory dump requested by the computer operator or by the running program, after which the program is able to continue. Core dumps are often used to assist in diagnosing and debugging errors in computer programs.
On many operating systems, a fatal exception in a program automatically triggers a core dump. By extension, the phrase "to dump core" has come to mean, in many cases, any fatal error, regardless of whether a record of the program memory exists. The term "core dump", "memory dump", or just "dump" has also become jargon to indicate any output of a large amount of raw data for further examination or other purposes.[3][4]
The name comes from magnetic-core memory,[5][6] the principal form of random-access memory from the 1950s to the 1970s. The name has remained long after magnetic-core technology became obsolete.
The earliest core dumps were paper printouts[7] of the contents of memory, typically arranged in columns of octal or hexadecimal numbers (a "hex dump"), sometimes accompanied by their interpretations as machine language instructions, text strings, or decimal or floating-point numbers (cf. disassembler).
As memory sizes increased and post-mortem analysis utilities were developed, dumps were written to magnetic media like tape or disk.
Instead of only displaying the contents of the applicable memory, modern operating systems typically generate a file containing an image of the memory belonging to the crashed process, or the memory images of parts of the address space related to that process, along with other information such as the values of processor registers, program counter, system flags, and other information useful in determining the root cause of the crash. These files can be viewed as text, printed, or analysed with specialised tools such as elfdump on Unix and Unix-like systems, objdump and kdump on Linux, IPCS (Interactive Problem Control System) on IBM z/OS,[8] DVF (Dump Viewing Facility) on IBM z/VM,[9] WinDbg on Microsoft Windows, Valgrind, or other debuggers.
In some operating systems,[b] an application or operator may request a snapshot of selected storage blocks, rather than all of the storage used by the application or operating system.
Core dumps can serve as useful debugging aids in several situations. On early standalone or batch-processing systems, core dumps allowed a user to debug a program without monopolizing the (very expensive) computing facility for debugging; a printout could also be more convenient than debugging using front panel switches and lights.
On shared computers, whether time-sharing, batch processing, or server systems, core dumps allow off-line debugging of the operating system, so that the system can go back into operation immediately.
Core dumps allow a user to save a crash for later or off-site analysis, or comparison with other crashes. For embedded computers, it may be impractical to support debugging on the computer itself, so analysis of a dump may take place on a different computer. Some operating systems, such as early versions of Unix, did not support attaching debuggers to running processes, so core dumps were necessary to run a debugger on a process's memory contents.
Core dumps can be used to capture data freed during dynamic memory allocation and may thus be used to retrieve information from a program that is no longer running. In the absence of an interactive debugger, the core dump may be used by an assiduous programmer to determine the error from direct examination.
Snap dumps are sometimes a convenient way for applications to record quick and dirty debugging output.
A core dump generally represents the complete contents of the dumped regions of the address space of the dumped process. Depending on the operating system, the dump may contain few or no data structures to aid interpretation of the memory regions. In these systems, successful interpretation requires that the program or user trying to interpret the dump understand the structure of the program's memory use.
A debugger can use a symbol table, if one exists, to help the programmer interpret dumps, identifying variables symbolically and displaying source code; if the symbol table is not available, less interpretation of the dump is possible, but there might still be enough possible to determine the cause of the problem. There are also special-purpose tools called dump analyzers to analyze dumps. One popular tool, available on many operating systems, is the GNU binutils' objdump.
On modern Unix-like operating systems, administrators and programmers can read core dump files using the GNU Binutils Binary File Descriptor library (BFD), and the GNU Debugger (gdb) and objdump that use this library. This library will supply the raw data for a given address in a memory region from a core dump; it does not know anything about variables or data structures in that memory region, so the application using the library to read the core dump will have to determine the addresses of variables and determine the layout of data structures itself, for example by using the symbol table for the program undergoing debugging.
Analysts of crash dumps from Linux systems can use kdump or the Linux Kernel Crash Dump (LKCD).[10]
Core dumps can save the context (state) of a process at a given point for returning to it later. Systems can be made highly available by transferring core between processors, sometimes via core dump files themselves.
Core can also be dumped onto a remote host over a network (which is a security risk).[11]
Users of IBM mainframes running z/OS can browse SVC and transaction dumps using the Interactive Problem Control System (IPCS), a full-screen dump reader which was originally introduced in OS/VS2 (MVS), supports user-written scripts in REXX, and supports point-and-shoot browsing[c] of dumps.
In older and simpler operating systems, each process had a contiguous address space, so a dump file was sometimes simply a file with the sequence of bytes, digits,[d] characters,[d] or words. On other systems a dump file contained discrete records, each containing a storage address and the associated contents. On the earliest of these machines, the dump was often written by a stand-alone dump program rather than by the application or the operating system.
The IBSYS monitor for the IBM 7090 included a System Core-Storage Dump Program[12] that supported post-mortem and snap dumps.
On the IBM System/360, the standard operating systems wrote formatted ABEND and SNAP dumps, with the addresses, registers, storage contents, etc., all converted into printable forms. Later releases added the ability to write unformatted[e] dumps, called at that time core image dumps (also known as SVC dumps).
In modern operating systems, a process address space may contain gaps, and it may share pages with other processes or files, so more elaborate representations are used; they may also include other information about the state of the program at the time of the dump.
In Unix-like systems, core dumps generally use the standard executable image format:
In OS/360 and successors, a job may assign arbitrary data set names (dsnames) to the ddnames SYSABEND and SYSUDUMP for a formatted ABEND dump, and to arbitrary ddnames for SNAP dumps, or define those ddnames as SYSOUT.[f] The Damage Assessment and Repair (DAR) facility added an automatic unformatted[h] storage dump to the dataset SYS1.DUMP[i] at the time of failure, as well as a console dump requested by the operator. A job may assign an arbitrary dsname to the ddname SYSMDUMP for an unformatted ABEND dump, or define that ddname as SYSOUT.[j] The newer transaction dump is very similar to the older SVC dump. The Interactive Problem Control System (IPCS), added to OS/VS2 by Selectable Unit (SU) 57[14][15] and part of every subsequent MVS release, can be used to interactively analyze storage dumps on DASD. IPCS understands the format and relationships of system control blocks, and can produce a formatted display for analysis. The current versions of IPCS allow inspection of active address spaces[16][k] without first taking a storage dump, and of unformatted dumps on SPOOL.
Since Solaris 8, the system utility coreadm allows the name and location of core files to be configured. Dumps of user processes are traditionally created as core. On Linux (since versions 2.4.21 and 2.6 of the Linux kernel mainline), a different name can be specified via procfs using the /proc/sys/kernel/core_pattern configuration file; the specified name can also be a template that contains tags substituted by, for example, the executable filename, the process ID, or the reason for the dump.[17] System-wide dumps on modern Unix-like systems often appear as vmcore or vmcore.incomplete.
Systems such as Microsoft Windows, which use filename extensions, may use the extension .dmp; for example, core dumps may be named memory.dmp or \Minidump\Mini051509-01.dmp.
Microsoft Windows supports two memory dump formats, described below.
There are five types of kernel-mode dumps:[18]
To analyze Windows kernel-mode dumps, Debugging Tools for Windows is used, a set that includes tools like WinDbg and DumpChk.[20][21][22]
A user-mode memory dump, also known as a minidump,[23] is a memory dump of a single process. It contains selected data records: full or partial (filtered) process memory; a list of the threads with their call stacks and state (such as registers or the TEB); information about handles to the kernel objects; and a list of loaded and unloaded libraries. The full list of options is available in the MINIDUMP_TYPE enum.[24]
The NASA Voyager program was probably the first craft to routinely utilize the core dump feature in the Deep Space segment. The core dump feature is a mandatory telemetry feature for the Deep Space segment, as it has been proven to minimize system diagnostic costs.[citation needed] The Voyager craft uses routine core dumps to spot memory damage from cosmic ray events.
Space Mission core dump systems are mostly based on existing toolkits for the target CPU or subsystem. However, over the duration of a mission the core dump subsystem may be substantially modified or enhanced for the specific needs of the mission.
Descriptions of the file format
Kernel core dumps:
|
https://en.wikipedia.org/wiki/Core_dump
|
Dangling pointers and wild pointers in computer programming are pointers that do not point to a valid object of the appropriate type. These are special cases of memory safety violations. More generally, dangling references and wild references are references that do not resolve to a valid destination.
Dangling pointers arise during object destruction, when an object that is pointed to by a given pointer is deleted or deallocated without modifying the value of that pointer, so that the pointer still points to the memory location of the deallocated memory. The system may reallocate the previously freed memory, and if the program then dereferences the (now) dangling pointer, unpredictable behavior may result, as the memory may now contain completely different data. If the program writes to memory referenced by a dangling pointer, silent corruption of unrelated data may result, leading to subtle bugs that can be extremely difficult to find. If the memory has been reallocated to another process, then attempting to dereference the dangling pointer can cause segmentation faults (UNIX, Linux) or general protection faults (Windows). If the program has sufficient privileges to allow it to overwrite the bookkeeping data used by the kernel's memory allocator, the corruption can cause system instabilities. In object-oriented languages with garbage collection, dangling references are prevented by only destroying objects that are unreachable, meaning they have no incoming pointers; this is ensured either by tracing or by reference counting. However, a finalizer may create new references to an object, requiring object resurrection to prevent a dangling reference.
Wild pointers, also called uninitialized pointers, arise when a pointer is used prior to initialization to some known state, which is possible in some programming languages. They show the same erratic behavior as dangling pointers, though they are less likely to go undetected because many compilers will raise a warning at compile time if declared variables are accessed before being initialized.[1]
In many languages (e.g., the C programming language), deleting an object from memory explicitly or by destroying the stack frame on return does not alter associated pointers. The pointer still points to the same location in memory even though that location may now be used for other purposes.
A straightforward example is shown below:
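In this minimal C sketch (variable names are illustrative; dp is the pointer in question), dp outlives the inner-scope variable it points to, and is then reset as the surrounding text suggests:

```c
#include <assert.h>
#include <stddef.h>

/* Demonstrates a dangling pointer created by an inner scope, then
 * neutralized by assigning null before any further use. */
static char *dangling_demo(void) {
    char *dp = NULL;
    {
        char c = 'x';
        dp = &c;
        assert(*dp == 'x');   /* dp is valid only inside this block */
    }
    /* c is out of scope here, so dp is now a dangling pointer.
     * Reset it so that later code cannot dereference stale storage: */
    dp = NULL;
    return dp;
}
```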
If the operating system is able to detect run-time references to null pointers, a solution to the above is to assign 0 (null) to dp immediately before the inner block is exited. Another solution would be to somehow guarantee that dp is not used again without further initialization.
Another frequent source of dangling pointers is a jumbled combination of malloc() and free() library calls: a pointer becomes dangling when the block of memory it points to is freed. As with the previous example, one way to avoid this is to make sure to reset the pointer to null after freeing its reference—as demonstrated below.
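A sketch of that free()-then-null discipline (illustrative names, wrapped in a function so the fragment is self-contained):

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate, use, and release a buffer, nulling the pointer after
 * free() so it cannot be dereferenced while dangling. */
static char *allocate_and_release(size_t n) {
    char *dp = malloc(n);
    if (dp == NULL)
        return NULL;        /* allocation failed; nothing to release */
    /* ... use the buffer ... */
    free(dp);               /* dp is now a dangling pointer */
    dp = NULL;              /* dp is no longer dangling */
    return dp;
}
```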
An all-too-common misstep is returning the address of a stack-allocated local variable: once a called function returns, the space for these variables is deallocated, and technically they hold "garbage values".
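Sketched in C (func and num follow the names used in the surrounding discussion; a corrected variant using static storage is shown for contrast, and most compilers warn about the broken version):

```c
#include <assert.h>

/* WRONG: num's stack storage is released when the function returns,
 * so the caller receives a dangling pointer. */
int *broken_func(void) {
    int num = 1234;
    return &num;
}

/* Fix from the text: declare num static so its storage has scope
 * beyond the function call. */
int *fixed_func(void) {
    static int num = 1234;
    return &num;            /* OK: static storage outlives the call */
}
```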
Attempts to read from the pointer may still return the correct value (1234) for a while after calling func, but any functions called thereafter may overwrite the stack storage allocated for num with other values, and the pointer would no longer work correctly. If a pointer to num must be returned, num must have scope beyond the function—it might be declared as static.
Antoni Kreczmar (1945–1996) created a complete object management system which is free of the dangling reference phenomenon.[2] A similar approach was proposed by Fisher and LeBlanc[3] under the name locks-and-keys.
Wild pointers are created by omitting necessary initialization prior to first use. Thus, strictly speaking, every pointer in programming languages which do not enforce initialization begins as a wild pointer.
This most often occurs due to jumping over the initialization, not by omitting it. Most compilers are able to warn about this.
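A short C sketch of the hazard and the remedy (names are illustrative):

```c
/* Returns 5, but only because the pointer is given a known value
   before its first use. */
int initialized_use(void)
{
    int value = 0;
    int *ptr;        /* wild: holds an indeterminate value here */
    /* *ptr = 5;        dereferencing now would be undefined behavior */
    ptr = &value;    /* initialize to a known state before first use */
    *ptr = 5;
    return value;
}
```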
Like buffer-overflow bugs, dangling/wild pointer bugs frequently become security holes. For example, if the pointer is used to make a virtual function call, a different address (possibly pointing at exploit code) may be called due to the vtable pointer being overwritten. Alternatively, if the pointer is used for writing to memory, some other data structure may be corrupted. Even if the memory is only read once the pointer becomes dangling, it can lead to information leaks (if interesting data is put in the next structure allocated there) or to privilege escalation (if the now-invalid memory is used in security checks). When a dangling pointer is used after it has been freed without allocating a new chunk of memory to it, this is known as a "use after free" vulnerability.[4] For example, CVE-2014-1776 is a use-after-free vulnerability in Microsoft Internet Explorer 6 through 11[5] used by zero-day attacks by an advanced persistent threat.[6]
In C, the simplest technique is to implement an alternative version of the free() function (or similar) which guarantees the reset of the pointer. However, this technique will not clear other pointer variables which may contain a copy of the pointer.
The alternative version can even be used to guarantee the validity of an empty pointer before calling malloc():
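The code for the alternative version did not survive extraction. A sketch consistent with the macro mentioned in the original text (the name safefree appears there; its exact signature is an assumption):

```c
#include <assert.h>
#include <stdlib.h>

/* Frees *pn and resets it to NULL, so the caller's copy of the pointer
   cannot dangle. Other copies held elsewhere are not cleared. */
void safefree(void **pn)
{
    assert(pn != NULL);
    if (*pn != NULL)
    {
        free(*pn);
        *pn = NULL;
    }
}

/* Usage: safe to call repeatedly, and before re-allocating. */
int safefree_demo(void)
{
    void *p = malloc(32);
    if (p == NULL)
        return -1;
    safefree(&p);   /* p is freed and reset to NULL */
    safefree(&p);   /* second call is a harmless no-op */
    return p == NULL ? 0 : 1;
}
```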
These uses can be masked through #define directives to construct useful macros (a common one being #define XFREE(ptr) safefree((void **)&(ptr))), creating something like a metalanguage, or can be embedded into a separate tool library. In every case, programmers using this technique should use the safe versions in every instance where free() would be used; failing to do so leads again to the problem. Also, this solution is limited to the scope of a single program or project, and should be properly documented.
Among more structured solutions, a popular technique to avoid dangling pointers in C++ is to use smart pointers. A smart pointer typically uses reference counting to reclaim objects. Some other techniques include the tombstones method and the locks-and-keys method.[3]
Another approach is to use the Boehm garbage collector, a conservative garbage collector that replaces standard memory allocation functions in C and C++ with a garbage collector. This approach completely eliminates dangling pointer errors by disabling frees, and reclaiming objects by garbage collection.
Another approach is to use a system such as CHERI, which stores pointers with additional metadata that may prevent invalid accesses by including lifetime information in pointers. CHERI typically requires support in the CPU to conduct these additional checks.
In languages like Java, dangling pointers cannot occur because there is no mechanism to explicitly deallocate memory. Rather, the garbage collector may deallocate memory, but only when the object is no longer reachable from any references.
In the language Rust, the type system has been extended to track variable lifetimes and to support resource acquisition is initialization. Unless one disables the safety features of the language, dangling pointers are caught at compile time and reported as programming errors.
To expose dangling pointer errors, one common programming technique is to set pointers to the null pointer or to an invalid address once the storage they point to has been released. When the null pointer is dereferenced (in most languages) the program will immediately terminate; there is no potential for data corruption or unpredictable behavior. This makes the underlying programming mistake easier to find and resolve. This technique does not help when there are multiple copies of the pointer.
Some debuggers will automatically overwrite and destroy data that has been freed, usually with a specific pattern, such as 0xDEADBEEF (Microsoft's Visual C/C++ debugger, for example, uses 0xCC, 0xCD or 0xDD depending on what has been freed[7]). This usually prevents the data from being reused by making it useless and also very prominent (the pattern serves to show the programmer that the memory has already been freed).
Tools such as Polyspace, TotalView, Valgrind, Mudflap,[8] AddressSanitizer, or tools based on LLVM[9] can also be used to detect uses of dangling pointers.
Other tools (SoftBound, Insure++, and CheckPointer) instrument the source code to collect and track legitimate values for pointers ("metadata") and check each pointer access against the metadata for validity.
Another strategy, when suspecting a small set of classes, is to temporarily make all their member functions virtual: after the class instance has been destructed/freed, its pointer to the virtual method table is set to NULL, and any call to a member function will crash the program, showing the guilty code in the debugger.
|
https://en.wikipedia.org/wiki/Dangling_pointer
|
In computing or computer programming, delegation refers generally to one entity passing something to another entity,[1] and narrowly to various specific forms of relationships.
|
https://en.wikipedia.org/wiki/Delegation_(programming)
|
A terminate-and-stay-resident program (commonly TSR) is a computer program running under DOS that uses a system call to return control to DOS as though it has finished, but remains in computer memory so it can be reactivated later.[1] This technique partially overcame DOS's limitation of executing only one program, or task, at a time. TSRs are used only in DOS, not in Windows.
Some TSRs are utility software that a computer user might call up several times a day, while working in another program, by using a hotkey. Borland Sidekick was an early and popular example of this type. Others serve as device drivers for hardware that the operating system does not directly support.
Normally DOS can run only one program at a time. When a program finishes, it returns control to DOS using the system call INT 21h/4Ch of the DOS API.[2] The memory and system resources used are then marked as unused. This makes it impossible to restart parts of the program without having to reload it all. However, if a program ends with the system call INT 27h or INT 21h/31h, the operating system does not reuse a certain specified part of its memory.
The original call, INT 27h, is called "terminate but stay resident", hence the name "TSR". Using this call, a program can make up to 64 KB of its memory resident. MS-DOS version 2.0 introduced an improved call, INT 21h/31h ('Keep Process'), which removed this limitation and let the program return an exit code. Before making this call, the program can install one or several interrupt handlers pointing into itself, so that it can be called again. Installing a hardware interrupt vector allows such a program to react to hardware events. Installing a software interrupt vector allows it to be called by the currently running program. Installing a timer interrupt handler allows a TSR to run periodically (using a programmable interval timer).
The typical method of using an interrupt vector involves reading its present value (the address), storing it within the memory space of the TSR, and replacing it with an address in its own code. The stored address is called from the TSR, in effect forming a singly linked list of interrupt handlers, also called interrupt service routines, or ISRs. This procedure of installing ISRs is called chaining or hooking an interrupt or an interrupt vector.
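DOS interrupt vectors cannot be hooked on a modern host, but the chaining procedure described above can be illustrated in portable C, with an array of function pointers standing in for the vector table (all names here are illustrative, not DOS APIs):

```c
#include <stddef.h>

typedef void (*isr_t)(void);

/* Stand-in for the interrupt vector table. */
static isr_t vector_table[256];

static int default_calls;
static int tsr_calls;

static void default_isr(void) { default_calls++; }

/* The TSR stores the previous handler so it can chain to it. */
static isr_t old_isr;

static void tsr_isr(void)
{
    tsr_calls++;
    if (old_isr != NULL)
        old_isr();            /* call the saved handler: the "chain" */
}

/* Hooking: read the current vector, save it, install our own. */
static void hook_vector(int vec, isr_t handler, isr_t *saved)
{
    *saved = vector_table[vec];
    vector_table[vec] = handler;
}

/* Returns 1 if both the TSR and the original handler ran. */
int chaining_demo(void)
{
    vector_table[0x1C] = default_isr;   /* e.g. the timer-tick vector */
    hook_vector(0x1C, tsr_isr, &old_isr);
    vector_table[0x1C]();               /* simulate the interrupt firing */
    return tsr_calls == 1 && default_calls == 1;
}
```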
TSRs can be loaded at any time, either during the DOS startup sequence (for example, from AUTOEXEC.BAT) or at the user's request (for example, Borland's Sidekick and Turbo Debugger, Quicken's QuickPay, or FunStuff Software's Personal Calendar). Parts of DOS itself use this technique, especially in DOS versions 5.0 and later. For example, the DOSKEY command-line editor and various other utilities are installed by running them at the command line (manually, or from AUTOEXEC.BAT or through INSTALL from within CONFIG.SYS) rather than loading them as device drivers through DEVICE statements in CONFIG.SYS.
Some TSRs have no way to unload themselves, so they will remain in memory until a reboot. However, unloading is possible externally, using utilities like the MARK.EXE/RELEASE.EXE combo by TurboPower Software or soft reboot TSRs which catch a specific key combination and release all TSRs loaded after them. As the chain of ISRs is singly linked, and a TSR may store the link to its predecessor anywhere it chooses, there is no general way for a TSR to remove itself from the chain. So usually a stub must be left in memory when unloading a TSR, causing memory fragmentation. This problem gave rise to TSR cooperation frameworks such as TesSeRact and AMIS.[3]
To manage problems with many TSRs sharing the same interrupt, a method called Alternate Multiplex Interrupt Specification (AMIS) was proposed by Ralf D. Brown as an improvement over previously used services offered via INT 2Fh. AMIS provides ways to share software interrupts in a controlled manner. It is modeled after IBM's Interrupt Sharing Protocol, originally invented for sharing hardware interrupts of an x86 processor. AMIS services are available via INT 2Dh.[4]
The proposal never gained widespread traction among programmers in its day. It existed alongside several other competing specifications of varying sophistication.[5]
While very useful, or even essential to overcome DOS's limitations, TSRs have a reputation as troublemakers. Many hijack the operating system in varying documented or undocumented ways, often causing systems to crash on their activation or deactivation when used with particular applications or other TSRs.
By chaining the interrupt vectors, TSRs can take complete control of the computer. A TSR can have one of two behaviors:
The terminate-and-stay-resident method is used by most DOS viruses and other malware, which can either take control of the PC or stay in the background. This malware can react to disk I/O or execution events by infecting executable (.EXE or .COM) files when they are run and data files when they are opened.
Additionally, in DOS all programs must be loaded into the first 640 KB of RAM (the conventional memory), even on computers with large amounts of physical RAM. TSRs are no exception, and take chunks from that 640 KB that are thus unavailable to other applications. This meant that writing a TSR was a challenge of achieving the smallest possible size, and checking it for compatibility with a lot of software products from different vendors, often a very frustrating task.
In the late 1980s and early 1990s, many video games on the PC platform pushed up against this limit and left less and less space for TSRs, even essential ones like CD-ROM drivers, and arranging things so that there was enough free RAM to run the games, while keeping the necessary TSRs present, became very complicated. Many gamers had several boot disks with different configurations for different games. In later versions of MS-DOS, "boot menu" scripts allowed various configurations to be selectable via a single menu entry. In the mid- to late 1990s, while many games were still written for DOS, the 640 KB limit was eventually overcome by putting parts of the game's data above the first 1 MB of memory and accessing it from code below 640 KB using expanded memory (EMS) and overlay techniques. An alternative later approach was to switch the CPU into protected mode using DOS extenders and run the program there, which allowed code and data to reside in the extended memory area.[citation needed]
Because programming with many overlays is a challenge in and of itself, once the program was too big to fit entirely into about 512 KB, use of extended memory was almost always done using a third-party DOS extender implementing VCPI or DPMI, because it becomes much easier and faster to access memory above the 1 MB boundary, and possible to run code in that area, when the x86 processor is switched from real mode to protected mode. However, since DOS and most DOS programs run in real mode (VCPI or DPMI makes a protected-mode program look like a real-mode program to DOS and the rest of the system by switching back and forth between the two modes), DOS TSRs and device drivers also run in real mode, and so any time one gets control, the DOS extender has to switch back to real mode until it relinquishes control, incurring a time penalty (unless they utilize techniques such as DPMS or CLOAKING).
With the arrival of expanded memory boards and especially of Intel 80386 processors in the second half of the 1980s, it became possible to use memory above 640 KB to load TSRs. This required complex software solutions, named expanded memory managers. Some memory managers are QRAM and QEMM by Quarterdeck, 386MAX by Qualitas, CEMM by Compaq, and later EMM386 by Microsoft. The memory areas usable for loading TSRs above 640 KB are called "upper memory blocks" (UMBs) and loading programs into them is called loading high. Later, memory managers started including programs such as Quarterdeck's Optimize or Microsoft's MEMMAKER which try to maximize the available space in the first 640 KB by determining how best to allocate TSRs between low and high memory.
With the development of games using DOS extenders (an early example was Doom) which bypassed the 640 KB barrier, many of the issues relating to TSRs disappeared, and with the widespread adoption of Microsoft Windows and especially Windows 95 (followed by Windows 98) – which rendered most TSRs unnecessary and some TSRs incompatible – the TSR faded into obsolescence, though Win16 applications can do TSR-like tricks such as patching the interrupt descriptor table (IDT) because Windows allowed it. Windows Me does not allow the computer to boot into a DOS kernel, so TSRs became useless under it.
The Windows NT series (including Windows 2000, Windows XP, and later) replaced DOS completely and runs in protected mode or long mode (later 64-bit versions only) all the time, disabling the ability to switch to real mode, which is needed for TSRs to function. Instead these operating systems have modern driver and service frameworks with memory protection and preemptive multitasking, allowing multiple programs and device drivers to run simultaneously without the need for special programming tricks; the kernel and its modules have been made exclusively responsible for modifying the interrupt table.
|
https://en.wikipedia.org/wiki/Terminate-and-stay-resident_program
|
ARM Instruction Set Simulator, also known as ARMulator, is one of the software development tools provided by the development systems business unit of ARM Limited to all users of ARM-based chips. It owes its heritage to the early development of the instruction set by Sophie Wilson. Part of this heritage is still visible in the provision of a Tube BBC Micro model in ARMulator.
ARMulator is written in C and provides more than just an instruction set simulator: it provides a virtual platform for system emulation. It comes ready to emulate an ARM processor and certain ARM coprocessors. If the processor is part of an embedded system, then licensees may extend ARMulator to add their own implementations of the additional hardware to the ARMulator model. ARMulator provides a number of services to help with time-based behaviour and event scheduling, and ships with examples of memory-mapped and co-processor expansions. This way, licensees can use ARMulator to emulate their entire embedded system. A key limitation of ARMulator is that it can only simulate a single ARM CPU at a time, although almost all ARM cores up to ARM11 are available.
Performance of ARMulator is good for the technology employed: it takes about 1000 host (PC) instructions per ARM instruction, meaning that emulated speeds of 1 MHz were normal for PCs of the mid to late 1990s. Accuracy is good too, although it is classed as cycle-count accurate rather than cycle accurate, because the ARM pipeline is not fully modeled (although register interlocks are). Resolution is to an instruction; as a consequence, when single-stepping, the register interlocks are ignored and different cycle counts are returned than if the program had simply run. This was unavoidable.
Testing ARMulator was always a time-consuming challenge, with the full ARM architecture validation suites being employed. At over 1 million lines of C code it was a fairly hefty product.
ARMulator allows runtime debugging using either armsd (ARM Symbolic Debugger) or one of the graphical debuggers shipped in the SDT and later ADS products. ARMulator suffered from being an invisible tool with a text-file configuration (armul.conf) that many found complex to configure.
ARMulator II formed the basis for the high-accuracy, cycle-callable co-verification models of ARM processors; these CoVs models (see Cycle Accurate Simulator) were the basis of many co-verification systems for ARM processors.
ARMulator was available on a very broad range of platforms through its life, including Mac, RISC OS platforms, DEC Alpha, HP-UX, Solaris, SunOS, Windows, and Linux. In the mid-1990s there was reluctance to support Windows platforms; pre-Windows 95, it was a relatively challenging platform. Through the late 1990s and early 2000s support was removed for all but Solaris, Windows and Linux, although undoubtedly the code base remains littered with conditionals such as #ifdef RISCOS.
ARMulator II shipped in early ARM toolkits as well as the later SDT 2.5, SDT 2.5.1, ADS 1.0, ADS 1.1, ADS 1.2, RVCT 1.0 and also separately as RVISS.
Special models were produced during the development of CPUs, notably the ARM9E, ARM10 and ARM11; these models helped with architectural decisions such as Thumb-2 and TrustZone.
ARMulator has been gradually phased out and has been replaced by just-in-time compilation-based high performance CPU and system models (see FastSim link below).
ARMulator I was made open source and is the basis for the GNU version of ARMulator. Key differences are in the memory interface and services; also, the instruction decode is done differently. The GNU ARMulator is available as part of the GDB debugger in the ARM GNU Tools.
Mentor Graphics' Seamless has the market-leading co-verification system, supporting many ARM cores and many other CPUs.
Key contributors to ARMulator II were Mike Williams, Louise Jameson, Charles Lavender, Donald Sinclair, Chris Lamb and Rebecca Bryan (who worked on ARMulator first as an engineer and later as product manager). Significant input was also made by Allan Skillman, who was working on ARM co-verification models at the time.
A key contributor to ARMulator I was Dave Jaggar.
|
https://en.wikipedia.org/wiki/ARMulator
|
ARM (stylised in lowercase as arm, formerly an acronym for Advanced RISC Machines and originally Acorn RISC Machine) is a family of RISC instruction set architectures (ISAs) for computer processors. Arm Holdings develops the ISAs and licenses them to other companies, who build the physical devices that use the instruction set. It also designs and licenses cores that implement these ISAs.
Due to their low costs, low power consumption, and low heat generation, ARM processors are useful for light, portable, battery-powered devices, including smartphones, laptops, and tablet computers, as well as embedded systems.[3][4][5] However, ARM processors are also used for desktops and servers, including Fugaku, the world's fastest supercomputer from 2020[6] to 2022. With over 230 billion ARM chips produced,[7][8] ARM has been, since at least 2003, the most widely used family of instruction set architectures, with its dominance increasing every year.[9][4][10][11][12]
There have been several generations of the ARM design. The original ARM1 used a 32-bit internal structure but had a 26-bit address space that limited it to 64 MB of main memory. This limitation was removed in the ARMv3 series, which has a 32-bit address space, and several additional generations up to ARMv7 remained 32-bit. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set.[13] Arm Holdings has also released a series of additional instruction sets for different roles: the "Thumb" extensions add both 32- and 16-bit instructions for improved code density, while Jazelle added instructions for directly handling Java bytecode. More recent changes include the addition of simultaneous multithreading (SMT) for improved performance or fault tolerance.[14]
Acorn Computers' first widely successful design was the BBC Micro, introduced in December 1981. This was a relatively conventional machine based on the MOS Technology 6502 CPU but ran at roughly double the performance of competing designs like the Apple II due to its use of faster dynamic random-access memory (DRAM). Typical DRAM of the era ran at about 2 MHz; Acorn arranged a deal with Hitachi for a supply of faster 4 MHz parts.[15]
Machines of the era generally shared memory between the processor and the framebuffer, which allowed the processor to quickly update the contents of the screen without having to perform separate input/output (I/O). As the timing of the video display is exacting, the video hardware had to have priority access to that memory. Due to a quirk of the 6502's design, the CPU left the memory untouched for half of the time. Thus by running the CPU at 1 MHz, the video system could read data during those down times, taking up the total 2 MHz bandwidth of the RAM. In the BBC Micro, the use of 4 MHz RAM allowed the same technique to be used, but running at twice the speed. This allowed it to outperform any similar machine on the market.[16]
1981 was also the year that the IBM Personal Computer was introduced. Using the recently introduced Intel 8088, a 16-bit CPU compared to the 6502's 8-bit design, it offered higher overall performance. Its introduction changed the desktop computer market radically: what had been largely a hobby and gaming market emerging over the prior five years began to change to a must-have business tool where the earlier 8-bit designs simply could not compete. Even newer 32-bit designs were also coming to market, such as the Motorola 68000[17] and National Semiconductor NS32016.[18]
Acorn began considering how to compete in this market and produced a new paper design named the Acorn Business Computer. They set themselves the goal of producing a machine with ten times the performance of the BBC Micro, but at the same price.[19] This would outperform and underprice the PC. At the same time, the recent introduction of the Apple Lisa brought the graphical user interface (GUI) concept to a wider audience and suggested the future belonged to machines with a GUI.[20] The Lisa, however, cost $9,995, as it was packed with support chips, large amounts of memory, and a hard disk drive, all very expensive then.[21]
The engineers then began studying all of the CPU designs available. Their conclusion about the existing 16-bit designs was that they were a lot more expensive and were still "a bit crap",[22] offering only slightly higher performance than their BBC Micro design. They also almost always demanded a large number of support chips to operate even at that level, which drove up the cost of the computer as a whole. These systems would simply not hit the design goal.[22] They also considered the new 32-bit designs, but these cost even more and had the same issues with support chips.[23] According to Sophie Wilson, all the processors tested at that time performed about the same, with about a 4 Mbit/s bandwidth.[24][a]
Two key events led Acorn down the path to ARM. One was the publication of a series of reports from the University of California, Berkeley, which suggested that a simple chip design could nevertheless have extremely high performance, much higher than the latest 32-bit designs on the market.[25] The second was a visit by Steve Furber and Sophie Wilson to the Western Design Center, a company run by Bill Mensch and his sister, which had become the logical successor to the MOS team and was offering new versions like the WDC 65C02. The Acorn team saw high school students producing chip layouts on Apple II machines, which suggested that anyone could do it.[26][27] In contrast, a visit to another design firm working on a modern 32-bit CPU revealed a team with over a dozen members who were already on revision H of their design and yet it still contained bugs.[b] This cemented their late 1983 decision to begin their own CPU design, the Acorn RISC Machine.[28]
The original Berkeley RISC designs were in some sense teaching systems, not designed specifically for outright performance. To the RISC's basic register-heavy and load/store concepts, ARM added a number of the well-received design notes of the 6502. Primary among them was the ability to quickly serve interrupts, which allowed the machines to offer reasonable input/output performance with no added external hardware. To offer interrupts with similar performance as the 6502, the ARM design limited its physical address space to 64 MB of total addressable space, requiring 26 bits of address. As instructions were 4 bytes (32 bits) long, and required to be aligned on 4-byte boundaries, the lower 2 bits of an instruction address were always zero. This meant the program counter (PC) only needed to be 24 bits, allowing it to be stored along with the eight-bit processor flags in a single 32-bit register. That meant that upon receiving an interrupt, the entire machine state could be saved in a single operation, whereas had the PC been a full 32-bit value, it would require separate operations to store the PC and the status flags. This decision halved the interrupt overhead.[29]
Another change, and among the most important in terms of practical real-world performance, was the modification of the instruction set to take advantage of page mode DRAM. Recently introduced, page mode allowed subsequent accesses of memory to run twice as fast if they were roughly in the same location, or "page", in the DRAM chip. Berkeley's design did not consider page mode and treated all memory equally. The ARM design added special vector-like memory access instructions, the "S-cycles", that could be used to fill or save multiple registers in a single page using page mode. This doubled memory performance when they could be used, and was especially important for graphics performance.[30]
The Berkeley RISC designs used register windows to reduce the number of register saves and restores performed in procedure calls; the ARM design did not adopt this.
Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a second 6502 processor.[31][32] This convinced Acorn engineers they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser gave his approval and assembled a small team to design the actual processor based on Wilson's ISA.[33] The official Acorn RISC Machine project started in October 1983.
Acorn chose VLSI Technology as the "silicon partner", as they were a source of ROMs and custom chips for Acorn. Acorn provided the design and VLSI provided the layout and production. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985.[3] Known as ARM1, these versions ran at 6 MHz.[34]
The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips (VIDC, IOC, MEMC), and sped up the CAD software used in ARM2 development. Wilson subsequently rewrote BBC BASIC in ARM assembly language. The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC an extremely good test for any ARM emulator.
The result of the simulations on the ARM1 boards led to the late 1986 introduction of the ARM2 design running at 8 MHz, and the early 1987 speed-bumped version at 10 to 12 MHz.[c] A significant change in the underlying architecture was the addition of a Booth multiplier, whereas formerly multiplication had to be carried out in software.[36] Further, a new Fast Interrupt reQuest mode, FIQ for short, allowed registers 8 through 14 to be replaced as part of the interrupt itself. This meant FIQ requests did not have to save out their registers, further speeding interrupts.[37]
The first uses of the ARM2 were in ARM evaluation systems, supplied as a second processor for BBC Micro and Master machines from July 1986,[38] internal Acorn A500 development machines,[39] and the Acorn Archimedes personal computer models A305, A310, and A440, launched on 6 June 1987.
According to the Dhrystone benchmark, the ARM2 was roughly seven times the performance of a typical 7 MHz 68000-based system like the Amiga or Macintosh SE. It was twice as fast as an Intel 80386 running at 16 MHz, and about the same speed as a multi-processor VAX-11/784 superminicomputer. The only systems that beat it were the Sun SPARC and MIPS R2000 RISC-based workstations.[40] Further, as the CPU was designed for high-speed I/O, it dispensed with many of the support chips seen in these machines; notably, it lacked any dedicated direct memory access (DMA) controller which was often found on workstations. The graphics system was also simplified based on the same set of underlying assumptions about memory and timing. The result was a dramatically simplified design, offering performance on par with expensive workstations but at a price point similar to contemporary desktops.[40]
The ARM2 featured a 32-bit data bus, 26-bit address space and 27 32-bit registers, of which 16 are accessible at any one time (including the PC).[41] The ARM2 had a transistor count of just 30,000,[42] compared to Motorola's six-year-older 68000 model with around 68,000. Much of this simplicity came from the lack of microcode, which represents about one-quarter to one-third of the 68000's transistors, and the lack of (like most CPUs of the day) a cache. This simplicity enabled the ARM2 to have a low power consumption and simpler thermal packaging by having fewer powered transistors. Nevertheless, ARM2 offered better performance than the contemporary 1987 IBM PS/2 Model 50, which initially utilised an Intel 80286, offering 1.8 MIPS @ 10 MHz, and later in 1987, the 2 MIPS of the PS/2 70, with its Intel 386 DX @ 16 MHz.[43][44]
A successor, ARM3, was produced with a 4 KB cache, which further improved performance.[45] The address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags.[46]
In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the ARM core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd.,[47][48][49] which became ARM Ltd. when its parent company, Arm Holdings plc, floated on the London Stock Exchange and Nasdaq in 1998.[50] The new Apple–ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA.
In 1994, Acorn used the ARM610 as the main central processing unit (CPU) in their RiscPC computers. DEC licensed the ARMv4 architecture and produced the StrongARM.[51] At 233 MHz, this CPU drew only one watt (newer versions draw far less). This work was later passed to Intel as part of a lawsuit settlement, and Intel took the opportunity to supplement their i960 line with the StrongARM. Intel later developed its own high-performance implementation named XScale, which it has since sold to Marvell. The transistor count of the ARM core remained essentially the same throughout these changes; ARM2 had 30,000 transistors,[52] while ARM6 grew only to 35,000.[53]
In 2005, about 98% of all mobile phones sold used at least one ARM processor.[54] In 2010, producers of chips based on ARM architectures reported shipments of 6.1 billion ARM-based processors, representing 95% of smartphones, 35% of digital televisions and set-top boxes, and 10% of mobile computers. In 2011, the 32-bit ARM architecture was the most widely used architecture in mobile devices and the most popular 32-bit one in embedded systems.[55] In 2013, 10 billion were produced[56] and "ARM-based chips are found in nearly 60 percent of the world's mobile devices".[57]
Arm Holdings' primary business is selling IP cores, which licensees use to create microcontrollers (MCUs), CPUs, and systems-on-chips based on those cores. The original design manufacturer combines the ARM core with other parts to produce a complete device, typically one that can be built in existing semiconductor fabrication plants (fabs) at low cost and still deliver substantial performance. The most successful implementation has been the ARM7TDMI, with hundreds of millions sold. Atmel has been a precursor design center for ARM7TDMI-based embedded systems.
The ARM architectures used in smartphones, PDAs and other mobile devices range from ARMv5 to ARMv8-A.
In 2009, some manufacturers introduced netbooks based on ARM architecture CPUs, in direct competition with netbooks based on Intel Atom.[58]
Arm Holdings offers a variety of licensing terms, varying in cost and deliverables. Arm Holdings provides to all licensees an integratable hardware description of the ARM core, as well as a complete software development toolset (compiler, debugger, software development kit) and the right to sell manufactured silicon containing the ARM CPU.
SoC packages integrating ARM's core designs include Nvidia Tegra's first three generations, CSR plc's Quatro family, ST-Ericsson's Nova and NovaThor, Silicon Labs's Precision32 MCU, Texas Instruments's OMAP products, Samsung's Hummingbird and Exynos products, Apple's A4, A5, and A5X, and NXP's i.MX.
Fabless licensees, who wish to integrate an ARM core into their own chip design, are usually only interested in acquiring a ready-to-manufacture verified semiconductor intellectual property core. For these customers, Arm Holdings delivers a gate netlist description of the chosen ARM core, along with an abstracted simulation model and test programs to aid design integration and verification. More ambitious customers, including integrated device manufacturers (IDM) and foundry operators, choose to acquire the processor IP in synthesizable RTL (Verilog) form. With the synthesizable RTL, the customer has the ability to perform architectural-level optimisations and extensions. This allows the designer to achieve exotic design goals not otherwise possible with an unmodified netlist (high clock speed, very low power consumption, instruction set extensions, etc.). While Arm Holdings does not grant the licensee the right to resell the ARM architecture itself, licensees may freely sell manufactured products such as chip devices, evaluation boards and complete systems. Merchant foundries can be a special case; not only are they allowed to sell finished silicon containing ARM cores, they generally hold the right to re-manufacture ARM cores for other customers.
Arm Holdings prices its IP based on perceived value. Lower-performing ARM cores typically have lower licence costs than higher-performing cores. In implementation terms, a synthesisable core costs more than a hard macro (blackbox) core. Complicating price matters, a merchant foundry that holds an ARM licence, such as Samsung or Fujitsu, can offer fab customers reduced licensing costs. In exchange for acquiring the ARM core through the foundry's in-house design services, the customer can reduce or eliminate payment of ARM's upfront licence fee.
Compared to dedicated semiconductor foundries (such as TSMC and UMC) without in-house design services, Fujitsu/Samsung charge two to three times more per manufactured wafer.[citation needed] For low- to mid-volume applications, a design service foundry offers lower overall pricing (through subsidisation of the licence fee). For high-volume mass-produced parts, the long-term cost reduction achievable through lower wafer pricing reduces the impact of ARM's NRE (non-recurring engineering) costs, making the dedicated foundry a better choice.
Companies that have developed chips with cores designed by Arm include Amazon.com's Annapurna Labs subsidiary,[59] Analog Devices, Apple, AppliedMicro (now: MACOM Technology Solutions[60]), Atmel, Broadcom, Cavium, Cypress Semiconductor, Freescale Semiconductor (now NXP Semiconductors), Huawei, Intel,[dubious–discuss] Maxim Integrated, Nvidia, NXP, Qualcomm, Renesas, Samsung Electronics, ST Microelectronics, Texas Instruments, and Xilinx.
In February 2016, ARM announced the Built on ARM Cortex Technology licence, often shortened to Built on Cortex (BoC) licence. This licence allows companies to partner with ARM and make modifications to ARM Cortex designs. These design modifications are not shared with other companies. These semi-custom core designs also have brand freedom, for example Kryo 280.
Companies that are current licensees of Built on ARM Cortex Technology include Qualcomm.[61]
Companies can also obtain an ARM architectural licence for designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture. Companies that have designed cores implementing an ARM architecture include Apple, AppliedMicro (now: Ampere Computing), Broadcom, Cavium (now: Marvell), Digital Equipment Corporation, Intel, Nvidia, Qualcomm, Samsung Electronics, Fujitsu, and NUVIA Inc. (acquired by Qualcomm in 2021).
On 16 July 2019, ARM announced ARM Flexible Access. ARM Flexible Access provides unlimited access to included ARM intellectual property (IP) for development. Per-product licence fees are required once a customer reaches foundry tapeout or prototyping.[62][63]
75% of ARM's most recent IP over the last two years is included in ARM Flexible Access. As of October 2019:
Arm provides a list of vendors who implement ARM cores in their designs (application-specific standard products (ASSP), microprocessors and microcontrollers).[105]
ARM cores are used in a number of products, particularly PDAs and smartphones. Some computing examples are Microsoft's first-generation Surface, Surface 2 and Pocket PC devices (following 2002), Apple's iPads, Asus's Eee Pad Transformer tablet computers, and several Chromebook laptops. Others include Apple's iPhone smartphones and iPod portable media players, Canon PowerShot digital cameras, the Nintendo Switch hybrid console, the Wii security processor and the 3DS handheld game console, and TomTom turn-by-turn navigation systems.
In 2005, Arm took part in the development of Manchester University's computer SpiNNaker, which used ARM cores to simulate the human brain.[106]
ARM chips are also used in the Raspberry Pi, BeagleBoard, BeagleBone, PandaBoard, and other single-board computers, because they are very small, inexpensive, and consume very little power.
The 32-bit ARM architecture (ARM32), such as ARMv7-A (implementing AArch32; see the section on Armv8-A for more on it), was the most widely used architecture in mobile devices as of 2011.[55]
Since 1995, various versions of the ARM Architecture Reference Manual (see § External links) have been the primary source of documentation on the ARM processor architecture and instruction set, distinguishing interfaces that all ARM processors are required to support (such as instruction semantics) from implementation details that may vary. The architecture has evolved over time, and version seven of the architecture, ARMv7, defines three architecture "profiles": the Application ("A") profile, the Real-time ("R") profile, and the Microcontroller ("M") profile.
Although the architecture profiles were first defined for ARMv7, ARM subsequently defined the ARMv6-M architecture (used by the Cortex M0/M0+/M1) as a subset of the ARMv7-M profile with fewer instructions.
Except in the M-profile, the 32-bit ARM architecture specifies several CPU modes, depending on the implemented architecture features. At any moment in time, the CPU can be in only one mode, but it can switch modes due to external events (interrupts) or programmatically.[107]
The original (and subsequent) ARM implementations were hardwired without microcode, like the much simpler 8-bit 6502 processor used in prior Acorn microcomputers.
The 32-bit ARM architecture (and the 64-bit architecture for the most part) includes the following RISC features:
To compensate for the simpler design, compared with processors like the Intel 80286 and Motorola 68020, some additional design features were used:
ARM includes integer arithmetic operations for add, subtract, and multiply; some versions of the architecture also support divide operations.
ARM supports 32-bit × 32-bit multiplies with either a 32-bit result or a 64-bit result, though Cortex-M0/M0+/M1 cores do not support 64-bit results.[112] Some ARM cores also support 16-bit × 16-bit and 32-bit × 16-bit multiplies.
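The difference between the two result widths can be sketched in C (the helper names are hypothetical; on cores with long-multiply support, compilers typically lower the widening form to a single UMULL or SMULL instruction):

```c
#include <stdint.h>

/* 32-bit x 32-bit -> 32-bit result: only the low half of the full product. */
uint32_t mul32(uint32_t a, uint32_t b) {
    return a * b;
}

/* 32-bit x 32-bit -> 64-bit result: the full product, as produced by UMULL
   on ARM cores that support long multiplies. */
uint64_t mul64(uint32_t a, uint32_t b) {
    return (uint64_t)a * b;
}
```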
The divide instructions are only included in the following ARM architectures:
Registers R0 through R7 are the same across all CPU modes; they are never banked.
Registers R8 through R12 are the same across all CPU modes except FIQ mode. FIQ mode has its own distinct R8 through R12 registers.
R13 and R14 are banked across all privileged CPU modes except system mode. That is, each mode that can be entered because of an exception has its own R13 and R14. These registers generally contain the stack pointer and the return address from function calls, respectively.
Aliases:
The Current Program Status Register (CPSR) has the following 32 bits.[115]
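The best-known fields can be illustrated with a small C model (for illustration only, using the standard layout from the ARM Architecture Reference Manual: condition flags N=bit 31, Z=30, C=29, V=28; interrupt masks I=7, F=6; Thumb state T=5; mode bits M[4:0]):

```c
#include <stdint.h>

/* Decode a few well-known CPSR fields. This is a software model of the
   register layout, not hardware behaviour. */
typedef struct {
    int n, z, c, v;   /* condition flags */
    int i, f, t;      /* IRQ mask, FIQ mask, Thumb state */
    unsigned mode;    /* processor mode bits M[4:0] */
} cpsr_fields;

cpsr_fields decode_cpsr(uint32_t cpsr) {
    cpsr_fields r;
    r.n = (cpsr >> 31) & 1;
    r.z = (cpsr >> 30) & 1;
    r.c = (cpsr >> 29) & 1;
    r.v = (cpsr >> 28) & 1;
    r.i = (cpsr >> 7) & 1;
    r.f = (cpsr >> 6) & 1;
    r.t = (cpsr >> 5) & 1;
    r.mode = cpsr & 0x1F;
    return r;
}
```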
Almost every ARM instruction has a conditional execution feature called predication, which is implemented with a 4-bit condition code selector (the predicate). To allow for unconditional execution, one of the four-bit codes causes the instruction to be always executed. Most other CPU architectures only have condition codes on branch instructions.[116]
Though the predicate takes up four of the 32 bits in an instruction code, and thus cuts down significantly on the encoding bits available for displacements in memory access instructions, it avoids branch instructions when generating code for small if statements. Apart from eliminating the branch instructions themselves, this preserves the fetch/decode/execute pipeline at the cost of only one cycle per skipped instruction.
An algorithm that provides a good example of conditional execution is the subtraction-based Euclidean algorithm for computing the greatest common divisor. In the C programming language, the algorithm can be written as:
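One standard subtraction-based form:

```c
/* Subtraction-based Euclidean algorithm: repeatedly subtract the smaller
   value from the larger until the two are equal. */
int gcd(int a, int b) {
    while (a != b) {
        if (a > b)
            a -= b;
        else
            b -= a;
    }
    return a;
}
```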
The same algorithm can be rewritten in a way closer to the target ARM instructions as:
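With each branch arm expressed as an independently conditional statement, mirroring the predicated SUB instructions:

```c
/* Same algorithm, restructured so each comparison outcome is an
   independent conditional step (each maps to one predicated SUB). */
int gcd(int a, int b) {
    while (a != b) {
        if (a > b)
            a -= b;
        if (a < b)
            b -= a;
    }
    return a;
}
```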
and coded in assembly language as:
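A sketch of the loop using conditional SUB instructions, with a and b held in r0 and r1:

```
loop:   CMP    r0, r1       ; set flags: "GT" if a > b, "LT" if a < b, "NE" if a != b
        SUBGT  r0, r0, r1   ; if "GT" (greater than), a = a - b
        SUBLT  r1, r1, r0   ; if "LT" (less than),    b = b - a
        BNE    loop         ; if "NE" (not equal), repeat the loop
```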
which avoids the branches around the then and else clauses. If r0 and r1 are equal then neither of the SUB instructions will be executed, eliminating the need for a conditional branch to implement the while check at the top of the loop, for example had SUBLE (less than or equal) been used.
One of the ways that Thumb code provides a more dense encoding is to remove the four-bit selector from non-branch instructions.
Another feature of the instruction set is the ability to fold shifts and rotates into the data processing (arithmetic, logical, and register-register move) instructions, so that, for example, the statement in C language:
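For instance (wrapped in a small function for illustration; `add_scaled` is a hypothetical name):

```c
/* The shift is folded into the add: a += (j << 2); */
int add_scaled(int a, int j) {
    return a + (j << 2);
}
```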
could be rendered as a one-word, one-cycle instruction:[117]
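For the statement above, one typical rendering, with Ra and Rj as placeholder registers holding a and j:

```
ADD     Ra, Ra, Rj, LSL #2   ; Ra = Ra + (Rj << 2), shift applied by the barrel shifter
```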
This results in the typical ARM program being denser than expected with fewer memory accesses; thus the pipeline is used more efficiently.
The ARM processor also has features rarely seen in other RISC architectures, such as PC-relative addressing (indeed, on the 32-bit[1] ARM the PC is one of its 16 registers) and pre- and post-increment addressing modes.
The ARM instruction set has increased over time. Some early ARM processors (before ARM7TDMI), for example, have no instruction to store a two-byte quantity.
The ARM7 and earlier implementations have a three-stage pipeline, the stages being fetch, decode, and execute. Higher-performance designs, such as the ARM9, have deeper pipelines: the Cortex-A8 has thirteen stages. Additional implementation changes for higher performance include a faster adder and more extensive branch prediction logic. The difference between the ARM7DI and ARM7DMI cores, for example, was an improved multiplier; hence the added "M".
The ARM architecture (pre-Armv8) provides a non-intrusive way of extending the instruction set using "coprocessors" that can be addressed using MCR, MRC, MRRC, MCRR, and similar instructions. The coprocessor space is divided logically into 16 coprocessors with numbers from 0 to 15, coprocessor 15 (cp15) being reserved for some typical control functions like managing the caches and MMU operation on processors that have one.
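As an illustration, two conventional cp15 accesses (register encodings here follow the usual ARMv7 conventions and are meant as a sketch, not an exhaustive reference):

```
MRC     p15, 0, r0, c0, c0, 0   ; read the cp15 Main ID Register into r0
MCR     p15, 0, r0, c7, c5, 0   ; write c7: invalidate the entire instruction cache
```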
In ARM-based machines, peripheral devices are usually attached to the processor by mapping their physical registers into ARM memory space, into the coprocessor space, or by connecting to another device (a bus) that in turn attaches to the processor. Coprocessor accesses have lower latency, so some peripherals—for example, an XScale interrupt controller—are accessible in both ways: through memory and through coprocessors.
In other cases, chip designers only integrate hardware using the coprocessor mechanism. For example, an image processing engine might be a small ARM7TDMI core combined with a coprocessor that has specialised operations to support a specific set of HDTV transcoding primitives.
All modern ARM processors include hardware debugging facilities, allowing software debuggers to perform operations such as halting, stepping, and breakpointing of code starting from reset. These facilities are built using JTAG support, though some newer cores optionally support ARM's own two-wire "SWD" protocol. In ARM7TDMI cores, the "D" represented JTAG debug support, and the "I" represented presence of an "EmbeddedICE" debug module. For ARM7 and ARM9 core generations, EmbeddedICE over JTAG was a de facto debug standard, though not architecturally guaranteed.
The ARMv7 architecture defines basic debug facilities at an architectural level. These include breakpoints, watchpoints and instruction execution in a "Debug Mode"; similar facilities were also available with EmbeddedICE. Both "halt mode" and "monitor" mode debugging are supported. The actual transport mechanism used to access the debug facilities is not architecturally specified, but implementations generally include JTAG support.
There is a separate ARM "CoreSight" debug architecture, which is not architecturally required by ARMv7 processors.
The Debug Access Port (DAP) is an implementation of an ARM Debug Interface.[118] There are two supported implementations: the Serial Wire JTAG Debug Port (SWJ-DP) and the Serial Wire Debug Port (SW-DP).[119] CMSIS-DAP is a standard interface that describes how various debugging software on a host PC can communicate over USB with firmware running on a hardware debugger, which in turn talks over SWD or JTAG to a CoreSight-enabled ARM Cortex CPU.[120][121][122]
To improve the ARM architecture for digital signal processing and multimedia applications, DSP instructions were added to the instruction set.[123] These are signified by an "E" in the name of the ARMv5TE and ARMv5TEJ architectures. E-variants also imply T, D, M, and I.
The new instructions are common in digital signal processor (DSP) architectures. They include variations on signed multiply–accumulate, saturated add and subtract, and count leading zeros.
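The semantics of a signed saturating add (the behaviour of the QADD instruction) can be sketched in portable C; this is a software model for illustration, not the hardware implementation:

```c
#include <stdint.h>

/* Signed saturating 32-bit add, modelling QADD semantics: on overflow
   the result clamps to INT32_MAX or INT32_MIN instead of wrapping. */
int32_t qadd32(int32_t a, int32_t b) {
    int64_t sum = (int64_t)a + (int64_t)b;
    if (sum > INT32_MAX) return INT32_MAX;
    if (sum < INT32_MIN) return INT32_MIN;
    return (int32_t)sum;
}
```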
First introduced in 1999, this extension of the core instruction set contrasted with ARM's earlier DSP coprocessor, known as Piccolo, which employed a distinct, incompatible instruction set whose execution involved a separate program counter.[124] Piccolo instructions employed a distinct register file of sixteen 32-bit registers, with some instructions combining registers for use as 48-bit accumulators and other instructions addressing 16-bit half-registers. Some instructions were able to operate on two such 16-bit values in parallel. Communication with the Piccolo register file involved load to Piccolo and store from Piccolo coprocessor instructions via two buffers of eight 32-bit entries. Described as reminiscent of other approaches, notably Hitachi's SH-DSP and Motorola's 68356, Piccolo did not employ dedicated local memory and relied on the bandwidth of the ARM core for DSP operand retrieval, impacting concurrent performance.[125] Piccolo's distinct instruction set also proved not to be a "good compiler target".[124]
Introduced in the ARMv6 architecture, this was a precursor to Advanced SIMD, also named Neon.[126]
Jazelle DBX (Direct Bytecode eXecution) is a technique that allows Java bytecode to be executed directly in the ARM architecture as a third execution state (and instruction set), alongside the existing ARM and Thumb modes. Support for this state is signified by the "J" in the ARMv5TEJ architecture and in the ARM9EJ-S and ARM7EJ-S core names. Support for this state is required starting in ARMv6 (except for the ARMv7-M profile), though newer cores only include a trivial implementation that provides no hardware acceleration.
To improve compiled code density, processors since the ARM7TDMI (released in 1994[127]) have featured the Thumb compressed instruction set, which has its own state. (The "T" in "TDMI" indicates the Thumb feature.) When in this state, the processor executes the Thumb instruction set, a compact 16-bit encoding for a subset of the ARM instruction set.[128] Most of the Thumb instructions are directly mapped to normal ARM instructions. The space saving comes from making some instruction operands implicit and limiting the number of possibilities compared with the ARM instructions executed in the ARM instruction set state.
In Thumb, the 16-bit opcodes have less functionality. For example, only branches can be conditional, and many opcodes are restricted to accessing only half of the CPU's general-purpose registers. The shorter opcodes give improved code density overall, even though some operations require extra instructions. In situations where the memory port or bus width is constrained to less than 32 bits, the shorter Thumb opcodes allow increased performance compared with 32-bit ARM code, as less program code may need to be loaded into the processor over the constrained memory bandwidth.
Unlike processor architectures with variable-length (16- or 32-bit) instructions, such as the Cray-1 and Hitachi SuperH, the ARM and Thumb instruction sets exist independently of each other. Embedded hardware, such as the Game Boy Advance, typically has a small amount of RAM accessible with a full 32-bit datapath; the majority is accessed via a 16-bit or narrower secondary datapath. In this situation, it usually makes sense to compile Thumb code and hand-optimise a few of the most CPU-intensive sections using full 32-bit ARM instructions, placing these wider instructions into the memory accessible via the 32-bit bus.
The first processor with a Thumb instruction decoder was the ARM7TDMI. All processors supporting 32-bit instruction sets, starting with ARM9 and including XScale, have included a Thumb instruction decoder. It includes instructions adopted from the Hitachi SuperH (1992), which was licensed by ARM.[129] ARM's smallest processor families (Cortex-M0 and M1) implement only the 16-bit Thumb instruction set for maximum performance in lowest-cost applications. ARM processors that don't support 32-bit addressing also omit Thumb.
Thumb-2 technology was introduced in the ARM1156 core, announced in 2003. Thumb-2 extends the limited 16-bit instruction set of Thumb with additional 32-bit instructions to give the instruction set more breadth, thus producing a variable-length instruction set. A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory.
Thumb-2 extends the Thumb instruction set with bit-field manipulation, table branches, and conditional execution. At the same time, the ARM instruction set was extended to maintain equivalent functionality in both instruction sets. A new "Unified Assembly Language" (UAL) supports generation of either Thumb or ARM instructions from the same source code; versions of Thumb seen on ARMv7 processors are essentially as capable as ARM code (including the ability to write interrupt handlers). This requires a bit of care, and use of a new "IT" (if-then) instruction, which permits up to four successive instructions to execute based on a tested condition or on its inverse. When compiling into ARM code, this is ignored, but when compiling into Thumb it generates an actual instruction. For example:
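A UAL fragment illustrating the IT instruction (register choices here are illustrative):

```
        CMP     r0, r1          ; compare the two operands
        ITE     GT              ; Thumb: emits an IT block; ARM: no instruction generated
        ADDGT   r2, r2, #1      ; "then" slot: executed if r0 > r1
        SUBLE   r2, r2, #1      ; "else" slot: executed if r0 <= r1
```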
All ARMv7 chips support the Thumb instruction set. All chips in the Cortex-A series that support ARMv7, all Cortex-R series chips, and all ARM11 series chips support both "ARM instruction set state" and "Thumb instruction set state", while chips in the Cortex-M series support only the Thumb instruction set.[130][131][132]
ThumbEE (erroneously called Thumb-2EE in some ARM documentation), which was marketed as Jazelle RCT[133] (Runtime Compilation Target), was announced in 2005 and deprecated in 2011. It first appeared in the Cortex-A8 processor. ThumbEE is a fourth instruction set state, making small changes to the Thumb-2 extended instruction set. These changes make the instruction set particularly suited to code generated at runtime (e.g. by JIT compilation) in managed execution environments. ThumbEE is a target for languages such as Java, C#, Perl, and Python, and allows JIT compilers to output smaller compiled code without reducing performance.[citation needed]
New features provided by ThumbEE include automatic null-pointer checks on every load and store instruction, an instruction to perform an array bounds check, and special instructions that call a handler. In addition, because it utilises Thumb-2 technology, ThumbEE provides access to registers r8–r15 (where the Jazelle/DBX Java VM state is held).[134] Handlers are small sections of frequently called code, commonly used to implement high-level languages, such as allocating memory for a new object. These changes come from repurposing a handful of opcodes, and from knowing the core is in the new ThumbEE state.
On 23 November 2011, Arm deprecated any use of the ThumbEE instruction set,[135] and Armv8 removes support for ThumbEE.
VFP (Vector Floating Point) technology is a floating-point unit (FPU) coprocessor extension to the ARM architecture[136] (implemented differently in Armv8 – coprocessors are not defined there). It provides low-cost single-precision and double-precision floating-point computation fully compliant with the ANSI/IEEE Std 754-1985 Standard for Binary Floating-Point Arithmetic. VFP provides floating-point computation suitable for a wide spectrum of applications such as PDAs, smartphones, voice compression and decompression, three-dimensional graphics and digital audio, printers, set-top boxes, and automotive applications. The VFP architecture was intended to support execution of short "vector mode" instructions, but these operated on each vector element sequentially and thus did not offer the performance of true single instruction, multiple data (SIMD) vector parallelism. This vector mode was therefore removed shortly after its introduction,[137] to be replaced with the much more powerful Advanced SIMD, also named Neon.
Some devices, such as the ARM Cortex-A8, have a cut-down VFPLite module instead of a full VFP module, and require roughly ten times more clock cycles per float operation.[138] Pre-Armv8 architectures implemented floating-point/SIMD with the coprocessor interface. Other floating-point and/or SIMD units found in ARM-based processors using the coprocessor interface include FPA, FPE, and iwMMXt, some of which were implemented in software by trapping but could have been implemented in hardware. They provide some of the same functionality as VFP but are not opcode-compatible with it. FPA10 also provides extended precision, but implements correct rounding (required by IEEE 754) only in single precision.[139]
In Debian Linux and derivatives such as Ubuntu and Linux Mint, armhf (ARM hard float) refers to the ARMv7 architecture including the additional VFP3-D16 floating-point hardware extension (and Thumb-2) above. Software packages and cross-compiler tools use the armhf vs. arm/armel suffixes to differentiate.[141]
The Advanced SIMD extension (also known as Neon or "MPE", Media Processing Engine) is a combined 64- and 128-bit SIMD instruction set that provides standardised acceleration for media and signal-processing applications. Neon is included in all Cortex-A8 devices, but is optional in Cortex-A9 devices.[142] Neon can execute MP3 audio decoding on CPUs running at 10 MHz, and can run the GSM adaptive multi-rate (AMR) speech codec at 13 MHz. It features a comprehensive instruction set, separate register files, and independent execution hardware.[143] Neon supports 8-, 16-, 32-, and 64-bit integer and single-precision (32-bit) floating-point data, and SIMD operations for handling audio and video processing as well as graphics and gaming processing. In Neon, the SIMD supports up to 16 operations at the same time. The Neon hardware shares the same floating-point registers as used in VFP. Devices such as the ARM Cortex-A8 and Cortex-A9 support 128-bit vectors but will execute with 64 bits at a time,[138] whereas newer Cortex-A15 devices can execute 128 bits at a time.[144][145]
A quirk of Neon in Armv7 devices is that it flushes all subnormal numbers to zero, and as a result the GCC compiler will not use it unless -funsafe-math-optimizations, which allows losing denormals, is turned on. "Enhanced" Neon, defined since Armv8, does not have this quirk, but as of GCC 8.2 the same flag is still required to enable Neon instructions.[146] On the other hand, GCC does consider Neon safe on AArch64 for Armv8.
Project Ne10 is ARM's first open-source project from its inception (they had earlier acquired an existing project, now named Mbed TLS). The Ne10 library is a set of common, useful functions written in both Neon and C (for compatibility). The library was created to allow developers to use Neon optimisations without learning Neon, but it also serves as a set of highly optimised Neon intrinsic and assembly code examples for common DSP, arithmetic, and image processing routines. The source code is available on GitHub.[147]
Helium is the M-Profile Vector Extension (MVE). It adds more than 150 scalar and vector instructions.[148]
The Security Extensions, marketed as TrustZone Technology, are in ARMv6KZ and later application profile architectures. TrustZone provides a low-cost alternative to adding another dedicated security core to an SoC, by providing two virtual processors backed by hardware-based access control. This lets the application core switch between two states, referred to as worlds (to reduce confusion with other names for capability domains), to prevent information from leaking from the more trusted world to the less trusted world.[149] This world switch is generally orthogonal to all other capabilities of the processor; thus each world can operate independently of the other while using the same core. Memory and peripherals are then made aware of the operating world of the core and may use this to provide access control to secrets and code on the device.[150]
Typically, a rich operating system is run in the less trusted world, with smaller security-specialised code in the more trusted world, aiming to reduce the attack surface. Typical applications include DRM functionality for controlling the use of media on ARM-based devices[151] and preventing any unapproved use of the device.
In practice, since the specific implementation details of proprietary TrustZone implementations have not been publicly disclosed for review, it is unclear what level of assurance is provided for a given threat model, but they are not immune from attack.[152][153]
Open Virtualization[154] is an open-source implementation of the trusted-world architecture for TrustZone.
AMD has licensed and incorporated TrustZone technology into its Secure Processor Technology.[155] AMD's APUs include a Cortex-A5 processor for handling secure processing, which is enabled in some, but not all, products.[156][157][158] In fact, the Cortex-A5 TrustZone core had been included in earlier AMD products, but was not enabled due to time constraints.[157]
Samsung Knox uses TrustZone for purposes such as detecting modifications to the kernel, storing certificates, and attesting keys.[159]
The Security Extension, marketed as TrustZone for Armv8-M Technology, was introduced in the Armv8-M architecture. While containing similar concepts to TrustZone for Armv8-A, it has a different architectural design, as world switching is performed using branch instructions instead of exceptions.[160] It also supports safe interleaved interrupt handling from either world, regardless of the current security state. Together, these features provide low-latency calls to the secure world and responsive interrupt handling. ARM provides a reference stack of secure-world code in the form of Trusted Firmware for M and PSA Certified.
As of ARMv6, the ARM architecture supports no-execute page protection, which is referred to as XN, for eXecute Never.[161]
The Large Physical Address Extension (LPAE), which extends the physical address size from 32 bits to 40 bits, was added to the Armv7-A architecture in 2011.[162]
The physical address size may be even larger in processors based on the 64-bit (Armv8-A) architecture. For example, it is 44 bits in Cortex-A75 and Cortex-A65AE.[163]
The Armv8-R and Armv8-M architectures, announced after the Armv8-A architecture, share some features with Armv8-A. However, Armv8-M does not include any 64-bit AArch64 instructions, and Armv8-R originally did not include any AArch64 instructions; those instructions were added to Armv8-R later.
The Armv8.1-M architecture, announced in February 2019, is an enhancement of the Armv8-M architecture. It brings new features including:
Announced in October 2011,[13] Armv8-A (often called ARMv8 while the Armv8-R is also available) represents a fundamental change to the ARM architecture. It supports two execution states: a 64-bit state named AArch64 and a 32-bit state named AArch32. In the AArch64 state, a new 64-bit A64 instruction set is supported; in the AArch32 state, two instruction sets are supported: the original 32-bit instruction set, named A32, and the 32-bit Thumb-2 instruction set, named T32. AArch32 provides user-space compatibility with Armv7-A. The processor state can change on an Exception level change; this allows 32-bit applications to be executed in AArch32 state under a 64-bit OS whose kernel executes in AArch64 state, and allows a 32-bit OS to run in AArch32 state under the control of a 64-bit hypervisor running in AArch64 state.[1] ARM announced their Cortex-A53 and Cortex-A57 cores on 30 October 2012.[75] Apple was the first to release an Armv8-A compatible core in a consumer product (Apple A7 in the iPhone 5S). AppliedMicro, using an FPGA, was the first to demo Armv8-A.[164] The first Armv8-A SoC from Samsung is the Exynos 5433 used in the Galaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in a big.LITTLE configuration; but it will run only in AArch32 mode.[165]
To both AArch32 and AArch64, Armv8-A makes VFPv3/v4 and Advanced SIMD (Neon) standard. It also adds cryptography instructions supporting AES, SHA-1/SHA-256 and finite field arithmetic.[166] AArch64 was introduced in Armv8-A and its subsequent revisions. AArch64 is not included in the 32-bit Armv8-R and Armv8-M architectures.
An ARMv8-A processor can support one or both of AArch32 and AArch64; it may support AArch32 and AArch64 at lower Exception levels and only AArch64 at higher Exception levels.[167]For example, the ARM Cortex-A32 supports only AArch32,[168]theARM Cortex-A34supports only AArch64,[169]and theARM Cortex-A72supports both AArch64 and AArch32.[170]An ARMv9-A processor must support AArch64 at all Exception levels, and may support AArch32 at EL0.[167]
Optional AArch64 support was added to the Armv8-R profile, with the first ARM core implementing it being the Cortex-R82.[171]It adds the A64 instruction set.
Announced in March 2021, the updated architecture places a focus on secure execution and compartmentalisation.[172][173]
Arm SystemReady is a compliance program that helps ensure the interoperability of an operating system on Arm-based hardware from datacenter servers to industrial edge and IoT devices. The key building blocks of the program are the specifications for minimum hardware and firmware requirements that the operating systems and hypervisors can rely upon. These specifications are:[174]
These specifications are co-developed by Arm and its partners in the System Architecture Advisory Committee (SystemArchAC).
The Architecture Compliance Suite (ACS) is a set of test tools that help check compliance with these specifications. The Arm SystemReady Requirements Specification documents the requirements of the certifications.[179]
This program was introduced by Arm in 2020 at the first DevSummit event. Its predecessor, Arm ServerReady, was introduced in 2018 at the Arm TechCon event. This program currently includes two bands:
PSA Certified, formerly named Platform Security Architecture, is an architecture-agnostic security framework and evaluation scheme. It is intended to help secure Internet of things (IoT) devices built on system-on-a-chip (SoC) processors.[182] It was introduced to increase security where a full trusted execution environment is too large or complex.[183]
The architecture was introduced by Arm in 2017 at the annual TechCon event.[183][184] Although the scheme is architecture agnostic, it was first implemented on Arm Cortex-M processor cores intended for microcontroller use. PSA Certified includes freely available threat models and security analyses that demonstrate the process for deciding on security features in common IoT products.[185] It also provides freely downloadable application programming interface (API) packages, architectural specifications, open-source firmware implementations, and related test suites.[186]
Following the development of the architecture security framework in 2017, the PSA Certified assurance scheme launched two years later at Embedded World in 2019.[187] PSA Certified offers a multi-level security evaluation scheme for chip vendors, OS providers and IoT device makers.[188] The Embedded World presentation introduced chip vendors to Level 1 certification. A draft of Level 2 protection was presented at the same time.[189] Level 2 certification became a usable standard in February 2020.[190]
The certification was created by the PSA Joint Stakeholders to enable a security-by-design approach for a diverse set of IoT products. PSA Certified specifications are implementation- and architecture-agnostic; as a result, they can be applied to any chip, software or device.[191][189] The certification also reduces industry fragmentation for IoT product manufacturers and developers.[192]
The first 32-bit ARM-based personal computer, the Acorn Archimedes, was originally intended to run an ambitious operating system called ARX. The machines shipped with RISC OS, which was also used on later ARM-based systems from Acorn and other vendors. Some early Acorn machines were also able to run a Unix port called RISC iX. (Neither is to be confused with RISC/os, a contemporary Unix variant for the MIPS architecture.)
The 32-bit ARM architecture is supported by a large number of embedded and real-time operating systems, including:
The 32-bit ARM architecture was formerly the primary hardware environment for most mobile device operating systems, such as the following; as of March 2024, many of these platforms, such as Android and Apple iOS, have moved to the 64-bit ARM architecture:
Formerly, but now discontinued:
The 32-bit ARM architecture is supported by RISC OS and by multiple Unix-like operating systems including:
Windows applications recompiled for ARM and linked with Winelib, from the Wine project, can run on 32-bit or 64-bit ARM in Linux, FreeBSD, or other compatible operating systems.[222][223] x86 binaries, e.g. those not specially compiled for ARM, have been demonstrated on ARM using QEMU with Wine (on Linux and more),[citation needed] but do not run at full speed or with the same capability as with Winelib.
|
https://en.wikipedia.org/wiki/ARM_architecture
|
A computer architecture simulator is a program that simulates the execution of a computer architecture.
Computer architecture simulators are used for the following purposes:
Computer architecture simulators can be classified into many different categories depending on the context.
A full-system simulator performs execution-driven architecture simulation at such a level of detail that complete software stacks from real systems can run on the simulator without any modification. A full-system simulator provides virtual hardware that is independent of the nature of the host computer. The full-system model typically includes processor cores, peripheral devices, memories, interconnection buses, and network connections. Emulators are full-system simulators that imitate obsolete hardware rather than hardware under development.
The defining property of full-system simulation, compared to an instruction set simulator, is that the model allows real device drivers and operating systems to be run, not just single programs. Thus, full-system simulation makes it possible to simulate individual computers and networked computer nodes with all their software, from network device drivers to operating systems, network stacks, middleware, servers, and application programs.
Full-system simulation can speed the system development process by making it easier to detect, recreate and repair flaws. The use of multi-core processors is driving the need for full-system simulation, because it can be extremely difficult and time-consuming to recreate and debug errors without the controlled environment provided by virtual hardware.[1] This also allows software development to take place before the hardware is ready,[2] thus helping to validate design decisions.
A cycle-accurate simulator is a computer program that simulates a microarchitecture on a cycle-by-cycle basis. In contrast, an instruction set simulator simulates an instruction set architecture, usually faster, but is not cycle-accurate with respect to a specific implementation of that architecture. Cycle-accurate simulators are often used when emulating older hardware, where time precision is important for legacy reasons. They are also often used when designing new microprocessors: the design can be tested and benchmarked accurately (including running a full operating system or compilers) without actually building a physical chip, and can easily be changed many times to meet the expected plan.
Cycle-accurate simulators must ensure that all operations are executed in the proper virtual (or real, if possible) time, modeling branch prediction, cache misses, fetches, pipeline stalls, thread context switching, and many other subtle aspects of microprocessor behavior.
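As a minimal illustration of the difference, the following Python sketch (a hypothetical model, not based on any particular simulator) counts cycles for a toy two-stage pipeline, charging an extra stall cycle for a load-use hazard — exactly the kind of timing detail an instruction set simulator would ignore:

```python
def simulate_cycles(program):
    """Count cycles for a toy 2-stage pipeline (hypothetical model).

    Each instruction is a (kind, dest, src) tuple.  An instruction
    whose source register was just written by a 'load' costs one
    extra stall cycle; everything else completes in one cycle."""
    cycles = 0
    prev = None
    for instr in program:
        kind, dest, src = instr
        cycles += 1                      # base cost of the instruction
        if prev and prev[0] == "load" and prev[1] == src:
            cycles += 1                  # load-use hazard: one stall
        prev = instr
    return cycles

prog = [("load", "r1", "mem"), ("add", "r2", "r1"), ("add", "r3", "r2")]
```

An instruction-count model would report 3 for this program; the cycle model reports 4 because of the stall between the load and the first add.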
|
https://en.wikipedia.org/wiki/Computer_architecture_simulator
|
CPU Sim is a software development environment for the simulation of simple computers. It was developed by Dale Skrien to help students understand computer architectures. With this application the user is able to simulate new or existing simple CPUs. Users can create new virtual CPUs with custom machine language instructions, which are implemented by a sequence of microinstructions. CPU Sim allows the user to edit and run assembly language programs for the CPU being simulated.
CPU Sim is written in Java, using the Swing package. This means that it is platform independent (it runs on every platform that has a Java virtual machine installed).
A sample computer system, the Wombat 1, is provided with CPU Sim. It has the following registers:
The assembly language of the Wombat 1 computer consists of 12 instructions. Each instruction is stored in 16 bits; the first 4 are the opcode and the other 12 are the address field.
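That fixed 4-bit/12-bit split makes instruction decoding trivial. As an illustrative sketch (the field layout follows the description above; the function name and example value are ours):

```python
def decode(word):
    """Split a 16-bit Wombat 1 instruction word into its fields.

    Per the layout described above, the top 4 bits are the opcode
    and the low 12 bits are the address field."""
    opcode = (word >> 12) & 0xF
    address = word & 0xFFF
    return opcode, address

# hypothetical instruction word 0x30A5 -> opcode 3, address 0x0A5
op, addr = decode(0x30A5)
```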
CPU Sim has the following features:
This program reads in integers until a negative integer is read. It then outputs the sum of all the positive integers.
The following modification of the program is also used sometimes:
This one can use negative input to subtract, or 0 to break the loop.
|
https://en.wikipedia.org/wiki/CPU_Sim
|
gpsim is a full-system simulator for Microchip PIC microcontrollers, originally written by Scott Dattalo.[1] It is distributed under the GNU General Public License.
gpsim has been designed for accuracy, covering the entire PIC, from the core to the I/O pins, and including the functions of all internal peripherals. This makes it possible to create stimuli, tie them to the I/O pins, and test the PIC the same way you would in the real world.[2]
The software can run natively in Windows using gpsimWin32, a port to Windows created by Borut Ražem.
|
https://en.wikipedia.org/wiki/Gpsim
|
PIC (usually pronounced as /pɪk/) is a family of microcontrollers made by Microchip Technology, derived from the PIC1640[1][2] originally developed by General Instrument's Microelectronics Division. The name PIC initially referred to Peripheral Interface Controller,[3] and was subsequently expanded for a short time to include Programmable Intelligent Computer,[4] though the name PIC is no longer used as an acronym for any term.
The first parts of the family were available in 1976; by 2013 the company had shipped more than twelve billion individual parts, used in a wide variety ofembedded systems.[5]
The PIC was originally designed as a peripheral for the General Instrument CP1600, the first commercially available single-chip 16-bit microprocessor. To limit the number of pins required, the CP1600 had a complex, highly multiplexed bus which was difficult to interface with, so in addition to a variety of special-purpose peripherals, General Instrument made the programmable PIC1640 as an all-purpose peripheral. With its own small RAM, ROM and a simple CPU for controlling the transfers, it could connect the CP1600 bus to virtually any existing 8-bit peripheral. While this offered considerable power, GI's marketing was limited and the CP1600 was not a success. However, GI had also made the PIC1650, a standalone PIC1640 with additional general-purpose I/O in place of the CP1600 interface. When the company spun off its chip division to form Microchip in 1985, sales of the CP1600 were all but dead, but the PIC1650 and its successors had formed a major market of their own, and they became one of the new company's primary products.[6]
Early models only had mask ROM for code storage, but with the spinoff the line was soon upgraded to use EPROM and then EEPROM, which made it possible for end-users to program the devices in their own facilities. All current models use flash memory for program storage, and newer models allow the PIC to reprogram itself. Since then the line has seen significant change; data memory is now 8, 16, or, in the latest models, 32 bits wide. Program instructions vary in bit count by family of PIC, and may be 12, 14, 16, or 24 bits long. The instruction set also varies by model, with more powerful chips adding instructions for digital signal processing functions. The hardware implementations of PIC devices range from 6-pin SMD and 8-pin DIP chips up to 144-pin SMD chips, with discrete I/O pins, ADC and DAC modules, and communications ports such as UART, I2C, CAN, and even USB. Low-power and high-speed variations exist for many types.
The manufacturer supplies computer software for development known as MPLAB X, assemblers and C/C++ compilers, and programmer/debugger hardware under the MPLAB and PICkit series. Third-party and some open-source tools are also available. Some parts have in-circuit programming capability; low-cost development programmers are available as well as high-volume production programmers.
PIC devices are popular with both industrial developers and hobbyists due to their low cost, wide availability, large user base, an extensive collection of application notes, availability of low cost or free development tools, serial programming, and re-programmable flash-memory capability.
The original PIC was intended to be used with General Instrument's new CP1600 16-bit central processing unit (CPU). In order to fit 16-bit data and address buses into a then-standard 40-pin dual inline package (DIP) chip, the two buses shared the same set of 16 connection pins. In order to communicate with the CPU, devices had to watch other pins on the CPU to determine if the information on the bus was an address or data. Since only one of these was being presented at a time, the devices had to watch the bus to go into address mode, see if that address was part of their memory-mapped input/output range, "latch" that address, and then wait for the data mode to turn on before reading the value. Additionally, the CP1600 used several external pins to select which device it was attempting to talk to, further complicating the interfacing.
As interfacing devices to the CP1600 could be complex, GI also released the 164x series of support chips with all of the required circuitry built in. These included keyboard drivers, cassette deck interfaces for storage, and a host of similar systems. For more complex systems, GI introduced the 1640 "Programmable Interface Controller" in 1975. The idea was that a device would use the PIC to handle all the interfacing with the host computer's CP1600, but also use its own internal processor to handle the actual device it was connected to. For instance, a floppy disk drive could be implemented with a PIC talking to the CPU on one side and the floppy disk controller on the other. In keeping with this idea, what would today be known as a microcontroller, the PIC included a small amount of read-only memory (ROM) that would be written with the user's device controller code, and a separate random access memory (RAM) for buffering and working with data. These were connected separately, making the PIC a Harvard architecture system with code and data being stored and managed on separate internal pathways.
In theory, the combination of CP1600 CPU and PIC1640 device controllers provided a very high-performance device control system, similar in power and performance to the channel I/O controllers seen on mainframe computers. In the floppy controller example, for instance, a single PIC could control the drive, provide a reasonable amount of buffering to improve performance, and then transfer data to and from the host computer using direct memory access (DMA) or through relatively simple code in the CPU. The downside to this approach was cost; while the PIC was not necessary for low-speed devices like a keyboard, many tasks would require one or more PICs to build out a complete system.
While the design concept had a number of attractive features, General Instrument never strongly marketed the CP1600, preferring to deal only with large customers and ignoring the low-end market. This resulted in very little uptake of the system, with the Intellivision being the only really widespread use, at about three million units. However, GI had introduced a standalone model, the PIC1650,[7] in 1976, designed for use without a CP1600. Although not as powerful as the Intel MCS-48 introduced the same year, it was cheaper, and it found a market.[6] Follow-ons included the PIC1670, with instructions widened from 12 to 13 bits to provide twice the address space (64 bytes of RAM and 1024 words of ROM).[8] When GI spun off its chip division to form Microchip Technology in 1985, production of the CP1600 ended. By this time, however, the PIC1650 had developed a large market of customers using it for a wide variety of roles, and the PIC went on to become one of the new company's primary products.[6]
In 1985, General Instrument sold their microelectronics division, and the new owners cancelled almost everything, which by this time was mostly out of date. The PIC, however, was upgraded with an internal EPROM to produce a programmable channel I/O controller.
At the same time, Plessey in the UK released NMOS processors numbered PIC1650 and PIC1655 based on the GI design, using the same instruction sets, either user mask-programmable or in versions pre-programmed for auto-diallers and keyboard interfaces.[9]
In 1998 Microchip introduced the PIC16F84, a flash programmable and erasable version of its successful serial programmable PIC16C84.
In 2001, Microchip introduced more flash programmable devices, with full production commencing in 2002.[10]
Today, a huge variety of PICs are available with various on-board peripherals (serial communication modules, UARTs, motor control kernels, etc.) and program memory from 256 words to 64K words and more. A "word" is one assembly language instruction, varying in length from 8 to 16 bits, depending on the specific PIC microcontroller series.
While PIC and PICmicro are now registered trademarks of Microchip Technology, the prefix "PIC" is no longer used as an acronym for any term. It is generally thought that PIC stands for "Programmable Intelligent Computer", General Instrument's prefix in 1977 for the PIC1640 and PIC1650 family of microcomputers,[4] replacing the 1976 original meaning "Programmable Interface Controller" for the PIC1640, which was designed specifically to work in combination with the CP1600.[3] The "PIC Series Microcomputers" by General Instrument were a series of metal-oxide-semiconductor large-scale-integration (MOS/LSI) 8-bit microcomputers containing ROM, RAM, a CPU, and 8-bit input/output (I/O) registers for interfacing. At the time, this technology combined the advantages of MOS circuits with large-scale integration, allowing for the creation of complex integrated circuits with high transistor density.[4]
The Microchip 16C84 (PIC16x84), introduced in 1993, was the first[11]Microchip CPU with on-chip EEPROM memory.
By 2013, Microchip was shipping over one billion PIC microcontrollers every year.[5]
PIC micro chips are designed with a Harvard architecture, and are offered in various device families. The baseline and mid-range families use 8-bit wide data memory, and the high-end families use 16-bit data memory. The latest series, PIC32MZ, is a 32-bit MIPS-based microcontroller. Instruction word sizes are 12 bits (PIC10 and PIC12), 14 bits (PIC16) and 24 bits (PIC24 and dsPIC). The binary representations of the machine instructions vary by family and are shown in PIC instruction listings.
Within these families, devices may be designated PICnnCxxx (CMOS) or PICnnFxxx (Flash). "C" devices are generally classified as "Not suitable for new development" (not actively promoted by Microchip). The program memory of "C" devices is variously described as OTP, ROM, or EEPROM. As of October 2016, the only OTP product classified as "In production" is the pic16HV540. "C" devices with quartz windows (for UV erasure) are in general no longer available.
These devices feature a 12-bit wide code memory, a 32-byte register file, and a tiny two level deep call stack. They are represented by the PIC10 series, as well as by some PIC12 and PIC16 devices. Baseline devices are available in 6-pin to 40-pin packages.
Generally the first 7 to 9 bytes of the register file are special-purpose registers, and the remaining bytes are general purpose RAM. Pointers are implemented using a register pair: after writing an address to the FSR (file select register), the INDF (indirect f) register becomes an alias for the addressed register. If banked RAM is implemented, the bank number is selected by the high 3 bits of the FSR. This affects register numbers 16–31; registers 0–15 are global and not affected by the bank select bits.
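The FSR/INDF mechanism and the bank-select behavior described above can be sketched in Python. This is a toy model for illustration only; the register addresses and bank widths follow the description in the text, not any specific part:

```python
class BaselinePic:
    """Toy model of baseline-PIC indirect addressing (illustrative only).

    Writing a register number to FSR makes INDF an alias for that
    register.  The high 3 bits of FSR select the bank, which affects
    only register numbers 16-31; numbers 0-15 are global."""
    INDF = 0   # indirect register (file address 0 on real parts)
    FSR = 4    # file select register (file address 4 on real parts)

    def __init__(self):
        self.ram = {}   # effective address -> byte value
        self.fsr = 0

    def _effective(self, f):
        if f == self.INDF:
            f = self.fsr              # indirect: follow the pointer in FSR
        bank, low = (f >> 5) & 0x7, f & 0x1F
        return low if low < 16 else bank * 32 + low

    def write(self, f, value):
        if f == self.FSR:
            self.fsr = value & 0xFF
        else:
            self.ram[self._effective(f)] = value & 0xFF

    def read(self, f):
        if f == self.FSR:
            return self.fsr
        return self.ram.get(self._effective(f), 0)
```

Writing a value through INDF after loading FSR reaches the same byte a direct access to that register number would, and changing the bank bits in FSR redirects the alias to a different bank.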
Because of the very limited register-address space (5 bits), four rarely read registers were not assigned addresses, but were written by special instructions (OPTION and TRIS).
The ROM address space is 512 words, and a CALL may only target addresses in the first half of each 512-word page. That is, the CALL instruction specifies the low 9 bits of the target address, but only the low 8 bits of that address are a parameter of the instruction, while the 9th bit (bit 8) is implicitly forced to 0 by the CALL instruction itself.
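A sketch of that encoding constraint (a hypothetical helper, for illustration; the page/offset split follows the description above):

```python
def baseline_call_target(page, addr8):
    """Target of a baseline-PIC CALL (sketch of the encoding).

    The opcode carries only 8 address bits; bit 8 of the in-page
    address is implicitly 0, so a CALL can only land in the first
    256 words of each 512-word page."""
    return (page << 9) | (addr8 & 0xFF)
```

Every reachable call target therefore has an in-page offset below 256, regardless of which page it is on.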
Lookup tables are implemented using a computed GOTO (an assignment to the PCL register) into a table of RETLW instructions. RETLW performs a subroutine return and simultaneously loads the W register with an 8-bit immediate constant that is part of the instruction.
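The net effect of that idiom is an array indexed by W. A Python sketch (the table contents and function name are hypothetical):

```python
def retlw_table_lookup(table, w):
    """Sketch of a PIC table lookup via computed GOTO + RETLW.

    The computed GOTO (adding W to PCL) lands on the W-th RETLW
    instruction; that RETLW returns, leaving its 8-bit literal in W.
    Here 'table' stands in for the block of RETLW literals."""
    return table[w] & 0xFF

# hypothetical 7-segment patterns for digits 0-3
SEVEN_SEG = [0x3F, 0x06, 0x5B, 0x4F]
```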
This "baseline core" does not support interrupts; all I/O must be polled. There are some "enhanced baseline" variants with interrupt support and a four-level call stack.
PIC10F32x devices feature a mid-range 14-bit wide code memory of 256 or 512 words, a 64-byte SRAM register file, and an 8-level deep hardware stack. These devices are available in 6-pin SMD and 8-pin DIP packages (with two pins unused). One input only and three I/O pins are available. A complex set of interrupts are available. Clocks are an internal calibrated high-frequency oscillator of 16 MHz with a choice of selectable speeds via software and a 31 kHz low-power source.
These devices feature a 14-bit wide code memory, and an improved 8-level deep call stack. The instruction set differs very little from the baseline devices, but the two additional opcode bits allow 128 registers and 2048 words of code to be directly addressed. There are a few additional miscellaneous instructions, and two additional 8-bit literal instructions, add and subtract. The mid-range core is available in the majority of devices labeled PIC12 and PIC16.
The first 32 bytes of the register space are allocated to special-purpose registers; the remaining 96 bytes are used for general-purpose RAM. If banked RAM is used, the high 16 registers (0x70–0x7F) are global, as are a few of the most important special-purpose registers, including the STATUS register, which holds the RAM bank select bits. (The other global registers are FSR and INDF, the low 8 bits of the program counter PCL, the PC high preload register PCLATH, and the master interrupt control register INTCON.)
The PCLATH register supplies high-order instruction address bits when the 8 bits supplied by a write to the PCL register, or the 11 bits supplied by a GOTO or CALL instruction, are not sufficient to address the available ROM space.
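The way PCLATH fills in the missing bits of the 13-bit mid-range program counter can be sketched as follows (a simplified model; the bit positions follow standard mid-range behavior, which is worth verifying against the datasheet of a specific part):

```python
def goto_target(pclath, addr11):
    """Mid-range PIC: 13-bit target of GOTO/CALL (sketch).

    The opcode supplies 11 address bits; PCLATH bits 4:3 supply
    the top two bits of the program counter."""
    return ((pclath & 0x18) << 8) | (addr11 & 0x7FF)

def pcl_write_target(pclath, new_pcl):
    """Mid-range PIC: 13-bit target of a write to PCL (sketch).

    The written byte gives the low 8 bits; PCLATH bits 4:0 supply
    the high 5 bits."""
    return ((pclath & 0x1F) << 8) | (new_pcl & 0xFF)
```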
The PIC17 series never became popular and has been superseded by the PIC18 architecture (however, see clones below). The PIC17 series is not recommended for new designs, and availability may be limited.
Improvements over earlier cores are 16-bit wide opcodes (allowing many new instructions), and a 16-level deep call stack. PIC17 devices were produced in packages from 40 to 68 pins.
The PIC17 series introduced a number of important new features:[12]
A significant limitation was that RAM space was limited to 256 bytes (26 bytes of special function registers, and 232 bytes of general-purpose RAM), with awkward bank-switching in the models that supported more.
In 2000, Microchip introduced the PIC18 architecture. Unlike the PIC17 series, it has proven to be very popular, with a large number of device variants presently in manufacture. In contrast to earlier devices, which were more often than not programmed in assembly language, C has become the predominant development language.[13]
The PIC18 series inherits most of the features and instructions of the PIC17 series, while adding a number of important new features:
The RAM space is 12 bits, addressed using a 4-bit bank select register (BSR) and an 8-bit offset in each instruction. An additional "access" bit in each instruction selects between bank 0 (a=0) and the bank selected by the BSR (a=1).
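The bank selection described above can be sketched in Python. This is a simplified model: a=0 is treated simply as bank 0, whereas the real access bank also maps the special function registers of bank 15, which the sketch ignores:

```python
def pic18_data_address(bsr, offset, a):
    """PIC18 direct data addressing (simplified sketch).

    Each instruction carries an 8-bit offset and an access bit:
    a=1 forms the 12-bit address from the 4-bit BSR and the offset;
    a=0 uses the access bank, modeled here simply as bank 0."""
    bank = bsr & 0xF if a else 0
    return (bank << 8) | (offset & 0xFF)
```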
A 1-level stack is also available for the STATUS, WREG and BSR registers. They are saved on every interrupt, and may be restored on return. If interrupts are disabled, they may also be used on subroutine call/return by setting the s bit (appending ", FAST" to the instruction).
The auto increment/decrement feature was improved by removing the control bits and adding four new indirect registers per FSR. Depending on which indirect file register is being accessed, it is possible to postdecrement, postincrement, or preincrement FSR; or form the effective address by adding W to FSR.
In more advanced PIC18 devices, an "extended mode" is available which makes the addressing even more favorable to compiled code:
PIC18 devices are still developed (2021) and fitted with CIP (Core Independent Peripherals)
In 2001, Microchip introduced the dsPIC series of chips,[14] which entered mass production in late 2004. They are Microchip's first inherently 16-bit microcontrollers. PIC24 devices are designed as general-purpose microcontrollers. dsPIC devices include digital signal processing capabilities in addition.
Although still similar to earlier PIC architectures, there are significant enhancements:[15]
Some features are:
dsPICs can be programmed in C using Microchip's XC16 compiler (formerly called C30), which is a variant of GCC.
Instruction ROM is 24 bits wide. Software can access ROM in 16-bit words, where even words hold the least significant 16 bits of each instruction, and odd words hold the most significant 8 bits. The high half of odd words reads as zero. The program counter is 23 bits wide, but the least significant bit is always 0, so there are 22 modifiable bits.
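That even-word/odd-word mapping can be sketched in Python (a hypothetical helper; 'rom' here is simply a list of 24-bit instruction values):

```python
def read_program_word(rom, word_addr):
    """PIC24/dsPIC software view of program memory (sketch).

    'rom' holds 24-bit instructions, one per entry.  Software sees
    16-bit words: even word addresses return the low 16 bits of an
    instruction, odd addresses return its high 8 bits, with the
    upper byte reading as zero."""
    instr = rom[word_addr // 2] & 0xFFFFFF
    if word_addr % 2 == 0:
        return instr & 0xFFFF
    return (instr >> 16) & 0xFF
```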
Instructions come in two main varieties, with most important operations (add, xor, shifts, etc.) allowing both forms:
Microchip's PIC32M products use the PIC trademark, but have a completely different architecture, and are described here only briefly.
In November 2007, Microchip introduced the PIC32MX family of 32-bit microcontrollers, based on the MIPS32 M4K core.[16] The device can be programmed using the Microchip MPLAB C Compiler for PIC32 MCUs, a variant of the GCC compiler. The first 18 models in production (PIC32MX3xx and PIC32MX4xx) are pin-to-pin compatible and share the same peripheral set with the PIC24FxxGA0xx family of (16-bit) devices, allowing the use of common libraries, software and hardware tools. Today, a full range of mid-range 32-bit microcontrollers is available, from 28-pin devices in small QFN packages up to high-performance devices with Ethernet, CAN and USB OTG.
The PIC32 architecture brought a number of new features to Microchip portfolio, including:
In November 2013, Microchip introduced the PIC32MZ series of microcontrollers, based on the MIPS M14K core. The PIC32MZ series includes:[18][19]
In 2015, Microchip released the PIC32MZ EF family, using the updated MIPS M5150 Warrior M-class processor.[20][21]
In 2017, Microchip introduced the PIC32MZ DA Family, featuring an integrated graphics controller, graphics processor and 32MB of DDR2 DRAM.[22][23]
In June 2016, Microchip introduced the PIC32MM family, specialized for low-power and low-cost applications.[24] The PIC32MM features core-independent peripherals, sleep modes down to 500 nA, and 4 x 4 mm packages.[25] The PIC32MM microcontrollers use the MIPS Technologies M4K, a 32-bit MIPS32 processor.
They are meant for very low power consumption and are limited to 25 MHz.
Their key advantage is support for the 16-bit instruction encodings of MIPS, making program size much more compact (by about 40%).
Microchip introduced the PIC32MK family in 2017, specialized for motor control, industrial control, Industrial Internet of Things (IIoT) and multi-channel CAN applications.[26]
Microchip's PIC32C products also use the PIC trademark, but similarly have a completely different architecture. PIC32C products employ the Arm processor architecture, including various lines using Cortex-M0+, M4, M7, M23, and M33 cores. They are offered in addition to the Arm-based SAM series of MCUs which Microchip inherited from its acquisition of Atmel.[27]
Microchip's PIC64 products use the PIC trademark, but have a completely different architecture, and are described here only briefly.
In July 2024, Microchip introduced the PIC64 series of high-performance multi-core microprocessors. The series will initially use the RISC-V instruction set; however, Microchip is also planning versions with ARM Cortex-A cores.[28] The PIC64 series will include the PIC64GX line, which focuses on intelligent edge applications, and the PIC64-HPSC line, which is radiation-hardened and focuses on spaceflight applications.[29][30]
The PIC architecture (excluding the unrelated PIC32 and PIC64) is a one-operand accumulator machine like the PDP-8 or the Apollo Guidance Computer. Its characteristics are:
There is no distinction between memory space and register space because the RAM serves the job of both memory and registers, and the RAM is usually just referred to as "the register file" or simply as "the registers".
PICs have a set of registers that function as general-purpose RAM. Special-purpose control registers for on-chip hardware resources are also mapped into the data space. The addressability of memory varies depending on device series, and all PIC device types have some banking mechanism to extend addressing to additional memory (but some device models have only one bank implemented). Later series of devices feature move instructions which can cover the whole addressable space, independent of the selected bank. In earlier devices, any register move had to be achieved through the accumulator.
To implement indirect addressing, a "file select register" (FSR) and "indirect register" (INDF) are used. A register number is written to the FSR, after which reads from or writes to INDF will actually be from or to the register pointed to by FSR. Later devices extended this concept with post- and pre- increment/decrement for greater efficiency in accessing sequentially stored data. This also allows FSR to be treated almost like a stack pointer (SP).
External data memory is not directly addressable except in some PIC18 devices with high pin count. However, general I/O ports can be used to implement a parallel bus or a serial interface for accessing external memory and other peripherals (using subroutines), with the caveat that such programmed memory access is (of course) much slower than access to the native memory of the PIC MCU.
The code space is generally implemented as on-chip ROM, EPROM or flash ROM. In general, there is no provision for storing code in external memory due to the lack of an external memory interface. The exceptions are the PIC17 and select high-pin-count PIC18 devices.[31]
All PICs handle (and address) data in 8-bit chunks. However, the unit of addressability of the code space is not generally the same as the data space. For example, PICs in the baseline (PIC12) and mid-range (PIC16) families have program memory addressable in the same wordsize as the instruction width, i.e. 12 or 14 bits respectively. In contrast, in the PIC18 series, the program memory is addressed in 8-bit increments (bytes), which differs from the instruction width of 16 bits.
To be clear, the program memory capacity is usually stated in the number of (single-word) instructions, rather than in bytes.
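The conversion between the two conventions is simple arithmetic; as an illustration (the helper name and example sizes are ours):

```python
def program_memory_bytes(num_instructions, instr_bits):
    """Bytes of storage for a program memory quoted in instructions.

    E.g. a mid-range part with 1024 14-bit instructions holds
    1024 * 14 / 8 = 1792 bytes of code storage."""
    return num_instructions * instr_bits // 8
```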
PICs have a hardware call stack, which is used to save return addresses. The hardware stack was not software-accessible on earlier devices, but this changed with the PIC18 series devices.
Hardware support for a general-purpose parameter stack was lacking in early series, but this greatly improved in the PIC18 series, making the PIC18 series architecture more friendly to high-level language compilers.
PIC instruction sets vary from about 35 instructions for the low-end PICs to over 80 instructions for the high-end PICs. The instruction set includes instructions to perform a variety of operations on registers directly, on the accumulator and a literal constant, or on the accumulator and a register, as well as instructions for conditional execution and program branching.
A few operations, such as bit setting and testing, can be performed on any numbered register, but 2-input arithmetic operations always involve W (the accumulator), writing the result back to either W or the other operand register. To load a constant, it is necessary to load it into W before it can be moved into another register. On the older cores, all register moves needed to pass through W, but this changed on the "high-end" cores.
PIC cores have skip instructions, which are used for conditional execution and branching. The skip instructions are "skip if bit set" and "skip if bit not set". Because cores before PIC18 had only unconditional branch instructions, conditional jumps are implemented by a conditional skip (with the opposite condition) followed by an unconditional branch. Skips are also useful for conditional execution of the single immediately following instruction. It is possible to skip skip instructions. For example, the instruction sequence "skip if A; skip if B; C" will execute C if A is true or if B is false.
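The double-skip behavior can be modeled with a short simulation (an illustrative Python sketch, not PIC code; the function and the program encoding are hypothetical):

```python
def run_skip_chain(a, b):
    """Model the PIC sequence 'skip if A; skip if B; C'.

    Each 'skip if <cond>' instruction, when its condition is true,
    skips the single next instruction -- including another skip.
    Returns True if instruction C was executed.
    """
    program = [("skip_if", lambda: a), ("skip_if", lambda: b), ("C", None)]
    executed_c = False
    pc = 0
    while pc < len(program):
        op, cond = program[pc]
        if op == "skip_if":
            pc += 2 if cond() else 1  # skip the next instruction when true
        else:
            executed_c = True         # instruction C reached
            pc += 1
    return executed_c

# C executes if A is true (the first skip jumps over the second skip,
# landing on C) or if B is false (the second skip falls through to C).
for a in (False, True):
    for b in (False, True):
        assert run_skip_chain(a, b) == (a or not b)
```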
The PIC18 series implemented shadow registers: these are registers which save several important registers during an interrupt, providing hardware support for automatically saving processor state when servicing interrupts.
In general, PIC instructions fall into five classes:
The architectural decisions are directed at the maximization of speed-to-cost ratio. The PIC architecture was among the first scalar CPU designs[citation needed]and is still among the simplest and cheapest. The Harvard architecture, in which instructions and data come from separate sources, simplifies timing and microcircuit design greatly, and this benefits clock speed, price, and power consumption.
The PIC instruction set is suited to implementation of fast lookup tables in the program space. Such lookups take one instruction and two instruction cycles. Many functions can be modeled in this way. Optimization is facilitated by the relatively large program space of the PIC (e.g. 4096 × 14-bit words on the 16F690) and by the design of the instruction set, which allows embedded constants. For example, a branch instruction's target may be indexed by W, and execute a "RETLW", which does as it is named – return with literal in W.
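The RETLW-table idiom can be sketched abstractly (a hypothetical Python model of the ADDWF PCL / RETLW pattern, not actual PIC code; the 7-segment values are only an illustration):

```python
def retlw_table_lookup(w, literals):
    """Model of a PIC computed-goto lookup table.

    In PIC assembly this is typically 'ADDWF PCL, F' (add W to the low
    byte of the program counter) followed by a list of 'RETLW k'
    instructions: execution lands on entry number W, and that RETLW
    returns to the caller with its literal k in W.
    """
    return literals[w]  # entry W behaves like 'RETLW literals[w]'

# Hypothetical 7-segment display patterns for the digits 0-3.
segment_table = [0x3F, 0x06, 0x5B, 0x4F]
assert retlw_table_lookup(2, segment_table) == 0x5B
```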
Interrupt latency is constant at three instruction cycles. External interrupts have to be synchronized with the four-clock instruction cycle, otherwise there can be a one instruction cycle jitter. Internal interrupts are already synchronized. The constant interrupt latency allows PICs to achieve interrupt-driven low-jitter timing sequences. An example of this is a video sync pulse generator. This is no longer true in the newest PIC models, because they have a synchronous interrupt latency of three or four cycles.
The following stack limitations have been addressed in the PIC18 series, but still apply to earlier cores:
With paged program memory, there are two page sizes to worry about: one for CALL and GOTO and another for computed GOTO (typically used for table lookups). For example, on PIC16, CALL and GOTO have 11 bits of addressing, so the page size is 2048 instruction words. For computed GOTOs, where you add to PCL, the page size is 256 instruction words. In both cases, the upper address bits are provided by the PCLATH register. This register must be changed every time control transfers between pages. PCLATH must also be preserved by any interrupt handler.[33]
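How the two page sizes arise from PCLATH can be illustrated with a small model (a Python sketch of the mid-range PIC16 convention, where PCLATH<4:3> supply the top bits of the 13-bit PC for CALL/GOTO and PCLATH<4:0> supply them for writes to PCL):

```python
def call_target(pclath, opcode_addr11):
    """Target of a PIC16 CALL/GOTO: 11 address bits come from the
    instruction word; the upper 2 bits of the 13-bit PC come from
    PCLATH bits <4:3>.  Hence a 2048-word page."""
    return ((pclath >> 3) & 0b11) << 11 | (opcode_addr11 & 0x7FF)

def computed_goto_target(pclath, pcl_result):
    """Target of a write to PCL (computed GOTO): 8 bits from the ALU
    result; the upper 5 bits of the PC come from PCLATH bits <4:0>.
    Hence a 256-word page."""
    return (pclath & 0b11111) << 8 | (pcl_result & 0xFF)

# CALL/GOTO pages are 2048 instruction words; computed-goto pages
# are 256 instruction words.
assert call_target(pclath=0b0001000, opcode_addr11=0x123) == 0x923
assert computed_goto_target(pclath=0x12, pcl_result=0x34) == 0x1234
```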
While several commercial compilers are available, in 2008 Microchip released their own C compilers, C18 and C30, for the 18F, 24F and 30/33F lines of processors.
As of 2013, Microchip offers their XC series of compilers, for use with MPLAB X. Microchip will eventually phase out its older compilers, such as C18, and recommends using their XC series compilers for new designs.[34]
The RISC instruction set of the PIC assembly language code can make the overall flow difficult to comprehend. Judicious use of simple macros can increase the readability of PIC assembly language. For example, the original Parallax PIC assembler ("SPASM") has macros, which hide W and make the PIC look like a two-address machine. It has macro instructions like mov b, a (move the data from address a to address b) and add b, a (add data from address a to data in address b). It also hides the skip instructions by providing three-operand branch macro instructions, such as cjne a, b, dest (compare a with b and jump to dest if they are not equal).
PIC devices generally feature:
Within a series, there are still many device variants depending on what hardware resources the chip features:
The first generation of PICs with EPROM storage have been almost completely replaced by chips with flash memory. Likewise, the original 12-bit instruction set of the PIC1650 and its direct descendants has been superseded by 14-bit and 16-bit instruction sets. Microchip still sells OTP (one-time-programmable) and windowed (UV-erasable) versions of some of its EPROM based PICs for legacy support or volume orders. The Microchip website lists PICs that are not electrically erasable as OTP. UV erasable windowed versions of these chips can be ordered.
The F in a PICMicro part number generally indicates the PICmicro uses flash memory and can be erased electronically. Conversely, a C generally means it can only be erased by exposing the die to ultraviolet light (which is only possible if a windowed package style is used). An exception to this rule is the PIC16C84, which uses EEPROM and is therefore electrically erasable.
An L in the name indicates the part will run at a lower voltage, often with frequency limits imposed.[35] Parts designed specifically for low voltage operation, within a strict range of 3–3.6 volts, are marked with a J in the part number. These parts are also uniquely I/O tolerant, as they will accept up to 5 V as inputs.[35]
Microchip provides a freeware IDE package called MPLAB X, which includes an assembler, linker, software simulator, and debugger. They also sell C compilers for the PIC10, PIC12, PIC16, PIC18, PIC24, PIC32 and dsPIC, which integrate cleanly with MPLAB X. Free versions of the C compilers are also available with all features, but optimizations are disabled after 60 days.[36]
Several third parties develop C language compilers for PICs, many of which integrate to MPLAB and/or feature their own IDE. A fully featured compiler for the PICBASIC language to program PIC microcontrollers is available from meLabs, Inc. Mikroelektronika offers PIC compilers in the C, BASIC and Pascal programming languages.
A graphical programming language, Flowcode, is capable of programming 8- and 16-bit PIC devices and generating PIC-compatible C code. It exists in numerous versions, from a free demonstration to a more complete professional edition.
The Proteus Design Suite is able to simulate many of the popular 8- and 16-bit PIC devices along with other circuitry that is connected to the PIC on the schematic. The program to be simulated can be developed within Proteus itself, MPLAB or any other development tool.[37]
Devices called "programmers" are traditionally used to get program code into the target PIC. Most PICs that Microchip currently sells feature ICSP (in-circuit serial programming) and/or LVP (low-voltage programming) capabilities, allowing the PIC to be programmed while it is sitting in the target circuit.
Microchip offers programmers/debuggers under the MPLAB and PICKit series. MPLAB ICD5 and MPLAB REAL ICE are the current programmers and debuggers for professional engineering, while PICKit 5 is a low-cost programmer/debugger line for hobbyists and students.
Many of the higher end flash based PICs can also self-program (write to their own program memory), a process known as bootloading. Demo boards are available with a small factory-programmed bootloader that can be used to load user programs over an interface such as RS-232 or USB, thus obviating the need for a programmer device.
Alternatively, there is bootloader firmware available that the user can load onto the PIC using ICSP. After programming the bootloader onto the PIC, the user can then reprogram the device over RS-232 or USB, in conjunction with specialized computer software.
The advantages of a bootloader over ICSP are faster programming speeds, immediate program execution following programming, and the ability to both debug and program using the same cable.
There are many programmers for PIC microcontrollers, ranging from the extremely simple designs which rely on ICSP to allow direct download of code from a host computer, to intelligent programmers that can verify the device at several supply voltages. Many of these complex programmers use a pre-programmed PIC themselves to send the programming commands to the PIC that is to be programmed. The intelligent type of programmer is needed to program earlier PIC models (mostly EPROM type) which do not support in-circuit programming.
Third party programmers range from plans to build your own, to self-assembly kits and fully tested ready-to-go units. Some are simple designs which require a PC to do the low-level programming signalling (these typically connect to the serial or parallel port and consist of a few simple components), while others have the programming logic built into them (these typically use a serial or USB connection, are usually faster, and are often built using PICs themselves for control).
All newer PIC devices feature an ICD (in-circuit debugging) interface, built into the CPU core, that allows for interactive debugging of the program in conjunction with MPLAB IDE. MPLAB ICD and MPLAB REAL ICE debuggers can communicate with this interface using the ICSP interface.
This debugging system comes at a price, however: limited breakpoint count (1 on older devices, 3 on newer devices), loss of some I/O (with the exception of some surface-mount 44-pin PICs which have dedicated lines for debugging) and loss of some on-chip features.
Some devices do not have on-chip debug support, due to cost or lack of pins. Some larger chips also have no debug module. To debug these devices, a special -ICD version of the chip mounted on a daughter board which provides dedicated ports is required. Some of these debug chips are able to operate as more than one type of chip by the use of selectable jumpers on the daughter board. This allows broadly identical architectures that do not feature all the on-chip peripheral devices to be replaced by a single -ICD chip. For example: the 16F690-ICD will function as one of six different parts, each of which features none, some or all of five on-chip peripherals.[38]
Microchip offers three full in-circuit emulators: the MPLAB ICE2000 (parallel interface, a USB converter is available); the newer MPLAB ICE4000 (USB 2.0 connection); and most recently, the REAL ICE (USB 2.0 connection). All such tools are typically used in conjunction with MPLAB IDE for source-level interactive debugging of code running on the target.
PIC projects may utilize real-time operating systems such as FreeRTOS, AVIX RTOS, uRTOS, Salvo RTOS or other similar libraries for task scheduling and prioritization.
An open source project by Serge Vakulenko adapts 2.11BSD to the PIC32 architecture, under the name RetroBSD. This brings a familiar Unix-like operating system, including an onboard development environment, to the microcontroller, within the constraints of the onboard hardware.[39]
Parallax produced a series of PICmicro-like microcontrollers known as the Parallax SX. It is currently discontinued. Designed to be architecturally similar to the PIC microcontrollers used in the original versions of the BASIC Stamp, SX microcontrollers replaced the PIC in several subsequent versions of that product.
Parallax's SX are 8-bit RISC microcontrollers, using a 12-bit instruction word, which run at up to 75 MHz (75 MIPS). They include up to 4096 12-bit words of flash memory and up to 262 bytes of random access memory, an eight-bit counter and other support logic. There are software library modules to emulate I²C and SPI interfaces, UARTs, frequency generators, measurement counters and PWM and sigma-delta A/D converters. Other interfaces are relatively easy to write, and existing modules can be modified to get new features.
Russian PKK Milandr produces microcontrollers using the PIC17 architecture as the 1886 series.[40][41][42][43] Program memory consists of up to 64 KB of flash memory in the 1886VE2U (Russian: 1886ВЕ2У) or 8 KB of EEPROM in the 1886VE5U (1886ВЕ5У). The 1886VE5U (1886ВЕ5У) through 1886VE7U (1886ВЕ7У) are specified for the military temperature range of −60 °C to +125 °C. Hardware interfaces in the various parts include USB, CAN, I2C, SPI, as well as A/D and D/A converters. The 1886VE3U (1886ВЕ3У) contains a hardware accelerator for cryptographic functions according to GOST 28147-89. There are even radiation-hardened chips with the designations 1886VE8U (1886ВЕ8У) and 1886VE10U (1886ВЕ10У).[44]
ELAN Microelectronics Corp. in Taiwan make a line of microcontrollers based on the PIC16 architecture, with 13-bit instructions and a smaller (6-bit) RAM address space.[45]
Holtek Semiconductor makes a large number of very cheap microcontrollers[46] (as low as 8.5 cents in quantity[47]) with a 14-bit instruction set strikingly similar to the PIC16.
Hycon Technology, a Taiwanese manufacturer of mixed-signal chips for portable electronics (multimeters, kitchen scales, etc.), has a proprietary H08 microcontroller series with a 16-bit instruction word very similar to the PIC18 family. (No relation to the Hitachi/Renesas H8 microcontrollers.) The H08A[48] is most like the PIC18; the H08B[49] is a subset.[50]
Although the available instructions are almost identical, their encoding is different, as is the memory map and peripherals. For example, the PIC18 allows direct access to RAM at 0x000–0x07F or special function registers at 0xF80–0xFFF by sign-extending an 8-bit address. The H08 places special function registers at 0x000–0x07F and global RAM at 0x080–0x0FF, zero-extending the address.
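The two address-extension schemes described above can be modeled directly (an illustrative Python sketch; the function names are made up):

```python
def pic18_access(addr8):
    """PIC18-style access addressing, as described above: the 8-bit
    address is sign-extended into a 12-bit space, mapping 0x00-0x7F to
    RAM at 0x000-0x07F and 0x80-0xFF to SFRs at 0xF80-0xFFF."""
    return addr8 if addr8 < 0x80 else 0xF00 | addr8

def h08_access(addr8):
    """H08 variant: the 8-bit address is zero-extended, placing SFRs
    at 0x000-0x07F and global RAM at 0x080-0x0FF."""
    return addr8

assert pic18_access(0x7F) == 0x07F  # RAM, same in both schemes
assert pic18_access(0x80) == 0xF80  # first SFR on the PIC18
assert h08_access(0x80) == 0x080    # first global RAM byte on the H08
```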
Many ultra-low-cost OTP microcontrollers from Asian manufacturers, found in low-cost consumer electronics, are based on the PIC architecture or a modified form of it. Most clones only target the baseline parts (PIC16C5x/PIC12C50x). With any patents on the basic architecture long since expired, Microchip has attempted to sue some manufacturers on copyright grounds,[51][52] without success.[53][54][better source needed]
The Intel 8008 ("eight-thousand-eight" or "eighty-oh-eight") is an early 8-bit microprocessor capable of addressing 16 KB of memory, introduced in April 1972. The 8008 architecture was designed by Computer Terminal Corporation (CTC) and was implemented and manufactured by Intel. While the 8008 was originally designed for use in CTC's Datapoint 2200 programmable terminal, an agreement between CTC and Intel permitted Intel to market the chip to other customers after Seiko expressed an interest in using it for a calculator.
In order to address several issues with the Datapoint 3300, including excessive heat radiation, Computer Terminal Corporation (CTC) designed the architecture of the 3300's planned successor with a CPU as part of the internal circuitry re-implemented on a single chip. Looking for a company able to produce their chip design, CTC co-founder Austin O. "Gus" Roche turned to Intel, then primarily a vendor of memory chips.[3] Roche met with Bob Noyce, who expressed concern with the concept; John Frassanito recalls that:
"Noyce said it was an intriguing idea, and that Intel could do it, but it would be a dumb move. He said that if you have a computer chip, you can only sell one chip per computer, while with memory, you can sell hundreds of chips per computer."[3]
Another major concern was that Intel's existing customer base purchased their memory chips for use with their own processor designs; if Intel introduced their own processor, they might be seen as a competitor, and their customers might look elsewhere for memory. Nevertheless, Noyce agreed to a US$50,000 development contract in early 1970 (equivalent to $405,000 in 2024). Texas Instruments (TI) was also brought in as a second supplier.[citation needed]
In December 1969, Intel engineer Stan Mazor and a representative of CTC met to discuss options for the logic chipset to power a new CTC business terminal. Mazor, who had been working with Ted Hoff on the development of the Intel 4004, proposed that a one-chip programmable microprocessor might be less cumbersome and ultimately more cost effective than building a custom logic chipset. CTC agreed and development work began on the chip, which at the time was known as the 1201.[4]
TI was able to make samples of the 1201 based on Intel drawings, calling it the TMX 1795. These proved to be buggy and were rejected.[5] Intel's own versions were delayed. CTC decided to re-implement the new version of the terminal using serial discrete TTL instead of waiting for a single-chip CPU. The new system was released as the Datapoint 2200 in the spring of 1970, with their first sale to General Mills on 25 May 1970.[3] CTC paused development of the 1201 after the 2200 was released, as it was no longer needed. In early 1971, Seiko approached Intel, expressing an interest in using the 1201 in a scientific calculator, likely after seeing the success of the simpler 4004 used by Busicom in their business calculators.[4] A small re-design followed under the leadership of Federico Faggin, designer of the 4004 and now project leader of the 1201, expanding the chip from a 16-pin to an 18-pin design, and the new 1201 was delivered to CTC in late 1971.[3]
By that point, CTC had once again moved on, this time to the parallel-architecture Datapoint 2200 II, which was faster than the 1201. CTC voted to end their involvement with the 1201, leaving the design's intellectual property to Intel instead of paying the $50,000 contract. Intel renamed it the 8008 and put it in their catalog in April 1972, priced at US$120 (equivalent to $902 in 2024). The new name attempted to trade on the success of the 4004 by presenting the 8008 as simply a 4-bit-to-8-bit port, but the 8008 is not based on the 4004.[6] The 8008 went on to be a commercially successful design. It was followed by the popular Intel 8080, and then the hugely successful Intel x86 family.[3]
One of the first teams to build a complete system around the 8008 was Bill Pentz's team at California State University, Sacramento. The Sac State 8008 was possibly the first true microcomputer, with a disk operating system built with IBM Basic assembly language in PROM,[disputed–discuss] all driving a color display, hard drive, keyboard, modem, audio/paper tape reader, and printer.[7] The project started in the spring of 1972, and with key help from Tektronix, the system was fully functional a year later.
In the UK, a team at S. E. Laboratories Engineering (EMI) led by Tom Spink in 1972 built a microcomputer based on a pre-release sample of the 8008. Joe Hardman extended the chip with an external stack. This, among other things, gave it power-fail save and recovery. He also developed a direct screen printer. The operating system was written using a meta-assembler developed by L. Crawford and J. Parnell for a Digital Equipment Corporation PDP-11.[8] The operating system was burnt into a PROM. It was interrupt-driven, queued, and based on a fixed page size for programs and data. An operational prototype was prepared for management, who decided not to continue with the project.[citation needed]
The 8008 was the CPU for the very first commercial non-calculator personal computers (excluding the Datapoint 2200 itself): the US SCELBI kit and the pre-built French Micral N and Canadian MCM/70. It was also the controlling microprocessor for the first several models in Hewlett-Packard's 2640 family of computer terminals.[citation needed]
In 1973, Intel offered an instruction set simulator for the 8008 named INTERP/8.[9] It was written in FORTRAN IV by Gary Kildall while he worked as a consultant for Intel.[10][11]
The 8008 was implemented in 10 μm silicon-gate enhancement-mode PMOS logic. Initial versions could work at clock frequencies up to 0.5 MHz. This was later increased in the 8008-1 to a specified maximum of 0.8 MHz. Instructions take between 3 and 11 T-states, where each T-state is 2 clock cycles.[13] Register–register loads and ALU operations take 5T (20 μs at 0.5 MHz), register–memory 8T (32 μs), while calls and jumps (when taken) take 11 T-states (44 μs).[14] The 8008 is a little slower in terms of instructions per second (36,000 to 80,000 at 0.8 MHz) than the 4-bit Intel 4004 and Intel 4040,[15] but since the 8008 processes data 8 bits at a time and can access significantly more RAM, in most applications it has a significant speed advantage over these processors. The 8008 has 3,500 transistors.[16][17][18]
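The cycle counts above translate into execution times as follows (a simple Python check of the arithmetic):

```python
def instruction_time_us(t_states, clock_mhz):
    """8008 instruction duration: each T-state takes two clock cycles,
    so an instruction of N T-states lasts N * 2 / f microseconds at a
    clock frequency of f MHz."""
    return t_states * 2 / clock_mhz

assert instruction_time_us(5, 0.5) == 20.0   # register-register / ALU op
assert instruction_time_us(8, 0.5) == 32.0   # register-memory op
assert instruction_time_us(11, 0.5) == 44.0  # taken call or jump
```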
The chip, limited by its 18-pin DIP, has a single 8-bit bus working triple duty to transfer 8 data bits, 14 address bits, and two status bits. The small package requires about 30 TTL support chips to interface to memory.[19] For example, the 14-bit address, which can access "16 K × 8 bits of memory", needs to be latched by some of this logic into an external memory address register (MAR). The 8008 can access 8 input ports and 24 output ports.[13]
For controller and CRT terminal use, this is an acceptable design, but it is rather cumbersome to use for most other tasks, at least compared to the next generations of microprocessors. A few early computer designs were based on it, but most would use the later and greatly improved Intel 8080 instead.[citation needed]
The subsequent 40-pin NMOS Intel 8080 expanded upon the 8008 registers and instruction set and implements a more efficient external bus interface (using the 22 additional pins). Despite a close architectural relationship, the 8080 was not made binary compatible with the 8008, so an 8008 program would not run on an 8080. However, since Intel used two different assembly syntaxes at the time, 8008 assembly source could be reassembled for the 8080, giving a degree of assembly-language backward compatibility.
The Intel 8085 is an electrically modernized version of the 8080 that uses depletion-mode transistors and also added two new instructions.
The Intel 8086, the original x86 processor, is a non-strict extension of the 8080, so it loosely resembles the original Datapoint 2200 design as well. Almost every Datapoint 2200 and 8008 instruction has an equivalent not only in the instruction set of the 8080, 8085, and Z80, but also in the instruction set of modern x86 processors (although the instruction encodings are different).
The 8008 architecture includes the following features:[citation needed]
Instructions are all one to three bytes long, consisting of an initial opcode byte, followed by up to two bytes of operands, which can be an immediate operand or a program address. Instructions operate on 8 bits only; there are no 16-bit operations. There is only one mechanism to address data memory: indirect addressing pointed to by a concatenation of the H and L registers, referenced as M. The 8008 does, however, support 14-bit program addresses. It has automatic CAL and RET instructions for multi-level subroutine calls and returns which can be conditionally executed, like jumps. Eight one-byte call instructions (RST) for subroutines exist at the fixed addresses 00h, 08h, 10h, ..., 38h. These are intended to be supplied by external hardware in order to invoke interrupt service routines, but can be employed as fast calls. Direct copying may be made between any two registers or a register and memory. Eight math/logic functions are supported between the accumulator (A) and any register, memory, or an immediate value. Results are always deposited in A. Increments and decrements are supported for most registers but, curiously, not A. Register A does, however, support four different rotate instructions. All instructions are executed in 3 to 11 states. Each state requires two clocks.
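Two details from this description — the M operand formed from H and L, and the fixed RST vector addresses — can be modeled in a few lines (an illustrative Python sketch, not part of any Intel documentation):

```python
def m_address(h, l):
    """The 8008 'M' operand: data memory is addressed indirectly
    through the concatenation of H (high byte) and L (low byte),
    within the 14-bit address space."""
    return ((h << 8) | l) & 0x3FFF

def rst_vector(n):
    """RST n (n = 0..7) is a one-byte call to the fixed address n * 8,
    i.e. 00h, 08h, 10h, ..., 38h."""
    return n * 8

assert m_address(0x12, 0x34) == 0x1234
assert [rst_vector(n) for n in range(8)] == [0x00, 0x08, 0x10, 0x18,
                                             0x20, 0x28, 0x30, 0x38]
```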
The following 8008 assembly source code is for a subroutine named MEMCPY that copies a block of data bytes of a given size from one location to another. Intel's 8008 assembler supported only + and - operators. This example borrows the 8080's assembler AND and SHR (shift right) operators to select the low and high bytes of a 14-bit address for placement into the 8-bit registers. A contemporaneous 8008 programmer was expected to calculate the numbers and type them in for the assembler.
In the code above, all values are given in octal. Locations SRC, DST, and CNT are 16-bit parameters for the subroutine named MEMCPY. In actuality, only 14 bits of the values are used, since the CPU has only a 14-bit addressable memory space. The values are stored in little-endian format, although this is an arbitrary choice, since the CPU is incapable of reading or writing more than a single byte into memory at a time. Since there is no instruction to load a register directly from a given memory address, the HL register pair must first be loaded with the address, and the target register can then be loaded from the M operand, which is an indirect load from the memory location in the HL register pair. The BC register pair is loaded with the CNT parameter value and decremented at the end of the loop until it becomes zero. Note that most of the instructions used occupy a single 8-bit opcode.
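The AND/SHR address-splitting technique mentioned above amounts to the following (an illustrative Python sketch using octal constants, as a period 8008 listing would):

```python
def split_address(addr14):
    """Split a 14-bit address into (low, high) bytes, as the borrowed
    8080-style assembler operators would: low = addr AND 377 (octal),
    high = addr SHR 8.  For little-endian storage, the low byte is
    stored first in memory."""
    low = addr14 & 0o377          # low 8 bits
    high = (addr14 >> 8) & 0o77   # remaining 6 bits of a 14-bit address
    return low, high

assert split_address(0o12345) == (0o345, 0o24)
```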
The following 8008 assembly source code is for a simplified subroutine named MEMCPY2 that copies a block of data bytes from one location to another. By reducing the byte counter to 8 bits, there is enough room to load all the subroutine parameters into the 8008's register file.
Interrupts on the 8008 are only partially implemented. After the INT line is asserted, the 8008 acknowledges the interrupt by outputting a state code of S0,S1,S2 = 011 at T1I time. At the subsequent instruction fetch cycle, an instruction is "jammed" (Intel's word) by external hardware on the bus. Typically this is a one-byte RST instruction.
At this point, there is a problem. The 8008 has no provision to save its architectural state. The 8008 can only write to memory via an address in the HL register pair. When interrupted, there is no mechanism to save HL, so there is no way to save the other registers and flags via HL. Because of this, some sort of external memory device, such as a hardware stack or a pair of read/write registers, must be attached to the 8008 via the I/O ports to help save the state of the 8008.[20]
The Intel 8080 is Intel's second 8-bit microprocessor. Introduced in April 1974, the 8080 was an enhanced successor to the earlier Intel 8008 microprocessor, although without binary compatibility.[3] Originally intended for use in embedded systems such as calculators, cash registers, computer terminals, and industrial robots,[4] its robust performance soon led to adoption in a broader range of systems, ultimately helping to launch the microcomputer industry.
Several key design choices contributed to the 8080's success. Its 40-pin package simplified interfacing compared to the 8008's 18-pin design, enabling a more efficient data bus. The transition to NMOS technology provided faster transistor speeds than the 8008's PMOS while also simplifying interfacing by making it TTL-compatible. An expanded instruction set and a full 16-bit address bus allowed the 8080 to access up to 64 KB of memory, quadrupling the capacity of its predecessor. A broader selection of support chips further enhanced its functionality. Many of these improvements stemmed from customer feedback, as designer Federico Faggin and others at Intel heard about shortcomings in the 8008 architecture.
The 8080 found its way into early personal computers such as the Altair 8800 and subsequent S-100 bus systems, and it served as the original target CPU for the CP/M operating system. It also directly influenced the later x86 architecture, which was designed so that its assembly language closely resembled that of the 8080, permitting many instructions to map directly from one to the other.[5]
Originally operating at a clock rate of 2 MHz, with common instructions taking between 4 and 11 clock cycles, the 8080 was capable of executing several hundred thousand instructions per second. Later, two faster variants, the 8080A-1 and 8080A-2, offered improved clock speeds of 3.125 MHz and 2.63 MHz, respectively.[6] In most applications, the processor was paired with two support chips, the 8224 clock generator/driver and the 8228 bus controller, to manage its timing and data flow.
Microprocessor customers were reluctant to adopt the 8008 because of limitations such as the single addressing mode, low clock speed, low pin count, and small on-chip stack, which restricted the scale and complexity of software. There were several proposed designs for the 8080, ranging from simply adding stack instructions to the 8008 to a complete departure from all previous Intel architectures.[7]The final design was a compromise between the proposals.
The conception of the 8080 began in the summer of 1971, when Intel wrapped up development of the 4004 and was still working on the 8008. After rumors about the "CPU on a chip" came out, Intel started to see interest in the microprocessor from all sorts of customers. At the same time, Federico Faggin – who led the design of the 4004 and became the primary architect of the 8080 – was giving technical seminars on both of the aforementioned microprocessors and visiting customers. He found that they were complaining about the architecture and performance of those microprocessors, especially the 8008, as its speed of 0.5 MHz was "not adequate."[7]
Faggin later proposed the chip to Intel's management and pushed for its implementation in the spring of 1972, as development of the 8008 was wrapping up. However, much to his surprise and frustration, Intel did not approve the project. Faggin says that Intel wanted to see how the market would react to the 4004 and 8008 first, while others noted the problems Intel was having getting its latest generation of memory chips out the door and wanted to focus on that. As a result, Intel did not approve the project until the fall of that year.[7] Faggin hired Masatoshi Shima, who had helped him design the logic of the 4004, from Japan in November 1972. Shima did the detailed design under Faggin's direction,[8] using the design methodology for random logic with silicon gate that Faggin had created for the 4000 family and the 8008.
The 8080 was explicitly designed to be a general-purpose microprocessor for a larger number of customers. Much of the development effort was spent trying to integrate the functionalities of the 8008's supplemental chips into one package. It was decided early in development that the 8080 was not to be binary-compatible with the 8008, instead opting for source compatibility once run through a transpiler, to allow new software to not be subject to the same restrictions as the 8008. For the same reason, as well as to expand the capabilities of stack-based routines and interrupts, the stack was moved to external memory.
Noting the specialized use of general-purpose registers by programmers in mainframe systems, Faggin, with Shima and Stanley Mazor, decided the 8080's registers would be specialized, with register pairs having different sets of uses.[9] This also allowed the engineers to use transistors more effectively for other purposes.
Shima finished the layout in August 1973. Production of the chip began in December of that year.[7] After the development of NMOS logic fabrication, a prototype of the 8080 was completed in January 1974. It had a flaw: driving it with standard TTL devices increased the ground voltage, because high current flowed into the narrow ground line. Intel had already produced 40,000 units of the 8080 at the direction of the sales section before Shima characterized the prototype. After working out some typical last-minute issues, Intel introduced the product in March 1974.[7] It was released a month later as requiring low-power Schottky TTL (LS TTL) devices. The 8080A fixed this flaw.[10]
Intel offered an instruction set simulator for the 8080 named INTERP/80 to run compiled PL/M programs. It was written in FORTRAN IV by Gary Kildall while he worked as a consultant for Intel.[11][12]
Only one patent was issued on the 8080; it names Federico Faggin, Masatoshi Shima, and Stanley Mazor.
The Intel 8080 is the successor to the 8008. It uses the same basic instruction set and register model as the 8008, although it is neither source code compatible nor binary code compatible with its predecessor. Every instruction in the 8008 has an equivalent instruction in the 8080. The 8080 also adds 16-bit operations to its instruction set. Whereas the 8008 required the use of the HL register pair to indirectly access its 14-bit memory space, the 8080 added addressing modes to allow direct access to its full 16-bit memory space. The internal 7-level push-down call stack of the 8008 was replaced by a dedicated 16-bit stack-pointer (SP) register. The 8080's 40-pin DIP packaging permits it to provide a 16-bit address bus and an 8-bit data bus, enabling access to 64 KiB (2^16 bytes) of memory.
The processor has seven 8-bit registers (A, B, C, D, E, H, and L), where A is the primary 8-bit accumulator. The other six registers can be used as either individual 8-bit registers or in three 16-bit register pairs (BC, DE, and HL, referred to as B, D, and H in Intel documents), depending on the particular instruction. Some instructions also enable the HL register pair to be used as a (limited) 16-bit accumulator. A pseudo-register, M, which refers to the dereferenced memory location pointed to by HL, can be used almost anywhere other registers can be used. The 8080 has a 16-bit stack pointer to memory, replacing the 8008's internal stack, and a 16-bit program counter.
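The pairing scheme can be sketched in Python. This is an illustrative model only, not anything from Intel's documentation; the class and method names are invented for this example. The convention shown (high byte in the first-named register) matches the BC/DE/HL pairs described above.

```python
# Illustrative model of 8080 register pairing (names invented here).
# Six of the seven 8-bit registers can be addressed as three 16-bit
# pairs, with the first-named register holding the high byte.
class Regs8080:
    def __init__(self):
        self.r = {n: 0 for n in "ABCDEHL"}

    def pair(self, name):            # name in {"BC", "DE", "HL"}
        hi, lo = name
        return (self.r[hi] << 8) | self.r[lo]

    def set_pair(self, name, value):
        hi, lo = name
        self.r[hi] = (value >> 8) & 0xFF
        self.r[lo] = value & 0xFF

regs = Regs8080()
regs.set_pair("HL", 0x1234)          # H gets 0x12, L gets 0x34
assert regs.r["H"] == 0x12 and regs.r["L"] == 0x34
assert regs.pair("HL") == 0x1234
```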
The processor maintains internal flag bits (a status register), which indicate the results of arithmetic and logical instructions. Only certain instructions affect the flags. The flags are:
The carry bit can be set or complemented by specific instructions. Conditional-branch instructions test the various flag status bits. The accumulator and the flags together are called the PSW, or program status word. The PSW can be pushed onto or popped from the stack as a unit.
As with many other 8-bit processors, all instructions are encoded in one byte (including register numbers, but excluding immediate data), for simplicity. Some can be followed by one or two bytes of data, which can be an immediate operand, a memory address, or a port number. Like more advanced processors, it has automatic CALL and RET instructions for multi-level procedure calls and returns (which can even be conditionally executed, like jumps) and instructions to save and restore any 16-bit register pair on the machine stack. Eight one-byte call instructions (RST) for subroutines exist at the fixed addresses 00h, 08h, 10h, ..., 38h. These are intended to be supplied by external hardware in order to invoke a corresponding interrupt service routine, but are also often employed as fast system calls. The slowest instruction is XTHL, which exchanges the register pair HL with the value stored at the address indicated by the stack pointer.
All 8-bit operations with two operands can only be performed on the 8-bit accumulator (the A register). The other operand can be either an immediate value, another 8-bit register, or a memory byte addressed by the 16-bit register pair HL. Increments and decrements can be performed on any 8-bit register or an HL-addressed memory byte. Direct copying is supported between any two 8-bit registers and between any 8-bit register and an HL-addressed memory byte. Due to the regular encoding of the MOV instruction (using a quarter of available opcode space), there are redundant codes to copy a register into itself (MOV B,B, for instance), which are of little use except for delays. However, the systematic opcode for MOV M,M is instead used to encode the halt (HLT) instruction, halting execution until an external reset or interrupt occurs.
Although the 8080 is generally an 8-bit processor, it has limited abilities to perform 16-bit operations. Any of the three 16-bit register pairs (BC, DE, or HL, referred to as B, D, and H in Intel documents) or SP can be loaded with an immediate 16-bit value (using LXI), incremented or decremented (using INX and DCX), or added to HL (using DAD). By adding HL to itself, it is possible to achieve the same result as a 16-bit arithmetic left shift with one instruction. The only 16-bit instruction that affects any flag is DAD, which sets the CY (carry) flag in order to allow for programmed 24-bit or 32-bit arithmetic (or larger), needed to implement floating-point arithmetic. BC, DE, HL, or PSW can be copied to and from the stack using PUSH and POP. A stack frame can be allocated using DAD SP and SPHL. A branch to a computed pointer can be executed with PCHL. LHLD loads HL from directly addressed memory and SHLD stores HL likewise. The XCHG[14] instruction exchanges the values of the HL and DE register pairs. XTHL exchanges the last item pushed on the stack with HL.
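The doubling trick can be illustrated in Python. This is a sketch of DAD's add-with-carry-out behavior under the description above, not Intel code: adding HL to itself shifts every bit left by one, and the bit shifted out of position 15 lands in CY.

```python
# Sketch of DAD: 16-bit add into HL, with CY receiving the
# overflow out of bit 15. DAD H (HL + HL) acts as a 16-bit
# arithmetic left shift.
def dad(hl, operand):
    total = hl + operand
    cy = 1 if total > 0xFFFF else 0
    return total & 0xFFFF, cy

hl, cy = dad(0x4000, 0x4000)   # DAD H with HL = 0x4000
assert (hl, cy) == (0x8000, 0)
hl, cy = dad(0x8000, 0x8000)   # shifting bit 15 out sets the carry
assert (hl, cy) == (0x0000, 1)
```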
then, if A[7:4] > 9 or CY = 1, then A ← A + 0x60
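The surviving line above is the second correction step of the 8080's decimal adjust (DAA) instruction; the first step, adjusting the low nibble, is not preserved in this excerpt. As an illustration, here is a Python sketch of the conventional two-step adjustment after a BCD addition (the function name and argument layout are invented for this example):

```python
# Sketch of the conventional two-step decimal adjust (DAA)
# applied after a BCD addition. a is the 8-bit accumulator,
# cy the carry flag, ac the auxiliary (half) carry flag.
def daa(a, cy, ac):
    # Step 1: if the low nibble exceeds 9, or AC is set, add 0x06.
    if (a & 0x0F) > 9 or ac:
        a += 0x06
    # Step 2: if the high nibble exceeds 9, or CY is set, add 0x60
    # and set the carry.
    if (a >> 4) > 9 or cy:
        a += 0x60
        cy = 1
    return a & 0xFF, cy

# Example: 0x19 + 0x28 = 0x41 in binary; DAA corrects it to BCD 0x47
a = (0x19 + 0x28) & 0xFF
ac = 1 if ((0x19 & 0x0F) + (0x28 & 0x0F)) > 0x0F else 0
assert daa(a, 0, ac) == (0x47, 0)       # 19 + 28 = 47 in decimal
```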
The 8080 supports 256 input/output (I/O) ports,[15] accessed via dedicated I/O instructions taking port addresses as operands.[16] This I/O mapping scheme is regarded as an advantage, as it frees up the processor's limited address space. Many CPU architectures instead use so-called memory-mapped I/O (MMIO), in which a common address space is used for both RAM and peripheral chips. This removes the need for dedicated I/O instructions, although a drawback in such designs may be that special hardware must be used to insert wait states, as peripherals are often slower than memory. However, in some simple 8080 computers, I/O is indeed addressed as if it were memory cells, "memory-mapped", leaving the I/O instructions unused. I/O addressing can also sometimes employ the fact that the processor outputs the same 8-bit port address to both the lower and the higher address byte (i.e., IN 05h would put the address 0505h on the 16-bit address bus). Similar I/O-port schemes are used in the backward-compatible Zilog Z80 and Intel 8085, and in the closely related x86 microprocessor families.
One of the bits in the processor state word (see below) indicates that the processor is accessing data from the stack. Using this signal, it is possible to implement a separate stack memory space. This feature is seldom used.
For more advanced systems, at the beginning of each machine cycle, the processor places an eight-bit status word on the data bus. This byte contains flags that determine whether memory or an I/O port is accessed and whether it is necessary to handle an interrupt.
The interrupt system state (enabled or disabled) is also output on a separate pin. For simple systems, where interrupts are not used, it is possible to find cases where this pin is used as an additional single-bit output port (the popular Radio-86RK computer made in the Soviet Union, for instance).
A typical 8080/8085 assembler example is a subroutine named memcpy that copies a block of data bytes of a given size from one location to another. The data block is copied one byte at a time, and the data movement and looping logic utilizes 16-bit operations.
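The assembly listing itself is not reproduced in this excerpt. As a hedged illustration, here is a Python model of the loop logic just described, under the usual register-pair convention for this routine (an assumption for this sketch): BC holds the byte count, DE the source address, and HL the destination address, all updated with 16-bit wraparound arithmetic.

```python
# Illustrative Python model of the memcpy loop described above
# (the original 8080 assembly listing is not reproduced here).
# Assumed convention: BC = byte count, DE = source address,
# HL = destination address; the comments name the 8080
# instructions each line stands in for.
def memcpy(mem, dst, src, count):
    hl, de, bc = dst, src, count
    while bc != 0:                 # MOV A,B / ORA C / RZ (return if BC == 0)
        mem[hl] = mem[de]          # LDAX D / MOV M,A
        de = (de + 1) & 0xFFFF     # INX D
        hl = (hl + 1) & 0xFFFF     # INX H
        bc = (bc - 1) & 0xFFFF     # DCX B (does not affect flags)
    return mem

mem = [0] * 0x100
mem[0x10:0x14] = [1, 2, 3, 4]
memcpy(mem, 0x40, 0x10, 4)
assert mem[0x40:0x44] == [1, 2, 3, 4]
```

Because DCX does not set flags on the real chip, the loop's exit test must OR the two count bytes together, which is what the MOV A,B / ORA C / RZ sequence in the comment stands for.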
The address bus has its own 16 pins, and the data bus has 8 pins, both usable without any multiplexing. Using the two additional pins (read and write signals), simple microprocessor devices can be assembled very easily. Only the separate I/O space, interrupts, and DMA require additional chips to decode the processor pin signals. However, the pin load capacity is limited; even simple computers often require bus amplifiers.
The processor needs three power sources (−5, +5, and +12 V) and two non-overlapping high-amplitude synchronizing signals. However, at least the late Soviet version КР580ВМ80А was able to work with a single +5 V power source, the +12 V pin being connected to +5 V and the −5 V pin to ground.
The pin-out table, from the chip's accompanying documentation, describes the pins as follows:
A key factor in the success of the 8080 was the broad range of support chips available, providing serial communications, counter/timing, input/output, direct memory access, and programmable interrupt control amongst other functions:
The 8080 integrated circuit has an NMOS design, which employed non-saturated enhancement-mode transistors as loads,[18][19] demanding supplementary voltage levels (+12 V and −5 V) alongside the standard TTL-compatible +5 V.
It was manufactured in a silicon-gate process using a minimal feature size of 6 μm. A single layer of metal is used to interconnect the approximately 4,500 transistors[20] in the design, but the higher-resistance polysilicon layer, which required higher voltage for some interconnects, forms the transistor gates. The die size is approximately 20 mm².
The 8080 was used in many early microcomputers, such as the MITS Altair 8800 Computer, the Processor Technology SOL-20 Terminal Computer, and the IMSAI 8080 Microcomputer, forming the basis for machines running the CP/M operating system (the later, almost fully compatible and more capable Zilog Z80 processor would capitalize on this, with the Z80 and CP/M becoming the dominant CPU and OS combination of the period c. 1976 to 1983, much as the x86 and DOS did for the PC a decade later).
In 1979, even after the introduction of the Z80 and 8085 processors, five manufacturers of the 8080 were selling an estimated 500,000 units per month at a price around $3 to $4 each.[21]
The first single-board microcomputers, such as the MYCRO-1 and the dyna-micro/MMD-1 (see single-board computer), were based on the Intel 8080. One of the early uses of the 8080 was made in the late 1970s by Cubic-Western Data of San Diego, California, in its Automated Fare Collection Systems custom designed for mass transit systems around the world. An early industrial use of the 8080 was as the "brain" of the DatagraphiX Auto-COM (Computer Output Microfiche) line of products, which takes large amounts of user data from reel-to-reel tape and images it onto microfiche. The Auto-COM instruments also include an entire automated film cutting, processing, washing, and drying sub-system.
Several early video arcade games were built around the 8080 microprocessor. The first commercially available arcade video game to incorporate a microprocessor was Gun Fight, Midway Games' 8080-based reimplementation of Taito's discrete-logic Western Gun, which was released in November 1975.[22][23][24][25] (A pinball machine which incorporated a Motorola 6800 processor, The Spirit of '76, had already been released the previous month.[26][27]) The 8080 was then used in later Midway arcade video games[28] and in Taito's 1978 Space Invaders, one of the most successful and well-known of all arcade video games.[29][30]
Zilog introduced the Z80, which has a compatible machine-language instruction set and initially used the same assembly language as the 8080, but for legal reasons, Zilog developed a syntactically different (but code-compatible) alternative assembly language for the Z80. At Intel, the 8080 was followed by the compatible and electrically more elegant 8085.
Later, Intel issued the assembly-language-compatible (but not binary-compatible) 16-bit 8086 and then the 8/16-bit 8088, which was selected by IBM for its new PC, launched in 1981. Later, NEC made the NEC V20 (an 8088 clone with Intel 80186 instruction set compatibility), which also supports an 8080 emulation mode. This is also supported by NEC's V30 (a similarly enhanced 8086 clone). Thus, the 8080, via its instruction set architecture (ISA), made a lasting impact on computer history.
A number of processors compatible with the Intel 8080A were manufactured in the Eastern Bloc: the KR580VM80A (initially marked as КР580ИК80) in the Soviet Union, the MCY7880[31] made by Unitra CEMI in Poland, the MHB8080A[32] made by TESLA in Czechoslovakia, the 8080APC[32] made by Tungsram/MEV in Hungary, and the MMN8080[32] made by Microelectronica Bucharest in Romania.
As of 2017, the 8080 is still in production at Lansdale Semiconductors.[33]
The 8080 also changed how computers were created. When the 8080 was introduced, computer systems were usually created by computer manufacturers such as Digital Equipment Corporation, Hewlett-Packard, or IBM. A manufacturer would produce the whole computer, including processor, terminals, and system software such as compilers and operating system. The 8080 was designed for almost any application except a complete computer system. Hewlett-Packard developed the HP 2640 series of smart terminals around the 8080. The HP 2647 is a terminal which runs the programming language BASIC on the 8080. Microsoft's founding product, Microsoft BASIC, was originally programmed for the 8080.
The 8080 and 8085 gave rise to the 8086, which was designed as a source-code-compatible, albeit not binary-compatible, extension of the 8080.[34] This design, in turn, later spawned the x86 family of chips, which continue to be Intel's primary line of processors. Many of the 8080's core machine instructions and concepts survive in the widespread x86 platform. Examples include the registers named A, B, C, and D and many of the flags used to control conditional jumps. 8080 assembly code can still be directly translated into x86 instructions, since all of its core elements are still present.
https://en.wikipedia.org/wiki/INTERP/80
The Little Man Computer (LMC) is an instructional model of a computer, created by Dr. Stuart Madnick in 1965.[1] The LMC is generally used to teach students, because it models a simple von Neumann architecture computer, which has all of the basic features of a modern computer. It can be programmed in machine code (albeit in decimal rather than binary) or assembly code.[2][3][4]
The LMC model is based on the concept of a little man shut in a closed mail room (analogous to a computer in this scenario). At one end of the room, there are 100 mailboxes (memory), numbered 0 to 99, that can each contain a 3-digit instruction or data (ranging from 000 to 999). Furthermore, there are two mailboxes at the other end labeled INBOX and OUTBOX which are used for receiving and outputting data. In the center of the room, there is a work area containing a simple two-function (addition and subtraction) calculator known as the Accumulator and a resettable counter known as the Program Counter. The Program Counter holds the address of the next instruction the Little Man will carry out. This Program Counter is normally incremented by 1 after each instruction is executed, allowing the Little Man to work through a program sequentially. Branch instructions allow iteration (loops) and conditional programming structures to be incorporated into a program. The latter is achieved by setting the Program Counter to a non-sequential memory address if a particular condition is met (typically the value stored in the accumulator being zero or positive).
As specified by the von Neumann architecture, any mailbox (signifying a unique memory location) can contain either an instruction or data. Care therefore needs to be taken to stop the Program Counter from reaching a memory address containing data, or the Little Man will attempt to treat it as an instruction. One can take advantage of this by writing instructions into mailboxes that are meant to be interpreted as code, to create self-modifying code. To use the LMC, the user loads data into the mailboxes and then signals the Little Man to begin execution, starting with the instruction stored at memory address zero. Resetting the Program Counter to zero effectively restarts the program, albeit in a potentially different state.
To execute a program, the little man performs these steps:
While the LMC does reflect the actual workings of binary processors, the simplicity of decimal numbers was chosen to minimize the complexity for students who may not be comfortable working in binary/hexadecimal.
Some LMC simulators are programmed directly using 3-digit numeric instructions and some use 3-letter mnemonic codes and labels. In either case, the instruction set is deliberately very limited (typically about ten instructions) to simplify understanding. If the LMC uses mnemonic codes and labels then these are converted into 3-digit numeric instructions when the program is assembled.
The table below shows a typical numeric instruction set and the equivalent mnemonic codes.
This program (instruction 901 to instruction 000) is written using numeric codes only. The program takes two numbers as input and outputs the difference. Notice that execution starts at Mailbox 00 and finishes at Mailbox 07. The disadvantages of programming the LMC using numeric instruction codes are discussed below.
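A minimal interpreter makes the numeric encoding concrete. The following Python sketch is illustrative: both the interpreter and the reconstructed listing are assumptions, based only on the opcodes in the table above and the mailbox addresses (00 to 07, with data at 08 and 09) mentioned in this section.

```python
# Minimal LMC interpreter (a sketch, not a reference implementation).
# Opcodes: 1xx ADD, 2xx SUB, 3xx STA, 5xx LDA, 6xx BRA, 7xx BRZ,
# 8xx BRP, 901 INP, 902 OUT, 000 HLT.
def run_lmc(mailboxes, inputs):
    mem = mailboxes + [0] * (100 - len(mailboxes))
    pc, acc, out, inp = 0, 0, [], list(inputs)
    while True:
        instr = mem[pc]; pc += 1
        op, addr = divmod(instr, 100)
        if instr == 0: break                       # HLT
        elif instr == 901: acc = inp.pop(0)        # INP
        elif instr == 902: out.append(acc)         # OUT
        elif op == 1: acc = (acc + mem[addr]) % 1000
        elif op == 2: acc = acc - mem[addr]        # SUB
        elif op == 3: mem[addr] = acc              # STA
        elif op == 5: acc = mem[addr]              # LDA
        elif op == 6: pc = addr                    # BRA
        elif op == 7: pc = addr if acc == 0 else pc  # BRZ
        elif op == 8: pc = addr if acc >= 0 else pc  # BRP
    return out

# Mailboxes 00-07: INP, STA 08, INP, STA 09, LDA 08, SUB 09, OUT, HLT
program = [901, 308, 901, 309, 508, 209, 902, 0]
assert run_lmc(program, [7, 4]) == [3]             # 7 - 4 = 3
```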
LOAD the first value back into the calculator (erasing whatever was there)
Assembly language is a low-level programming language that uses mnemonics and labels instead of numeric instruction codes. Although the LMC only uses a limited set of mnemonics, the convenience of using a mnemonic for each instruction is made apparent from the assembly language of the same program shown below: the programmer is no longer required to memorize a set of anonymous numeric codes and can now program with a set of more memorable mnemonic codes. If the mnemonic is an instruction that involves a memory address (either a branch instruction or loading/saving data), then a label is used to name the memory address.
Without labels, the programmer is required to manually calculate mailbox (memory) addresses. In the numeric code example, if a new instruction were to be inserted before the final HLT instruction, then that HLT instruction would move from address 07 to address 08 (address labelling starts at address location 00). Suppose the user entered 600 as the first input. The instruction 308 would mean that this value would be stored at address location 08 and overwrite the 000 (HLT) instruction. Since 600 means "branch to mailbox address 00", the program, instead of halting, would get stuck in an endless loop.
To work around this difficulty, most assembly languages (including the LMC's) combine the mnemonics with labels. A label is simply a word that is used either to name a memory address where an instruction or data is stored, or to refer to that address in an instruction.
When a program is assembled:
In the assembly language example, which uses mnemonics and labels, if a new instruction were inserted before the final HLT instruction, then the address location labelled FIRST would now be at memory location 09 rather than 08, and the STA FIRST instruction would be converted to 309 (STA 09) rather than 308 (STA 08) when the program was assembled.
Labels are therefore used to:
The program below will take a user input, and count down to zero.
The program below will take a user input, square it, output the answer and then repeat. Entering a zero will end the program. (Note: an input that results in a value greater than 999 will have undefined behaviour, due to the 3-digit number limit of the LMC.)
Note: If there is no data after a DAT statement then the default value 0 is stored in the memory address.
In the example above, [BRZ ENDLOOP] depends on undefined behaviour, as COUNT-VALUE can be negative, after which the ACCUMULATOR value is undefined, resulting in BRZ either branching or not (ACCUMULATOR may be zero, or wrapped around). To make the code compatible with the specification, replace:
with the following version, which evaluates VALUE-COUNT instead of COUNT-VALUE, making sure the accumulator never underflows:
Another example is a quine, printing its own machine code (printing source is impossible because letters cannot be output):
This quine works usingself-modifying code. Position 0 is incremented by one in each iteration, outputting that line's code, until the code it is outputting is 1, at which point it branches to the ONE position. The value at the ONE position has 0 as opcode, so it is interpreted as a HALT/COB instruction.
https://en.wikipedia.org/wiki/Little_man_computer
MikroSim is an educational computer program, running on the Microsoft Windows operating system, for hardware-independent explanation of the general functioning and behaviour of a virtual processor. Devices such as miniaturized calculators, microcontrollers, microprocessors, and computers can be explained using custom-developed instruction code at the register transfer level, controlled by sequences of microinstructions (microcode). On this basis it is possible to develop an instruction set to control a virtual application board at a higher level of abstraction.
Initially, MikroSim was developed as a processor simulation program to be widely available in education. Since working with MikroSim begins with microcode development, defined as writing sequences of microinstructions for a virtual control unit, the software is first of all a microcode simulator with various levels of abstraction, covering the abilities of CPU simulators and instruction set emulators. In the current revision, a microcode-controlled virtual application can operate on its own coded instruction sets. MikroSim treats typical, well-known concepts of computer engineering, such as computer architecture and instruction set architecture, in a hardware-independent way; these concepts have been established since the early days of the information era and are still valid. In this fashion, the simulation software gains a timeless didactic benefit without being restricted to particular developments of the past or future. The detailed documentation, the bilingual (German and English) graphical user interface (GUI), and the upward compatibility provided to some extent by Microsoft Windows are reasons it has been a well-established e-learning tool in the field of computer engineering since 1992.
The software is based on a version written in Turbo Pascal and compiled for the MS-DOS operating system, which was used for teaching computer engineering and computer science at the Philipps-University Marburg (Germany) until 1992. The concept was picked up by Martin Perner during his study of physics (1990-95) in summer 1992, revised, and converted into a Windows application compiled with Microsoft Visual Basic and running on Windows 3.1x. In doing so, a simulator with substantial conceptual improvements arose, exploiting the novel functionality of the MS Windows GUI to support the composition of microcode and the traceability of its instructional effects. The enhancement of the e-learning tool under Windows was supported and promoted by the Fachbereich Mathematik/Informatik of the University of Marburg by Heinz-Peter Gumm until the end of 1995.
The simulator received the European Academic Software Award 1994 in the computer science category in Heidelberg (Germany) in November 1994. In March 1995, it was presented at the computer exhibition CeBIT '95 in Hannover at the exhibit of the Hessische Hochschulen. Between 1995 and 2000, the simulator was published as Mikrocodesimulator MikroSim 1.2 without any significant improvements. At this time, the tool received an award of 1000 ECU from the European Union in conjunction with the European Year of Lifelong Learning 1996. In 1997, the software was presented at the Multimedia Transfer '97 contest in connection with the LearnTec '97 exhibition.[1] In its penultimate revision, the simulator was published as Mikrocodesimulator MikroSim 2000, optimized for 32-bit operation under MS Windows 95.
Between 2008 and 2009, the simulator concept was revised, reworked, and thoughtfully extended. It received wide-ranging improvements and extensions without touching the successful conceptual core of the microcode simulation. For this purpose, advantage is taken of the performance of today's computing systems, determined by the operating system and the underlying computational power, to extend MikroSim's simulation possibilities up to the stage of a virtual application board. For the sake of unrestricted compatibility and the widest possible distribution, MikroSim is compiled and optimized for MS Windows XP as a 32-bit version. The program runs on all 32- and 64-bit editions of MS Windows Vista and MS Windows 7; no special XP compatibility mode is needed. Since January 2010, the simulator has been distributed as Mikrocodesimulator MikroSim 2010 by 0/1-SimWare.
The Windows application allows for the gradual establishment of a virtual application whose functionality is predetermined and thus unchangeable.
In exploration mode, the operating principle and control of newly added components, as influenced by one microcode instruction within a cycle, can be evaluated. The width of MikroSim's microinstructions is 49 bits. A single microinstruction is executed in three phases of a 3-phase clock. The partial phases are referred to as the "GET", "CALCULATE", and "PUT" phases: a register value is fetched, a 32-bit calculation is executed, and finally the result is stored in an internal CPU register.
In simulation mode, seamlessly executed microinstructions control the central processing unit of the simulator in subsequent cycles. For this, the intrinsic ability of one microinstruction to address the next microinstruction in the control store is utilized. The control store holding the microinstruction set (commonly referred to as "microcode") comprises 1024 microinstruction words, each 49 bits wide.
Using the structuring opportunities of the control store for addressable scheduling of the microcode, and implementing a cyclically operating machine-code interpreter that is itself programmed in microcode, allows the implementation of individual micro-operation sequences, known as machine instructions. The microcode can be regarded as firmware for MikroSim that can be modified, stored in, and reloaded from a microcode ROM file.
Within a microinstruction execution cycle, the CPU as well as an input/output controller is connected to an external 16 KB random-access memory device (RAM). Via the input/output controller, communication with virtual input and output devices is supported through Direct Memory Access (DMA), Inter-Integrated Circuit (I2C) connection, and interrupt request (IRQ) functionality. An output port, a display, a timer, an event trigger, a digital-to-analog converter, a keyboard, and a data input/output channel are provided as virtual IC devices for didactically explaining communication with external devices.
The microcode simulator uses eight freely usable registers, each 32 bits wide, connected to a 32-bit arithmetic logic unit (ALU). The register contents can be regarded as signed or unsigned integer values, or as 32-bit floating-point numbers. The register contents can be easily viewed, interpreted, and modified bitwise with an integrated number editor.
The 32-bit ALU is the key unit of the central processing unit. It supports 128 different basic arithmetic operations for integer operation, interrupt control, and floating-point arithmetic.
The didactic approach to floating-point calculation, introduced in a comparable manner as early as the 1940s by Konrad Zuse, uses elemental sublevel operations on exponent and mantissa involved in the key operations of addition/subtraction and multiplication/division.
A set of powerful 32-bit floating-point arithmetic commands on mantissa and exponent for the basic operations and elementary analytical functions is provided, as realized in today's mathematical coprocessors. In the simulation with MikroSim, it is ideally assumed that each supported ALU arithmetic operation takes only a fixed computing duration, independent of the circuit complexity realistically needed in practice.
The execution of micro instructions can be operated on various simulation levels with different temporal resolution:
With various additional options, visual CPU activities can be suppressed for the benefit of increased processing speed when control of the application by machine programming is in the foreground. The performance index monitor provided with the simulator enables the user to benchmark the processing performance of MikroSim and set it in relation to the computing power of the host hardware, measured in floating-point operations per second (FLOPS) and instructions per second (IPS).
With the "Basic Assembler Tool for MikroSim" MikroBAT, simple programs can be developed inassembler programming language. Here, all supportedmnemonicsof the assembler programming language are determined by the user's self-created machine's instruction set on micro instruction level. The add-on tool is able to translate the assembly language program intomachine codeand data and transferring the binary code into the external RAM for subsequent simulations. Together with MikroBAT the microcode simulator MikroSim supports the didactical introduction of teaching aspects in technical computer science from a switch-controlled calculating machine to an assembler programmable application.
https://en.wikipedia.org/wiki/MikroSim
OVPsim is a multiprocessor platform emulator (often called a full-system simulator) used to run unchanged production binaries of the target hardware. It has public APIs allowing users to create their own processor, peripheral, and platform models. Various models are available as open source.[1] OVPsim is a key component of the Open Virtual Platforms initiative (OVP),[2] an organization created to promote the use of open virtual platforms for embedded software development. OVPsim requires OVP registration to download.
OVPsim is developed and maintained by Imperas.[3] The core simulation platform is proprietary software; it is available free of charge for non-commercial usage. Commercial usage requires a low-cost license from Imperas to cover maintenance.
Various processor, peripheral, and platform models are available as free software under the Apache License version 2.0.
There are three main components of OVP: open-source models, the fast OVPsim simulator, and modeling APIs. These components are designed to make it easy to assemble multi-core heterogeneous or homogeneous platforms with complex memory hierarchies, cache systems, and layers of embedded software that can run at hundreds of MIPS on standard desktop PCs. OVPsim is considered instruction-accurate, but not cycle-accurate. There are many examples of components, and of complete virtual platforms that can boot a Linux kernel in under 5 seconds, at the OVP homepage.
Within OVP there are several different model categories. These models are provided as pre-compiled object code and, in some cases, as source files. OVPsim no longer supplies source code for the ARM and MIPS processor models. Currently there are processor models of ARM (processors using the ARMv4, ARMv5, ARMv6, ARMv7, and ARMv8 instruction sets) up to the ARM Cortex-A72MPx4 (including multi-cluster ARMv8 models with GICv3), Imagination MIPS (processors using the MIPS32, MIPS64, microMIPS, nanoMIPS, and MIPS R6 instruction sets) up to the microAptiv, interAptiv, proAptiv, and Warrior cores, Synopsys Virage ARC600/ARC700 and ARC EM series, Renesas v850, RH850, RL78, and m16c, PowerPC, Altera Nios II, Xilinx MicroBlaze, RISC-V (models using 32-bit RV32I, RV32M, RV32IM, RV32A, RV32IMA, RV32IMAC, RV32F, RV32D, RV32E, RV32EC, RV32C, RV32G, RV32GC, RV32GCN, RV32IMAFD and 64-bit RV64I, RV64M, RV64IMAC, RV64F, RV64D, RV64C, RV64G, RV64GC, RV64GCN, RV64IMAFD ISA subsets), Andes Technology N25/NX25, N25F/NX25F, A25/AX25, and A25F/AX25F, Microsemi CoreRISCV/MiV-RV32IMA, SiFive E31, E51, U54, U54-MC, and Freedom U540, Codasip Series 1, 3, 5, and 7 RISC-V cores, the Intel NiosV RISC-V core, Texas Instruments TMS320 DSPs, and the OpenRISC family. The OpenHW Group uses OVPsim as the golden reference for their open-source RISC-V CV32E40 and CV32E20 cores. There are also models of many different types of system components, including RAM, ROM, cache, and bridge. There are peripheral models such as Ethernet MAC, USB, DMA, UART, and FIFO. Several different pre-built platforms are available, including the most common operating systems:[4] ucLinux, Linux, Android, FreeRTOS, Nucleus, and Micrium.
One of the main uses of the OVP simulation infrastructure is the ability to create and simulate custom-built models—either from scratch, or by using one of the open-source models as a starting point. The OVP APIs are tailored to different model types: processors, behavioral models of peripherals, and platforms. There are over 100 model variants available to download in source form.
The OVPsim simulator is available as an OVP reference and is free for non-commercial use. The simulator uses dynamic binary translation technology to achieve very high simulation speeds: more than a billion simulated instructions per second is possible, in some cases on regular desktop PCs. OVPsim is available for x86 Windows and Linux hosts.
OVPsim comes with a GDB RSP (Remote Serial Protocol) interface to allow applications running on simulated processors to be debugged with any standard debugger that supports the GDB RSP interface. OVPsim also comes with the Imperas iGui graphical debugger and an Eclipse IDE and CDT interface.
OVPsim can be encapsulated and called from within other simulation environments[5] and comes as standard with interface files for C, C++, and SystemC.[6] OVPsim includes native SystemC TLM 2.0 interface files. It is also possible to encapsulate legacy models of processors and behavioral models so that they can be used by OVPsim.
OVP models are created using C/C++ APIs. There are three main APIs: OP, VMI, and BHM/PPM.
The OP API is designed for controlling, connecting, and observing platforms. This API can be called from C, C++, or SystemC. The platform provides the basic structure of the design and creates, connects, and configures the components. The platform also specifies the address mapping and the software that is loaded onto the processors. OP makes it straightforward to specify very complex and complete platforms comprising many different processors, local and shared memories, caches, bus bridges and peripherals, together with all their address maps, interrupts, operating systems and application software.
The OP API superseded the ICM API during 2016. The ICM API is still usable for older platforms.
Processor modeling is provided by the VMI API. These API functions provide the ability to easily describe the behavior of the processor. A processor model written in C using the VMI decodes the target instruction to be simulated and translates this to native x86 instructions that are then executed on the PC. VMI can be used for modeling 8, 16, 32, and 64 bit architectures. There is an interception mechanism enabling emulation of calls to functions in the application runtime libraries (such as write, fstat etc.) without requiring modification of either the processor model or the simulated application.
Behavioral components, peripherals, and the overall environment are modeled using C code and calls to these two APIs. Underlying these APIs is an event-based scheduling mechanism that enables modeling of time, events, and concurrency. Peripheral models provide callbacks that are called when the application software running on processors modeled in the platform accesses memory locations where the peripheral is enabled.
OVPsim is being used by multiple educational establishments to provide a simulation infrastructure for research into parallel computing platforms,[7][8] hardware/software co-design,[9] and performance analysis of embedded systems,[10] and as the basis of other embedded tool developments.[citation needed] It is also leveraged for educational courses, allowing students to develop and debug application software and to create virtual platforms and new models.
A number of leading commercial organizations also use OVPsim as the basis of their product offerings. The technology was licensed by MIPS Technologies[11] to provide modeling support for their MIPS architecture embedded processor range, features in a partnership with leading processor provider ARM,[12][13] and is part of the Europractice[14] product range for general access to European universities. A version of OVPsim is used by the RISC-V Foundation's Compliance Working Group[15] as a reference simulator. Leading semiconductor companies such as Renesas have used the simulator for processor development work, as disclosed in leading electronics industry publications.[16] It was selected by NEPHRON+, an EU research project, for its software and test development environment.[17] VinChip Systems Inc. of Chennai, India used OpenOCD and OVPsim to develop what may be the first 32-bit processor developed in India.[18] The OVP models and virtual platforms form the basis for other activities being undertaken by Imperas.
|
https://en.wikipedia.org/wiki/OVPsim
|
The Saturn family of 4-bit (datapath) microprocessors was developed by Hewlett-Packard in the 1980s, first for the HP-71B handheld computer, released in 1984, and later for various HP calculators (starting with the HP-18C). It succeeded the Nut family of processors used in earlier calculators. The HP 48SX and HP 48S were the last models to use HP-manufactured Saturn processors; later models used processors manufactured by NEC. The HP 49 series initially used the Saturn CPU until the NEC fab[nb 1] could no longer manufacture the processor for technical reasons in 2003. Starting with the HP 49g+ model in 2003, the calculators switched to a Samsung S3C2410 processor with an ARM920T core (part of the ARMv4T architecture), which ran an emulator of the Saturn hardware in software. In 2000, the HP 39G and HP 40G were the last calculators introduced based on actual NEC-fabricated Saturn hardware. The last calculators introduced to use the Saturn emulator were the HP 39gs, HP 40gs and HP 50g in 2006, as well as the 2007 revision of the hp 48gII. The HP 50g was the last calculator sold by HP using this emulator; it was discontinued in 2015 after Samsung stopped production of the ARM processor on which it was based.[1][2][3]
The Saturn hardware is a nibble-serial design,[4] as opposed to its Nut predecessor, which was bit-serial.[5] Internally, the Saturn CPU has four 4-bit data buses that allow nearly one cycle per nibble performance, with one or two buses acting as a source and one or two acting as a destination.[4] The smallest addressable word is a 4-bit nibble, which can hold one binary-coded decimal (BCD) digit. Any unit of data in the registers larger than a nibble, up to 64 bits, can be operated on as a whole; however, the Saturn CPU performs the operation serially on a nibble-by-nibble basis internally.[4]
The Saturn architecture has an internal register width of 64 bits and a 20-bit address space, with memory addressed at 4-bit (nibble) granularity. Saturn ALU instructions support variable data widths, operating on one to 16 nibbles of a word. The original Saturn CPU chips provided a 4-bit external data bus, but later Saturn-based SoCs included on-chip bus conversion to an 8-bit external data bus and a 19-bit external address bus.
The Saturn architecture has four 64-bit GPRs (general-purpose registers), named A, B, C and D. In addition, there are five 64-bit "scratch" registers named R0, R1, R2, R3 and R4, which can only store data; if an ALU operation is required on data in a scratch register, the register in question must first be transferred to a GPR. Other registers include a 1-nibble "pointer" register named P, usually used to select a nibble or a range of nibbles in a GPR (or to align immediate data on a specific nibble in a GPR, with wrap-around). For memory access, there are two 20-bit data pointer registers named D0 and D1. The Saturn architecture also has a PC (program counter) register, which can interoperate with the GPRs, and an 8-level, circular, LIFO 20-bit hardware return stack named RSTK, used when a subroutine call instruction is issued. Additionally, the Saturn CPU is equipped with a 16-bit software status register named ST and a 1-nibble hardware status register named HS, which notably contains the SB ("sticky bit") flag indicating whether a binary 1 has been right-shifted off a GPR. Furthermore, the Saturn architecture has a 12-bit OUT register and a 16-bit IN register, which in the Yorke and Clarke SoCs are used to capture input from the keyboard and to control the beeper. There is also a 1-bit carry flag register.
In addition to the above, the Saturn CPU has a simple, non-prioritized interrupt system. When an interrupt occurs, the CPU finishes executing the current instruction, saves the program counter to the hardware return stack (RSTK) and jumps to nibble address 0000F.[4] The CPU also interacts with the keyboard scanning logic directly.
The following diagram depicts the registers (with each white square being 4-bits / a nibble except for the Carry flag, which is 1 bit):
Saturn 64-bit GPR register format and fields:
Data in the general-purpose registers can be accessed via fields that fall on nibble boundaries, whereas the scratch registers allow only load and store operations. The fields, as shown in the above diagram, are W (the whole 64-bit GPR), A (address, first 5 nibbles of a GPR), S (sign of mantissa, most significant nibble of a GPR), XS (exponent sign, nibble 2 of a GPR), M (mantissa, nibbles 3–14 of a GPR), X (exponent, first 3 nibbles of a GPR) and B (first byte of a GPR). In addition, there is the P field, which selects a nibble from a GPR based on the P register's 4-bit value, and the WP field, which selects nibbles 0 through the nibble selected by the P register. The 64 bits (16 nibbles) can hold BCD-coded floating-point numbers composed of a sign nibble (which is "9" if the number is negative), 12 mantissa digits and a 3-digit 10's-complement exponent stored in BCD format (±499).[6] The internal representation of BCD floating-point values is a 15-digit mantissa with one sign nibble in one register, combined with a 20-bit exponent, in 10's-complement format, in another register. The use of BCD instead of a straight binary representation is advantageous for calculators, as it avoids the rounding problems that occur in binary/decimal conversion.
The Saturn CPU's instruction and data addresses are also nibble-based. The three pointer registers (including the program counter) and address registers are 20 bits wide. Due to this, the Saturn architecture can address 1 M nibbles or, equivalently, 512 KB. Beyond that size (e.g. in the 48GX), bank switching is used.
The original HP-71B handheld computer and the HP-28C had the Saturn processor as a separate chip. In the HP 48S/SX and 48G/GX series and the HP-28S, HP-27S, HP-42S, HP-32SII and HP-20S, the Saturn CPU core is integrated as part of a more complex integrated circuit (IC) SoC.
The following is an integer implementation of a BCD decimal square root algorithm in Saturn Jazz / HP Tools assembly syntax:
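The assembly listing itself is not reproduced here. The underlying algorithm, however — the classical digit-by-digit decimal square root, a natural fit for BCD hardware — can be sketched in Python (the function name and structure are illustrative, not taken from the original listing):

```python
def isqrt_decimal(n: int) -> int:
    """Integer square root, computed digit by digit in base 10.

    This mirrors the paper-and-pencil method that BCD machines like
    the Saturn can implement: digits are processed in pairs, and each
    result digit d is the largest with (20*root + d)*d <= remainder.
    """
    digits = str(n)
    if len(digits) % 2:            # pad to whole digit pairs
        digits = "0" + digits
    root = 0
    remainder = 0
    for i in range(0, len(digits), 2):
        # Bring down the next pair of decimal digits.
        remainder = remainder * 100 + int(digits[i:i + 2])
        # Find the largest digit d with (20*root + d)*d <= remainder.
        d = 0
        while (20 * root + d + 1) * (d + 1) <= remainder:
            d += 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root
```

For example, `isqrt_decimal(144)` yields 12. Working a decimal digit at a time is exactly the granularity the Saturn's nibble-oriented BCD arithmetic favors.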
The original Saturn CPU gave its name to the entire instruction set architecture. Later chips had their own code names:
The CPU code names are inspired by members of the Lewis and Clark Expedition of 1804–1806, the first United States overland expedition to the Pacific coast and back. The virtual CPU / emulator code names were inspired by the prototype "New-Yorke" Saturn-based 8 MHz SoC that never made it to production.[12] According to one of the ACO (Australian Calculator Operation) members, "Big Apple" was derived from the code name "New-Yorke" of the prototype 8 MHz Saturn-based SoC, in a reference to New York City; hence the names "Big Apple", "Mid Apple" and "Little Apple".[12]
|
https://en.wikipedia.org/wiki/Saturn%2B
|
RPL[5] is a handheld calculator operating system and application programming language used on Hewlett-Packard's scientific graphing RPN (Reverse Polish Notation) calculators of the HP 28, 48, 49 and 50 series, but it is also usable on non-RPN calculators, such as the 38, 39 and 40 series. Internally, it was also utilized by the 17B, 18C, 19B and 27S.[7]
RPL is a structured programming language based on RPN, but equally capable of processing algebraic expressions and formulae, implemented as a threaded interpreter.[8] RPL has many similarities to Forth, both languages being stack-based, as well as to the list-based Lisp. Contrary to previous HP RPN calculators, which had a fixed four-level stack, the dynamic stack used by RPL is limited only by available RAM, with the calculator displaying an error message when running out of memory rather than silently dropping arguments off the stack as in fixed-size RPN stacks.[9]
RPL originated at HP's Corvallis, Oregon development facility in 1984 as a replacement for the previous practice of implementing the operating systems of calculators in assembly language.[7] The first calculator using it internally was the HP-18C and the first making it available to users was the HP-28C, both from 1986.[10][7] The last pocket calculator supporting RPL, the HP 50g, was discontinued in 2015.[11][12][13] However, multiple emulators of HP's RPL calculators exist that run on a range of operating systems and devices, including iOS and Android smartphones. There are also a number of community projects to recreate and extend RPL on newer calculators, like newRPL[14][15] or DB48X,[16][17] which may add features or improve performance.[18]
The internal low- to medium-level variant of RPL, called System RPL (or SysRPL), is used on some earlier HP calculators as well as the aforementioned ones, as part of their operating system implementation. In the HP 48 series this variant of RPL is not accessible to the calculator user without external tools, but in the HP 49/50 series a compiler for SysRPL is built into ROM. It is possible to cause a serious crash while coding in SysRPL, so caution must be used. The high-level User RPL (or UserRPL) version of the language is available on said graphing calculators for developing textual as well as graphical application programs. All UserRPL programs are internally represented as SysRPL programs, but use only a safe subset of the available SysRPL commands. The error checking that is part of UserRPL commands, however, makes UserRPL programs noticeably slower than equivalent SysRPL programs. The UserRPL command SYSEVAL tells the calculator to process designated parts of a UserRPL program as SysRPL code.
RPL control blocks are not strictly postfix. Although there are some notable exceptions, the control block structures appear as they would in a standard infix language. The calculator manages this by allowing the implementation of these blocks to skip ahead in the program stream as necessary.
RPL supports basic conditional testing through the IF/THEN/ELSE structure. The basic syntax of this block is:
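In outline, assuming standard UserRPL conventions (the clause names are placeholders, and the ELSE clause is optional):

```
IF condition
THEN if-true-clause
ELSE if-false-clause
END
```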
The following example tests to see if the number at the bottom of the stack is "1" and, if so, replaces it with "Equal to one":
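A minimal UserRPL sketch of such a test (standard syntax, not verified against the original listing):

```
« IF 1 == THEN "Equal to one" END »
```

Note that `1 ==` pops the number in either case; only when the number equals 1 is the string pushed in its place.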
The IF construct evaluates the condition and then tests the bottom of the stack for the result. As a result, RPL can optionally support FORTH-style IF blocks, allowing the condition to be determined before the block. If the condition is left empty, the IF statement makes no changes to the stack during condition execution and uses the existing result at the bottom of the stack for the test:
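With the comparison performed before the block, the same test can be sketched FORTH-style as:

```
« 1 == IF THEN "Equal to one" END »
```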
Postfix conditional testing may be accomplished by using the IFT ("if-then") and IFTE ("if-then-else") functions.
IFT and IFTE pop two or three objects off the stack, respectively. For IFT, the deeper of the two objects is evaluated as a Boolean and, if true, the topmost object is pushed back on the stack. IFTE takes a condition and two objects: if the condition is true, the "then" object (the deeper of the two) is pushed back on the stack; otherwise, the "else" object (the topmost) is.
The following example uses the IFT function to pop an object from the bottom of the stack and, if it is equal to 1, replaces it with "One":
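A plausible UserRPL sketch (unverified on hardware):

```
« 1 == "One" IFT »
```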
The following example uses the IFTE function to pop an object from the bottom of the stack and, if it is equal to 1, replaces it with "One". If it does not equal 1, it replaces it with the string "Not one":
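Correspondingly, with IFTE (sketch):

```
« 1 == "One" "Not one" IFTE »
```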
IFT and IFTE will evaluate a program block given as one of its arguments, allowing a more compact form of conditional logic than an IF/THEN/ELSE/END structure. The following example pops an object from the bottom of the stack, and replaces it with "One", "Less", or "More", depending on whether it is equal to, less than, or greater than 1.
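One way to write this in UserRPL (a sketch; the « … » program objects are the arguments that IFTE evaluates):

```
«
  DUP 1 ==
  « DROP "One" »
  « 1 < "Less" "More" IFTE »
  IFTE
»
```

The value is duplicated before the test so that the "else" program can still compare it against 1.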
To support more complex conditional logic, RPL provides the CASE/THEN/END structure for handling multiple exclusive tests. Only one of the branches within the CASE statement will be executed. The basic syntax of this block is:
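In outline, assuming standard UserRPL conventions (conditions and actions are placeholders; the default action before the final END is optional):

```
CASE
  condition_1 THEN action_1 END
  condition_2 THEN action_2 END
  ...
  default_action
END
```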
The following code illustrates the use of a CASE/THEN/END block. Given a letter at the bottom of the stack, it replaces it with its string equivalent or "Unknown letter":
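A sketch handling two letters (the letter set is illustrative, and the code is unverified on hardware):

```
«
  CASE
    DUP "A" == THEN "Letter A" END
    DUP "B" == THEN "Letter B" END
    "Unknown letter"
  END
  SWAP DROP
»
```

The trailing SWAP DROP discards the original letter, leaving only the result string.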
This code is identical to the following nested IF/THEN/ELSE/END block equivalent:
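A sketch of such a nested form, using an illustrative two-letter set:

```
«
  IF DUP "A" == THEN "Letter A"
  ELSE
    IF DUP "B" == THEN "Letter B"
    ELSE "Unknown letter"
    END
  END
  SWAP DROP
»
```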
RPL provides a FOR/NEXT statement for looping from one index to another. The index for the loop is stored in a temporary local variable that can be accessed in the loop. The syntax of the FOR/NEXT block is:
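In outline (names are placeholders):

```
start end FOR counter
  loop-body
NEXT
```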
The following example uses the FOR loop to sum the numbers from 1 to 10. The index variable of the FOR loop is "I":
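A sketch (the accumulator starts at 0; the result left on the stack is 55):

```
« 0 1 10 FOR I I + NEXT »
```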
The START/NEXT block is used for a simple block that runs from a start index to an end index. Unlike the FOR/NEXT loop, the looping variable is not available. The syntax of the START/NEXT block is:
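In outline:

```
start end START
  loop-body
NEXT
```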
Both FOR/NEXT and START/NEXT support a user-defined step increment. By replacing the terminating NEXT keyword with an increment and the STEP keyword, the loop variable will be incremented or decremented by a different value than the default of +1. For instance, the following loop steps back from 10 to 2 by decrementing the loop index by 2:
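One possible sketch using FOR with a negative step, which leaves 10 8 6 4 2 on the stack:

```
« 10 2 FOR I I -2 STEP »
```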
The WHILE/REPEAT/END block in RPL supports an indefinite loop with the condition test at the start of the loop. The syntax of the WHILE/REPEAT/END block is:
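In outline:

```
WHILE condition REPEAT
  loop-body
END
```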
The DO/UNTIL/END block in RPL supports an indefinite loop with the condition test at the end of the loop. The syntax of the DO/UNTIL/END block is:
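In outline:

```
DO
  loop-body
UNTIL condition END
```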
|
https://en.wikipedia.org/wiki/RPL_(programming_language)
|
A graphing calculator is a class of hand-held calculator that is capable of plotting graphs and solving complex functions. While there are several companies that manufacture models of graphing calculators, Hewlett-Packard is a major manufacturer.
The following table compares general and technical information for Hewlett-Packard graphing calculators:
|
https://en.wikipedia.org/wiki/Comparison_of_HP_graphing_calculators
|
Simics is a full-system simulator or virtual platform used to run unchanged production binaries of the target hardware. Simics was originally developed by the Swedish Institute of Computer Science (SICS), and then spun off to Virtutech for commercial development in 1998. Virtutech was acquired by Intel in 2010. Currently, Simics is provided by Intel in a public release[1] and sold commercially by Wind River Systems, which was in the past a subsidiary of Intel.
Simics contains both instruction set simulators and hardware models, and is or has been used to simulate systems such as Alpha, ARM (32- and 64-bit), IA-64, MIPS (32- and 64-bit), MSP430, PowerPC (32- and 64-bit), RISC-V (32- and 64-bit), SPARC-V8 and V9, and x86 and x86-64 CPUs.
Many different operating systems have been run on the various simulated virtual platforms, including Linux, MS-DOS, Windows, VxWorks, OSE, Solaris, FreeBSD, QNX, RTEMS, UEFI, and Zephyr.
The NetBSD AMD64 port was initially developed using Simics before the public release of the chip.[2] The purpose of simulation in Simics is often to develop software for a particular type of hardware without requiring access to that precise hardware, using Simics as a virtual platform. This can be applied both to pre-release and pre-silicon software development for future hardware, and to existing hardware. Intel uses Simics to provide its ecosystem with access to future platforms months or years ahead of the hardware launch.[3]
The current version of Simics is 6, which was released publicly in 2019.[4][5] Simics runs on 64-bit x86-64 machines running Microsoft Windows and Linux (32-bit support was dropped with the release of Simics 5, since 64-bit provides significant performance advantages and is universally available on current hardware). The previous version, Simics 5, was released in 2015.[6]
Simics has the ability to execute a system in the forward and reverse directions.[7] Reverse debugging can illuminate how an exceptional condition or bug occurred. When executing an OS such as Linux in reverse using Simics, previously deleted files reappear when the deletion point is passed in reverse, and scrolling and other graphical display and console updates occur backwards as well.
Simics is built for high-performance execution of full-system models, and uses both binary translation and hardware-assisted virtualization to increase simulation speed. It is natively multithreaded and can simulate multiple target (or guest) processors and boards using multiple host threads. It has been used to run simulations containing hundreds of target processors.
|
https://en.wikipedia.org/wiki/Simics
|
SIMH is a free and open-source, multi-platform, multi-system emulator. It is maintained by Bob Supnik, a former DEC engineer and DEC vice president, and has been in development in one form or another since the 1960s.
SIMH was based on a much older systems emulator called MIMIC, which was written in the late 1960s at Applied Data Research.[1] SIMH was started in 1993 with the purpose of preserving minicomputer hardware and software that was fading into obscurity.[1]
In May 2022, the MIT License of SIMH version 4 on GitHub was unilaterally modified by a contributor to make it no longer free software, by adding a clause that revokes the right to use any subsequent revisions of the software containing their contributions if modifications that "influence the behaviour of the disk access activities" are made.[3] As of 27 May 2022, Supnik no longer endorses version 4 on his official website for SIMH due to these changes, only recognizing the "classic" version 3.x releases.[4]
On 3 June 2022, the last revision of SIMH not subject to this clause (licensed under BSD licenses and the MIT License) was forked by the group Open SIMH, with a new governance model and a steering group that includes Supnik and others. The Open SIMH group cited a "situation" that had arisen in the project that compromised its principles.[5]
SIMH emulates hardware from the following companies.
|
https://en.wikipedia.org/wiki/SIMH
|
Within software engineering, the mining software repositories[1] (MSR) field[2] analyzes the rich data available in software repositories, such as version control repositories, mailing list archives, bug tracking systems and issue tracking systems, to uncover interesting and actionable information about software systems, projects and software engineering.
Herzig and Zeller define "mining software archives" as a process to "obtain lots of initial evidence" by extracting data from software repositories. Further, they define "data sources" as product-based artifacts, such as source code, requirement artifacts or version archives, and claim that these sources are unbiased, but noisy and incomplete.[3]
The idea behind coupled change analysis is that developers frequently change code entities (e.g. files) together when fixing defects or introducing new features. These couplings between entities are often not made explicit in the code or other documents. In particular, developers new to a project do not know which entities need to be changed together. Coupled change analysis aims to extract the couplings from a project's version control system. From the commits and the timing of changes, it may be possible to identify which entities frequently change together. This information could then be presented to developers about to change one of the entities, to support them in their further changes.[4]
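The core of coupled change analysis can be sketched as pair counting over the commit history. The following Python toy illustrates the idea; the input format and file names are hypothetical (real data would come from the version control log, e.g. `git log --name-only`):

```python
from collections import Counter
from itertools import combinations

def cochange_counts(commits):
    """Count how often each pair of entities (here: files) is changed
    in the same commit. `commits` is a list of per-commit file lists."""
    pairs = Counter()
    for files in commits:
        # Sort so each unordered pair has a canonical key.
        for a, b in combinations(sorted(set(files)), 2):
            pairs[(a, b)] += 1
    return pairs

history = [
    ["parser.c", "parser.h"],
    ["parser.c", "parser.h", "main.c"],
    ["main.c"],
]
counts = cochange_counts(history)
```

Pairs with high counts (here, `parser.c`/`parser.h`) are candidates to suggest to a developer who edits one of the two.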
There are many different kinds of commits in version control systems, e.g. bug fix commits, new feature commits, documentation commits, etc. To take data-driven decisions based on past commits, one needs to select subsets of commits that meet a given criterion. That can be done based on the commit message.[5]
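A common starting point is a keyword heuristic over commit messages. The pattern below is illustrative, not a standard taxonomy:

```python
import re

# Heuristic: a commit is treated as a bug fix if its message contains
# one of these keywords (whole-word, case-insensitive).
FIX_PATTERN = re.compile(r"\b(fix(es|ed)?|bug|defect|patch)\b", re.IGNORECASE)

def is_bugfix(message: str) -> bool:
    return bool(FIX_PATTERN.search(message))

messages = [
    "Fix null pointer in parser",
    "Add CSV export",
    "patch: handle empty input",
]
bugfixes = [m for m in messages if is_bugfix(m)]
```

Such keyword filters are known to be noisy; studies typically validate a sample of the selected commits by hand.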
It is possible to generate useful documentation from mining software repositories. For instance, Jadeite computes usage statistics and helps newcomers to quickly identify commonly used classes.[6]
The primary mining data comes from version control systems. Early mining experiments were done on CVS repositories.[7] Researchers then extensively analyzed SVN repositories.[8] Now Git repositories are dominant.[9] Depending on the nature of the data required (size, domain, processing), one can download data from any of these sources. However, data governance and data collection for the sake of building large language models have changed the rules of the game, by integrating the use of web crawlers to obtain data from multiple sources and domains.
|
https://en.wikipedia.org/wiki/Mining_Software_Repositories
|
Software archaeology or source code archeology is the study of poorly documented or undocumented legacy software implementations, as part of software maintenance.[1][2] Software archaeology, named by analogy with archaeology,[3] includes the reverse engineering of software modules, and the application of a variety of tools and processes for extracting and understanding program structure and recovering design information.[1][4] Software archaeology may reveal dysfunctional team processes which have produced poorly designed or even unused software modules, and in some cases deliberately obfuscatory code may be found.[5] The term has been in use for decades.[6]
Software archaeology has continued to be a topic of discussion at more recent software engineering conferences.[7]
A workshop on Software Archaeology at the 2001 OOPSLA (Object-Oriented Programming, Systems, Languages & Applications) conference identified the following software archaeology techniques, some of which are specific to object-oriented programming:[8]
More generally, Andy Hunt and Dave Thomas note the importance of version control, dependency management, text indexing tools such as GLIMPSE and SWISH-E, and "[drawing] a map as you begin exploring."[8]
Like true archaeology, software archaeology involves investigative work to understand the thought processes of one's predecessors.[8] At the OOPSLA workshop, Ward Cunningham suggested a synoptic signature analysis technique which gave an overall "feel" for a program by showing only punctuation, such as semicolons and curly braces.[9] In the same vein, Cunningham has suggested viewing programs in 2-point font in order to understand the overall structure.[10] Another technique identified at the workshop was the use of aspect-oriented programming tools such as AspectJ to systematically introduce tracing code without directly editing the legacy program.[8]
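The synoptic signature idea is simple enough to sketch. The following Python toy (not Cunningham's original tool) reduces a program to its punctuation while preserving line structure; the exact character set kept is a choice:

```python
def signature(source: str) -> str:
    """Keep only structural punctuation (braces, parentheses, brackets,
    semicolons), one output line per input line, to convey the overall
    'shape' of a program at a glance."""
    keep = set("{};()[]")
    return "\n".join(
        "".join(ch for ch in line if ch in keep)
        for line in source.splitlines()
    )

sig = signature("if (x) {\n    y();\n}")
# sig is "(){", "();", "}" on three lines
```

Run over a whole file, the output reads like a skyline: deeply nested or punctuation-dense regions stand out immediately.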
Network and temporal analysis techniques can reveal the patterns of collaborative activity by the developers of legacy software, which in turn may shed light on the strengths and weaknesses of the software artifacts produced.[11]
Michael Rozlog of Embarcadero Technologies has described software archaeology as a six-step process which enables programmers to answer questions such as "What have I just inherited?" and "Where are the scary sections of the code?"[12] These steps, similar to those identified by the OOPSLA workshop, include using visualization to obtain a visual representation of the program's design, using software metrics to look for design and style violations, using unit testing and profiling to look for bugs and performance bottlenecks, and assembling design information recovered by the process.[12] Software archaeology can also be a service provided to programmers by external consultants.[13]
The profession of "programmer–archaeologist" features prominently in Vernor Vinge's 1999 sci-fi novel A Deepness in the Sky.[14]
|
https://en.wikipedia.org/wiki/Software_archaeology
|
User experience (UX) is how a user interacts with and experiences a product, system or service. It includes a person's perceptions of utility, ease of use, and efficiency. Improving user experience is important to most companies, designers, and creators when creating and refining products, because a negative user experience can diminish use of the product and, therefore, any desired positive impacts. Conversely, designing toward profitability as a main objective often conflicts with ethical user experience objectives and can even cause harm. User experience is subjective; however, the attributes that make up the user experience are objective.
According to Nielsen Norman Group, "user experience" includes all the aspects of the interaction between the end user and the company, its services, and its products.[1]
The international standard on ergonomics of human-system interaction, ISO 9241, defines user experience as a "user's perceptions and responses that result from the use and/or anticipated use of a system, product or service".[2] According to the ISO definition, user experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during, and after use. The ISO also lists three factors that influence user experience: the system, the user, and the context of use.
Note 3 of the standard hints that usability addresses aspects of user experience, e.g. "usability criteria can be used to assess aspects of user experience". The standard does not go further in clarifying the relation between user experience and usability. Clearly, the two are overlapping concepts, with usability including pragmatic aspects (getting a task done) and user experience focusing on users' feelings stemming from both pragmatic and hedonic aspects of the system. Many practitioners use the terms interchangeably. The term "usability" pre-dates the term "user experience". Part of the reason the terms are often used interchangeably is that, as a practical matter, a user will, at a minimum, require sufficient usability to accomplish a task, while the feelings of the user may be less important, even to the user themselves. Since usability is about getting a task done, aspects of user experience like information architecture and user interface can help or hinder a user's experience. If a website has "bad" information architecture and a user has a difficult time finding what they are looking for, then the user will not have an effective, efficient, and satisfying search.
In addition to the ISO standard, there exist several other definitions of user experience.[3] Some of them have been studied by various researchers.[4]
Early developments in user experience can be traced back to the Machine Age of the 19th and early 20th centuries. Inspired by the Machine Age intellectual framework, a quest to improve assembly processes in order to increase production efficiency and output led to major technological advancements, such as mass production of high-volume goods on moving assembly lines, the high-speed printing press, large hydroelectric power plants, and radio technology, to name a few.
Frederick Winslow Taylor and Henry Ford explored ways to make human labor more efficient and productive. Taylor's research into the efficiency of interactions between workers and their tools is the earliest example that resembles today's user experience fundamentals.[citation needed]
The term user experience was brought to wider knowledge by Donald Norman in the mid-1990s.[5] He never intended the term "user experience" to be applied only to the affective aspects of usage. A review of his earlier work[6] suggests that the term "user experience" was used to signal a shift to include affective factors, along with the prerequisite behavioral concerns, which had been traditionally considered in the field. Many usability practitioners continue to research and attend to affective factors associated with end users, and had been doing so for years before the term "user experience" was introduced in the mid-1990s.[7] In an interview in 2007, Norman discussed the widespread use of the term "user experience" and its imprecise meaning as a consequence thereof.[8]
Several developments affected the rise of interest in the user experience:
The field of user experience represents an expansion and extension of the field of usability, to include the holistic perspective of how a person feels about using a system. The focus is on pleasure and value as well as on performance. The exact definition, framework, and elements of user experience are still evolving.
User experience of an interactive product or a website is usually measured by a number of methods, including questionnaires, focus groups, observed usability tests, user journey mapping and other methods. A freely available questionnaire (available in several languages) is the User Experience Questionnaire (UEQ).[15]The development and validation of this questionnaire is described in a computer science essay published in 2008.[16]
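Questionnaire scores such as the UEQ's are typically reported per scale by mapping each 7-point item answer onto a -3 to +3 range and averaging the scale's items. The following is a minimal sketch of that common convention (not the official UEQ analysis tool; the real instrument also handles item polarity and benchmarks, so consult its handbook for exact scoring):

```python
def scale_score(item_ratings):
    """Average one scale's item ratings, mapping 1-7 answers onto the
    -3..+3 range commonly used for reporting. Assumes all items are
    already aligned to the same polarity."""
    return sum(r - 4 for r in item_ratings) / len(item_ratings)

# Four hypothetical ratings a respondent gave for one scale
print(scale_score([6, 7, 5, 6]))  # 2.0
```

A scale mean near +3 indicates a strongly positive impression, near 0 a neutral one.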
Higher levels of user experience have been linked to increased effectiveness of digital health interventions targeting improvements in physical activity,[17] nutrition, mental health and smoking.[18]
Google Ngram Viewer shows wide use of the term starting in the 1930s.[19] "He suggested that more follow-up in the field would be welcomed by the user, and would be a means of incorporating the results of user's experience into the design of new machines." Use of the term in relation to computer software also pre-dates Norman.[20]
Many factors can influence a user's experience with a system. To address the variety, factors influencing user experience have been classified into three main categories: the user's state and previous experience, system properties, and the usage context (situation).[21] Understanding representative users, working environments, interactions and emotional reactions helps in designing the system during user experience design.
Single experiences influence the overall user experience:[22] the experience of a key click affects the experience of typing a text message, the experience of typing a message affects the experience of text messaging, and the experience of text messaging affects the overall user experience with the phone. The overall user experience is not simply a sum of smaller interaction experiences, because some experiences are more salient than others. Overall user experience is also influenced by factors outside the actual interaction episode: brand, pricing, friends' opinions, reports in the media, etc.
One branch in user experience research focuses on emotions. This includes momentary experiences during interaction: designing effective interaction and evaluating emotions. Another branch is interested in understanding the long-term relation between user experience and product appreciation. The industry sees good overall user experience with a company's products as critical for securing brand loyalty and enhancing the growth of the customer base. All temporal levels of user experience (momentary, episodic, and long-term) are important, but the methods to design and evaluate these levels can be very different.
Developer experience (DX) is user experience from a developer's point of view. It is defined by the tools, processes, and software that a developer uses when interacting with a product or system while producing another one, such as in software development.[23] DX has received increased attention, especially in businesses that primarily offer software as a service to other businesses, where ease of use is a key differentiator in the market.[24]
|
https://en.wikipedia.org/wiki/User_experience
|
Software consists of computer programs that instruct the execution of a computer.[1] Software also includes design documents and specifications.
The history of software is closely tied to the development of digital computers in the mid-20th century. Early programs were written in the machine language specific to the hardware. The introduction of high-level programming languages in 1958 allowed for more human-readable instructions, making software development easier and more portable across different computer architectures. Software in a programming language is run through a compiler or interpreter to execute on the architecture's hardware. Over time, software has become complex, owing to developments in networking, operating systems, and databases.
Software can generally be categorized into two main types:
The rise of cloud computing has introduced the new software delivery model Software as a Service (SaaS). In SaaS, applications are hosted by a provider and accessed over the Internet.
The process of developing software involves several stages, including software design, programming, testing, release, and maintenance. Software quality assurance and security are critical aspects of software development, as bugs and security vulnerabilities can lead to system failures and security breaches. Additionally, legal issues such as software licenses and intellectual property rights play a significant role in the distribution of software products.
The first use of the word software to describe computer programs is credited to mathematician John Wilder Tukey in 1958.[3] The first programmable computers, which appeared at the end of the 1940s,[4] were programmed in machine language. Machine language is difficult to debug and not portable across different computers.[5] Initially, hardware resources were more expensive than human resources.[6] As programs became complex, programmer productivity became the bottleneck. The introduction of high-level programming languages in 1958 hid the details of the hardware and expressed the underlying algorithms in the code.[7][8] Early languages include Fortran, Lisp, and COBOL.[8]
There are two main types of software:
Software can also be categorized by how it is deployed. Traditional applications are purchased with a perpetual license for a specific version of the software, downloaded, and run on hardware belonging to the purchaser.[17] The rise of the Internet and cloud computing enabled a new model, software as a service (SaaS),[18] in which the provider hosts the software (usually built on top of rented infrastructure or platforms)[19] and provides the use of the software to customers, often in exchange for a subscription fee.[17] By 2023, SaaS products—which are usually delivered via a web application—had become the primary method by which companies deliver applications.[20]
Software companies aim to deliver a high-quality product on time and under budget. A challenge is that software development effort estimation is often inaccurate.[21] Software development begins by conceiving the project, evaluating its feasibility, analyzing the business requirements, and making a software design.[22][23] Most software projects speed up their development by reusing or incorporating existing software, either in the form of commercial off-the-shelf (COTS) or open-source software.[24][25] Software quality assurance is typically a combination of manual code review by other engineers[26] and automated software testing. Due to time constraints, testing cannot cover all aspects of the software's intended functionality, so developers often focus on the most critical functionality.[27] Formal methods are used in some safety-critical systems to prove the correctness of code,[28] while user acceptance testing helps to ensure that the product meets customer expectations.[29] There are a variety of software development methodologies, which vary from completing all steps in order to concurrent and iterative models.[30] Software development is driven by requirements taken from prospective users, as opposed to maintenance, which is driven by events such as a change request.[31]
Frequently, software is released in an incomplete state when the development team runs out of time or funding.[32] Despite testing and quality assurance, virtually all software contains bugs where the system does not work as intended. Post-release software maintenance is necessary to remediate these bugs when they are found and keep the software working as the environment changes over time.[33] New features are often added after the release. Over time, the level of maintenance becomes increasingly restricted before being cut off entirely when the product is withdrawn from the market.[34] As software ages, it becomes known as legacy software and can remain in use for decades, even if there is no one left who knows how to fix it.[35] Over the lifetime of the product, software maintenance is estimated to comprise 75 percent or more of the total development cost.[36][37]
Completing a software project involves various forms of expertise, not just software programming but also testing, documentation writing, project management, graphic design, user experience, user support, marketing, and fundraising.[38][39][23]
Software quality is defined as meeting the stated requirements as well as customer expectations.[40] Quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability, or the ease of modification.[41] It is usually more cost-effective to build quality into the product from the beginning rather than try to add it later in the development process.[42] Higher-quality code will reduce the lifetime cost to both suppliers and customers, as it is more reliable and easier to maintain.[43][44] Software failures in safety-critical systems can be very serious, including death.[43] By some estimates, the cost of poor quality software can be as high as 20 to 40 percent of sales.[45] Despite developers' goal of delivering a product that works entirely as intended, virtually all software contains bugs.[46]
The rise of the Internet also greatly increased the need for computer security, as it enabled malicious actors to conduct cyberattacks remotely.[47][48] If a bug creates a security risk, it is called a vulnerability.[49][50] Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched are still liable to exploitation.[51] Vulnerabilities vary in their ability to be exploited by malicious actors,[49] and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system.[52] Although some vulnerabilities can only be used for denial of service attacks that compromise a system's availability, others allow the attacker to inject and run their own code (called malware), without the user being aware of it.[49] To thwart cyberattacks, all software in the system must be designed to withstand and recover from external attack.[48] Despite efforts to ensure security, a significant fraction of computers are infected with malware.[53]
Programming languages are the format in which software is written. Since the 1950s, thousands of different programming languages have been invented; some have been in use for decades, while others have fallen into disuse.[54] Some definitions classify machine code—the exact instructions directly implemented by the hardware—and assembly language—a more human-readable alternative to machine code whose statements can be translated one-to-one into machine code—as programming languages.[55] Programs written in the high-level programming languages used to create software share a few main characteristics: knowledge of machine code is not necessary to write them, they can be ported to other computer systems, and they are more concise and human-readable than machine code.[56] They must be both human-readable and capable of being translated into unambiguous instructions for computer hardware.[57]
The invention of high-level programming languages was simultaneous with the compilers needed to translate them automatically into machine code.[58] Most programs do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. Once compiled, the program can be saved as an object file, and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware.[59] Some programming languages use an interpreter instead of a compiler. An interpreter converts the program into machine code at run time, which makes interpreted programs 10 to 100 times slower than compiled ones.[60][61]
Software is often released with the knowledge that it is incomplete or contains bugs. Purchasers knowingly buy it in this state, which has led to a legal regime where liability for software products is significantly curtailed compared to other products.[62]
Since the mid-1970s, software and its source code have been protected by copyright law, which vests the owner with the exclusive right to copy the code. The underlying ideas or algorithms are not protected by copyright law, but are sometimes treated as a trade secret and concealed by such methods as non-disclosure agreements.[63] A software copyright is often owned by the person or company that financed or made the software (depending on their contracts with employees or contractors who helped to write it).[64] Some software is in the public domain and has no restrictions on who can use it, copy or share it, or modify it; a notable example is software written by the United States Government. Free and open-source software also allows free use, sharing, and modification, perhaps with a few specified conditions.[64] The use of some software is governed by an agreement (software license) written by the copyright holder and imposed on the user. Proprietary software is usually sold under a restrictive license that limits its use and sharing.[65] Some free software licenses require that modified versions must be released under the same license, which prevents the software from being sold or distributed under proprietary restrictions.[66]
Patents give an inventor an exclusive, time-limited license for a novel product or process.[67] Ideas about what software could accomplish are not protected by law; concrete implementations are instead covered by copyright law. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid.[68] Software patents have been historically controversial. Before the 1998 case State Street Bank & Trust Co. v. Signature Financial Group, Inc., software patents were generally not recognized in the United States. In that case, the Supreme Court decided that business processes could be patented.[69] Patent applications are complex and costly, and lawsuits involving patents can drive up the cost of products.[70] Unlike copyrights, patents generally only apply in the jurisdiction where they were issued.[71]
Engineer Capers Jones writes that "computers and software are making profound changes to every aspect of human life: education, work, warfare, entertainment, medicine, law, and everything else".[73] Software has become ubiquitous in everyday life in developed countries.[74] In many cases, software augments the functionality of existing technologies such as household appliances and elevators.[75] Software also spawned entirely new technologies such as the Internet, video games, mobile phones, and GPS.[75][76] New methods of communication, including email, forums, blogs, microblogging, wikis, and social media, were enabled by the Internet.[77] Massive amounts of knowledge exceeding any paper-based library are now available with a quick web search.[76] Most creative professionals have switched to software-based tools such as computer-aided design, 3D modeling, digital image editing, and computer animation.[72] Almost every complex device is controlled by software.[76]
|
https://en.wikipedia.org/wiki/Computer_software
|
Application software is any computer program that is intended for end users – not for operating, administering or programming the computer. An application (app, application program, software application) is any program that can be categorized as application software.[1][2] Common types of applications include word processors, media players and accounting software.
The term application software refers to all applications collectively[3] and can be used to differentiate them from system and utility software.
Applications may be bundled with the computer and its system software or published separately. Applications may be proprietary or open-source.[4]
The short term app (coined in 1981 or earlier[5]) became popular with the 2008 introduction of the iOS App Store, referring to applications for mobile devices such as smartphones and tablets. Later, with the introduction of the Mac App Store (in 2010) and Windows Store (in 2011), the term was extended in popular use to include desktop applications.
The delineation between system software such as operating systems and application software is not exact and is occasionally the object of controversy.[6] For example, one of the key questions in the United States v. Microsoft Corp. antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separate piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player, or microwave oven. The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an app, see Application Portfolio Management.
When used as an adjective, application is not restricted to meaning "of or on application software".[6] For example, concepts such as application programming interface (API), application server, application virtualization, application lifecycle management and portable application apply to all computer programs alike, not just application software.
Sometimes a new and popular application arises that runs only on one platform, increasing the desirability of that platform. This is called a killer application or killer app, a term coined in the late 1980s.[7][8] For example, VisiCalc was the first modern spreadsheet software for the Apple II and helped sell the then-new personal computers into offices. For the BlackBerry, it was its email software.
Some applications are available for multiple platforms while others work only on one and are thus called, for example, a geography application for Microsoft Windows, an Android application for education, or a Linux game.
There are many different and alternative ways to classify application software.
From the legal point of view, application software is mainly classified with a black-box approach, with regard to the rights of its end users or subscribers (with eventual intermediate and tiered subscription levels).
Software applications are also classified with respect to the programming language in which the source code is written or executed, and with respect to their purpose and outputs.
Application software is usually distinguished into two main classes: closed source vs open source software applications, and free or proprietary software applications.
Proprietary software is placed under the exclusive copyright, and a software license grants limited usage rights. The open-closed principle states that software may be "open only for extension, but not for modification". Such applications can only get add-ons from third parties.
Free and open-source software (FOSS) may be run, distributed, sold, or extended for any purpose, and, being open, may likewise be modified or reverse-engineered.
FOSS applications released under a free license may be perpetual and also royalty-free. However, the owner, the holder or a third-party enforcer of any right (copyright, trademark, patent, or ius in re aliena) is entitled to add exceptions, limitations, time decays or expiry dates to the license's terms of use.
Public-domain software is a type of FOSS that is royalty-free and, openly or reservedly, can be run, distributed, modified, reversed, republished, or used in derivative works without any copyright attribution and therefore without risk of revocation. It can even be sold, but without transferring the public-domain property to other single subjects. Public-domain software can be released under an (un)licensing legal statement, which enforces those terms and conditions for an indefinite duration (for a lifetime, or forever).
Since the development and near-universal adoption of the web, an important distinction has emerged between web applications — written with HTML, JavaScript and other web-native technologies, typically requiring one to be online and running a web browser — and the more traditional native applications written in whatever languages are available for one's particular type of computer. There has been a contentious debate in the computing community regarding web applications replacing native applications for many purposes, especially on mobile devices such as smartphones and tablets. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated.[9][10][11]
Application software can also be seen as being either horizontal or vertical.[12][13] Horizontal applications are more popular and widespread because they are general-purpose, for example word processors or databases. Vertical applications are niche products, designed for a particular type of industry or business, or a department within an organization. Integrated suites of software try to handle every possible aspect of, for example, a manufacturing or banking workflow, accounting, or customer service.
There are many types of application software:[14]
Applications can also be classified by computing platform, such as a desktop application for a particular operating system;[16] by delivery network, as in cloud computing and Web 2.0 applications; or by delivery device, such as mobile apps for mobile devices.
The operating system itself can be considered application software when performing simple calculating, measuring, rendering, and word-processing tasks not used to control hardware via a command-line interface or graphical user interface. This does not include application software bundled within operating systems, such as a software calculator or text editor.
|
https://en.wikipedia.org/wiki/Application_software
|
The software industry includes businesses for the development, maintenance and publication of software, using different business models, mainly either "license/maintenance based" (on-premises) or "cloud based" (such as SaaS, PaaS, IaaS, MBaaS, MSaaS, DCaaS, etc.). The industry also includes software services, such as training, documentation, consulting and data recovery. The software and computer services industry spends more than 11% of its net sales on research and development, which, compared with other industries, is the second-highest share after pharmaceuticals & biotechnology.[1]
The first company founded to provide software products and services was Computer Usage Company, in 1955.[2] Before that time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as Sperry Rand and IBM.
The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers. Some were distributed freely between users of a particular machine for no charge. Others were sold on a commercial basis, and other firms such as Computer Sciences Corporation (founded in 1959) started to grow. Other influential or typical software companies begun in the early 1960s included Advanced Computer Techniques, Automatic Data Processing, Applied Data Research, and Informatics General.[3][4] The computer/hardware makers started bundling operating systems, systems software and programming environments with their machines.
When Digital Equipment Corporation (DEC) brought a relatively low-priced microcomputer to market, it brought computing within the reach of many more companies and universities worldwide, and it spawned great innovation in terms of new, powerful programming languages and methodologies. New software was built for microcomputers, so other manufacturers, including IBM, quickly followed DEC's example, resulting in the IBM AS/400 amongst others.
The industry expanded greatly with the rise of the personal computer ("PC") in the mid-1970s, which brought desktop computing to the office worker for the first time. In the following years, it also created a growing market for games, applications, and utilities. DOS, Microsoft's first operating system product, was the dominant operating system at the time.
In the early years of the 21st century, another successful business model arose for hosted software, called software as a service, or SaaS; this was at least the third time[citation needed] this model had been attempted. From the point of view of producers of some proprietary software, SaaS reduces concerns about unauthorized copying, since it can only be accessed through the Web, and by definition no client software is loaded onto the end user's PC.
Market research firm Gartner estimates the global market for IT spending in 2024 at $3.73 trillion; if telecoms services are included, this rises to $5.26 trillion.[5] Major companies include Microsoft, HP, Oracle, Dell and IBM.[6]
The software industry has been subject to a high degree of consolidation over the past couple of decades. Between 1995 and 2018, around 37,039 mergers and acquisitions were announced, with a total known value of US$1,166 billion.[7] The highest number and value of deals was set in 2000, during the height of the dot-com bubble, with 2,674 transactions valued at US$105 billion. In 2017, 2,547 deals were announced, valued at US$111 billion. Approaches to successfully acquire and integrate software companies are available.[8]
Software industry business models include SaaS (subscription-based), PaaS (platform services), IaaS (infrastructure services), and freemium (free with premium features). Others are perpetual licenses (one-time fee), ad-supported (free with ads), open source (free with paid support), pay-per-use (usage-based), and consulting/customization services. Hybrid models combine multiple approaches.
Business models of software companies have been widely discussed.[9][10] Network effects in software ecosystems, networks of companies, and their customers are an important element in the strategy of software companies.[11]
|
https://en.wikipedia.org/wiki/Software_industry
|
Analytics is the systematic computational analysis of data or statistics.[1] It is used for the discovery, interpretation, and communication of meaningful patterns in data, a practice that also falls under, and directly relates to, the umbrella term data science.[2] Analytics also entails applying data patterns toward effective decision-making. It can be valuable in areas rich with recorded information; analytics relies on the simultaneous application of statistics, computer programming, and operations research to quantify performance.
Organizations may apply analytics to business data to describe, predict, and improve business performance. Specific areas within analytics include descriptive analytics, diagnostic analytics, predictive analytics, prescriptive analytics, and cognitive analytics.[3] Analytics may apply to a variety of fields such as marketing, management, finance, online systems, information security, and software services. Since analytics can require extensive computation (see big data), the algorithms and software used for analytics harness the most current methods in computer science, statistics, and mathematics.[4] According to International Data Corporation, global spending on big data and business analytics (BDA) solutions was estimated to reach $215.7 billion in 2021.[5][6] As per Gartner, the overall analytics platform software market grew by $25.5 billion in 2020.[7]
Data analysis focuses on the process of examining past data through business understanding, data understanding, data preparation, modeling and evaluation, and deployment.[8] It is a subset of data analytics, which draws on multiple data analysis processes to focus on why an event happened and what may happen in the future based on the previous data.[9][unreliable source?] Data analytics is used to inform larger organizational decisions.[citation needed]
Data analytics is a multidisciplinary field. It makes extensive use of computer skills, mathematics, statistics, descriptive techniques and predictive models to gain valuable knowledge from data.[citation needed] There is increasing use of the term advanced analytics, typically to describe the technical aspects of analytics, especially in emerging areas such as the use of machine learning techniques like neural networks, decision trees, logistic regression, linear to multiple regression analysis, and classification to do predictive modeling.[10][8] It also includes unsupervised machine learning techniques like cluster analysis, principal component analysis, segmentation profile analysis and association analysis.[citation needed]
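As a toy illustration of the predictive-modeling side (a hedged sketch, not any particular product's algorithm), ordinary least squares fits a line to past observations, which can then be used to predict new ones:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for the single-predictor model y = a + b*x.
    Returns the intercept a and slope b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical data: ad spend (x) vs. resulting sales (y)
a, b = fit_linear([1, 2, 3, 4], [3, 5, 7, 9])
print(a, b)       # 1.0 2.0
print(a + b * 5)  # predicted sales at spend = 5 -> 11.0
```

Techniques such as logistic regression or decision trees generalize this idea to classification and to many predictors.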
Marketing organizations use analytics to determine the outcomes of campaigns or efforts, and to guide decisions for investment and consumer targeting. Demographic studies, customer segmentation, conjoint analysis and other techniques allow marketers to use large amounts of consumer purchase, survey and panel data to understand and communicate marketing strategy.[11]
Marketing analytics consists of both qualitative and quantitative, structured and unstructured data used to drive strategic decisions about brand and revenue outcomes. The process involves predictive modelling, marketing experimentation, automation and real-time sales communications. The data enables companies to make predictions and alter strategic execution to maximize performance results.[11]
Web analytics allows marketers to collect session-level information about interactions on a website using an operation called sessionization. Google Analytics is an example of a popular free analytics tool that marketers use for this purpose.[12] Those interactions provide web analytics information systems with the information necessary to track the referrer, search keywords, identify the IP address,[13] and track the activities of the visitor. With this information, a marketer can improve marketing campaigns, website creative content, and information architecture.[14]
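A common sessionization rule (used, with variations, by many analytics tools) starts a new session after roughly 30 minutes of inactivity. A minimal sketch under that assumption:

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # a common, tool-dependent convention

def sessionize(timestamps):
    """Group one visitor's event timestamps into sessions: a new session
    starts whenever the gap since the previous event exceeds the timeout.
    Returns a list of sessions, each a sorted list of timestamps."""
    sessions = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][-1] <= SESSION_TIMEOUT:
            sessions[-1].append(ts)
        else:
            sessions.append([ts])
    return sessions

events = [
    datetime(2024, 1, 1, 9, 0),
    datetime(2024, 1, 1, 9, 10),
    datetime(2024, 1, 1, 11, 0),  # gap > 30 min, so a new session begins
]
print(len(sessionize(events)))  # 2
```

Real tools also cut sessions at midnight or on campaign changes; the timeout rule above is only the core idea.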
Analysis techniques frequently used in marketing include marketing mix modeling, pricing and promotion analyses, sales force optimization and customer analytics, e.g., segmentation. Web analytics and optimization of websites and online campaigns now frequently work hand in hand with the more traditional marketing analysis techniques. A focus on digital media has slightly changed the vocabulary, so that marketing mix modeling is commonly referred to as attribution modeling in the digital or marketing mix modeling context.[citation needed]
These tools and techniques support both strategic marketing decisions (such as how much overall to spend on marketing, how to allocate budgets across a portfolio of brands and the marketing mix) and more tactical campaign support, in terms of targeting the best potential customer with the optimal message in the most cost-effective medium at the ideal time.
People analytics uses behavioral data to understand how people work and change how companies are managed.[15] It can be referred to by various names, depending on the context, the purpose of the analytics, or the specific focus of the analysis. Some examples include workforce analytics, HR analytics, talent analytics, people insights, talent insights, colleague insights, human capital analytics, and human resources information system (HRIS) analytics. HR analytics is the application of analytics to help companies manage human resources.[16]
HR analytics has become a strategic tool in analyzing and forecasting human-related trends in changing labor markets, using career analytics tools.[17] The aim is to discern which employees to hire, which to reward or promote, what responsibilities to assign, and similar human resource problems.[18] For example, analyzing employee turnover with people analytics tools can provide important insight at times of disruption.[19]
It has been suggested that people analytics is a separate discipline from HR analytics, with a greater focus on addressing business issues, while HR analytics is more concerned with metrics related to HR processes.[20] Additionally, people analytics may now extend beyond the human resources function in organizations.[21] However, experts find that many HR departments are burdened by operational tasks and need to prioritize people analytics and automation to become a more strategic and capable business function in the evolving world of work, rather than producing basic reports that offer limited long-term value.[22] Some experts argue that a change in the way HR departments operate is essential. Although HR functions were traditionally centered on administrative tasks, they are now evolving with a new generation of data-driven HR professionals who serve as strategic business partners.[23]
Examples of HR analytic metrics include employee lifetime value (ELTV), labour cost expense percent, union percentage, etc.[citation needed]
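As a minimal sketch of how two of these metrics are computed (the figures are made up for illustration):

```python
def labour_cost_expense_percent(labour_cost, operating_expense):
    """Share of total operating expense consumed by labour."""
    return 100.0 * labour_cost / operating_expense

def union_percentage(union_members, total_employees):
    """Share of the workforce covered by union membership."""
    return 100.0 * union_members / total_employees

# Illustrative figures only.
print(labour_cost_expense_percent(450_000, 1_000_000))  # 45.0
print(union_percentage(120, 480))                       # 25.0
```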
A common application of business analytics is portfolio analysis. In this, a bank or lending agency has a collection of accounts of varying value and risk. The accounts may differ by the social status (wealthy, middle-class, poor, etc.) of the holder, the geographical location, its net value, and many other factors. The lender must balance the return on the loan with the risk of default for each loan. The question is then how to evaluate the portfolio as a whole.[24]
The least risky loans may be to the very wealthy, but there are only a limited number of wealthy people. On the other hand, there are many poorer borrowers who can be lent to, but at greater risk. Some balance must be struck that maximizes return and minimizes risk. The analytics solution may combine time series analysis with many other issues in order to make decisions on when to lend money to these different borrower segments, or decisions on the interest rate charged to members of a portfolio segment to cover any losses among members in that segment.[citation needed]
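The trade-off can be sketched numerically. Assuming, purely for illustration, an interest rate and a default probability for each segment, the expected return per unit lent is the interest earned on loans that repay minus the principal lost to defaults:

```python
# Illustrative (interest rate, default probability) per segment.
segments = {
    "wealthy": (0.05, 0.01),
    "middle":  (0.09, 0.05),
    "poor":    (0.15, 0.15),
}

def expected_return(rate, p_default):
    # Interest collected on non-defaulting loans, minus
    # principal lost on defaulting ones.
    return rate * (1 - p_default) - p_default

for name, (rate, p) in segments.items():
    print(f"{name}: {expected_return(rate, p):+.4f}")
```

With these made-up numbers, the high-rate "poor" segment has a negative expected return, which is exactly the kind of imbalance portfolio analysis is meant to detect and price for.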
Predictive models in the banking industry are developed to bring certainty to the risk scores of individual customers. Credit scores are built to predict an individual's delinquency behavior and are widely used to evaluate the creditworthiness of each applicant.[25] Furthermore, risk analyses are carried out in the scientific world[26] and the insurance industry.[27] Analytics is also used extensively by financial institutions such as online payment gateway companies to determine whether a transaction was genuine or fraudulent,[28] using the customer's transaction history. This is most common with credit card purchases: when there is a sudden spike in a customer's transaction volume, the customer receives a confirmation call to verify that they initiated the transaction, which helps reduce losses from fraud.[29]
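A spike-based check of the kind described can be sketched with a simple z-score rule; real fraud models use far richer features, and the threshold and sample history here are assumptions:

```python
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates sharply from
    the customer's past transaction amounts."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold

history = [20, 35, 25, 30, 40, 22, 28]  # past purchase amounts
print(is_suspicious(history, 30))   # typical purchase: False
print(is_suspicious(history, 500))  # sudden spike: True
```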
Digital analytics is a set of business and technical activities that define, create, collect, verify or transform digital data into reporting, research, analyses, recommendations, optimizations, predictions, and automation.[30] This also includes SEO (search engine optimization), where keyword searches are tracked and that data is used for marketing purposes.[31] Even banner ads and clicks come under digital analytics.[32] A growing number of brands and marketing firms rely on digital analytics for their digital marketing assignments, where marketing return on investment (MROI) is an important key performance indicator (KPI).[citation needed]
Security analytics refers to the use of information technology (IT) to gather and analyze security events in order to understand which ones pose the greatest security risks.[33][34] Products in this area include security information and event management and user behavior analytics.
Software analytics is the process of collecting information about the way a piece of software is used and produced.[35]
In the industry of commercial analytics software, an emphasis has emerged on solving the challenges of analyzing massive, complex data sets, often when such data is in a constant state of change. Such data sets are commonly referred to as big data.[36] Whereas once the problems posed by big data were only found in the scientific community, today big data is a problem for many businesses that operate transactional systems online and, as a result, amass large volumes of data quickly.[37][36]
The analysis of unstructured data types is another challenge getting attention in the industry. Unstructured data differs from structured data in that its format varies widely and cannot be stored in traditional relational databases without significant effort at data transformation.[38] Sources of unstructured data, such as email, the contents of word processor documents, PDFs, geospatial data, etc., are rapidly becoming a relevant source of business intelligence for businesses, governments and universities.[39][40] For example, in Britain the discovery that one company was illegally selling fraudulent doctor's notes in order to assist people in defrauding employers and insurance companies[41] is an opportunity for insurance firms to increase the vigilance of their unstructured data analysis.[42][original research?]
These challenges are the current inspiration for much of the innovation in modern analytics information systems, giving birth to relatively new machine analysis concepts such as complex event processing,[43] full text search and analysis, and even new ideas in presentation. One such innovation is the introduction of grid-like architecture in machine analysis, allowing increases in the speed of massively parallel processing by distributing the workload to many computers all with equal access to the complete data set.[44]
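The grid idea can be sketched in miniature: partition the data, let each worker compute over its own partition, then combine the partial results. This toy version uses threads in place of a cluster of machines:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    # Each worker handles one partition of the data set.
    return sum(x * x for x in chunk)

data = list(range(1_000))
n_workers = 4
chunks = [data[i::n_workers] for i in range(n_workers)]

with ThreadPoolExecutor(max_workers=n_workers) as pool:
    # Combine the per-partition results into the final answer.
    total = sum(pool.map(partial_sum_of_squares, chunks))

assert total == sum(x * x for x in data)
```

In a real grid, each "chunk" would be a shard of the data set and each worker a separate machine, but the map-then-combine structure is the same.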
Analytics is increasingly used in education, particularly at the district and government office levels. However, the complexity of student performance measures presents challenges when educators try to understand and use analytics to discern patterns in student performance, predict graduation likelihood, improve chances of student success, etc.[45] For example, in a study involving districts known for strong data use, 48% of teachers had difficulty posing questions prompted by data, 36% did not comprehend given data, and 52% incorrectly interpreted data.[46] To combat this, some analytics tools for educators adhere to an over-the-counter data format (embedding labels, supplemental documentation, and a help system, and making key package/display and content decisions) to improve educators' understanding and use of the analytics being displayed.[47]
Risks for the general population include discrimination on the basis of characteristics such as gender, skin colour, ethnic origin or political opinions, through mechanisms such as price discrimination or statistical discrimination.[48]
|
https://en.wikipedia.org/wiki/Analytics
|
An application service provider (ASP) is a business providing application software generally through the Web.[1] ASPs that specialize in a particular application (such as a medical billing program) may be referred to as providing software as a service.
The application software resides on the vendor's system and is accessed by users through a communication protocol. Alternatively, the vendor may provide special purpose client software. Client software may interface with these systems through an application programming interface.
ASP characteristics include:
The advantages to this approach include:
The disadvantages include:
|
https://en.wikipedia.org/wiki/Application_service_provider
|
Customer service is the assistance and advice provided by a company to those who buy or use its products or services, either in person or remotely. Customer service is often practiced in a way that reflects the strategies and values of a firm, and levels vary according to the industry.[1] Good quality customer service is usually measured through customer retention. Successful customer service interactions are dependent on employees "who can adjust themselves to the personality of the customer".[2]
Customer service for some firms is part of the firm's intangible assets and can differentiate it from others in the industry. One good customer service experience can change the entire perception a customer holds towards the organization.[3] It is expected that AI-based chatbots will significantly impact customer service and call centre roles and will increase productivity substantially.[4][5][6] Many organisations have already adopted AI chatbots to improve their customer service experience.[6][7][5]
The evolution in the service industry has identified the needs of consumers. Companies usually create policies or standards to guide their personnel to follow their particular service package. A service package is a combination of tangible and intangible characteristics a firm uses to take care of its clients.[8]
Customer support is a range of consumer services to assist customers in making cost-effective and correct use of a product.[9] It includes assistance in planning, installation, training, troubleshooting, maintenance, upgrading, and disposal of a product.[9] These services may even be provided at the place in which the customer makes use of the product or service. In this case, it is called "at home customer service" or "at home customer support." Customer support is an effective strategy that ensures that the customer's needs have been attended to and that the products and services provided meet their expectations. Given an effective and efficient customer support experience, customers tend to be loyal to the organization, which creates a competitive advantage over its competitors. Organizations should ensure that any complaints from customers about customer support have been dealt with effectively.[10]
Customer service may be provided in person (e.g. by a sales or service representative), or by automated means,[11] such as kiosks, websites, and apps. An advantage of automation is that it can provide service 24 hours a day, which can complement face-to-face customer service.[12] There is also an economic benefit to the firm: through the evolution of technology, automated services become less expensive over time, which helps provide services to more customers for a fraction of the cost of employees' wages. Automation can facilitate customer service or replace it entirely.
A popular type of automated customer service is done through artificial intelligence (AI). Customers benefit from the feel of chatting with a live agent through improved speech technologies while retaining the self-service benefit.[13] AI can learn through interaction to give a personalized service. The exchange that the Internet of Things (IoT) facilitates between devices lets data be transferred when and where it is needed: each gadget captures the information it needs while maintaining communication with other devices. This is also made possible by advances in hardware and software technology. Another form of automated customer service is the touch-tone phone, which usually involves Interactive Voice Response (IVR): a main menu navigated with keypad options (e.g. "Press 1 for English, press 2 for Spanish").[14]
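At its core, the keypad menu described above is a mapping from digits to actions; a toy routing table (with hypothetical option names) illustrates it:

```python
# Hypothetical IVR main menu: keypad digit -> action.
menu = {
    "1": "english",
    "2": "spanish",
}

def route(keypress):
    # Unrecognized input falls back to replaying the menu.
    return menu.get(keypress, "replay_menu")

print(route("2"))  # spanish
print(route("9"))  # replay_menu
```

Production IVR systems nest such menus into a state machine and attach telephony actions to each node, but the digit-to-branch lookup is the same idea.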
In the Internet era, a challenge is to maintain and/or enhance the personal experience while making use of the efficiencies of online commerce. "Online customers are literally invisible to you (and you to them), so it's easy to shortchange them emotionally. But this lack of visual and tactile presence makes it even more crucial to create a sense of personal, human-to-human connection in the online arena."[15]
Examples of customer service by artificial means are automated online assistants that can be seen as avatars on websites,[12] which enterprises can use to reduce operating and training costs.[12] These are driven by chatbots, and a major underlying technology for such systems is natural language processing.[12]
The two primary methods of gathering feedback are customer surveys and Net Promoter Score measurement, used for calculating the loyalty that exists between a provider and a consumer.[16]
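The Net Promoter Score has a standard formula: the percentage of promoters (ratings 9–10) minus the percentage of detractors (ratings 0–6), with passives (7–8) counted in the total but in neither group. A small sketch with made-up survey data:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 survey ratings: % promoters minus % detractors."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3]))  # 2 promoters, 2 detractors -> 0.0
print(net_promoter_score([10, 10, 10, 0]))      # 3 promoters, 1 detractor -> 50.0
```

The score therefore ranges from -100 (all detractors) to +100 (all promoters).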
Many outfits have implemented feedback loops that allow them to capture feedback at the point of experience. For example, National Express in the UK has invited passengers to send text messages while riding the bus. This has been shown to be useful, as it allows companies to improve their customer service before the customer defects, thus making it far more likely that the customer will return next time.[17]
|
https://en.wikipedia.org/wiki/Customer_service
|
Outsourcing is a business practice in which companies use external providers to carry out business processes that would otherwise be handled internally.[1][2][3] Outsourcing sometimes involves transferring employees and assets from one firm to another.
The term outsourcing, which came from the phrase outside resourcing, originated no later than 1981, at a time when industrial jobs in the United States were being moved overseas, contributing to the economic and cultural collapse of small industrial towns.[4][5][6] In some contexts, the term smartsourcing is also used.[7]
The concept, which The Economist says has "made its presence felt since the time of the Second World War",[8] often involves the contracting out of a business process (e.g., payroll processing, claims processing), operational functions, and/or non-core functions such as manufacturing, facility management, and call center support.
The practice of handing over control of public services to private enterprises (privatization), even if conducted on a limited, short-term basis,[9] may also be described as outsourcing.[10]
Outsourcing includes both foreign and domestic contracting,[11] and therefore should not be confused with offshoring, which is relocating a business process to another country but does not imply or preclude the involvement of another company.[12] In practice, the concepts can be intertwined, i.e. offshore outsourcing, and can be individually or jointly, partially or completely reversed,[13] as described by terms such as reshoring, inshoring, and insourcing.
Global labor arbitrage can provide major financial savings from lower international labor rates, which could be a major motivation for offshoring. Cost savings from economies of scale and specialization can also motivate outsourcing, even if not offshoring. Since about 2015, indirect revenue benefits have increasingly become additional motivators.[14][15]
Another motivation is speed to market. To make this work, a new process was developed: "outsource the outsourcing process".[16] Details of managing DuPont chief information officer Cinda Hallman's $4 billion 10-year outsourcing contract with Computer Sciences Corporation and Accenture were outsourced, thus avoiding "inventing a process if we'd done it in-house". A term subsequently developed to describe this is midsourcing.[17][18][19]
Outsourcing can offer greater budget flexibility and control by allowing organizations to pay for the services and business functions they need, when they need them. It is often perceived to reduce hiring and training of specialized staff, to make available specialized expertise, and to decrease capital, operating expenses,[20] and risk.
"Do what you do best and outsource the rest" has become an internationally recognized business tagline first "coined and developed"[21] in the 1990s by management consultant Peter Drucker. The slogan was primarily used to advocate outsourcing as a viable business strategy. Drucker began explaining the concept of "outsourcing" as early as 1989 in his Wall Street Journal article entitled "Sell the Mailroom".[22]
From Drucker's perspective, a company should only seek to subcontract in those areas in which it demonstrated no special ability.[23] The business strategy outlined by his slogan recommended that companies should take advantage of a specialist provider's knowledge and economies of scale to improve performance and achieve the service needed.[24]
In 2009, by way of recognition, Peter Drucker posthumously received a significant honor when he was inducted into the Outsourcing Hall of Fame for his outstanding work in the field.[23]
The biggest difference between outsourcing and in-house provision concerns ownership: outsourcing usually presupposes the integration of business processes under a different ownership, over which the client business has minimal or no control. This requires the use of outsourcing relationship management.[25]
Sometimes the effect of what looks like outsourcing from one side and insourcing from the other side can be unexpected; The New York Times reported in 2001 that "6.4 million Americans ... worked for foreign companies as of 2001, [but] more jobs are being outsourced than" [the reverse].[26]
While U.S. companies do not outsource to reduce high top-level executive or managerial costs,[27] they primarily outsource to reduce peripheral and "non-core" business expenses.[28] Further reasons are higher taxes, high energy costs, and excessive government regulation or mandates.
Mandated benefits like social security, Medicare, and safety protection (e.g. Occupational Safety and Health Administration regulations) are also motivators.[29] By contrast, executive pay in the U.S. in 2007, which could exceed 400 times the pay of average workers (a gap 20 times bigger than it was in 1965),[27] is not a factor.[30]
Other reasons include reducing and controlling operating costs,[31] improving company focus, gaining access to world-class capabilities, tax credits,[32] freeing internal resources for other purposes, streamlining or increasing efficiency for time-consuming functions, and maximizing use of external resources. For small businesses, contracting/subcontracting/"outsourcing" might be done to improve work-life balance.[33]
Two organizations may enter into a contractual agreement involving an exchange of services, expertise, and payments. Outsourcing is said to help firms to perform well in their core competencies, fuel innovation, and mitigate a shortage of skill or expertise in the areas where they want to outsource.[34] Established good practices include covering exit arrangements within an outsourcing agreement, with an exit period and a mutual commitment to maintaining continuity until the exit phase is completed.[35]
Following the addition of management layers in the 1950s and 1960s to support expansion for the sake of economy of scale, corporations found that agility and added profits could be obtained by focusing on core strengths; the 1970s and 1980s were the beginnings of what was later named outsourcing.[36] Kodak's 1989 "outsourcing most of its information technology systems"[37] was followed by others during the 1990s.[37]
In 2013, the International Association of Outsourcing Professionals gave recognition to Electronic Data Systems Corporation's Morton H. Meyerson,[38] who, in 1967, proposed the business model that eventually became known as outsourcing.[39]
The growth of offshoring of IT-enabled services, although not universally accepted,[40][41] both to subsidiaries and to outside companies (offshore outsourcing), is linked to the availability of large amounts of reliable and affordable communication infrastructure following the telecommunication and Internet expansion of the late 1990s.[42] Services making use of low-cost countries included:
In the early 21st century, businesses increasingly outsourced to suppliers outside their own country, sometimes referred to as offshoring or offshore outsourcing. Other options subsequently emerged, including nearshoring, crowdsourcing, multisourcing,[44][45] strategic alliances/strategic partnerships, and strategic outsourcing.[46]
Forbes considered the 2016 U.S. presidential election "the most disruptive change agent for the outsourcing industry",[47] especially the renewed "invest in America" goal highlighted in campaigning, but the magazine tepidly reversed direction in 2019 as to the outcome for employment.[48] In the case of armament acquisition, section 323 of the National Defense Authorization Act for 2014 requires military personnel "to solicit information from all U.S.-owned arsenals regarding the capability of that arsenal to fulfill the manufacturing requirement" when undertaking a make-or-buy analysis.[49]
Furthermore, there are growing legal requirements for data protection, where obligations and implementation details must be understood by both sides.[50][51] This includes dealing with customer rights.[52]
UK government policy notes that certain services must remain in-house, citing the development of policy, stewardship of tax spend, and retention of certain critical knowledge as examples. Guidance states that specific criteria must govern the identification of such services, and that "everything else" could potentially be outsourced.[53]
Inflation, high domestic interest rates, and economic growth pushed India's IT salaries up 10–15%, making some jobs relatively "too" expensive compared to other offshoring destinations. Areas for advancing within the value chain included research and development, equity analysis, tax-return processing, radiological analysis, and medical transcription.
Although offshoring initially focused on manufacturing, white-collar offshoring/outsourcing has grown rapidly since the early 21st century. The digital workforce of countries like India and China is paid only a fraction of what would be the minimum wage in the United States. On average, software engineers are paid between 250,000 and 1,500,000 rupees (US$4,000 to US$23,000) in India, as opposed to $40,000–$100,000 in countries such as the U.S. and Canada.[54] Closer to the U.S., Costa Rica has become a major source, with the advantages of a highly educated labor force, a large bilingual population, a stable democratic government, and time zones similar to those of the U.S. It takes only a few hours to travel between Costa Rica and the U.S. Companies such as Intel, Procter & Gamble, HP, Gensler, Amazon and Bank of America have big operations in Costa Rica.[55]
Unlike outsourced manufacturing workers, outsourced white-collar workers have flextime and can choose their working hours, and which companies to work for. Clients benefit from remote work, reduced office space, management salary, and employee benefits, as these individuals are independent contractors.[56]
Ending a government outsourcing arrangement poses difficulties.[57]
There are many outsourcing models, with variations[58] by country,[59] year[60][61] and industry.[62] Japanese companies often outsource to China, particularly to formerly Japanese-occupied cities.[63] German companies have outsourced to Eastern European countries with German-language affiliation, such as Poland and Romania.[64] French companies outsource to North Africa for similar reasons. For Australian IT companies, Indonesia is one of the major offshoring destinations: its near-shore location, common time zone and adequate IT workforce are the reasons for offshoring IT services there.[citation needed]
Another approach is to differentiate between tactical and strategic outsourcing models. Tactical models include:
Strategic consultancy includes support for business process improvement.[65]
When outsourcing knowledge work offshore, firms rely heavily on the availability of technical personnel at offshore locations. One of the challenges in offshoring engineering innovation is a reduction in quality.[66]
Co-sourcing is a hybrid arrangement in which internal staff are supplemented by an external service provider.[67][68] Co-sourcing can minimize sourcing risks, increase transparency and clarity, and lend itself to better control than a fully outsourced arrangement.[69]
Co-sourcing services can supplement internal audit staff with specialized skills such as information risk management or integrity services, help during peak periods, or similarly cover other areas such as software development or human resources.
Identity management co-sourcing is when on-site hardware[70][71] interacts with outside identity services.
This contrasts with an "all in-the-cloud" service scenario, where the identity service is built, hosted and operated by the service provider in an externally hosted cloud computing infrastructure.
Offshore software R&D is the provision of software development services by a supplier (whether external or internal) located in a different country from the one where the software will be used. The global software R&D services market, as contrasted with information technology outsourcing (ITO) and business process outsourcing (BPO), is rather young and currently at a relatively early stage of development.[72]
Canada, India, Ireland, and Israel were the four leading countries as of 2003.[72] Although many countries have participated in the offshore outsourcing of software development, their involvement in co-sourced and outsourced research and development (R&D) was somewhat limited. Canada, the second largest by 2009, had 21%.[73]
As of 2018, the top three were deemed by one source of "research-based policy analysis and commentary from leading economists" to be China, India and Israel.[74]
Gartner Group adds Russia, but does not make clear whether this is pure R&D or run-of-the-mill IT outsourcing.[75]
Focusing on software quality metrics is a good way to keep track of how well a project is performing.[76][better source needed]
Globalization and complex supply chains, along with greater physical distance between higher management and the production-floor employees, often require a change in management methodologies, as inspection and feedback may not be as direct and frequent as in internal processes. This often requires the assimilation of new communication methods such as voice over IP, instant messaging, and issue tracking systems, new time management methods such as time tracking software, and new cost- and schedule-assessment tools such as cost estimation software.[77][78][79]
The term "transition methodology"[80] describes the process of migrating knowledge, systems, and operating capabilities between the two sides.[81]
In the area of call-center outsourcing, especially when combined with offshoring,[82] agents may speak with different linguistic features such as accents, word use and phraseology, which may impede comprehension.[83][84][85][86]
In 1979, Nobel laureate Oliver E. Williamson wrote that the governance structure is the "framework within which the integrity of a transaction is decided", and that "because contracts are varied and complex, governance structures vary with the nature of the transaction".[87] University of Tennessee researchers have been studying complex outsourcing relationships since 2003. Emerging thinking regarding strategic outsourcing is focusing on creating a contract structure in which the parties have a vested interest in managing what are often highly complex business arrangements in a more collaborative, aligned, flexible, and credible way.[88][89]
Reduced security, sometimes related to lower loyalty,[90] may occur even when 'outsourced' staff change their legal status but not their desk. While security and compliance issues are supposed to be addressed through the contract between the client and the suppliers, fraud cases have been reported.
In April 2005, a high-profile case involved the theft of $350,000 from four Citibank customers when call-center workers acquired the passwords to customer accounts and transferred the money to their own accounts opened under fictitious names. Citibank did not find out about the problem until the American customers noticed discrepancies with their accounts and notified the bank.[91]
Richard Baldwin's 2006 work The Great Unbundling was followed in 2012 by Globalization's Second Acceleration (the Second Unbundling) and in 2016 by The Great Convergence: Information Technology and the New Globalization.[92] It is here, rather than in manufacturing, that the bits economy can advance in ways that the economy of atoms and things cannot: in the early 1990s, Newsweek ran a half-page cartoon showing someone who had just ordered a pizza online and was seeking help to download it.[citation needed]
Step-in rights allow the client or a nominated third party the right to step in and intervene, in particular to directly operate the outsourced services or to appoint a new operator. Circumstances where step-in rights may be contractually invoked may include supplier insolvency, a force majeure event which prevents or impedes the outsourced service provision, where the client believes that there is a substantial risk to the provision of the services, or where performance fails to meet a defined critical level of service.[93] Suitable clauses in a contract may provide for the outsourced service provider to pay any additional costs which are faced by the client and specify that the provider's obligation to provide the services is annulled or suspended.[94]
If a contract has a clause granting step-in rights,[95] then there is a right, though not an obligation,[96] to take over a task that is not going well, or even the entire project. When and how are important: "What is the process for stepping in" must be clearly defined in the collateral warranty.[97]
An example of hesitancy about exercising this right was reported by the BBC in 2018, when Wealden District Council in East Sussex was "considering exercising 'step in rights' on its waste collection contract with Kier" due to issues of poor service.[98] After some discussion in this case, a "recovery plan" was agreed with the contractor, so the step-in rights were not actually exercised.[99]
Stabler notes that in the event that step-in rights are taken up, it is important to establish which elements of a process are business-critical and ensure these are made top priority when implementing the step-in.[93]
A number of outsourcings and offshorings that were deemed failures[100][101][66] led to reversals[102][103] signaled by the use of terms such as insourcing and reshoring. The New York Times reported in 2017 that IBM "plans to hire 25,000 more workers in the United States over the next four years," overlapping India-based Infosys's "10,000 workers in the United States over the next two years."[103] A clue that a tipping point had been reached was a short essay titled "Maybe You Shouldn't Outsource Everything After All"[104] and the longer "That Job Sent to India May Now Go to Indiana."
Among the problems encountered were supply-and-demand-induced raises in salaries and the lost benefit of similar time zones. Other issues were differences in language and culture.[103][84] Another reason for a decrease in outsourcing is that many jobs that were subcontracted abroad have been replaced by technological advances.[105]
According to a 2005 Deloitte Consulting survey, a quarter of the companies which had outsourced tasks reversed their strategy.[105]
These reversals, however, did not undo the damage. New factories often:
Public opinion in the U.S. and other Western powers opposing outsourcing was particularly strengthened by the drastic increase in unemployment as a result of the 2007–2008 financial crisis. From 2000 to 2010, the U.S. experienced a net loss of 687,000 jobs due to outsourcing, primarily in the computers and electronics sector. Public disenchantment with outsourcing has not only stirred political responses, as seen in the 2012 U.S. presidential campaigns, but has also made companies more reluctant to outsource or offshore jobs.[105]
A counterswing depicted by a 2016 Deloitte survey suggested that companies are no longer reluctant to outsource.[107] Deloitte's survey identified three trends:
Insourcing is the process of reversing an outsourcing, possibly using help from those not currently part of the in-house staff.[108][109][110] Some authors call this backsourcing,[111] reserving the term insourcing to refer simply to conducting certain activities in-house.
Outsourcing has gone through many iterations and reinventions, and some outsourcing contracts have been partially or fully reversed. Often the reason is to maintain control of critical production or competencies, and insourcing is used to reduce costs of taxes, labor and transportation.[112]Sometimes there are problems with the outsourcing agreements, because of the pressure to bring jobs back to their home country, or simply because it has stopped being efficient to outsource particular tasks.[113]
Studies conducted at companies confirm the positive impact of using insourcing on financial performance.[114]
Regional insourcing, a related term, takes place when a company assigns work to a subsidiary that is within the same country. This differs from onshoring and reshoring, which may be either inside or outside the company. For this process, a company establishes satellite locations for specific entities of their business, making use of advantages one state may have over another, such as taxes, education, or workforce skill sets.[115] This concept focuses on the delegating or reassigning of procedures, functions, or jobs from production within a business in one location to another internal entity that specializes in that operation. This allows companies to streamline production, boost competency, and increase their bottom line.
This competitive strategy applies the classical argument of Adam Smith, which posits that two nations would benefit more from one another by trading the goods that they are more proficient at manufacturing.[116][117]
To those who are concerned that nations may be losing a net number of jobs due to outsourcing, some[118] point out that insourcing also occurs. A 2004 study[119] found that in the U.S., the UK, and many other industrialized countries, more jobs are insourced than outsourced. The New York Times disagreed, and wrote that free trade with low-wage countries is win-lose for many employees who find their jobs offshored or with stagnating wages.[120]
The impact of offshore outsourcing, according to two estimates published by The Economist, was unequal over the period studied (2004 to 2015), ranging from 150,000 to as high as 300,000 jobs lost per year.[121]
In 2010, a group of manufacturers started the Reshoring Initiative, focusing on bringing manufacturing jobs for American companies back to the country. Their data indicated that
140,000 American jobs were lost in 2003 due to offshoring. Eleven years later in 2014, the U.S. recovered 10,000 of those offshored positions; this marked the highest net gain in 20 years.[122] More than 90% of the jobs that American companies "offshored" and outsourced manufacturing to low-cost countries such as China, Malaysia and Vietnam did not return.[122]
The fluctuation of prefixes and names gives rise to many more "cross-breeds" of insourcing. For example, "offshore insourcing" is when companies set up their own "captive" process centers overseas, sometimes called a Captive Service,[123] taking advantage of their cheaper surroundings while maintaining control of their back-office work and business processes.[124] "Remote insourcing" refers to hiring developers to work in-house from virtual (remote) facilities.[125]
A 2012 series of articles in The Atlantic[126][127][128][129] highlighted a turning of the tide for parts of the U.S.'s manufacturing industry. Specific causes identified include rising third-world wages, recognition of hidden off-shoring costs, innovations in design/manufacture/assembly/time-to-market, increasing fuel and transportation costs, falling energy costs in the U.S., increasing U.S. labor productivity, and union flexibility. Hiring at GE's giant Appliance Park in Louisville, Kentucky, increased 90% during 2012.
More than one company uses a "100% U.S.-based" phrase, whether within or outside their envelopes. "100% US-based customer service available 24/7" is how, in 2024, Business Insider described the expectations of some customers.[130]
From the standpoint of labor, outsourcing may represent a new threat, contributing to worker insecurity, and is reflective of the general process of globalization and economic polarization.[131]
Western governments may attempt to compensate workers affected by outsourcing through various forms of legislation. In Europe, the Acquired Rights Directive attempts to address the issue. The directive is implemented differently in different nations. In the U.S., the Trade Adjustment Assistance Act is meant to provide compensation for workers directly affected by international trade agreements. Whether or not these policies provide the security and fair compensation they promise is debatable.
In response to the recession, U.S. president Barack Obama launched the SelectUSA program in 2011. In January 2012, Obama issued a Call to Action to Invest in America at the White House "Insourcing American Jobs" Forum.[136] Obama met with representatives of Otis Elevator, Apple, DuPont, Master Lock, and others which had recently brought jobs back or made significant investments in the U.S.
Governments may legislate to authorise the outsourcing of specific functions or the work of specific government agencies. For example, in the United Kingdom, the Social Security Administration Act 1992 (as amended) authorises the contracting-out of work-focussed interviews and documentary work,[137] and the Contracting Out of Functions (Tribunal Staff) Order 2009 authorises the contracting-out of tribunals' administrative work.[138]
A main feature of outsourcing influencing policy-making is the unpredictability it generates, including its defense and military ramifications,[139] regarding the future of any particular sector or skill group. The uncertainty of future conditions influences governance approaches to different aspects of long-term policies.
In particular, a distinction is needed between
Governance that attempts to adapt to the changing environment will facilitate growth and a stable transition to new economic structures,[140] at least until those economic structures become detrimental to the social, political and cultural structures.
Automation increases output and allows for reduced cost per item. When these changes are not well synchronized, unemployment or underemployment is a likely result. When transportation costs remain unchanged, the negative effect may be permanent;[106] jobs in protected sectors may no longer exist.[141]
Studies suggest that the effect of U.S. outsourcing on Mexico is that for every 10% increase in U.S. wages, northern Mexican cities along the border experienced wage rises of 2.5%, about 0.69% higher than in inner cities.[142]
By contrast, higher rates of saving and investment in Asian countries, along with rising levels of education, studies suggest, fueled the 'Asian miracle' rather than improvements in productivity and industrial efficiency. There was also an increase in patenting and research and development expenditures.[143]
Outsourcing results from an internationalization of labor markets as more tasks become tradable. According to leading economist Greg Mankiw, the labour market functions under the same forces as the market for goods, with the underlying implication that the greater the number of tasks available to be moved, the better for efficiency under the gains from trade. With technological progress, more tasks can be offshored at different stages of the overall corporate process.[144]
The tradeoffs are not always balanced, and a 2004 observer of the situation said "the total number of jobs realized in the United States from insourcing is far less than those lost through outsourcing."[145]
Import competition has caused a de facto 'race to the bottom' where countries lower environmental regulations to secure a competitive edge for their industries relative to other countries.
As Mexico competes with China over Canadian and American markets, its national Commission for Environmental Cooperation has not been active in enacting or enforcing regulations to prevent environmental damage from increasingly industrialized Export Processing Zones. Similarly, since the signing of the North American Free Trade Agreement, heavy industries have increasingly moved to the U.S., which has a comparative advantage due to its abundant presence of capital and well-developed technology. A further example of environmental de-regulation with the objective of protecting trade incentives has been the numerous exemptions to carbon taxes in European countries during the 1990s.
Although outsourcing can influence environmental de-regulatory trends, the added cost of preventing pollution does not significantly determine trade flows or industrialization.[146]
Companies such as ET Water Systems (now a Jain Irrigation Systems company),[147] GE Appliances and Caterpillar found that, with rising labor costs in Japan and China plus the cost of shipping and customs fees, it cost only about 10% more to manufacture in America.[105] Advances in technology and automation such as 3D printing technologies[148] have made bringing manufacturing back to the U.S. both cost-effective and possible. Adidas, for example, plans to produce highly customized shoes with 3D printers in the U.S.[149]
Outsourcing has contributed to further levelling of global inequalities as it has led to general trends of industrialization in the Global South and deindustrialization in the Global North.[150]
Not all manufacturing should return to the U.S.[151] The rise of the middle class in China, India and other countries has created markets for the products made in those countries. Just as the U.S. has a Made in USA program, other countries support products being made domestically. Localization, the process of manufacturing products for the local market, is an approach to keeping some manufacturing offshore and bringing some of it back. Besides the cost savings of manufacturing closer to the market, the lead time for adapting to changes in the market is faster.
The rise in industrial efficiency which characterized development in developed countries has occurred as a result of labor-saving technological improvements. Although these improvements do not directly reduce employment levels but rather increase output per unit of work, they can indirectly diminish the amount of labor required for fixed levels of output.[152]
It has been suggested that "workers require more education and different skills, working with software rather than drill presses" rather than rely on limited growth labor requirements for non-tradable services.[106]
The main driver for offshoring development work has been the greater availability of developers at a lower cost than in the home country. However, the rise in offshore development has taken place in parallel with an increased awareness of the importance of usability and the user experience in software. Outsourced development poses special problems: the more formal, contractual relationship between supplier and client, and the geographical separation, place greater distance between the developers and users, which makes it harder to reflect the users' needs in the final product. This problem is exacerbated if the development is offshore. Further complications arise from cultural differences, which apply even if the development is carried out by an in-house offshore team.[153]
Historically, offshore development concentrated on back-office functions but, as offshoring has grown, a wider range of applications has been developed. Offshore suppliers have had to respond to the commercial pressures arising from usability issues by building up their usability expertise. Indeed, this problem has presented an attractive opportunity to some suppliers to move up market and offer higher-value services.[154][155][156]
Offshore software R&D means that company A turns over responsibility, in whole or in part, for in-house software development to company B, whose location is outside of company A's national jurisdiction. Maximizing the economic value of an offshore software development asset critically depends on understanding how best to use the available forms of legal regulation to protect intellectual property rights. If the vendor cannot be trusted to protect trade secrets, then the risks of offshoring software development may outweigh its potential benefits. Hence, it is critical to review the intellectual property policy of the potential offshoring supplier. The intellectual property protection policy of an offshore software development company must be reflected in these crucial documents: the general agreement, the non-disclosure agreement, and the employee confidentiality contract.[157]
As forecast in 2003,[158] R&D is outsourced. Ownership of intellectual property by the outsourcing company, despite outside development, was the goal. To defend against tax-motivated cost-shifting, the U.S. government passed regulations in 2006 to make outsourcing research harder.[159] Despite many R&D contracts given to Indian universities and labs, only some research solutions were patented.[160]
While Pfizer moved some of its R&D from the UK to India,[161] a Forbes article suggested that it is increasingly more dangerous to offshore IP-sensitive projects to India, because of India's continued ignorance of patent regulations.[162] In turn, companies such as Pfizer and Novartis have lost rights to sell many of their cancer medications in India because of lack of IP protection.
A 2018 University of Chicago Law School article titled "The Future of Outsourcing" begins with "The future of outsourcing is digital."[50] According to other sources, the "Do what you do best and outsource the rest"[21] approach means that "integration with retained systems"[50] is the new transition challenge; training people still occurs, but is merely an "also".
There is more complexity than before, especially when the outside company may be an integrator.[50]
While the pool of technically skilled labor grows in India, Indian offshore companies are increasingly tapping into the skilled labor already available in Eastern Europe to better address the needs of the Western European R&D market.[163][citation needed]
Protection of some data involved in outsourcing, such as data about patients (covered by HIPAA), is one of the few federal protections.
"Outsourcing" is a continuing political issue in the U.S., having been conflated with offshoring during the2004 U.S. presidential election. The political debate centered on outsourcing's consequences for the domestic U.S. workforce.DemocraticU.S. presidential candidateJohn Kerrycalled U.S. firms that outsource jobs abroad or that incorporate overseas intax havensto avoid paying their "fair share" ofU.S. taxes"Benedict Arnoldcorporations".
A Zogby International August 2004 poll found that 71% of American voters believed "outsourcing jobs overseas" hurt the economy while another 62% believed that the U.S. government should impose some legislative action against these companies, possibly in the form of increased taxes.[164][165] President Obama promoted the Bring Jobs Home Act to help reshore jobs by using tax cuts and credits for moving operations back to the U.S.[166][167] The same bill was reintroduced in the 113th U.S. Congress.[168][169]
While labor advocates claim union busting as one possible cause of outsourcing,[170] another claimed cause is the high corporate income tax rate in the U.S. relative to other OECD nations,[171][172][needs update] and the practice of taxing revenues earned outside of U.S. jurisdiction, a very uncommon practice. Some counterclaim that the actual taxes paid by U.S. corporations may be considerably lower than "official" rates due to the use of tax loopholes, tax havens, and "gaming the system".[173][174]
Sarbanes–Oxley has also been cited as a factor.[citation needed]
The U.S. has a special visa, the H-1B, which enables American companies to temporarily (up to three years, or by extension, six) hire foreign workers to supplement their employees or replace those holding existing positions. In hearings on this matter, a U.S. senator called these "their outsourcing visa".[175]
The European Council's Directive 77/187 of 14 February 1977 protects employees' rights in the event of transfers of undertakings, businesses or parts of businesses (as amended by Directive 98/50/EC of 29 June 1998 and Directive 2001/23 of 12 March 2001). Rights acquired by employees with the former employer are to be safeguarded when they, together with the undertaking in which they are employed, are transferred to another employer, i.e., the contractor.
Cases subsequent to the European Court of Justice's Christel Schmidt v. Spar- und Leihkasse der früheren Ämter Bordesholm, Kiel und Cronshagen, Case C-392/92 [1994] have disputed whether a particular contracting-out exercise constituted a transfer of an undertaking (see, for example, Ayse Süzen v. Zehnacker Gebäudereinigung GmbH Krankenhausservice, Case C-13/95 [1997]). In principle, employees may benefit from the protection offered by the directive.
Countries which have been the focus of outsourcing include India and the Philippines for American and European companies, and China and Vietnam for Japanese companies.
The Asian IT service market is still in its infancy, but in 2008 industry think tank Nasscom-McKinsey predicted a $17 billion IT service industry in India alone.[176]
A China-based company, Lenovo, outsourced/reshored manufacturing of some time-critical customized PCs to the U.S., since "If it made them in China they would spend six weeks on a ship."[105]
Article 44 of Japan's Employment Security Act implicitly bans the supply of domestic or foreign workers by unauthorized companies, regardless of their operating locations. The law applies if at least one of the supplier, the client, or the workers resides in Japan, and if the workers are an integral part of the chain of command of the client company or the supplier.
Victims can lodge a criminal complaint against the CEOs of the suppliers and clients. The CEO risks arrest, and the Japanese company may face a private settlement with a financial package in the range of 20 to 100 million JPY (US$200,000 to US$1 million).
Print and mail outsourcing is the outsourcing of document printing and distribution.
The Print Services & Distribution Association was formed in 1946, and its members provide services that today might be described as outsourcing. Similarly, members of the Direct Mail Marketing Association (established 1917) were the "outsourcers" for advertising agencies and others doing mailings.
The term "outsourcing" became very common in the print and mail business during the 1990s, and later expanded to be very broad and inclusive of most any process by 2000. Today, there are web based print to mail solutions for small to mid-size companies which allow the user to send one to thousands of documents into the mail stream, directly from a desktop or web interface.[179]
The term outsource marketing has been used in Britain to mean the outsourcing of the marketing function.[180] The motivation for this has been:
While much of this work is the "bread and butter" of specialized departments within advertising agencies, sometimes specialists are used, such as when The Guardian outsourced most of its marketing design in May 2010.[185]
Business process outsourcing (BPO) is a subset of outsourcing that involves the contracting of the operations and responsibilities of a specific business process to a third-party service provider. Originally, this was associated with manufacturing firms, such as Coca-Cola, which outsourced large segments of its supply chain.[186]
BPO is typically categorized into back office and front office outsourcing.[187] BPO can help a business remain competitive and efficient by leveraging the expertise of other companies that are more specialized in certain functions.[188]
BPO can be offshore outsourcing, near-shore outsourcing to a nearby country, or onshore outsourcing to the same country. Information technology-enabled services (ITES-BPO),[189] knowledge process outsourcing (KPO) and legal process outsourcing (LPO), a.k.a. legal outsourcing, are some of the sub-segments of BPO.
Although BPO began as a cost-reducer, changes (specifically the move to more service-based rather than product-based contracts) mean that companies now increasingly choose to outsource their back office for time flexibility and direct quality control.[190] Business process outsourcing enhances the flexibility of an organization in different ways:
BPO vendor charges are project-based or fee-for-service, using business models such as remote in-sourcing or similar software development and outsourcing models.[191][192] This can help a company to become more flexible by transforming fixed into variable costs.[193] A variable cost structure helps a company respond to changes in required capacity and does not require a company to invest in assets, thereby making the company more flexible.[194]
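The fixed-versus-variable tradeoff can be sketched with a toy cost comparison (all figures below are hypothetical, not drawn from any cited study): an in-house function carries a fixed cost regardless of volume, while a fee-for-service BPO contract scales with demand.

```python
def in_house_cost(units: int) -> float:
    """Hypothetical in-house model: a large fixed cost (staff, systems)
    plus a small per-unit processing cost."""
    fixed, per_unit = 100_000.0, 2.0
    return fixed + per_unit * units

def bpo_cost(units: int) -> float:
    """Hypothetical fee-for-service BPO model: cost is purely variable."""
    fee_per_unit = 6.0
    return fee_per_unit * units

# With these illustrative numbers the break-even point is
# 100,000 / (6 - 2) = 25,000 units: below it the variable-cost
# BPO contract is cheaper, above it the fixed-cost in-house model wins.
for units in (10_000, 25_000, 50_000):
    print(units, in_house_cost(units), bpo_cost(units))
```

The point is not the specific numbers but the shape: a variable cost structure lets spending track demand without upfront investment in assets.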
BPO also permits focusing on a company's core competencies.[195]
Supply chain management with effective use of supply chain partners and business process outsourcing can increase the speed of several business processes.[186]
Even various contractual compensation strategies may leave the company with a new "single point of failure" (where even an after-the-fact payment is not enough to offset "complete failure of the customer's business").[196] Unclear contractual issues are not the only risks; there are also changing requirements and unforeseen charges, failure to meet service levels, and a dependence on the BPO provider which reduces flexibility. The latter is called lock-in; flexibility may be lost due to penalty clauses and other contract terms.[197] Also, the selection criteria may seem vague and undifferentiated.[198]
Security risks can arise both from the perspective of physical communication and from a privacy perspective. Employee attitudes may change, and the company risks losing independence.[199][200]
Risks and threats of outsourcing must therefore be managed to achieve any benefits. In order to manage outsourcing in a structured way, maximizing positive outcomes and minimizing risks while avoiding threats, a business continuity management (BCM) model is set up. BCM consists of a set of steps to successfully identify, manage and control the business processes that are, or can be, outsourced.[201]
Analytic hierarchy process (AHP) is a framework of BPO focused on identifying potentially outsourceable information systems.[202] L. Willcocks, M. Lacity and G. Fitzgerald identify several contracting problems companies face, ranging from unclear contract formatting to a lack of understanding of technical IT processes.[203]
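AHP ranks alternatives by pairwise comparison: judgments of the form "process A is three times more suitable for outsourcing than process B" are placed in a matrix, and the matrix's principal eigenvector gives the priority weights. A minimal sketch, using an entirely hypothetical comparison matrix for three invented candidate processes:

```python
# Hypothetical pairwise-comparison matrix for three candidate processes
# (payroll, help desk, data entry): A[i][j] is how much more
# outsourceable process i is judged to be than process j.
A = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

def ahp_weights(matrix, iterations=100):
    """Approximate the principal eigenvector by power iteration,
    normalized so the weights sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

weights = ahp_weights(A)
for name, w in zip(["payroll", "help desk", "data entry"], weights):
    print(f"{name}: {w:.3f}")
```

Here "payroll" comes out with the largest weight, i.e. the strongest outsourcing candidate under these invented judgments; a full AHP analysis would also check the matrix's consistency ratio before trusting the ranking.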
Industry analysts have identified robotic process automation (RPA) software, and in particular the enhanced self-guided RPAAI based on artificial intelligence, as a potential threat to the industry[204][205] and speculate as to the likely long-term impact.[206] In the short term, however, there is likely to be little impact as existing contracts run their course: it is only reasonable to expect demand for cost efficiency and innovation to result in transformative changes at the point of contract renewals. With the average length of a BPO contract being 5 years or more[207] – and many contracts being longer – this hypothesis will take some time to play out.
On the other hand, an academic study by the London School of Economics was at pains to counter the so-called 'myth' that RPA will bring back many jobs from offshore.[208] One possible argument behind such an assertion is that new technology provides new opportunities for increased quality, reliability, scalability and cost control, thus enabling BPO providers to increasingly compete on an outcomes-based model rather than competing on cost alone. With the core offering potentially changing from a "lift and shift" approach based on fixed costs to a more qualitative, service-based and outcomes-based model, there is perhaps a new opportunity to grow the BPO industry with a new offering.
One estimate of the worldwide BPO market from the BPO Services Global Industry Almanac 2017, puts the size of the industry in 2016 at about US$140 billion.[209]
India, China and the Philippines are major powerhouses in the industry. In 2017, in India, the BPO industry generated US$30 billion in revenue according to the national industry association.[210]The BPO industry is a small segment of the total outsourcing industry in India. The BPO industry workforce in India is expected to shrink by 14% in 2021.[211]
The BPO industry and IT services industry in combination were worth a total of US$154 billion in revenue in 2017.[212] The BPO industry in the Philippines generated $26.7 billion in revenues in 2020,[213] while around 700,000 medium- and high-skill jobs were expected to be created by 2022.[214]
In 2015, official statistics put the size of the total outsourcing industry in China, including not only the BPO industry but also IT outsourcing services, at $130.9 billion.[215]
https://en.wikipedia.org/wiki/Information_technology_outsourcing
A managed service company (MSC) is a form of company structure in the United Kingdom designed to reduce the individual tax liabilities of the directors and shareholders.
This structure is largely born from the IR35 legislation of 1999, which came into force in 2000. In an MSC, workers are appointed as shareholders and may also be directors. As shareholders, they can then receive minimum salary payments and the balance of income as dividends. Usually, the service provider would perform administrative and company secretary duties and offer basic taxation advice.
This structure became popular with independent contractors and was used as a way of earning high net returns (up to 85% of gross)[citation needed] compared to PAYE, with few corporate responsibilities. In return, the providers charged a fee for delivering the service. To work within this form, workers must usually pass IR35 tests to ensure they can make dividend payments.
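The attraction of the salary-plus-dividends split can be illustrated with a deliberately simplified calculation. All rates and figures below are hypothetical placeholders, not actual UK tax rates for any year:

```python
def net_via_paye(gross, income_tax=0.40, national_insurance=0.12):
    """Hypothetical flat-rate treatment of the whole gross as salary.
    (Real PAYE uses bands and allowances; these rates are invented.)"""
    return gross * (1 - income_tax - national_insurance)

def net_via_msc(gross, salary=8_000, corp_tax=0.19, dividend_tax=0.075):
    """Hypothetical MSC treatment: a minimum salary (assumed untaxed here
    for simplicity), with the remainder paid as dividends out of
    post-corporation-tax profit."""
    dividends = (gross - salary) * (1 - corp_tax)
    return salary + dividends * (1 - dividend_tax)

gross = 100_000
print(f"PAYE net: {net_via_paye(gross):,.0f}")
print(f"MSC net:  {net_via_msc(gross):,.0f}")
```

Under these invented rates the MSC route retains noticeably more of the gross, which is the effect the structure traded on; the real comparison is far more involved (bands, allowances, IR35 status), which is precisely why providers bundled the administration.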
In December 2006 the UK Treasury/HMRC introduced draft legislation, "Tackling Managed Service Legislation", which sought to address the use of "composite" structures to avoid income tax and National Insurance on forms of trading that the Treasury deemed akin to employment. After a period of consultation and re-drafting, the new legislation became law in April 2007, with additional aspects coming into force in August 2007 and fully in January 2008. A PAYE umbrella company is effectively exempted from the legislation, which also seeks to pass the possible burden of unpaid debt (should a provider "collapse" a structure) to interested parties, e.g. a recruitment agency that has been deemed to encourage or facilitate the scheme.
Several MSC providers have since withdrawn from the market and have either converted to PAYE operations or sought to become seen as true accountants rather than scheme promoters.
Managed service companies (MSC) differ from personal service companies (PSC) as the MSC manages and controls the affairs of the business, not the contractor.
The 2007 Budget legislated against Managed Service Companies (MSCs) by removing the associated tax advantages for contractors working through them. Prior to this government action, there were several types of MSCs.
One of the most common forms was the composite company, where typically up to 20 contractors became non-director shareholders. These contractors received a low salary and expenses, with the remainder paid as dividends. This method of remuneration offered significant financial benefits, as it avoided the payment of national insurance contributions and income tax that would otherwise have been due if the contractor was paid entirely under PAYE (salary).
HMRC became increasingly frustrated with the use of MSCs. When investigated, these companies could quickly liquidate (as they held no assets) and begin trading under a new company name the very next day. Following the MSC legislation, it is now the responsibility of an MSC provider to correctly operate PAYE and deduct the necessary tax and national insurance contributions on all income paid to a subcontractor.
To strengthen this law, the government has allowed the recovery of any underpaid taxes from relevant third parties, primarily those behind the MSC as well as connected or controlling parties.
Some companies still offer variations on these schemes, so it can be confusing to a contractor to know what is legal and what is not. The simplest way[according to whom?]to operate compliantly is to work for one's own PSC, and not delegate control or key decisions to a third party supplier.
https://en.wikipedia.org/wiki/Managed_service_company
Managed private cloud (also known as "hosted private cloud" or "single-tenant SaaS") refers to a principle in software architecture where a single instance of the software runs on a server, serves a single client organization (tenant), and is managed by a third party. The third-party provider is responsible for providing the hardware for the server and also for preliminary maintenance. This is in contrast to multitenancy, where multiple client organizations share a single server, or an on-premises deployment, where the client organization hosts its software instance.
Managed private clouds also fall under the larger umbrella of cloud computing.
The need for private clouds arose due to enterprises requiring a dedicated service and infrastructure for their cloud computing needs, such as for business-critical operations, improved security, and better control over their resources. Managed private cloud adoption has been on the rise[1] among organizations that require a dedicated cloud environment but prefer to avoid the management, maintenance, and future upgrade costs of the associated infrastructure and services. Such operational costs are unavoidable in on-premises private cloud data centers.
A managed private cloud cuts down on upkeep costs by outsourcing infrastructure management and maintenance to the managed cloud provider. It is easier to integrate an organization's existing software, services, and applications into a dedicated cloud hosting infrastructure which can be customized to the client's needs instead of a public cloud platform, whose hardware or infrastructure/software platform cannot be individualized to each client.[2]
Customers who choose a managed private cloud deployment usually do so out of a desire for efficient cloud deployment, but also because they need service customization or integration available only in a single-tenant environment.
This chart shows the key benefits[3]of the different types of deployments, and shows the overlap between these cloud solutions.
This chart shows key drawbacks.
Since deployments are done in a single-tenant environment, it is usually cost-prohibitive for small and medium-sized businesses. While server upkeep and maintenance are handled by the service provider, including network management and security, the client is charged for all such services. It is up to the potential client to determine if a managed private cloud solution aligns with their business objectives and budget. While the service provider maintains the upkeep of servers, network, and platform infrastructure, sensitive data is typically not stored on managed private clouds as it may leave business-critical information prone to breaches via third-party attacks on the cloud service provider.
Common customizations[4]and integrations include:
Software companies have taken a variety of strategies in the managed private cloud realm. Some software organizations, such as Microsoft, have provided managed private cloud options internally. Companies that offer an on-premises deployment option, by definition, enable third-party companies to market managed private cloud solutions. A few managed private cloud service providers are:
|
https://en.wikipedia.org/wiki/Managed_private_cloud
|
Remote monitoring and management (RMM) is the process of supervising and controlling IT systems (such as network devices, desktops, servers and mobile devices) by means of locally installed agents that can be accessed by a management service provider.[1][2]
Functions include the ability to:
Traditionally this function has been done on site at a company, but many MSPs now perform it remotely using integrated Software as a Service (SaaS) platforms.
|
https://en.wikipedia.org/wiki/Remote_monitoring_and_management
|
A service is an act or use for which a consumer, company, or government is willing to pay.[1] Examples include work done by barbers, doctors, lawyers, mechanics, banks, insurance companies, and so on. Public services are those that society (nation state, fiscal union or region) as a whole pays for. Using resources, skill, ingenuity, and experience, service providers benefit service consumers. Services may be defined as intangible acts or performances whereby the service provider provides value to the customer.
Services have three key characteristics:[2]
Services are by definition intangible. They are not manufactured, transported or stocked.
One cannot store services for future use. They are produced and consumed simultaneously.
Services are perishable in two regards:
The service provider must deliver the service at the exact time of service consumption. The service is not manifested in a physical object that is independent of the provider. The service consumer is also inseparable from service delivery. Examples: The service consumer must sit in the hairdresser's chair, or in the airplane seat. Correspondingly, the hairdresser or the pilot must be in the shop or plane, respectively, to deliver the service.
Each service is unique. It can never be exactly repeated, as the time, location, circumstances, conditions, current configurations or assigned resources are different for the next delivery, even if the same service is requested by the consumer. Many services are regarded as heterogeneous and are typically modified for each service-consumer or for each service-context.[2] Example: The taxi service which transports the service consumer from home to work is different from the taxi service which transports the same service consumer from work to home – another point in time, the other direction, possibly another route, probably another taxi-driver and cab. Another and more common term for this is heterogeneity.[citation needed]
Mass generation and delivery of services must be mastered for a service provider to expand. This can be seen as a problem of service quality. Both inputs and outputs to the processes involved in providing services are highly variable, as are the relationships between these processes, making it difficult to maintain consistent service quality. Many services involve variable human activity, rather than a precisely determined process; exceptions include utilities. The human factor is often the key success factor in service provision. Demand can vary by season, time of day, business cycle, etc. Consistency is necessary to create enduring business relationships.
Any service can be clearly and completely, consistently and concisely specified by means of standard attributes that conform to the MECE principle (mutually exclusive, collectively exhaustive).
The delivery of a service typically involves six factors:
The service encounter is defined as all activities involved in the service delivery process. Some service managers use the term "moment of truth" to indicate that point in a service encounter where interactions are most intense.[citation needed]
Many business theorists view service provision as a performance or act (sometimes humorously referred to as dramalurgy, perhaps in reference to dramaturgy). The location of the service delivery is referred to as the stage, and the objects that facilitate the service process are called props. A script is a sequence of behaviors followed by those involved, including the client(s). Some service dramas are tightly scripted, others are more ad lib. Role congruence occurs when each actor follows a script that harmonizes with the roles played by the other actors.[citation needed]
In some service industries, especially health care, dispute resolution and social services, a popular concept is the idea of the caseload, which refers to the total number of patients, clients, litigants, or claimants for which a given employee is responsible. Employees must balance the needs of each individual case against the needs of all other current cases as well as their own needs.[citation needed]
Under English law, if a service provider is induced to deliver services to a dishonest client by a deception, this is an offence under the Theft Act 1978.[citation needed]
Lovelock used the number of delivery sites (whether single or multiple) and the method of delivery to classify services in a 2 × 3 matrix. The implication is that the convenience of receiving the service is lowest when the customer has to come to the service and must use a single or specific outlet. Convenience increases (to a point) as the number of service points increases.[citation needed]
The distinction between a good and a service remains disputed. The perspective in the late-eighteenth and early-nineteenth centuries focused on creation and possession of wealth. Classical economists contended that goods were objects of value over which ownership rights could be established and exchanged. Ownership implied tangible possession of an object that had been acquired through purchase, barter or gift from the producer or previous owner and was legally identifiable as the property of the current owner.
Adam Smith's famous book, The Wealth of Nations, published in 1776, distinguished between the outputs of what he termed "productive" and "unproductive" labor. The former, he stated, produced goods that could be stored after production and subsequently exchanged for money or other items of value. The latter, however useful or necessary, created services that perished at the time of production and therefore did not contribute to wealth. Building on this theme, French economist Jean-Baptiste Say argued that production and consumption were inseparable in services, coining the term "immaterial products" to describe them.
In the modern day, Gustafsson & Johnson describe a continuum with pure service on one terminal point and pure commodity good on the other.[3] Most products fall between these two extremes. For example, a restaurant provides a physical good (the food), but also provides services in the form of ambience, the setting and clearing of the table, etc. And although some utilities actually deliver physical goods — like water utilities that deliver water — utilities are usually treated as services.[citation needed]
The following is a list of service industries, grouped into sectors. Parenthetical notations indicate how specific occupations and organizations can be regarded as service industries to the extent they provide an intangible service, as opposed to a tangible good.
|
https://en.wikipedia.org/wiki/Service_(economics)
|
A service provider (SP) is an organization that provides services, such as consulting, legal, real estate, communications, storage, and processing services, to other organizations. Although a service provider can be a sub-unit of the organization that it serves, it is usually a third-party or outsourced supplier. Examples include telecommunications service providers (TSPs), application service providers (ASPs), storage service providers (SSPs), and internet service providers (ISPs).[citation needed] A more traditional term is service bureau.
IT professionals sometimes differentiate between service providers by categorizing them as type I, II, or III.[1] The three service types are recognized by the IT industry, although specifically defined by ITIL and the U.S. Telecommunications Act of 1996.
Type III SPs provide IT services to external customers and consequently can be referred to as external service providers (ESPs),[2] which range from a full IT organization/service outsource via managed services or MSPs (managed service providers) to limited product feature delivery via ASPs (application service providers).[3]
|
https://en.wikipedia.org/wiki/Service_provider
|
Service science, management, and engineering (SSME) is a term introduced by IBM to describe an interdisciplinary approach to the study and innovation of service systems. More precisely, SSME has been defined as the application of science, management, and engineering disciplines to tasks that one organization beneficially performs for and with another. SSME is also a proposed academic discipline and research area that would complement – rather than replace – the many disciplines that contribute to knowledge about service.[1] The interdisciplinary nature of the field calls for a curriculum and competencies to advance the development and contribution of the field of SSME.[2]
Service systems are designed and constructed, are often very large, and, as complex systems, they have emergent properties. This makes them an engineering kind of system (in MIT's terms).[3][4]For instance, large-scale service systems include major metropolitan hospitals, highway or high-rise construction projects, and large IT outsourcing operations in which one company takes over the daily operations of IT infrastructure for another. In all these cases, systems are designed and constructed to provide and sustain service, yet because of their complexity and size, operations do not always go as planned or expected, and not all interactions or results can be anticipated or accurately predicted.
As the world becomes more complex and uncertain socially and economically, a computational thinking approach has been proposed to model the dynamics and adaptiveness of a service system, aimed at fully leveraging today's ubiquitous digitized information, computing capability and computational power so that the service system can be studied qualitatively and quantitatively.[5]
SSME has been used to describe higher education as a service delivered by colleges and universities that are viewed as complex systems.[6]
SSME is often referred to as service science for short.[7] The flagship journal Service Science is published by the professional association INFORMS. The journal publishes innovative and original papers on all topics related to service, including work that crosses traditional disciplinary boundaries.
|
https://en.wikipedia.org/wiki/Service_science,_management_and_engineering
|
A service-level agreement (SLA) is an agreement between a service provider and a customer. Particular aspects of the service – quality, availability, responsibilities – are agreed between the service provider and the service user.[1] The most common component of an SLA is that the services should be provided to the customer as agreed upon in the contract. As an example, Internet service providers and telcos will commonly include service-level agreements within the terms of their contracts with customers to define the level(s) of service being sold in plain language terms. In this case, the SLA will typically have a technical definition of mean time between failures (MTBF), mean time to repair or mean time to recovery (MTTR); identifying which party is responsible for reporting faults or paying fees; responsibility for various data rates; throughput; jitter; or similar measurable details.
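As a worked illustration of how two of these metrics interact, the steady-state availability implied by MTBF and MTTR is MTBF / (MTBF + MTTR). A minimal sketch in Java, with hypothetical figures:

```java
public class Availability {
    // Steady-state availability from mean time between failures (MTBF)
    // and mean time to repair (MTTR), both expressed in the same unit.
    public static double availability(double mtbf, double mttr) {
        return mtbf / (mtbf + mttr);
    }

    public static void main(String[] args) {
        // Hypothetical figures: MTBF = 999 hours, MTTR = 1 hour.
        System.out.println(availability(999.0, 1.0)); // 0.999 ("three nines")
    }
}
```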
A service-level agreement is an agreement between two or more parties, where one is the customer and the others are service providers. This can be a legally binding formal contract or an informal "contract" (for example, internal department relationships). The agreement may involve separate organizations or different teams within one organization. Contracts between the service provider and other third parties are often (incorrectly) called SLAs – because the level of service has been set by the (principal) customer, there can be no "agreement" between third parties; these agreements are simply "contracts."[citation needed] Operational-level agreements or OLAs, however, may be used by internal groups to support SLAs. If some aspect of service has not been agreed upon with the customer, it is not an "SLA".
SLAs commonly include many components, from a definition of services to the termination of agreement.[2] To ensure that SLAs are consistently met, these agreements are often designed with specific lines of demarcation, and the parties involved are required to meet regularly to create an open forum for communication. Rewards and penalties applying to the provider are often specified. Most SLAs also leave room for a periodic (annual) revisitation to make changes.[3]
Since the late 1980s, SLAs have been used by fixed-line telecom operators. SLAs are now so widely used that larger organizations have many different SLAs existing within the company itself. Two different units in an organization script an SLA, with one unit being the customer and another being the service provider. This practice helps to maintain the same quality of service among different units in the organization and also across multiple locations of the organization. Internal scripting of SLAs also helps to compare the quality of service between an in-house department and an external service provider.[4]
The output received by the customer as a result of the service provided is the main focus of the service level agreement.
Service level agreements are also defined at different levels:
A well-defined and typical SLA will contain the following components:[5]
A service-level agreement can track multiple performance metrics. In this context, these metrics are called service level indicators (SLIs). The target value of a given SLI is called a service-level objective (SLO).
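The relationship between an SLI and its SLO can be made concrete with a small sketch, assuming a hypothetical availability indicator measured as the fraction of successful requests (the method names and figures are illustrative, not from any standard):

```java
public class SloCheck {
    // SLI: the measured fraction of successful requests.
    public static double availabilitySli(long good, long total) {
        return (double) good / total;
    }

    // SLO: the target value the SLI is compared against.
    public static boolean meetsSlo(double sli, double sloTarget) {
        return sli >= sloTarget;
    }

    public static void main(String[] args) {
        double sli = availabilitySli(99_950, 100_000); // 0.9995
        System.out.println(meetsSlo(sli, 0.999));      // true
    }
}
```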
In IT-service management, a common case is a call center or service desk. SLAs in such cases usually refer to the following SLIs:
Uptime is also a common metric, often used for data services such as shared hosting, virtual private servers and dedicated servers. Common agreements include percentage of network uptime, power uptime, number of scheduled maintenance windows, etc.
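An uptime percentage translates directly into a downtime budget. For example, 99.9% uptime over a 30-day month permits roughly 43 minutes of downtime; a sketch of the arithmetic:

```java
public class DowntimeBudget {
    // Allowed downtime in minutes for a 30-day month
    // at a given uptime percentage.
    public static double allowedDowntimeMinutes(double uptimePercent) {
        double totalMinutes = 30 * 24 * 60; // 43,200 minutes in 30 days
        return totalMinutes * (100.0 - uptimePercent) / 100.0;
    }

    public static void main(String[] args) {
        System.out.println(allowedDowntimeMinutes(99.9));  // ~43.2 minutes
        System.out.println(allowedDowntimeMinutes(99.99)); // ~4.32 minutes
    }
}
```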
Many SLAs track to the ITIL specifications when applied to IT services.
It is not uncommon for an internet backbone service provider (or network service provider) to explicitly state its SLA on its website.[7][8][9] The U.S. Telecommunications Act of 1996 does not expressly mandate that companies have SLAs, but it does provide a framework for firms to do so in Sections 251 and 252.[10] Section 252(c)(1), for example ("Duty to Negotiate"), requires incumbent local exchange carriers (ILECs) to negotiate in good faith about matters such as resale and access to rights of way.
Emerging technologies such as 5G bring new complexities to network operators. With more stringent SLAs and customer expectations, problem resolution must be prioritized based on impacted subscribers.[11]
With the introduction of 5G network slicing, a 360° view of the 5G slices becomes imperative to deliver premium SLAs and monetize services faster.
For fixed-network subscribers, service modeling appears to be one of the most suitable ways to effectively monitor SLAs and ensure they are met.[12]
A web service level agreement (WSLA) is a standard for service-level agreement compliance monitoring of web services. It allows authors to specify the performance metrics associated with a web service application, desired performance targets, and actions that should be performed when performance is not met.
WSLA Language Specification, version 1.0[13] was published by IBM in 2001.
The underlying benefit of cloud computing is shared resources, supported by the inherently shared nature of the infrastructure environment. Thus, SLAs span across the cloud and are offered by service providers as service-based agreements rather than customer-based agreements. Measuring, monitoring and reporting on cloud performance is based on the end-user experience or the user's ability to consume resources. The downside of cloud computing relative to SLAs is the difficulty of determining the root cause of service interruptions due to the complex nature of the environment.
As applications are moved from dedicated hardware into the cloud, they need to achieve the same or even more demanding levels of service than classical installations. SLAs for cloud services focus on characteristics of the data center and, more recently, include characteristics of the network (see carrier cloud) to support end-to-end SLAs.[14]
Any SLA management strategy considers two well-differentiated phases: negotiating the contract and monitoring its fulfillment in real time. Thus, SLA management encompasses the SLA contract definition: the basic schema with the QoS parameters; SLA negotiation; SLA monitoring; SLA violation detection; and SLA enforcement, according to defined policies.[citation needed]
The main point is to build a new layer upon the grid, cloud, or SOA middleware able to create a negotiation mechanism between the providers and consumers of services. An example is the EU-funded Framework 7 research project SLA@SOI,[15] which is researching aspects of multi-level, multi-provider SLAs within service-oriented infrastructure and cloud computing, while another EU-funded project, VISION Cloud,[16] has provided results concerning content-oriented SLAs.
FP7 IRMOS also investigated aspects of translating application-level SLA terms to resource-based attributes to bridge the gap between client-side expectations and cloud-provider resource-management mechanisms.[17][18]A summary of the results of various research projects in the area of SLAs (ranging from specifications to monitoring, management and enforcement) has been provided by the European Commission.[19]
Outsourcing involves the transfer of responsibility from an organization to a supplier. This new arrangement is managed through a contract that may include one or more SLAs. The contract may involve financial penalties and the right to terminate if any of the SLA metrics are consistently missed. Setting, tracking and managing SLAs is an important part of the outsourcing relationship management (ORM) discipline. Specific SLAs are typically negotiated upfront as part of the outsourcing contract and used as one of the primary tools of outsourcing governance.
In software development, specific SLAs can apply to application outsourcing contracts in line with standards in software quality, as well as recommendations provided by neutral organizations like CISQ, which has published numerous papers on the topic (such as Using Software Measurement in SLAs[20]) that are available to the public.
|
https://en.wikipedia.org/wiki/Service-level_agreement
|
Technical support, commonly shortened to tech support, is a customer service provided to customers to resolve issues, commonly with consumer electronics. It is typically delivered via call centers, online chat and email.[1] Many companies also provide discussion boards for users to support other users, decreasing the load and cost on these companies.[2]
With the increasing use of technology in modern times, there is a growing requirement to provide technical support. Many organizations locate their technical support departments or call centers in countries or regions with lower costs. Dell was amongst the first companies to outsource its technical support and customer service departments to India in 2001.[3] There has also been a growth in companies specializing in providing technical support to other organizations. These are often referred to as MSPs (managed service providers).[4]
For businesses needing to provide technical support, outsourcing allows them to maintain a high availability of service. Such a need may result from peaks in call volumes during the day, periods of high activity due to the introduction of new products or maintenance service packs, or the requirement to provide customers with a high level of service at a low cost to the business. For businesses needing technical support assets, outsourcing enables their core employees to focus more on their work in order to maintain productivity.[5] It also enables them to utilize specialized personnel whose technical knowledge base and experience may exceed the scope of the business, thus providing a higher level of technical support to their employees.
Technical support is often subdivided into tiers, or levels, in order to better serve a business or customer base. The number of levels a business uses to organize its technical support group depends on the business's needs regarding its ability to sufficiently serve its customers or users. The reason for providing a multi-tiered support system instead of one general support group is to provide the best possible service in the most efficient possible manner. Success of the organizational structure depends on the technicians' understanding of their level of responsibility and commitments, their customer response time commitments, and when to appropriately escalate an issue and to which level.[6] A common support structure revolves around a three-tiered technical support system. Remote computer repair is a method for troubleshooting software-related problems via remote desktop connections.[7]
Tier I (or Level 1, abbreviated as T1 or L1) is the first technical support level. The first job of a Tier I specialist is to gather the customer's information and to determine the customer's issue by analyzing the symptoms and figuring out the underlying problem.[6] When analyzing the symptoms, it is important for the technician to identify what the customer is trying to accomplish so that time is not wasted on "attempting to solve a symptom instead of a problem."[6]
Once identification of the underlying problem is established, the specialist can begin sorting through the possible solutions available. Technical support specialists in this group typically handle straightforward and simple problems while "possibly using some kind of knowledge management tool."[8] This includes troubleshooting methods such as verifying physical layer issues, resolving username and password problems, uninstalling/reinstalling basic software applications, verifying proper hardware and software set-up, and assisting with navigating application menus. Personnel at this level have a basic to general understanding of the product or service and may not always have the competency required for solving complex issues.[9] Nevertheless, the goal for this group is to handle 70–80% of the user problems before finding it necessary to escalate the issue to a higher level.[9]
Tier II (or Level 2, abbreviated as T2 or L2) is a more in-depth technical support level than Tier I and therefore costs more, as the technicians are more experienced and knowledgeable on a particular product or service. It is synonymous with level 2 support, support line 2, administrative level support, and various other headings denoting advanced technical troubleshooting and analysis methods. Technicians in this realm of knowledge are responsible for assisting Tier I personnel in solving basic technical problems and for investigating elevated issues by confirming the validity of the problem and searching for known solutions to these more complex issues.[9] However, prior to the troubleshooting process, it is important that the technician review the work order to see what has already been accomplished by the Tier I technician and how long the technician has been working with the particular customer. This is a key element in meeting both customer and business needs, as it allows the technician to prioritize the troubleshooting process and properly manage their time.[6]
If a problem is new and/or personnel from this group cannot determine a solution, they are responsible for elevating this issue to the Tier III technical support group. In addition, many companies may specify that certain troubleshooting solutions be performed by this group to help ensure the intricacies of a challenging issue are solved by providing experienced and knowledgeable technicians. This may include, but is not limited to, onsite installations or replacement of various hardware components, software repair, diagnostic testing, or the utilization of remote control tools to take over the user's machine for the sole purpose of troubleshooting and finding a solution to the problem.[6][10]
Tier III (or Level 3, abbreviated as T3 or L3) is the highest level of support in a three-tiered technical support model, responsible for handling the most difficult or advanced problems. It is synonymous with level 3 support, 3rd line support, back-end support, support line 3, high-end support, and various other headings denoting expert-level troubleshooting and analysis methods. These individuals are experts in their fields and are responsible not only for assisting both Tier I and Tier II personnel, but also for the research and development of solutions to new or unknown issues. Note that Tier III technicians have the same responsibility as Tier II technicians in reviewing the work order and assessing the time already spent with the customer so that the work is prioritized and time is managed efficiently.[6] If at all possible, the technician will work to solve the problem with the customer, as it may become apparent that the Tier I and/or Tier II technicians simply failed to discover the proper solution. Upon encountering new problems, however, Tier III personnel must first determine whether or not to solve the problem and may require the customer's contact information so that the technician can have adequate time to troubleshoot the issue and find a solution.[9] It is typical for a developer or someone who knows the code or backend of the product to be the Tier 3 support person.
In some instances, an issue may be so problematic to the point where the product cannot be salvaged and must be replaced. Such extreme problems are also sent to the original developers for in-depth analysis. If it is determined that a problem can be solved, this group is responsible for designing and developing one or more courses of action, evaluating each of these courses in a test case environment, and implementing the best solution to the problem.[9]
While not universally used, a fourth level often represents an escalation point beyond the organization. L4 support is generally a hardware or software vendor.[11]
A common scam typically involves a cold caller claiming to be from a technical support department of a company like Microsoft. Such cold calls are often made from call centers based in India to users in English-speaking countries, although increasingly these scams operate within the same country. The scammer will instruct the user to download a remote desktop program and, once connected, will use social engineering techniques that typically involve Windows components to persuade the victim that they need to pay for the computer to be fixed, before proceeding to steal money from the victim's credit card.[12]
|
https://en.wikipedia.org/wiki/Technical_support
|
In software development, the programming language Java was historically considered slower than the fastest third-generation typed languages such as C and C++.[1] In contrast to those languages, Java compiles by default to a Java virtual machine (JVM) with operations distinct from those of the actual computer hardware. Early JVM implementations were interpreters; they simulated the virtual operations one by one rather than translating them into machine code for direct hardware execution.
Since the late 1990s, the execution speed of Java programs has improved significantly via the introduction of just-in-time compilation (JIT) (in 1997 for Java 1.1),[2][3][4] the addition of language features supporting better code analysis, and optimizations in the JVM (such as HotSpot becoming the default for Sun's JVM in 2000). Sophisticated garbage collection strategies were also an area of improvement. Hardware execution of Java bytecode, such as that offered by ARM's Jazelle, was explored but not deployed.
The performance of a Java program compiled to Java bytecode depends on how optimally its given tasks are managed by the host Java virtual machine (JVM), and how well the JVM exploits the features of the computer hardware and operating system (OS) in doing so. Thus, any Java performance test or comparison must always report the version, vendor, OS and hardware architecture of the JVM used. Similarly, the performance of the equivalent natively compiled program will depend on the quality of its generated machine code, so the test or comparison must also report the name, version and vendor of the compiler used, and its activated compiler optimization directives.
Many optimizations have improved the performance of the JVM over time. However, although Java was often the firstvirtual machineto implement them successfully, they have often been used in other similar platforms as well.
Early JVMs always interpreted Java bytecodes. This carried a large performance penalty, between a factor of 10 and 20, for Java versus C in average applications.[5] To combat this, a just-in-time (JIT) compiler was introduced into Java 1.1. Due to the high cost of compiling, an added system called HotSpot was introduced in Java 1.2 and was made the default in Java 1.3. Using this framework, the Java virtual machine continually analyses program performance for hot spots which are executed frequently or repeatedly. These are then targeted for optimizing, leading to high-performance execution with a minimum of overhead for less performance-critical code.[6][7] Some benchmarks show a 10-fold speed gain by this means.[8] However, due to time constraints, the compiler cannot fully optimize the program, and thus the resulting program is slower than native code alternatives.[9][10]
Adaptive optimizing is a method in computer science that performs dynamic recompilation of parts of a program based on the current execution profile. With a simple implementation, an adaptive optimizer may simply make a trade-off between just-in-time compiling and interpreting instructions. At another level, adaptive optimizing may exploit local data conditions to optimize away branches and use inline expansion.
A Java virtual machine like HotSpot can also deoptimize code formerly JITed. This allows performing aggressive (and potentially unsafe) optimizations, while still being able to later deoptimize the code and fall back to a safe path.[11][12]
The 1.0 and 1.1 Java virtual machines (JVMs) used a mark-sweep collector, which could fragment the heap after a garbage collection.
Starting with Java 1.2, the JVMs changed to a generational collector, which has a much better defragmentation behaviour.[13] Modern JVMs use a variety of methods that have further improved garbage collection performance.[14]
Compressed Oops allow Java 5.0+ to address up to 32 GB of heap with 32-bit references. Java does not support access to individual bytes, only objects which are 8-byte aligned by default. Because of this, the lowest 3 bits of a heap reference will always be 0. By lowering the resolution of 32-bit references to 8 byte blocks, the addressable space can be increased to 32 GB. This significantly reduces memory use compared to using 64-bit references as Java uses references much more than some languages like C++. Java 8 supports larger alignments such as 16-byte alignment to support up to 64 GB with 32-bit references.[citation needed]
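The arithmetic behind these limits can be sketched briefly: with n-bit references scaled by an alignment of 2^k bytes, the addressable heap is 2^(n+k) bytes. A small Java illustration (the method names are ours, not a JVM API):

```java
public class CompressedOops {
    // Addressable heap in bytes for a reference width (in bits)
    // and an object alignment (in bytes, a power of two).
    public static long addressableBytes(int referenceBits, int alignmentBytes) {
        return (1L << referenceBits) * alignmentBytes;
    }

    public static void main(String[] args) {
        long gib = 1L << 30;
        // 32-bit references, 8-byte alignment: 32 GB of heap.
        System.out.println(addressableBytes(32, 8) / gib);  // 32
        // 32-bit references, 16-byte alignment (Java 8+): 64 GB.
        System.out.println(addressableBytes(32, 16) / gib); // 64
    }
}
```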
Before executing a class, the Sun JVM verifies its Java bytecodes (see bytecode verifier). This verification is performed lazily: classes' bytecodes are only loaded and verified when the specific class is loaded and prepared for use, and not at the beginning of the program. However, as the Java class libraries are also regular Java classes, they must also be loaded when they are used, which means that the start-up time of a Java program is often longer than for C++ programs, for example.
A method named split-time verification, first introduced in the Java Platform, Micro Edition (J2ME), is used in the JVM since Java version 6. It splits the verification of Java bytecode into two phases:[15]
In practice this method works by capturing knowledge that the Java compiler has of class flow and annotating the compiled method bytecodes with a synopsis of the class flow information. This does not makeruntime verificationappreciably less complex, but does allow some shortcuts.[citation needed]
Java is able to managemultithreadingat the language level. Multithreading allows programs to perform multiple processes concurrently, thus improving the performance for programs running oncomputer systemswith multiple processors or cores. Also, a multithreaded application can remain responsive to input, even while performing long running tasks.
However, programs that use multithreading need to take extra care ofobjectsshared between threads, locking access to sharedmethodsorblockswhen they are used by one of the threads. Locking a block or an object is a time-consuming operation due to the nature of the underlyingoperating system-level operation involved (seeconcurrency controlandlock granularity).
As the Java library does not know which methods will be used by more than one thread, the standard library always locksblockswhen needed in a multithreaded environment.
Before Java 6, the virtual machine always locked objects and blocks when asked to by the program, even if there was no risk of an object being modified by two different threads at once. For example, in this case, a local Vector was locked before each of the add operations to ensure that it would not be modified by other threads (Vector is synchronized), but because it is strictly local to the method this is needless:
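The code example referred to above does not survive in this text; a minimal reconstruction along the lines the paragraph describes (the method name and contents are illustrative) might look like:

```java
import java.util.Vector;

public class LocalVectorExample {
    // The Vector is strictly local to this method and never escapes,
    // so the monitor acquired by each synchronized add() call is
    // provably uncontended and can be elided by the JVM (lock elision
    // via escape analysis, available since Java 6).
    public static String getNames() {
        Vector<String> v = new Vector<>();
        v.add("Me");
        v.add("You");
        v.add("Her");
        return v.toString();
    }

    public static void main(String[] args) {
        System.out.println(getNames()); // [Me, You, Her]
    }
}
```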
Starting with Java 6, code blocks and objects are locked only when needed,[16]so in the above case, the virtual machine would not lock the Vector object at all.
Since version 6u23, Java includes support for escape analysis.[17]
Before Java 6, allocation of registers was very primitive in the client virtual machine (register values did not live across blocks), which was a problem on CPU designs with fewer processor registers available, such as x86. If there are no more registers available for an operation, the compiler must copy from register to memory (or memory to register), which takes time (registers are significantly faster to access). However, the server virtual machine used a graph-coloring allocator and did not have this problem.
An optimization of register allocation was introduced in Sun's JDK 6;[18]it was then possible to use the same registers across blocks (when applicable), reducing accesses to the memory. This led to a reported performance gain of about 60% in some benchmarks.[19]
Class data sharing (called CDS by Sun) is a mechanism which reduces the startup time for Java applications, and also reduces memory footprint. When the JRE is installed, the installer loads a set of classes from the system JAR file (the JAR file holding all the Java class library, called rt.jar) into a private internal representation, and dumps that representation to a file, called a "shared archive". During subsequent JVM invocations, this shared archive is memory-mapped in, saving the cost of loading those classes and allowing much of the JVM's metadata for these classes to be shared among multiple JVM processes.[20]
The corresponding improvement in start-up time is more obvious for small programs.[21]
Apart from the improvements listed here, each release of Java introduced many performance improvements in the JVM and Java application programming interface (API).
JDK 1.1.6: First just-in-time compilation (Symantec's JIT compiler)[2][22]
J2SE 1.2: Use of a generational collector.
J2SE 1.3: Just-in-time compiling by HotSpot.
J2SE 1.4: See Sun's overview of performance improvements between the 1.3 and 1.4 versions.
Java SE 5.0: Class data sharing[23]
Java SE 6:
Other improvements:
See also 'Sun overview of performance improvements between Java 5 and Java 6'.[26]
Several performance improvements have been released for Java 7:
Future performance improvements are planned for an update of Java 6 or Java 7:[31]
Objectively comparing the performance of a Java program and an equivalent one written in another language such as C++ needs a carefully and thoughtfully constructed benchmark which compares programs completing identical tasks. The target platform of Java's bytecode compiler is the Java platform, and the bytecode is either interpreted or compiled into machine code by the JVM. Other compilers almost always target a specific hardware and software platform, producing machine code that will stay virtually unchanged during execution[citation needed]. Very different and hard-to-compare scenarios arise from these two different approaches: static vs. dynamic compilations and recompilations, the availability of precise information about the runtime environment, and others.
Java is often compiled just-in-time at runtime by the Java virtual machine, but may also be compiled ahead-of-time, as is C++. When compiled just-in-time, the micro-benchmarks of The Computer Language Benchmarks Game indicate the following about its performance:[38]
Benchmarks often measure performance for small numerically intensive programs. In some rare real-life programs, Java out-performs C. One example is the benchmark of Jake2 (a clone of Quake II written in Java by translating the original GPL C code). The Java 5.0 version performs better in some hardware configurations than its C counterpart.[42] While it is not specified how the data was measured (for example, if the original Quake II executable compiled in 1997 was used, which may be considered bad as current C compilers may achieve better optimizations for Quake), it notes how the same Java source code can have a huge speed boost just by updating the VM, something impossible to achieve with a 100% static approach.
For other programs, the C++ counterpart can, and usually does, run significantly faster than the Java equivalent. A benchmark performed by Google in 2011 showed a factor of 10 between C++ and Java.[43] At the other extreme, an academic benchmark performed in 2012 with a 3D modelling algorithm showed the Java 6 JVM being from 1.09 to 1.91 times slower than C++ under Windows.[44]
Some optimizations that are possible in Java and similar languages may not be possible in certain circumstances in C++:[45]
The JVM is also able to perform processor-specific optimizations or inline expansion. And the ability to deoptimize code already compiled or inlined sometimes allows it to perform more aggressive optimizations than those performed by statically typed languages when external library functions are involved.[46][47]
Results for microbenchmarks between Java and C++ highly depend on which operations are compared. For example, when comparing with Java 5.0:
The scalability and performance of Java applications on multi-core systems is limited by the object allocation rate. This effect is sometimes called an "allocation wall".[54]However, in practice, modern garbage collector algorithms use multiple cores to perform garbage collection, which to some degree alleviates this problem. Some garbage collectors are reported to sustain allocation rates of over a gigabyte per second,[55]and there exist Java-based systems that have no problems scaling to several hundreds of CPU cores and heaps sized several hundreds of GB.[56]
Automatic memory management in Java allows for efficient use of lockless and immutable data structures that are extremely hard or sometimes impossible to implement without some kind of garbage collection.[citation needed] Java offers a number of such high-level structures in its standard library in the java.util.concurrent package, while many languages historically used for high performance systems like C or C++ are still lacking them.[citation needed]
Java startup time is often much slower than that of many languages, including C, C++, Perl or Python, because many classes (and first of all classes from the platform class libraries) must be loaded before being used.
When compared against similar popular runtimes, for small programs running on a Windows machine, the startup time appears to be similar to Mono's and a little slower than .NET's.[57]
It seems that much of the startup time is due to input-output (IO) bound operations rather than JVM initialization or class loading (the rt.jar class data file alone is 40 MB and the JVM must seek much data in this big file).[27] Some tests showed that although the new split bytecode verification method improved class loading by roughly 40%, it only realized about 5% startup improvement for large programs.[58]
Albeit a small improvement, it is more visible in small programs that perform a simple operation and then exit, because the Java platform data loading can represent many times the load of the actual program's operation.
Starting with Java SE 6 Update 10, the Sun JRE comes with a Quick Starter that preloads class data at OS startup to get data from the disk cache rather than from the disk.
Excelsior JETapproaches the problem from the other side. Its Startup Optimizer reduces the amount of data that must be read from the disk on application startup, and makes the reads more sequential.
In November 2004, Nailgun, a "client, protocol, and server for running Java programs from the command line without incurring the JVM startup overhead", was publicly released,[59] introducing for the first time an option for scripts to use a JVM as a daemon, for running one or more Java applications with no JVM startup overhead. The Nailgun daemon is insecure: "all programs are run with the same permissions as the server". Where multi-user security is needed, Nailgun is inappropriate without special precautions. Scripts where per-application JVM startup dominates resource use see one to two orders of magnitude runtime performance improvements.[60]
Java memory use is much higher than C++'s memory use because:
In most cases a C++ application will consume less memory than an equivalent Java application due to the large overhead of Java's virtual machine, class loading and automatic memory resizing. For programs in which memory is a critical factor for choosing between languages and runtime environments, a cost/benefit analysis is needed.
Performance of trigonometric functions is bad compared to C, because Java has strict specifications for the results of mathematical operations, which may not correspond to the underlying hardware implementation.[65] On the x87 floating point subset, Java since 1.4 does argument reduction for sin and cos in software,[66] causing a big performance hit for values outside the range.[67][clarification needed]
The Java Native Interface incurs a high overhead, making it costly to cross the boundary between code running on the JVM and native code.[68][69][70] Java Native Access (JNA) provides Java programs easy access to native shared libraries (dynamic-link libraries (DLLs) on Windows) via Java code only, with no JNI or native code. This functionality is comparable to Windows' Platform/Invoke and Python's ctypes. Access is dynamic at runtime without code generation. But it has a cost, and JNA is usually slower than JNI.[71]
Swing has been perceived as slower than native widget toolkits, because it delegates the rendering of widgets to the pure Java 2D API. However, benchmarks comparing the performance of Swing versus the Standard Widget Toolkit, which delegates the rendering to the native GUI libraries of the operating system, show no clear winner, and the results greatly depend on the context and the environments.[72] Additionally, the newer JavaFX framework, intended to replace Swing, addresses many of Swing's inherent issues.
Some people believe that Java performance for high performance computing (HPC) is similar to Fortran on compute-intensive benchmarks, but that JVMs still have scalability issues for performing intensive communication on a grid computing network.[73]
However, high performance computing applications written in Java have won benchmark competitions. In 2008[74] and 2009,[75][76] an Apache Hadoop (an open-source high performance computing project written in Java) based cluster was able to sort a terabyte and a petabyte of integers the fastest. The hardware setup of the competing systems was not fixed, however.[77][78]
Programs in Java start slower than those in other compiled languages.[79][80]Thus, some online judge systems, notably those hosted by Chinese universities, use longer time limits for Java programs[81][82][83][84][85]to be fair to contestants using Java.
|
https://en.wikipedia.org/wiki/Java_performance
|
In computer science, Performance Application Programming Interface (PAPI) is a portable interface (in the form of a library) to hardware performance counters on modern microprocessors. It is widely used to collect low-level performance metrics (e.g. instruction counts, clock cycles, cache misses) of computer systems running UNIX/Linux operating systems.
PAPI provides predefined high-level hardware events summarized from popular processors and direct access to low-level native events of one particular processor. Counter multiplexing and overflow handling are also supported.
Operating system support for accessing hardware counters is needed to use PAPI.
For example, prior to 2010, a Linux/x86 kernel had to be patched with a performance monitoring counters driver (perfctr) to support PAPI.
Since Linux version 2.6.32, and PAPI 2010 releases, PAPI can leverage the existing perf subsystem in Linux, and thus does not need any out of tree driver to be functional anymore.
Supported operating systems and requirements are listed in the official repository's documentation, INSTALL.txt.
|
https://en.wikipedia.org/wiki/Performance_Application_Programming_Interface
|
Performance engineering encompasses the techniques applied during a systems development life cycle to ensure the non-functional requirements for performance (such as throughput, latency, or memory usage) will be met. It may be alternatively referred to as systems performance engineering within systems engineering, and software performance engineering or application performance engineering within software engineering.
As the connection between application success and business success continues to gain recognition, particularly in the mobile space, application performance engineering has taken on a preventive and perfective[1]role within the software development life cycle. As such, the term is typically used to describe the processes, people and technologies required to effectively test non-functional requirements, ensure adherence to service levels and optimize application performance prior to deployment.
The term performance engineering encompasses more than just the software and supporting infrastructure, and as such the term is preferable from a macro view. Adherence to the non-functional requirements is also validated post-deployment by monitoring the production systems. This is part of IT service management (see also ITIL).
Performance engineering has become a separate discipline at a number of large corporations, with tasking separate but parallel to systems engineering. It is pervasive, involving people from multiple organizational units; but predominantly within the information technology organization.
Because this discipline is applied within multiple methodologies, the following activities will occur within differently specified phases. However, if the phases of the rational unified process (RUP) are used as a framework, then the activities will occur as follows:
During the first, Conceptual phase of a program or project, critical business processes are identified. Typically they are classified as critical based upon revenue value, cost savings, or other assigned business value. This classification is done by the business unit, not the IT organization. High-level risks that may impact system performance are identified and described at this time. An example might be known performance risks for a particular vendor system. Finally, performance activities, roles and deliverables are identified for the Elaboration phase. Activities and resource loading are incorporated into the Elaboration phase project plans.
During this defining phase, the critical business processes are decomposed to critical use cases. Probe cases will be decomposed further, as needed, to single page (screen) transitions. These are the use cases that will be subjected to script-driven performance testing.
The type of requirements that relate to performance engineering are the non-functional requirements, or NFR. While a functional requirement relates to which business operations are to be performed, a performance related non-functional requirement will relate to how fast that business operation performs under defined circumstances.
Early in this phase a number of performance tool related activities are required. These include:
The performance test team normally does not execute performance tests in the development environment, but rather in a specialized pre-deployment environment that is configured to be as close as possible to the planned production environment. This team will execute performance testing against test cases, validating that the critical use cases conform to the specified non-functional requirements. The team will execute load testing against a normally expected (median) load as well as a peak load. They will often run stress tests that will identify the system bottlenecks. The data gathered, and the analysis, will be fed back to the group that does performance tuning. Where necessary, the system will be tuned to bring nonconforming tests into conformance with the non-functional requirements.
If performance engineering has been properly applied at each iteration and phase of the project to this point, hopefully this will be sufficient to enable the system to receive performance certification. However, if for some reason (perhaps proper performance engineering working practices were not applied) there are tests that cannot be tuned into compliance, then it will be necessary to return portions of the system to development for refactoring. In some cases the problem can be resolved with additional hardware, but adding more hardware leads quickly to diminishing returns.
During this final phase the system is deployed to the production environment. A number of preparatory steps are required. These include:
Once the new system is deployed, ongoing operations pick up performance activities, including:
In the operational domain (post production deployment) performance engineering focuses primarily within three areas: service level management, capacity management, and problem management.
In the service level management area, performance engineering is concerned with service level agreements and the associated systems monitoring that serves to validate service level compliance, detect problems, and identify trends. For example, when real user monitoring is deployed it is possible to ensure that user transactions are being executed in conformance with specified non-functional requirements. Transaction response time is logged in a database such that queries and reports can be run against the data. This permits trend analysis that can be useful for capacity management. When user transactions fall out of band, the events should generate alerts so that attention may be applied to the situation.
For capacity management, performance engineering focuses on ensuring that the systems will remain within performance compliance. This means executing trend analysis on historical monitoring-generated data, such that the future time of non-compliance is predictable. For example, if a system is showing a trend of slowing transaction processing (which might be due to growing data set sizes, or increasing numbers of concurrent users, or other factors) then at some point the system will no longer meet the criteria specified within the service level agreements. Capacity management is charged with ensuring that additional capacity is added in advance of that point (additional CPUs, more memory, new database indexing, et cetera) so that the trend lines are reset and the system will remain within the specified performance range.
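As a sketch of such trend analysis (the sample data, SLA threshold, and choice of a least-squares linear fit are illustrative assumptions, not a prescribed method), a fit over historical response times can estimate when the SLA would be breached:

```java
public class CapacityTrend {
    // Least-squares fit y = a + b*x over daily samples (x = day index),
    // then solve for the x at which the fitted response time crosses
    // the SLA limit. Returns days remaining after the last sample.
    static double daysUntilBreach(double[] responseMs, double slaMs) {
        int n = responseMs.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int x = 0; x < n; x++) {
            sx += x;
            sy += responseMs[x];
            sxx += (double) x * x;
            sxy += (double) x * responseMs[x];
        }
        double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope per day
        double a = (sy - b * sx) / n;                         // intercept
        return (slaMs - a) / b - (n - 1); // days from the last sample
    }

    public static void main(String[] args) {
        // Hypothetical daily p95 response times (ms), trending upward.
        double[] history = {100, 110, 120, 130, 140};
        // With a 200 ms SLA, this trend breaches in 6 more days.
        System.out.println(daysUntilBreach(history, 200.0)); // 6.0
    }
}
```

In practice this prediction feeds the budgeting decision described above: capacity is added before the predicted breach date so the trend line resets.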
Within the problem management domain, the performance engineering practices are focused on resolving the root cause of performance related problems. These typically involve system tuning, changing operating system or device parameters, or even refactoring the application software to resolve poor performance due to poor design or bad coding practices.
To ensure that there is proper feedback validating that the system meets the NFR specified performance metrics, any major system needs a monitoring subsystem. The planning, design, installation, configuration, and control of the monitoring subsystem are specified by an appropriately defined monitoring process.
The benefits are as follows:
The value of the trend analysis component of this cannot be overstated. This functionality, properly implemented, will enable predicting when a given application undergoing gradually increasing user loads and growing data sets will exceed the specified non-functional performance requirements for a given use case. This permits proper management budgeting, acquisition, and deployment of the required resources to keep the system running within the parameters of the non-functional performance requirements.
|
https://en.wikipedia.org/wiki/Performance_engineering
|
In computer science, performance prediction means to estimate the execution time or other performance factors (such as cache misses) of a program on a given computer. It is widely used by computer architects to evaluate new computer designs, by compiler writers to explore new optimizations, and also by advanced developers to tune their programs.
There are many approaches to predicting a program's performance on computers. They can be roughly divided into three major categories:
Performance data can be directly obtained from computer simulators, within which each instruction of the target program is actually dynamically executed given a particular input data set. Simulators can predict a program's performance very accurately, but take considerable time to handle large programs. Examples include the PACE[1] and Wisconsin Wind Tunnel simulators[2] as well as the more recent WARPP simulation toolkit,[3] which attempts to significantly reduce the time required for parallel system simulation.
Another approach, based on trace-based simulation, does not run every instruction, but runs a trace file which stores important program events only. This approach loses some flexibility and accuracy compared to the cycle-accurate simulation mentioned above but can be much faster. The generation of traces often consumes considerable amounts of storage space and can severely impact the runtime of applications if large amounts of data are recorded during execution.
The classic approach of performance prediction treats a program as a set of basic blocks connected by execution paths. Thus the execution time of the whole program is the sum of the execution time of each basic block multiplied by its execution frequency, as shown in the following formula:
{\displaystyle T_{program}=\sum _{i=1}^{n}{(T_{BB_{i}}*F_{BB_{i}})}}
The execution frequencies of basic blocks are generated from a profiler, which is why this method is called profile-based prediction. The execution time of a basic block is usually obtained from a simple instruction scheduler.
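The formula above can be computed directly; in this sketch the per-block times and frequencies are hypothetical inputs standing in for scheduler and profiler output:

```java
public class ProfilePrediction {
    // T_program = sum over basic blocks of (time per execution * frequency).
    static double predict(double[] blockTime, long[] blockFreq) {
        double total = 0;
        for (int i = 0; i < blockTime.length; i++) {
            total += blockTime[i] * blockFreq[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical program with three basic blocks.
        double[] time = {5.0, 2.0, 10.0}; // cycles per execution (from a scheduler)
        long[] freq = {1, 1000, 10};      // execution counts (from a profiler)
        System.out.println(predict(time, freq)); // 2105.0
    }
}
```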
Classic profile-based prediction worked well for early single-issue, in-order execution processors, but it fails to accurately predict the performance of modern processors. The major reason is that modern processors can issue and execute several instructions at the same time, sometimes out of the original order and crossing the boundaries of basic blocks.
|
https://en.wikipedia.org/wiki/Performance_prediction
|
Performance tuning is the improvement of system performance. Typically in computer systems, the motivation for such activity is called a performance problem, which can be either real or anticipated. Most systems will respond to increased load with some degree of decreasing performance. A system's ability to accept higher load is called scalability, and modifying a system to handle a higher load is synonymous with performance tuning.
Systematic tuning follows these steps:
This is an instance of the measure-evaluate-improve-learn cycle fromquality assurance.
A performance problem may be identified by slow or unresponsive systems. This usually occurs because of high system loading, which causes some part of the system to reach a limit in its ability to respond. This limit within the system is referred to as a bottleneck.
A handful of techniques are used to improve performance. Among them are code optimization, load balancing, caching strategy, distributed computing and self-tuning.
Performance analysis, commonly known as profiling, is the investigation of a program's behavior using information gathered as the program executes. Its goal is to determine which sections of a program to optimize.
A profiler is a performance analysis tool that measures the behavior of a program as it executes, particularly the frequency and duration of function calls. Performance analysis tools existed at least from the early 1970s. Profilers may be classified according to their output types, or their methods for data gathering.
Performance engineering is the discipline encompassing roles, skills, activities, practices, tools, and deliverables used to meet the non-functional requirements of a designed system, such as increasing business revenue, reducing system failures and project delays, and avoiding unnecessary use of resources or work.
Several common activities have been identified in different methodologies:
Some optimizations include improving the code so that work is done once before a loop rather than inside a loop, or replacing a call to a simple selection sort with a call to the more complicated algorithm for a quicksort.
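The first optimization mentioned, moving loop-invariant work out of the loop, can be sketched as follows (the invariant expression here is an illustrative stand-in for any repeated computation):

```java
public class LoopHoisting {
    // Naive version: recomputes the invariant expression on every iteration.
    static double sumScaledNaive(double[] data, double base) {
        double sum = 0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i] * Math.sqrt(base); // invariant work inside the loop
        }
        return sum;
    }

    // Tuned version: the invariant is computed once, before the loop.
    static double sumScaledHoisted(double[] data, double base) {
        double scale = Math.sqrt(base); // hoisted out of the loop
        double sum = 0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i] * scale;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] data = {1, 2, 3, 4};
        System.out.println(sumScaledHoisted(data, 4.0)); // 20.0
        // Both versions produce the same result; only the work differs.
        System.out.println(sumScaledNaive(data, 4.0) == sumScaledHoisted(data, 4.0)); // true
    }
}
```

An optimizing compiler may perform this hoisting automatically, but doing it in source code guarantees it and documents the intent.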
Modern software systems, e.g., big data systems, comprise several frameworks (e.g., Apache Storm, Spark, Hadoop). Each of these frameworks exposes hundreds of configuration parameters that considerably influence the performance of such applications. Some optimizations (tuning) include improving the performance of the application by finding the best configuration for it.
Caching is a fundamental method of removing performance bottlenecks that are the result of slow access to data. Caching improves performance by retaining frequently used information in high-speed memory, reducing access time and avoiding repeated computation. Caching is an effective manner of improving performance in situations where the principle of locality of reference applies. The methods used to determine which data is stored in progressively faster storage are collectively called caching strategies. Examples are the ASP.NET cache, CPU caches, etc.
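A minimal memoization cache illustrating the "avoid repeated computation" point (the key and value types and the workload are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class MemoCache {
    private final Map<Integer, Long> cache = new HashMap<>();
    int computations = 0; // counts how often the slow path actually runs

    // An expensive computation; the cache avoids repeating it for
    // frequently requested keys, which pays off when the workload
    // exhibits locality of reference.
    long expensiveSquare(int n) {
        return cache.computeIfAbsent(n, k -> {
            computations++;      // only incremented on a cache miss
            return (long) k * k; // stand-in for slow work
        });
    }

    public static void main(String[] args) {
        MemoCache c = new MemoCache();
        int[] requests = {7, 3, 7, 7, 3}; // repeated keys -> cache hits
        long last = 0;
        for (int r : requests) last = c.expensiveSquare(r);
        System.out.println(last);           // 9
        System.out.println(c.computations); // 2 (only the distinct keys were computed)
    }
}
```

A real caching strategy would also bound the cache size and choose an eviction policy (e.g. least-recently-used), which this sketch omits.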
A system can consist of independent components, each able to service requests. If all the requests are serviced by one of these components (or a small number) while others remain idle, then time is wasted waiting for the used component to become available. Arranging so all components are used equally is referred to as load balancing and can improve overall performance.
Load balancing is often used to achieve further gains from a distributed system by intelligently selecting which machine to run an operation on based on how busy all potential candidates are, and how well suited each machine is to the type of operation that needs to be performed.
Distributed computing is used to increase the potential for parallel execution. As the parallelism available on modern CPU architectures continues to grow, the use of distributed systems is essential to achieve performance benefits from that parallelism. High-performance cluster computing is a well-known use of distributed systems for performance improvements.
Distributed computing and clustering can negatively impact latency while simultaneously increasing load on shared resources, such as database systems. To minimize latency and avoid bottlenecks, distributed computing can benefit significantly from distributed caches.
A self-tuning system is capable of optimizing its own internal running parameters in order to maximize or minimize the fulfillment of an objective function; typically the maximization of efficiency or the minimization of error. Self-tuning systems typically exhibit non-linear adaptive control. Self-tuning systems have been a hallmark of the aerospace industry for decades, as this sort of feedback is necessary to generate optimal multi-variable control for nonlinear processes.
The bottleneck is the part of a system which is at capacity. Other parts of the system will be idle waiting for it to perform its task.
In the process of finding and removing bottlenecks, it is important to prove their existence, typically by measurements, before acting to remove them. There is a strong temptation to guess. Guesses are often wrong.
|
https://en.wikipedia.org/wiki/Performance_tuning
|
perf (sometimes called perf_events[1] or perf tools, originally Performance Counters for Linux, PCL)[2] is a performance analyzing tool in Linux, available from Linux kernel version 2.6.31 in 2009.[3] The userspace controlling utility, named perf, is accessed from the command line and provides a number of subcommands; it is capable of statistical profiling of the entire system (both kernel and userland code).
It supports hardware performance counters, tracepoints, software performance counters (e.g. hrtimer), and dynamic probes (for example, kprobes or uprobes).[4] In 2012, two IBM engineers recognized perf (along with OProfile) as one of the two most commonly used performance counter profiling tools on Linux.[5]
The interface between the perf utility and the kernel consists of only one syscall and is done via a file descriptor and a mapped memory region.[6] Unlike LTTng or older versions of OProfile, no service daemons are needed, as most functionality is integrated into the kernel. The perf utility dumps raw data from the mapped buffer to disk when the buffer becomes filled up. According to R. Vitillo (LBNL), profiling performed by perf involves a very low overhead.[6]
As of 2010[update], architectures that provide support for hardware counters include x86, PowerPC64, UltraSPARC (III and IV), ARM (v5, v6, v7, Cortex-A8 and -A9), Alpha EV56 and SuperH.[4] Usage of Last Branch Records,[7] a branch tracing implementation available in Intel CPUs since the Pentium 4, is available as a patch.[6] Since version 3.14 of the Linux kernel mainline, released on March 31, 2014, perf also supports the running average power limit (RAPL) interface for power consumption measurements, which is available as a feature of certain Intel CPUs.[8][9][10]
Perf is natively supported in many popular Linux distributions, including Red Hat Enterprise Linux (since its version 6, released in 2010)[11] and Debian in the linux-tools-common package (since Debian 6.0 (Squeeze), released in 2011).[12]
perf is used with several subcommands:
The documentation of perf is not very detailed (as of 2014); for example, it does not document most events or explain their aliases (often external tools are used to get names and codes of events[15]).[16] Perf tools also cannot profile based on true wall-clock time,[16] something that has been addressed by the addition of off-CPU profiling.
The perf subsystem of Linux kernels from 2.6.37 up to 3.8.8 and RHEL6 kernel 2.6.32 contained a security vulnerability (CVE-2013-2094), which was exploited to gain root privileges by a local user.[17][18]The problem was due to an incorrect type being used (32-bit int instead of 64-bit) in the event_id verification code path.[19]
|
https://en.wikipedia.org/wiki/Perf_(Linux)
|
Rowhammer(also written asrow hammerorRowHammer) is a computer security exploit that takes advantage of an unintended and undesirable side effect indynamic random-access memory(DRAM) in whichmemory cellsinteract electrically between themselves by leaking their charges, possibly changing the contents of nearbymemory rowsthat were notaddressedin the original memory access. This circumvention of the isolation between DRAM memory cells results from the high cell density in modern DRAM, and can be triggered by specially craftedmemory access patternsthat rapidly activate the same memory rows numerous times.[1][2][3]
The Rowhammer effect has been used in some privilege escalation computer security exploits,[2][4][5][6] and network-based attacks are also theoretically possible.[7][8]
Different hardware-based techniques exist to prevent the Rowhammer effect from occurring, including required support in some processors and types of DRAM memory modules.[9][10]
In dynamic RAM (DRAM), each bit of stored data occupies a separate memory cell that is electrically implemented with one capacitor and one transistor. The charge state of the capacitor (charged or discharged) determines whether the DRAM cell stores "1" or "0" as a binary value. Huge numbers of DRAM memory cells are packed into integrated circuits, together with additional logic that organizes the cells for the purposes of reading, writing, and refreshing the data.[11][12]
Memory cells (blue squares in both illustrations) are further organized into matrices and addressed through rows and columns. A memory address applied to a matrix is broken into a row address and a column address, which are processed by the row and column address decoders (in both illustrations, vertical and horizontal green rectangles, respectively). After a row address selects the row for a read operation (the selection is also known as row activation), bits from all cells in the row are transferred into the sense amplifiers that form the row buffer (red squares in both illustrations), from which the exact bit is selected using the column address. Read operations are thus destructive in nature, because the design of DRAM requires memory cells to be rewritten after their values have been read by transferring the cell charges into the row buffer. Write operations decode the addresses in a similar way, but as a result of the design, entire rows must be rewritten for the value of a single bit to be changed.[1]: 2–3[11][12][13]
As a result of storing data bits in capacitors that have a natural discharge rate, DRAM memory cells lose their state over time and require periodic rewriting of all memory cells, a process known as refreshing.[1]: 3[11] As another consequence of the design, DRAM memory is susceptible to random changes in stored data, known as soft memory errors and attributed to cosmic rays and other causes. Different techniques counteract soft memory errors and improve the reliability of DRAM, of which error-correcting code (ECC) memory and its advanced variants (such as lockstep memory) are most commonly used.[14]
Increased densities of DRAM integrated circuits have led to physically smaller memory cells containing less charge, resulting in lower operational noise margins, increased rates of electromagnetic interactions between memory cells, and a greater possibility of data loss. As a result, disturbance errors have been observed, caused by cells interfering with each other's operation and manifesting as random changes in the values of bits stored in affected memory cells. Awareness of disturbance errors dates back to the early 1970s and the Intel 1103, the first commercially available DRAM integrated circuit; since then, DRAM manufacturers have employed various mitigation techniques to counteract disturbance errors, such as improving the isolation between cells and performing production testing. However, researchers proved in a 2014 analysis that commercially available DDR3 SDRAM chips manufactured in 2012 and 2013 are susceptible to disturbance errors, coining the term Rowhammer for the associated side effect that led to observed bit flips.[1][3][15]
The opportunity for the Rowhammer effect to occur in DDR3 memory[16] is primarily attributed to DDR3's high density of memory cells and the resulting interactions between them, with rapid DRAM row activations determined to be the primary cause. Frequent row activations cause voltage fluctuations on the associated row selection lines, which have been observed to induce higher-than-natural discharge rates in capacitors belonging to nearby (in most cases, adjacent) memory rows, called victim rows; if the affected memory cells are not refreshed before they lose too much charge, disturbance errors occur. Tests show that a disturbance error may be observed after performing around 139,000 subsequent memory row accesses (with cache flushes), and that up to one memory cell in every 1,700 may be susceptible. Those tests also show that the rate of disturbance errors is not substantially affected by increased environment temperature, while it does depend on the actual contents of DRAM, because certain bit patterns result in significantly higher disturbance error rates.[1][2][15][17]
A variant called double-sided hammering involves targeted activations of two DRAM rows surrounding the victim row: in the illustration provided in this section, this variant would activate both yellow rows with the aim of inducing bit flips in the purple row, which in this case would be the victim row. Tests show that this approach may result in a significantly higher rate of disturbance errors, compared to the variant that activates only one of the victim row's neighboring DRAM rows.[4][18]: 19–20[19]
As DRAM vendors have deployed mitigations, access patterns have had to become more sophisticated to bypass them. More recent Rowhammer patterns are non-uniform and frequency-based.[20] These patterns consist of many double-sided aggressor pairs, each hammered with a different frequency, phase, and amplitude. By synchronizing such patterns with the REFRESH command, it is possible to very effectively find "blind spots" where the mitigation no longer provides protection. Based on this idea, academics built a Rowhammer fuzzer named Blacksmith[21] that can bypass existing mitigations on all DDR4 devices.
Different methods exist for more or less successful detection, prevention, correction, or mitigation of the Rowhammer effect. Tests show that simple error-correcting codes, providing single-error correction and double-error detection (SECDED) capabilities, are not able to correct or detect all observed disturbance errors, because some of them include more than two flipped bits per memory word.[1]: 8[15]: 32 Furthermore, research shows that precisely targeted three-bit Rowhammer flips prevent ECC memory from noticing the modifications.[22][23]
A less effective solution is to introduce more frequent memory refreshing, with refresh intervals shorter than the usual 64 ms,[a] but this technique results in higher power consumption and increased processing overhead; some vendors provide firmware updates that implement this type of mitigation.[24] One of the more complex prevention measures performs counter-based identification of frequently accessed memory rows and proactively refreshes their neighboring rows; another method issues additional infrequent random refreshes of memory rows neighboring the accessed rows, regardless of their access frequency. Research shows that these two prevention measures cause negligible performance impacts.[1]: 10–11[25]
Since the Ivy Bridge microarchitecture, Intel Xeon processors have supported so-called pseudo target row refresh (pTRR), which can be used in combination with pTRR-compliant DDR3 dual in-line memory modules (DIMMs) to mitigate the Rowhammer effect by automatically refreshing possible victim rows, with no negative impact on performance or power consumption. When used with DIMMs that are not pTRR-compliant, these Xeon processors by default fall back to performing DRAM refreshes at twice the usual frequency, which results in slightly higher memory access latency and may reduce memory bandwidth by up to 2–4%.[9]
The LPDDR4 mobile memory standard published by JEDEC[26] includes optional hardware support for so-called target row refresh (TRR), which prevents the Rowhammer effect without negatively impacting performance or power consumption.[10][27][28] Additionally, some manufacturers implement TRR in their DDR4 products,[29][30] although it is not part of the DDR4 memory standard published by JEDEC.[31] Internally, TRR identifies possible victim rows by counting the number of row activations and comparing it against predefined chip-specific maximum activate count (MAC) and maximum activate window (tMAW) values, and refreshes these rows to prevent bit flips. The MAC value is the maximum total number of row activations that may be encountered on a particular DRAM row within a time interval equal to or shorter than tMAW before its neighboring rows are identified as victim rows; TRR may also flag a row as a victim row if the sum of row activations for its two neighboring rows reaches the MAC limit within the tMAW time window.[26][32] Research showed that TRR mitigations deployed on DDR4 UDIMMs and LPDDR4X chips from devices produced between 2019 and 2020 are not effective in protecting against Rowhammer.[20]
Because they require huge numbers of rapidly performed DRAM row activations, Rowhammer exploits issue large numbers of uncached memory accesses that cause cache misses, which can be detected by monitoring the rate of cache misses for unusual peaks using hardware performance counters.[4][33]
Version 5.0 of the MemTest86 memory diagnostic software, released on December 3, 2013, added a Rowhammer test that checks whether computer RAM is susceptible to disturbance errors, but it only works if the computer boots via UEFI; without UEFI, an older version without the hammer test is booted.[34]
Memory protection, a way of preventing processes from accessing memory that has not been assigned to them, is one of the concepts behind most modern operating systems. By using memory protection in combination with other security-related mechanisms such as protection rings, it is possible to achieve privilege separation between processes, in which programs and computer systems in general are divided into parts limited to the specific privileges they require to perform a particular task. Using privilege separation can also reduce the extent of potential damage caused by computer security attacks by restricting their effects to specific parts of the system.[35][36]
Disturbance errors (explained in the section above) effectively defeat various layers of memory protection by "short circuiting" them at a very low hardware level, practically creating a unique attack vector type that allows processes to alter the contents of arbitrary parts of main memory by directly manipulating the underlying memory hardware.[2][4][18][37] In comparison, "conventional" attack vectors such as buffer overflows aim at circumventing the protection mechanisms at the software level, by exploiting various programming mistakes to achieve alterations of otherwise inaccessible main memory contents.[38]
The initial research into the Rowhammer effect, published and presented in June 2014 at the International Symposium on Computer Architecture, described and analyzed the nature of DRAM read disturbance errors in DDR3 DRAM chips. The paper[1] experimentally studied 129 real DDR3 DRAM modules from three DRAM manufacturers and demonstrated read disturbance bit flips in 110 of them. It also showed that a user-level program run on two real systems from Intel and AMD induces bit flips in main memory. The work indicated the potential for constructing an attack, saying: "With some engineering effort, we believe we can develop Code 1a into a disturbance attack that injects errors into other programs, crashes the system, or perhaps even hijacks control of the system. We leave such research for the future since the primary objective in this work is to understand and prevent DRAM disturbance errors."[1]
A subsequent October 2014 research paper did not imply the existence of any security-related issues arising from the Rowhammer effect.[16]
On March 9, 2015, Google's Project Zero revealed two working privilege escalation exploits based on the Rowhammer effect, establishing its exploitable nature on the x86-64 architecture. One of the revealed exploits targets the Google Native Client (NaCl) mechanism for running a limited subset of x86-64 machine instructions within a sandbox,[18]: 27 exploiting the Rowhammer effect to escape from the sandbox and gain the ability to issue system calls directly. This NaCl vulnerability, tracked as CVE-2015-0565, was mitigated by modifying NaCl so that it does not allow execution of the clflush (cache line flush[39]) machine instruction, which was previously believed to be required for constructing an effective Rowhammer attack.[2][4][37]
The second exploit revealed by Project Zero runs as an unprivileged Linux process on the x86-64 architecture, exploiting the Rowhammer effect to gain unrestricted access to all physical memory installed in a computer. By combining the disturbance errors with memory spraying, this exploit is capable of altering page table entries[18]: 35 used by the virtual memory system for mapping virtual addresses to physical addresses, which results in the exploit gaining unrestricted memory access.[18]: 34, 36–57 Due to its nature and the inability of the x86-64 architecture to make clflush a privileged machine instruction, this exploit can hardly be mitigated on computers that do not use hardware with built-in Rowhammer prevention mechanisms. While testing the viability of exploits, Project Zero found that about half of the 29 tested laptops experienced disturbance errors, some of them occurring on vulnerable laptops in less than five minutes of running row-hammer-inducing code; the tested laptops were manufactured between 2010 and 2014 and used non-ECC DDR3 memory.[2][4][37]
In July 2015, a group of security researchers published a paper describing an architecture- and instruction-set-independent way of exploiting the Rowhammer effect. Instead of relying on the clflush instruction to perform cache flushes, this approach achieves uncached memory accesses by causing a very high rate of cache eviction using carefully selected memory access patterns. Although cache replacement policies differ between processors, this approach overcomes the architectural differences by employing an adaptive cache eviction strategy algorithm.[18]: 64–68 The proof of concept for this approach is provided both as a native code implementation and as a pure JavaScript implementation that runs on Firefox 39. The JavaScript implementation, called Rowhammer.js,[40] uses large typed arrays and relies on their internal allocation using large pages; as a result, it demonstrates a very high-level exploit of a very low-level vulnerability.[41][42][43][44]
In October 2016, researchers published DRAMMER, an Android application that uses Rowhammer, together with other methods, to reliably gain root access on several popular smartphones.[45] The vulnerability was acknowledged as CVE-2016-6728[46] and a mitigation was released by Google within a month. However, due to the general nature of possible implementations of the attack, an effective software patch is difficult to implement reliably. As of June 2018, most patch proposals made by academia and industry were either impractical to deploy or insufficient to stop all attacks. As a mitigation, researchers proposed a lightweight defense that prevents attacks based on direct memory access (DMA) by isolating DMA buffers with guard rows.[47][48]
In May 2020, the TRRespass work[49] showed that existing DDR4 DRAM chips, claimed to be protected and resilient against Rowhammer, are actually still vulnerable. The work introduced a new access pattern, called many-sided hammering, which circumvents the Rowhammer protections put in place inside DDR4 DRAM chips.
In May 2021, a Google research team announced a new exploit, Half-Double, which takes advantage of the worsening physics of some of the newer DRAM chips.[50]
In March 2024, a group of researchers at ETH Zürich announced ZenHammer, a Rowhammer exploit for AMD Zen chips, along with the first use of Rowhammer to exploit DDR5 SDRAM.[51][52]
In June 2024, a group of researchers at ETH Zürich announced RISC-H, a Rowhammer exploit for RISC-V chips; it is the first Rowhammer study on RISC-V.[53]
|
https://en.wikipedia.org/wiki/Row_hammer
|
AIX (pronounced /ˌeɪ.aɪ.ˈɛks/ ay-eye-EKS[5]) is a series of proprietary Unix operating systems developed and sold by IBM since 1986. The name stands for "Advanced Interactive eXecutive". Current versions are designed to work with Power ISA-based server and workstation computers such as IBM's Power line.
Originally released for the IBM RT PC RISC workstation in 1986, AIX has supported a wide range of hardware platforms, including the IBM RS/6000 series and later Power and PowerPC-based systems, IBM System i, System/370 mainframes, PS/2 personal computers, and the Apple Network Server. Currently, it is supported on IBM Power Systems alongside IBM i and Linux.
AIX is based on UNIX System V with 4.3BSD-compatible extensions. It is certified to the UNIX 03 and UNIX V7 specifications of the Single UNIX Specification, beginning with AIX versions 5.3 and 7.2 TL5, respectively.[6] Older versions were certified to the UNIX 95 and UNIX 98 specifications.[7]
AIX was the first operating system to implement a journaling file system. IBM has continuously enhanced the software with features such as processor, disk, and network virtualization, dynamic hardware resource allocation (including fractional processor units), and reliability engineering concepts derived from its mainframe designs.[8]
Unix began in the early 1970s at AT&T's Bell Labs research center, running on DEC minicomputers. By 1976, the operating system was in use at various academic institutions, including Princeton University, where Tom Lyon and others ported it to the S/370 to run as a guest OS under VM/370.[9] This port later became Amdahl UTS, offered by IBM's mainframe rival.[10][11]
IBM's involvement with Unix began in 1979, when it assisted Bell Labs in porting Unix to the S/370 platform for use as a build host for the 5ESS switch's software. During this process, IBM made modifications to the TSS/370 Resident Supervisor to better support Unix.[12]
In 1984, IBM introduced its own Unix variant for the S/370 platform called VM/IX, developed by Interactive Systems Corporation using Unix System III. However, VM/IX was only available as a PRPQ (Programming Request for Price Quotation) and was not a General Availability product.
It was replaced in 1985 by IBM IX/370, a fully supported product based on AT&T's Unix System V, intended to compete against UTS.[13]
In 1986, IBM introduced AIX Version 1 for the IBM RT PC workstation. It was based on UNIX System V Releases 1 and 2, incorporating source code from 4.2BSD and 4.3BSD UNIX.[14]
AIX Version 2 followed in 1987 for the RT PC.[15]
In 1990, AIX Version 3 was released for the POWER-based RS/6000 platform.[16] It became the primary operating system for the RS/6000 series, which was later renamed IBM eServer pSeries, IBM System p, and finally IBM Power Systems.
AIX Version 4, introduced in 1994, added symmetric multiprocessing and evolved through the 1990s, culminating with AIX 4.3.3 in 1999. A modified version of Version 4.1 was also used as the standard OS for the Apple Network Server line by Apple Computer.
In the late 1990s, under Project Monterey, IBM and the Santa Cruz Operation attempted to integrate AIX and UnixWare into a multiplatform Unix for Intel's IA-64 architecture. The project was discontinued in 2002 after limited commercial success.[17]
In 2003, the SCO Group filed a lawsuit against IBM, alleging misappropriation of UNIX System V source code in AIX. The case was resolved in 2010, when a jury ruled that Novell, not SCO, owned the rights to Unix.[17]
AIX 6 was announced in May 2007 and became generally available on November 9, 2007. Key features included role-based access control, workload partitions, and Live Partition Mobility.
AIX 7.1 was released in September 2010 with enhancements such as Cluster Aware AIX and support for large-scale memory and real-time application requirements.[18]
The original AIX (sometimes called AIX/RT) was developed for the IBM RT PC workstation by IBM in conjunction with Interactive Systems Corporation, which had previously ported UNIX System III to the IBM PC for IBM as PC/IX.[19] According to its developers, the AIX source (for this initial version) consisted of one million lines of code.[20] Installation media consisted of eight 1.2 MB floppy disks. The RT was based on the IBM ROMP microprocessor, the first commercial RISC chip, which was based on a design pioneered at IBM Research (the IBM 801).
One of the novel aspects of the RT design was the use of a microkernel, called the Virtual Resource Manager (VRM). The keyboard, mouse, display, disk drives, and network were all controlled by the microkernel. One could "hotkey" from one operating system to the next using the Alt-Tab key combination; each OS in turn would get possession of the keyboard, mouse, and display. Besides AIX v2, the PICK OS also ran on this microkernel.
Much of the AIX v2 kernel was written in the PL.8 programming language, which proved troublesome during the migration to AIX v3.[citation needed] AIX v2 included full TCP/IP networking, as well as SNA, and two networking file systems: NFS, licensed from Sun Microsystems, and Distributed Services (DS). DS had the distinction of being built on top of SNA, and thereby being fully compatible with DS on IBM mainframe systems[clarification needed] and on midrange systems running OS/400 through IBM i. For graphical user interfaces, AIX v2 came with the X10R3 and later the X10R4 and X11 versions of the X Window System from MIT, together with the Athena widget set. Compilers for Fortran and C were available.
AIX PS/2 (also known as AIX/386) was developed by Locus Computing Corporation under contract to IBM.[19] AIX PS/2, first released in October 1988,[21] ran on IBM PS/2 personal computers with Intel 386 and compatible processors.
The product was announced in September 1988 with a baseline price of $595, although some utilities, such as UUCP, were included in a separate Extension package priced at $250. nroff and troff for AIX were also sold separately, in a Text Formatting System package priced at $200. The TCP/IP stack for AIX PS/2 retailed for another $300. The X Window System package was priced at $195, and featured a graphical environment called the AIXwindows Desktop, based on IXI's X.desktop.[22] The C and FORTRAN compilers each had a price tag of $275. Locus also made available its DOS Merge virtual machine environment for AIX, which could run MS-DOS 3.3 applications inside AIX; DOS Merge was sold separately for another $250.[23] IBM also offered a $150 AIX PS/2 DOS Server Program, which provided file server and print server services for client computers running PC DOS 3.3.[24]
The last version of AIX PS/2, 1.3, was released in 1992 and announced as adding support for non-IBM (non-Micro Channel) computers as well.[25] Support for AIX PS/2 ended in March 1995.[26]
In 1988, IBM announced AIX/370,[27] also developed by Locus Computing. AIX/370 was IBM's fourth attempt to offer Unix-like functionality for its mainframe line, specifically the System/370 (the prior versions were a TSS/370-based Unix system developed jointly with AT&T c. 1980,[12] a VM/370-based system named VM/IX developed jointly with Interactive Systems Corporation c. 1984,[citation needed] and a VM/370-based version of TSS/370[citation needed] named IX/370, which was upgraded to be compatible with UNIX System V[citation needed]). AIX/370 was released in 1990 with functional equivalence to System V Release 2 and 4.3BSD, as well as IBM enhancements. With the introduction of the ESA/390 architecture, AIX/370 was replaced by AIX/ESA[28] in 1991, which was based on OSF/1 and also ran on the System/390 platform. Unlike AIX/370, AIX/ESA ran both natively as the host operating system and as a guest under VM. AIX/ESA, while technically advanced, had little commercial success, partially because[citation needed] UNIX functionality was added as an option to the existing mainframe operating system, MVS, as MVS/ESA SP Version 4 Release 3 OpenEdition[29] in 1994, and continued as an integral part of MVS/ESA SP Version 5, OS/390, and z/OS, with the name eventually changing from OpenEdition to Unix System Services. IBM also provided OpenEdition in VM/ESA Version 2[30] through z/VM.
As part of Project Monterey, IBM released a beta test version of AIX 5L for the IA-64 (Itanium) architecture in 2001, but this never became an official product due to lack of interest.[31]
The Apple Network Server (ANS) systems were PowerPC-based systems designed by Apple Computer to have numerous high-end features that standard Apple hardware did not have, including swappable hard drives, redundant power supplies, and external monitoring capability. These systems were more or less based on the Power Macintosh hardware available at the time, but were designed to use AIX (version 4.1.4 or 4.1.5) as their native operating system, in a specialized version specific to the ANS called AIX for Apple Network Servers.
AIX was only compatible with the Network Servers and was not ported to standard Power Macintosh hardware. It should not be confused with A/UX, Apple's earlier version of Unix for 68k-based Macintoshes.
The release of AIX version 3 (sometimes called AIX/6000) coincided with the announcement of the first POWER1-based IBM RS/6000 models in 1990.
AIX v3 innovated in several ways on the software side. It was the first operating system to introduce the idea of a journaling file system, JFS, which allowed for fast boot times by avoiding the need to check the consistency of the file systems on disks (see fsck) on every reboot. Another innovation was shared libraries, which avoided the need for static linking from an application to the libraries it used. The resulting smaller binaries used less RAM to run and less disk space to install. Besides improving performance, this was a boon to developers: executable binaries could be tens of kilobytes instead of the megabyte needed for an executable statically linked to the C library. AIX v3 also scrapped the microkernel of AIX v2, a contentious move that resulted in v3 containing no PL.8 code and being somewhat more "pure" than v2.
Other notable subsystems included:
In addition, AIX applications can run in the PASE subsystem under IBM i.
IBM formerly made the AIX for RS/6000 source code available to customers for a fee; in 1991, IBM customers could order the AIX 3.0 source code for a one-time charge of US$60,000;[32] subsequently, IBM released the AIX 3.1 source code in 1992,[33] and AIX 3.2 in 1993.[34] These source code distributions excluded certain files (authored by third parties) that IBM did not have the right to redistribute, and also excluded layered products such as the MS-DOS emulator and the C compiler. Furthermore, in order to license the AIX source code, the customer first had to procure source code license agreements with AT&T and the University of California, Berkeley.[32]
The default shell was the Bourne shell up to AIX version 3, but it was changed to KornShell (ksh88) in version 4 for XPG4 and POSIX compliance.[3]
The Common Desktop Environment (CDE) is AIX's default graphical user interface. As part of Linux Affinity and the free AIX Toolbox for Linux Applications (ATLA), open-source KDE and GNOME desktops are also available.[57]
SMIT is the System Management Interface Tool for AIX. It allows a user to navigate a menu hierarchy of commands, rather than using the command line. Invocation is typically achieved with the command smit. Experienced system administrators make use of the F6 function key, which generates the command line that SMIT will invoke to complete the proposed task.
SMIT also generates a log of the commands performed in the smit.script file. The smit.script file automatically records the commands with the command flags and parameters used, and can be used as an executable shell script to rerun system configuration tasks. SMIT also creates the smit.log file, which contains additional detailed information that can be used by programmers in extending the SMIT system.
smit and smitty refer to the same program, though smitty invokes the text-based version, while smit will invoke an X Window System-based interface if possible; however, if smit determines that X Window System capabilities are not present, it will present the text-based version instead of failing. This determination is typically made by checking for the existence of the DISPLAY variable.[citation needed]
The Object Data Manager (ODM) is a database of system information integrated into AIX,[58][59] analogous to the registry in Microsoft Windows.[60] A good understanding of the ODM is essential for managing AIX systems.[61]
Data managed in ODM is stored and maintained as objects with associated attributes.[62] Interaction with ODM is possible via an application programming interface (API) library for programs, and via command-line utilities such as odmshow, odmget, odmadd, odmchange, and odmdelete for shell scripts and users. SMIT and its associated AIX commands can also be used to query and modify information in the ODM.[63] ODM is stored on disk using Berkeley DB files.[64]
Examples of information stored in the ODM database are:
|
https://en.wikipedia.org/wiki/IBM_AIX
|
HP-UX (from "Hewlett Packard Unix") is a proprietary implementation of the Unix operating system developed by Hewlett Packard Enterprise; current versions support HPE Integrity Servers, based on Intel's Itanium architecture. It is based on Unix System V (initially System III) and was first released in 1984.
Earlier versions of HP-UX supported the HP Integral PC and HP 9000 Series 200, 300, and 400 computer systems based on the Motorola 68000 series of processors, the HP 9000 Series 500 computers based on HP's proprietary FOCUS architecture, and later HP 9000 Series models based on HP's PA-RISC instruction set architecture. HP-UX was the first Unix to offer access-control lists for file access permissions as an alternative to the standard Unix permissions system.[citation needed] HP-UX was also among the first Unix systems to include a built-in logical volume manager.[citation needed]
HP has had a long partnership with Veritas Software, and uses VxFS as the primary file system. HP-UX is one of three commercial operating systems with versions certified to The Open Group's UNIX 03 standard (the others are macOS and AIX).[2] Following the discontinuation of Itanium processors, HP-UX is set to reach end-of-life by December 2025.[3]
HP-UX 11i offers common shared disks for its clustered file system. HP Serviceguard is the cluster solution for HP-UX. HP Global Workload Management adjusts workloads to optimize performance, and integrates with Instant Capacity on Demand so that installed resources can be paid for in 30-minute increments as needed for peak workload demands.
HP-UX offers operating system-level virtualization features such as hardware partitions, isolated OS virtual partitions on cell-based servers, and HP Integrity Virtual Machines (HPVM) on all Integrity servers. HPVM supports guests running on HP-UX 11i v3 hosts; guests can run Linux, Windows Server, OpenVMS, or HP-UX. HP supports online VM guest migration, where encryption can secure the guest contents during migration.
HP-UX 11i v3 scales as follows (on a SuperDome 2 with 32 Intel Itanium 9560 processors):
The 11i v2 release introduced kernel-based intrusion detection, strong random number generation, stack buffer overflow protection, security partitioning, role-based access management, and various open-source security tools.
HP classifies the operating system's security features into three categories: data, system and identity:[5]
Release 6.x (together with 3.x) introduced the context-dependent files (CDF) feature, a method of allowing a fileserver to serve different configurations and binaries (and even architectures) to different client machines in a heterogeneous environment. A directory containing such files had its suid bit set and was hidden from both ordinary and root processes under normal use. This scheme was sometimes exploited by intruders to hide malicious programs or data.[7] CDFs and the CDF filesystem were dropped with release 10.0.
The HP-UX operating system supports a variety of PA-RISC systems. Release 11.0 added support for Integrity-based servers for the transition from PA-RISC to Itanium. HP-UX 11i v1.5 was the first version to support Itanium. With the introduction of HP-UX 11i v2, the operating system supported both of these architectures.[8]
HP-UX 11i supports HPE Integrity Servers of the HP BL server blade family. These servers use the Intel Itanium architecture.
HP-UX 11i v2 and 11i v3 support HP's CX series servers. CX stands for carrier grade; these servers are used mainly in the telecommunications industry, offer -48V DC support, and are NEBS certified. Both of these systems contain Itanium Mad6M processors and are discontinued.
HP-UX supports HP's RX series of servers.[citation needed]
Prior to the release of HP-UX version 11.11, HP used a decimal version numbering scheme, with the first number giving the major release and the number following the decimal showing the minor release. With 11.11, HP made a marketing decision to name its releases 11i, followed by v(decimal-number) for the version. The i was intended to indicate that the OS is Internet-enabled, but the effective result was a dual version-numbering scheme.
HP bundles HP-UX 11i with programs in packages it calls Operating Environments (OEs).[22]
The following lists the currently available HP-UX 11i v3 OEs:
|
https://en.wikipedia.org/wiki/HP-UX
|
Illumos (stylized as "illumos") is a partly free and open-source Unix operating system.[3] It has been developed since 2010 and is based on OpenSolaris, after the discontinuation of that product by Oracle. It comprises a kernel, device drivers, system libraries, and utility software for system administration. Its core has become the base for many different open-sourced Illumos distributions,[4] in a way similar to how the Linux kernel is used in different Linux distributions.[5]
The maintainers write illumos in lowercase,[6] since some computer fonts do not clearly distinguish a lowercase L from an uppercase i: Il (see homoglyph).[7] The project name is a combination of illuminare, Latin for "to light", and OS for operating system.[8]
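The homoglyph concern is easy to demonstrate: the two characters render nearly identically in many fonts, yet their code points differ.

```python
# Uppercase "I" (U+0049) and lowercase "l" (U+006C) are homoglyphs in
# many fonts, so "Illumos" can be misread. The code points still differ:
for ch in "Il":
    print(ch, hex(ord(ch)))
# I 0x49
# l 0x6c
```

Writing the name all-lowercase ("illumos") sidesteps the ambiguity entirely.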
Illumos was announced via webinar on 3 August 2010,[9] as a community effort of a group of core Solaris engineers to create a truly open-source Solaris by swapping the closed-source bits of OpenSolaris with open implementations.[10][11][12] OpenSolaris itself is based on System V Release 4 (SVR4) and the Berkeley Software Distribution (BSD).
The original plan explicitly stated that Illumos would not be a distribution or a fork. However, after Oracle announced the discontinuation of OpenSolaris, plans were made to fork the final version of the Solaris ON kernel,[a] allowing Illumos to evolve into a kernel of its own.[13] As of 2010[update], efforts focused on libc, the NFS lock manager, the crypto module, and many device drivers, to create a Solaris-like OS with no closed, proprietary code. As of 2012[update], development emphasis included transitioning from the historical compiler, Studio, to GCC.[14] The "userland" software is now built with GNU make,[15] and contains many GNU utilities such as GNU tar. At the time,[clarification needed] Illumos had been lightly led by founder Garrett D'Amore and other community members/developers such as Bryan Cantrill and Adam Leventhal, via a Developers' Council.[16]
As of 2019, its primary development project, illumos-gate, derives from OS/Net (aka ON),[17] which is a Solaris kernel with the bulk of the drivers, core libraries, and basic utilities, similar to what is delivered by a BSD "src" tree. It was originally dependent on OpenSolaris OS/Net, but a fork was made after Oracle silently decided to close the development of Solaris and unofficially killed the OpenSolaris project.[18][19][20]
Distributions, at illumos.org[21]
Discontinued:
The Illumos Foundation was incorporated in the State of California in 2012 as a 501(c)(6) trade association, with founding board members Jason Hoffman (formerly at Joyent), Evan Powell (Nexenta), and Garrett D'Amore. As of 2024, its status in California is "dissolved".[29]
|
https://en.wikipedia.org/wiki/Illumos
|
Trusted Solaris is a discontinued security-evaluated operating system based on Solaris by Sun Microsystems, featuring a mandatory access control model. Its features were migrated into the base Solaris system.
Trusted Solaris 8 is Common Criteria certified at Evaluation Assurance Level EAL4+ against the CAPP, RBACPP, and LSPP protection profiles. It is the basis for the DoDIIS Trusted Workstation program.[1]
Features that were previously only available in Trusted Solaris, such as fine-grained privileges, are now part of the standard Solaris release. In the Solaris 10 11/06 update, a new component called Solaris Trusted Extensions was introduced, making it no longer necessary to have a separate release with a modified kernel for labeled security environments. Solaris Trusted Extensions was included in the OpenSolaris project.
Solaris Trusted Extensions, when enabled, enforces a mandatory access control policy on all aspects of the operating system, including device access and file, networking, print, and window management services. This is achieved by adding sensitivity labels to objects, thereby establishing explicit relationships between them. Only appropriate (and explicit) authorization grants applications and users read and/or write access to the objects.
The component also provides labeled security features in a desktop environment. In addition to extending support for the Common Desktop Environment from the Trusted Solaris 8 release, it delivered the first labeled environment based on GNOME.[2] Solaris Trusted Extensions facilitates the access of data at multiple classification levels through a single desktop environment. The labeled desktop support was removed in Oracle Solaris 11.4;[3] support for labeled zones and file and process labels remains.
Solaris Trusted Extensions also implements labeled device access and labeled network communication, through the Commercial Internet Protocol Security Option (CIPSO) standard. CIPSO is used to pass security information within and between labeled zones.
Oracle Solaris 11.4 introduced a new "File and Process Labeling" feature: instead of using zones to represent all of the processes at a given label, the label is stored in the process credential, similar to how labeling had been implemented in Trusted Solaris 8 and earlier. While this is still a mandatory access control policy, it is intended to be used as part of a data loss prevention strategy rather than a traditional multilevel security environment. The ZFS filesystem also supports per-file labels via the multilevel dataset option.
Common Criteria evaluations that include the labeled security protection profile were performed for:
Oracle Solaris 10 11/06 at EAL4+,[4]Oracle Solaris 11.1.[5]
|
https://en.wikipedia.org/wiki/Trusted_Solaris
|
Logical Domains (LDoms or LDOM) is the server virtualization and partitioning technology for SPARC V9 processors. It was first released by Sun Microsystems in April 2007. After the Oracle acquisition of Sun in January 2010, the product was rebranded as Oracle VM Server for SPARC from version 2.0 onwards.
Each domain is a full virtual machine with a reconfigurable subset of hardware resources. Domains can be securely live migrated between servers while running. Operating systems running inside Logical Domains can be started, stopped, and rebooted independently. A running domain can be dynamically reconfigured to add or remove CPUs, RAM, or I/O devices without requiring a reboot. Using Dynamic Resource Management, CPU resources can be automatically reconfigured as needed.[2]
SPARC hypervisors run in hyperprivileged execution mode, which was introduced in the sun4v architecture. The sun4v processors released as of October 2015 are the UltraSPARC T1, T2, T2+, T3,[3] T4,[4] T5, M5, M6, M10, and M7. Systems based on the UltraSPARC T1 support only Logical Domains versions 1.0-1.2.[5] The newer types of T-series servers support both older Logical Domains and the newer Oracle VM Server for SPARC product, version 2.0 and later. These include:
UltraSPARC T1-based:
UltraSPARC T2-based:
UltraSPARC T2 Plus systems:
SPARC T3 systems:[6]
SPARC T4 systems[4]
SPARC T5 systems[7]
SPARC T7 systems,[8] which use the same SPARC M7 processor as the M7-8 and M7-16 servers listed below.
SPARC M-Series systems[9][7][10]
Technically, the virtualization product consists of two interdependent components: the hypervisor in the SPARC server firmware and the Logical Domains Manager software installed on the Solaris operating system running within the control domain (see Logical Domain roles, below). Because of this, each particular version of the Logical Domains (Oracle VM Server for SPARC) software requires a certain minimum version of the hypervisor to be installed in the server firmware.
Logical Domains exploits the chip multithreading (CMT) nature of the "CoolThreads" processors. A single chip contains up to 32 CPU cores, and each core has either four hardware threads (for the UltraSPARC T1) or eight hardware threads (for the UltraSPARC T2/T2+, and SPARC T3/T4 and later) that act as virtual CPUs. All CPU cores execute instructions concurrently, and each core switches between threads—typically when a thread stalls on a cache miss or goes idle—within a single clock cycle. This lets the processor recover throughput that is lost during cache misses in conventional CPU designs. Each domain is assigned its own CPU threads and executes CPU instructions at native speed, avoiding the virtualization overhead of the privileged-operation trap-and-emulate or binary rewriting typical of most VM designs.
Each server can support as many as one domain per hardware thread, up to a maximum of 128. That is up to 32 domains for the UltraSPARC T1, 64 domains for the UltraSPARC T2 and SPARC T4-1, and 128 domains for the UltraSPARC T3, as examples of single-processor (single-socket) servers. Servers with 2-4 UltraSPARC T2+ or 2-8 SPARC T3-T5 CPUs support as many logical domains as the number of processors multiplied by the number of threads per CPU, up to the limit of 128.[11] M-series servers can be subdivided into physical domains ("PDoms"), each of which can host up to 128 logical domains. Typically, a given domain is assigned multiple CPU threads or CPU cores for additional capacity within a single OS instance. CPU threads, RAM, and virtual I/O devices can be added to or removed from a domain by an administrator issuing commands in the control domain. Such a change takes effect immediately without the need to reboot the affected domain, which can immediately make use of added CPU threads or continue operating with reduced resources.
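The capacity rule above reduces to simple arithmetic: one domain per hardware thread, capped at 128 per server (or per physical domain on M-series machines). A sketch, using core and thread counts from the article's examples:

```python
# One logical domain per hardware thread, capped at 128 per server.
def max_domains(sockets, cores_per_socket, threads_per_core, cap=128):
    return min(sockets * cores_per_socket * threads_per_core, cap)

print(max_domains(1, 8, 4))    # UltraSPARC T1: 8 cores x 4 threads = 32
print(max_domains(1, 8, 8))    # UltraSPARC T2 / SPARC T4-1: 8 x 8 = 64
print(max_domains(1, 16, 8))   # UltraSPARC T3: 16 x 8 = 128
print(max_domains(4, 16, 8))   # multi-socket server: 512 threads, capped at 128
```

The cap is what matters on larger configurations: a four-socket T-series machine has far more threads than the 128-domain limit.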
When hosts are connected to shared storage (SAN or NAS), running guest domains can be securely live migrated between servers without outage (starting with Oracle VM Server for SPARC version 2.1). The process encrypts guest VM memory contents before they are transmitted between servers, using the cryptographic accelerators available on all processors with the sun4v architecture.
All logical domains are the same except for the roles that they are assigned. There are multiple roles that logical domains can perform, such as:
The control domain, as its name implies, controls the logical domain environment. It is used to configure machine resources and guest domains, and provides services necessary for domain operation, such as the virtual console service. The control domain also normally acts as a service domain.
Service domains present virtual services, such as virtual disk drives and network switches, to other domains. In most cases, guest domains perform I/O via bridged access through service domains, which are usually I/O domains directly connected to the physical devices. Service domains can provide virtual LANs and SANs as well as bridge through to physical devices. Disk images can reside on complete local physical disks, shared SAN block devices, their slices, or even on files contained on a local UFS or ZFS file system, or on a shared NFS export or iSCSI target.
Control and service functions can be combined within domains; however, it is recommended that user applications not run within control or service domains, in order to protect domain stability and performance.
I/O domains have direct ownership of a PCI bus, a card on a bus, or a Single Root I/O Virtualization (SR-IOV) function, providing direct access to physical I/O devices, such as a network card in a PCI controller. An I/O domain may use its devices to obtain native I/O performance for its own applications, or act as a service domain and share the devices with other domains as virtual devices.
Root domains have direct ownership of a PCIe "root complex" and all associated PCIe slots. This can be used to grant access to physical I/O devices. A root domain is also an I/O domain. There is a maximum of two root domains for the UltraSPARC T1 (Niagara) servers, one of which must also be the control domain. UltraSPARC T2 Plus, SPARC T3, and SPARC T4 servers can have as many as 4 root domains, limited by the number of PCIe root complexes installed on the server. SPARC T5 servers can have up to 16 root complex domains. Multiple I/O domains can be configured to provide resiliency against failures.
Guest domains run an operating system instance without performing any of the above roles, but leverage the services provided by the above in order to run applications.
The only operating system supported by the vendor for running within logical domains is Solaris 10 11/06 and later updates, and all Solaris 11 releases.
There are operating systems that are not officially supported, but may still be capable of running within logical domains:
|
https://en.wikipedia.org/wiki/Oracle_VM_Server_for_SPARC
|
The Dock is a prominent feature of the graphical user interface of macOS. It is used to launch applications and to switch between running applications. The Dock was also a prominent feature of macOS's predecessors, the NeXTSTEP and OPENSTEP operating systems. The earliest known implementations of a dock are found in operating systems such as RISC OS and NeXTSTEP. iOS has its own version of the Dock for the iPhone and iPod Touch, as does iPadOS for the iPad.
Apple applied for a US patent for the design of the Dock in 1999 and was granted the patent in October 2008, nearly a decade later.[1] Any application can be dragged and dropped onto the Dock to add it, and any application can be dragged from the Dock to remove it, except for the Finder and the Trash, which are permanent fixtures as the leftmost and rightmost items (or the highest and lowest items if the Dock is vertically oriented), respectively. Part of the macOS Core Services, Dock.app is located at /System/Library/CoreServices/.
In NeXTSTEP and OPENSTEP, the Dock is an application launcher that holds icons for frequently used programs. The icons for the Workspace Manager and the Recycler are always visible. The Dock indicates that a program is not running by showing an ellipsis below its icon; if the program is running, there is no ellipsis. In macOS, running applications have been variously identified by a small black triangle (Mac OS X 10.0-10.4), a blue-tinted luminous dot (Mac OS X 10.5-10.7), a horizontal light bar (OS X 10.8 and 10.9), and a simple black or white dot (OS X 10.10-present).
In macOS, however, the Dock is used as a repository for any program or file in the operating system. It can hold any number of items and resizes them dynamically to fit, using magnification to better view smaller items. By default, it appears on the bottom edge of the screen, but it can instead be placed on the left or right edge of the screen if the user wishes. Applications that do not normally keep icons in the Dock will still appear there when running and remain until they are quit. These features are unlike those of the dock in the NeXT operating systems, where the capacity of the Dock is dependent on display resolution. This may be an attempt to recover some Shelf functionality, since macOS inherits no other such technology from NeXTSTEP. (Minimal Shelf functionality has been implemented in the Finder.)
The changes to the dock also bring its functionality close to that of Apple's Newton OS Button Bar, as found in the MessagePad 2x00 series and the like. Applications could be dragged in and out of the Extras Drawer, a Finder-like app, onto the bar. Also, when the screen was put into landscape mode, the user could choose to position the Button Bar at the right or left side of the screen, just like the Dock in macOS.
The macOS Dock also has extended menus that control applications without making them visible on screen. For most applications these offer simple options such as Quit, Keep In Dock, and Remove From Dock, though some applications use these menus for other purposes; iTunes, for example, uses its menu to let the user control certain playback options. Other uses include changing the status of an online alias (MSN, AIM/iChat, etc.) or automatically saving the changes made in a document (no current macOS application offers this feature). Docklings (in Mac OS X 10.4 or earlier) can also be opened by using the right mouse button, if the mouse has one; otherwise, clicking and holding or a control-click will bring the menu up.
In Mac OS X Leopard, docklings were replaced by Stacks. Stacks "stack" files into a small organized folder on the Dock, and they can be opened by left-clicking.
Stacks could be shown in three ways: a "fan", a "grid", or a "list", which is similar to docklings. In grid view, the folders in that stack can be opened directly in that stack without the need to open Finder.
In iOS, the dock is used to store applications and, since iOS 4, folders containing applications. Unlike the macOS dock, a maximum of 4 icons can be placed in the dock on the iPhone and the iPod Touch. The maximum for the iPad, however, is 16 icons (13 apps and 3 recently opened apps). The size of the dock on iOS cannot be changed.
When an application on the Dock is launched by clicking on it, it will jump until the software is finished loading. Additionally, when an application requires attention from a user, it will jump even higher until its icon is clicked and the user attends to its demands.
The original version of the dock, found in Mac OS X Public Beta through 10.0, presents a flat, white, translucent interface with Aqua-styled pinstripes. The dock found in Mac OS X 10.1 to 10.4 removes the pinstripes but is otherwise identical. Mac OS X 10.5 to 10.7 presents the applications on a three-dimensional glassy surface seen in perspective instead of the traditional flat one, resembling the application dock of Sun Microsystems' Project Looking Glass.[2] OS X 10.8 to 10.9 changes the look to resemble frosted glass with rounded corners. OS X 10.10 and later revert to a two-dimensional appearance, similar to Mac OS X 10.4, although more translucent and with an iOS 7-style blur effect. In macOS Big Sur and later, the dock remains two-dimensional but was redesigned with a more rounded look.
In iPhone OS 1 to 3, the dock used a metallic look similar to the front of the Power Mac G5 (2003-2005) and Mac Pro (2006-2012, 2019-). iPhone OS 3.2 for iPad and iOS 4 to 6 adopted the dock design from Mac OS X 10.5 to 10.7, which was used until iOS 7; that release uses a dock similar to Mac OS X Tiger's but with iOS 7-styled blur effects.[citation needed] In iOS 11, the dock for the iPad and iPhone X was redesigned to more closely resemble the macOS dock.[3][4]
The classic Mac OS did have a dock-like application called Launcher, which was first introduced with Macintosh Performa models in 1993 and later included as part of System 7.5.1. It performs the same basic function.[5] Also, add-ons such as DragThing added a dock for users of earlier versions.
macOS was not the first operating system to implement dock-like features. RISC OS contains a feature called the Icon bar, which is remarkably similar to the macOS Dock. The Icon bar was first implemented in 1987 for the first version of RISC OS, named Arthur.
Microsoft implemented a simplified dock feature in the Windows Desktop Update that shipped with Internet Explorer 4. This Quick Launch toolbar feature remained until Windows 7, where it was replaced by the Superbar, which implements functionality similar to the macOS Dock.
Various docks are also used in Linux and BSD. Some examples are Window Maker (which emulates the look and feel of the NeXTSTEP GUI), Docky, Avant Window Navigator, KXDocker (amongst others) for KDE, and various others such as gDesklets, adesklets, AfterStep's Wharf (a derivation from the NeXTSTEP UI), iTask NG (a module used with some Enlightenment-based Linux distributions such as gOS), and Blackbox's Slit.
Bruce Tognazzini, a usability consultant who worked for Apple in the 1980s and 1990s before Mac OS X was developed, wrote an article in 2001 listing ten problems he saw with the Dock. The article was updated in 2004, removing two of the original criticisms and adding a new one. One of his concerns was that the Dock uses too much screen space. Another was that icons only show their labels when the pointer hovers over them, so similar-looking folders, files, and windows are difficult to distinguish. Tognazzini also criticized the fact that when icons are dragged out of the Dock, they vanish with no easy way to get them back; he called this behavior "object annihilation".[6]
John Siracusa, writing for Ars Technica, also pointed out issues with the Dock around the release of Mac OS X Public Beta in 2000. He noted that because the Dock is centered, adding and removing icons changes the location of the other icons.[7] In a review of Mac OS X v10.0 the following year, he also noted that the Dock performs far more tasks than it should for optimum ease of use, including launching apps, switching apps, opening files, and holding minimized windows.[8] Siracusa further criticized the Dock after the release of Mac OS X v10.5, noting that it was made less usable for the sake of eye candy. He criticized the 3D look and reflections, the faint blue indicator for open applications, and the less distinguishable files and folders.[9]
Thom Holwerda, a managing editor of OSNews, raised some concerns about the Dock, including the facts that it grows in both directions, holds the Trash icon, and has no persistent labels. Holwerda also criticized the revised Dock appearance in Mac OS X v10.5.[10]
|
https://en.wikipedia.org/wiki/Dock_(macOS)
|
Mac OS (originally System Software; retronym: Classic Mac OS[a]) is the series of operating systems developed for the Macintosh family of personal computers by Apple Computer, Inc. from 1984 to 2001, starting with System 1 and ending with Mac OS 9. The Macintosh operating system is credited with having popularized the graphical user interface concept.[4] It was included with every Macintosh sold during the era in which it was developed, and many updates to the system software were made in conjunction with the introduction of new Macintosh systems.
Apple released the original Macintosh on January 24, 1984. The first version of the system software, which had no official name, was partially based on the Lisa OS, which Apple had previously released for the Lisa computer in 1983. As part of an agreement allowing Xerox to buy shares in Apple at a favorable price, it also used concepts from the Xerox PARC Alto computer, which former Apple CEO Steve Jobs and other Lisa team members had previewed.[1] This operating system consisted of the Macintosh Toolbox ROM and the "System Folder", a set of files loaded from disk. The name Macintosh System Software came into use in 1987 with System 5. Apple rebranded the system as Mac OS in 1996, starting officially with version 7.6, due in part to its Macintosh clone program.[5] That program ended after the release of Mac OS 8 in 1997.[6] The last major release of the system was Mac OS 9 in 1999.[7]
Initial versions of the System Software ran one application at a time. With the Macintosh 512K, a system extension called the Switcher was developed to use the additional memory to allow multiple programs to remain loaded. Each loaded program used its memory exclusively; a program appeared only when activated by the Switcher, as did the Finder's desktop. With the Switcher, the now-familiar Clipboard feature allowed copy and paste between the loaded programs across switches, including the desktop.
With the introduction of System 5, a cooperative multitasking extension called MultiFinder was added, which allowed content in windows of each program to remain in a layered view over the desktop; it was later integrated into System 7 as part of the operating system, along with support for virtual memory. By the mid-1990s, however, contemporary operating systems such as Windows NT, OS/2, NeXTSTEP, BSD, and Linux had all brought pre-emptive multitasking, protected memory, access controls, and multi-user capabilities to desktop computers. The Macintosh's limited memory management and susceptibility to conflicts among extensions that provide additional functionality, such as networking or support for a particular device,[8] led to significant criticism of the operating system and was a factor in Apple's declining market share at the time.
After two aborted attempts at creating a successor to the Macintosh System Software, called Taligent and Copland, and a four-year development effort spearheaded by Steve Jobs's return to Apple in 1997, Apple replaced Mac OS with a new operating system in 2001 named Mac OS X. It retained most of the user interface design elements of the Classic Mac OS, and there was some overlap of application frameworks for compatibility, but the two operating systems otherwise have completely different origins and architectures.[citation needed]
The final updates to Mac OS 9, released in 2001, provided interoperability with Mac OS X. The name "Classic", which now signifies the historical Mac OS as a whole, is a reference to the Classic Environment, a compatibility layer that helped ease the transition to Mac OS X (now macOS).[9]
The Macintosh project started in late 1978 with Jef Raskin, who envisioned an easy-to-use, low-cost computer for the average consumer. In September 1979, Raskin began looking for an engineer who could put together a prototype. Bill Atkinson, a member of the Apple Lisa team, introduced Raskin to Burrell Smith, a service technician who had been hired earlier that year.
Apple's concept for the Macintosh deliberately sought to minimize the user's awareness of the operating system. Many basic tasks that required more operating-system knowledge on other systems could be accomplished with mouse gestures and graphic controls on a Macintosh. This differentiated it from contemporaries such as MS-DOS, which used a command-line interface consisting of terse, abbreviated textual commands.
In January 1981, Steve Jobs completely took over the Macintosh project. Jobs and a number of Apple engineers visited Xerox PARC in December 1979, three months after the Lisa and Macintosh projects had begun. After hearing about the pioneering GUI technology being developed at Xerox PARC from former Xerox employees like Raskin, Jobs negotiated a visit to see the Xerox Alto computer and Smalltalk development tools in exchange for Apple stock options.[10] The final Lisa and Macintosh operating systems used concepts from the Xerox Alto, but many elements of the graphical user interface were created by Apple, including the menu bar, pull-down menus, and the concepts of drag and drop and direct manipulation.[11]
Unlike the IBM PC, which uses 8 kB of system ROM for the power-on self-test (POST) and basic input/output system (BIOS), the Mac ROM is significantly larger (64 kB) and holds key OS code. Much of the original Mac ROM code was written by Andy Hertzfeld, a member of the original Macintosh team. He was able to conserve precious ROM space by writing routines in assembly language code optimized with "hacks", or clever programming tricks.[12] In addition to the ROM, he also coded the kernel, the Macintosh Toolbox, and some of the desktop accessories (DAs). The icons of the operating system, which represent folders and application software, were designed by Susan Kare, who later designed the icons for Microsoft Windows 3.0. Bruce Horn and Steve Capps wrote the Macintosh Finder, as well as a number of Macintosh system utilities.
Apple aggressively advertised its new machine. After the release, the company bought all 39 pages of advertising space in the November/December 1984 edition of Newsweek magazine. The Macintosh quickly outsold its more sophisticated but much more expensive predecessor, the Lisa. Apple quickly developed MacWorks, a product that allowed the Lisa to emulate Macintosh system software through System 3, by which time the Lisa had been discontinued as the rebranded Macintosh XL. Many of the Lisa's operating system advances would not appear in the Macintosh operating system until System 7 or later.
Early versions of Mac OS are compatible only with Motorola 68000-family Macintoshes. As Apple introduced computers with PowerPC hardware, the OS was ported to support this architecture. Mac OS 8.1 is the last version that could run on a 68k processor (the 68040).
In systems prior to the PowerPC G3-based systems, significant parts of the system are stored in physical ROM on the motherboard. The initial purpose of this was to avoid having the OS use up most of the 128 KiB of RAM of the initial Macintosh—the initial ROMs were 64 KiB. This architecture also allows for a completely graphical OS interface at the lowest level, without the need for a text-only console or command-line mode: boot-time errors, such as finding no functioning disk drives, are communicated to the user graphically, usually with an icon or the distinctive Chicago bitmap font and a Chime of Death or a series of beeps. This is in contrast to MS-DOS and CP/M computers of the time, which displayed such messages in a mono-spaced font on a black background and required the use of the keyboard, rather than a mouse, for input. To provide such niceties at a low level, early Mac OS depends on core system software in ROM on the motherboard, which also ensured that only Apple computers or licensed clones (with the copyright-protected ROMs from Apple) could run Mac OS.
Several computer manufacturers over the years made Macintosh clones that were capable of running Mac OS. From 1995 to 1997, Apple licensed Macintosh ROMs to several companies, notably Power Computing, UMAX, and Motorola. These machines normally ran various versions of Classic Mac OS. Steve Jobs ended the clone-licensing program after returning to Apple in 1997.
Support for Macintosh clones was first exhibited in System 7.5.1, which was the first version to include the "Mac OS" logo (a variation on the original Happy Mac startup icon), and Mac OS 7.6 was the first to be named "Mac OS" instead of "System". These changes were made to disassociate the operating system from Apple's own Macintosh models.[13]
The Macintosh originally used the Macintosh File System (MFS), a flat file system with only one level of folders. This was quickly replaced in 1985 by the Hierarchical File System (HFS), which had a true directory tree. Both file systems are otherwise compatible. An improved file system named HFS Plus ("HFS+" or "Mac OS Extended") was announced in 1997 and implemented in 1998.[14]
Files in most file systems used with DOS, Windows, Unix, or other operating systems have only one "fork". By contrast, MFS and HFS give files two different "forks". The data fork contains the same sort of information as a file in other file systems, such as the text of a document or the bitmaps of an image file. The resource fork contains other structured data such as menu definitions, graphics, sounds, or code segments that would be incorporated into a program's file format on other systems. An executable file might consist only of resources (including code segments) with an empty data fork, while a data file might have only a data fork with no resource fork. A word processor file could contain its text in the data fork and styling information in the resource fork, so that an application that does not recognize the styling information can still read the raw text.
On the other hand, these forks challenged interoperability with other operating systems. When copying or transferring a Mac OS file to a non-Mac system, the default implementations would strip the file of its resource fork. Most data files contained only nonessential information in their resource fork, such as window size and location, but program files would be inoperative without their resources. This necessitated encoding schemes such as BinHex and MacBinary, which allowed a user to encode a dual-forked file into a single stream, or conversely decode such a stream back into a dual-forked file usable by Mac OS.
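As an illustration of the single-stream idea, the sketch below packs a name, type/creator codes, and both forks into a simplified MacBinary-style container: a 128-byte header followed by each fork padded to a 128-byte boundary. The field offsets follow the commonly documented MacBinary II layout, but CRC, Finder flags, and several other fields are omitted, so this is a teaching sketch rather than a compliant implementation.

```python
import struct

def encode_macbinary(name, file_type, creator, data_fork, rsrc_fork):
    """Pack two forks into one stream: 128-byte header, then each fork
    padded to a 128-byte boundary (simplified MacBinary II layout)."""
    hdr = bytearray(128)
    hdr[1] = len(name)                      # offset 1: filename length
    hdr[2:2 + len(name)] = name.encode("mac_roman")
    hdr[65:69] = file_type                  # four-char type code, e.g. b"TEXT"
    hdr[69:73] = creator                    # four-char creator code
    struct.pack_into(">I", hdr, 83, len(data_fork))  # big-endian fork lengths
    struct.pack_into(">I", hdr, 87, len(rsrc_fork))
    pad = lambda b: b + b"\x00" * (-len(b) % 128)
    return bytes(hdr) + pad(data_fork) + pad(rsrc_fork)

def decode_macbinary(blob):
    """Reconstitute (name, data fork, resource fork) from the stream."""
    name = blob[2:2 + blob[1]].decode("mac_roman")
    data_len, = struct.unpack_from(">I", blob, 83)
    rsrc_len, = struct.unpack_from(">I", blob, 87)
    rsrc_start = 128 + (data_len + 127) // 128 * 128
    return name, blob[128:128 + data_len], blob[rsrc_start:rsrc_start + rsrc_len]
```

Note how an executable with an empty data fork, or a plain document with an empty resource fork, round-trips equally well: the header records each fork's length independently.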
As part of Apple's goal of creating a computer with appliance-like simplicity, there is no explicit distinction made between the operating system software and the hardware it runs on. Because of this, early versions of the operating system do not have a distinct name. The software consists of two user-visible files: the System file, and the Finder, an application used for file management that also displays the Desktop. The two files are contained in a folder directory labeled "System Folder", which contains other resource files, like a printer driver, needed to interact with the System.[5] Version numbers of the operating system are based on the version numbers of these two files.
These releases can only run one application at a time, except for desk accessories, though special application shells such as Multi-Mac[16] or Switcher (discussed under MultiFinder) could work around this. Visible changes are best reflected in the version number of the Finder, where major leaps are found between 1.x, 4.x, 5.x, and 6.x.
In the late 1990s, Apple retroactively gave these older releases a single name.
System: Introduced screenshots using ⌘ Command+⇧ Shift+3
Towards the end of 1987, Apple introduced a package titled "Apple Macintosh System Software Update 5.0".[22] For the first time, the Macintosh operating system was offered as a distinct retail product that included four 800K disks and three manuals, at a cost of US$49. The software itself was still freely available through user groups and bulletin board services. While the product box presented this update to the operating system as "version 5.0", this number does not appear in the software itself. Three of the four disks (System Tools 1, System Tools 2, and Utilities 1) are bootable, and the user can boot off whichever floppy contains the tools the user needs. For instance, System Tools 2 is the only disk with printer drivers, and Utilities 1 is the only disk with Disk First Aid and Apple HD SC Setup. Because the disks are named System Tools, users and the press commonly referred to this version as "System Tools 5.0".
The primary new feature of System 5 is MultiFinder, an extension that lets the system run several programs at once. The system uses a cooperative multitasking model, meaning that time is given to background applications only when the foreground application yields control. A change to the system functions that applications were already calling to handle events made many existing applications share time automatically, and also allowed them to perform tasks in the background.[22] Users can also choose not to use MultiFinder, thereby running a single application at a time. In 1990, InfoWorld tested four multitasking options for PC and Mac, viewing MultiFinder positively overall, but noting that its presence halved the speed of file transfer and printing compared to the single-tasking System 6 without MultiFinder.[23]
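The yielding model can be sketched with Python generators (names here are illustrative, not Apple APIs): each "application" runs until it voluntarily yields, and the scheduler only regains control at those yield points.

```python
def app(name, steps):
    """A toy 'application': does one unit of work, then yields control."""
    for i in range(steps):
        yield f"{name}:{i}"   # yielding hands the CPU back to the scheduler

def run_cooperative(tasks):
    """Round-robin over tasks; each runs only until it next yields."""
    queue, log = list(tasks), []
    while queue:
        task = queue.pop(0)
        try:
            log.append(next(task))  # run the foreground slice
            queue.append(task)      # task yielded; give the others a turn
        except StopIteration:
            pass                    # task finished; drop it from rotation
    return log
```

The interleaving emerges only because every task yields; if one task entered a busy loop and never yielded, the others would never run again, which is why a single misbehaving program could freeze the whole classic Mac OS.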
System Software 6 (also referred to as "System 6") is a consolidation release of the Macintosh system software, producing a complete, stable, and long-lasting operating system. Two major hardware introductions requiring additional support under System 6 are the 68030 processor and the 1.44 MB SuperDrive, debuting with the Macintosh IIx and Macintosh SE/30. Later updates include support for the first specialized laptop features with the introduction of the Macintosh Portable. From System 6 forward, the Finder has a unified version number closely matching that of the System, alleviating much of the confusion caused by the often considerable differences between earlier Systems.[25]
On May 13, 1991, System 7 was released. It was a major upgrade over System 6, adding a significant user interface overhaul, new applications, stability improvements, and many new features. Its introduction coincided with the release of, and provided support for, the 68040 Macintosh line. The System 7 era saw numerous changes in the Macintosh platform, including a proliferation of Macintosh models, the 68k to Power Macintosh transition, the rise of Microsoft Windows, increasing use of computer networking, and the explosion in the popularity of the Internet.
One of the most significant features of System 7 is virtual memory support, an essential subsystem anticipated for years, which only existed for previous Systems in a third-party extension named Virtual from Connectix.[23] Accompanying this was a move to 32-bit memory addressing, necessary for the ever-increasing amounts of RAM available to the Motorola 68030 CPU, and to 68020 CPUs with a 68851 PMMU. This process involved making all of the routines in OS code use the full 32 bits of a pointer as an address; prior systems used the upper 8 bits as flags. This change is known as being "32-bit clean". While System 7 itself is 32-bit clean, many existing machines and thousands of applications were not, so it was some time before the process was completed. To ease the transition, the "Memory" control panel contains a switch to disable this feature, allowing for compatibility with older applications.
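A toy illustration of why "32-bit dirty" code breaks: under the 24-bit convention, the top byte of a pointer-sized word could carry flag bits, so any address wider than 24 bits is silently truncated. The constants and names below are hypothetical, for illustration only, not the actual Memory Manager flag layout.

```python
ADDR_MASK = 0x00FFFFFF   # only 24 bits actually address memory (16 MB max)
FLAG_LOCKED = 0x80       # hypothetical flag bits stored in the top byte
FLAG_PURGEABLE = 0x40

def pack_24bit(addr, flags):
    """'Dirty' convention: the top byte of the 32-bit word carries flags."""
    return ((flags & 0xFF) << 24) | (addr & ADDR_MASK)

def unpack_24bit(word):
    """Split a packed word back into (address, flags)."""
    return word & ADDR_MASK, (word >> 24) & 0xFF

# Round-tripping works as long as addresses stay under 16 MB...
assert unpack_24bit(pack_24bit(0x123456, FLAG_LOCKED)) == (0x123456, FLAG_LOCKED)
# ...but an address needing more than 24 bits is silently truncated,
# which is exactly the assumption "32-bit clean" code had to abandon:
assert pack_24bit(0x01123456, 0) == 0x00123456
```

A 32-bit clean system instead treats the full word as an address and keeps flags elsewhere, which is why dirty applications corrupted pointers once RAM grew past the 24-bit limit.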
Another notable System 7 feature is built-in cooperative multitasking. In System Software 6, this function was optional through the MultiFinder. System 7 also introduced aliases, similar to symbolic links on Unix, the shortcuts introduced in later versions of Microsoft Windows, and shadows in IBM OS/2. System extensions were enhanced by being moved to their own subfolder; a subfolder in the System Folder was also created for the control panels. In System 7.5, Apple includes the Extensions Manager, a previously third-party program which simplified the process of enabling and disabling extensions.
The Apple menu, home only to desk accessories in System 6, was made more general-purpose: the user could now make often-used folders and applications (or anything else they desired) appear in the menu by placing aliases to them in an "Apple Menu Items" subfolder of the System Folder. System 7 also introduced the following: AppleScript, a scripting language for automating tasks; 32-bit QuickDraw, supporting so-called "true color" imaging, previously available as a system extension; and TrueType, an outline font standard.
The Trash, under System 6 and earlier, empties itself automatically when the computer shuts down or, if MultiFinder is not running, when an application launches. System 7 reimplements the Trash as a special hidden folder, allowing files to remain in it across reboots until the user deliberately chooses the "Empty Trash" command.
System 7.1 is mainly a bugfix release, with a few minor features added. One of its major new features was moving fonts out of the System file into a Fonts folder in the System Folder; previously, a resource-copying utility such as ResEdit or Font D/A Mover was required to install fonts. System 7.1 was not only the first Macintosh operating system to cost money (all previous versions were free or sold at the cost of the floppies), but it also received a "Pro" sibling (version 7.1.1) with extra features. System 7.1.2 was the first version to support PowerPC-based Macs. System 7.1 also introduced System Enablers as a method to support new models without updating the actual System file, which led to extra files inside the System Folder (one per new model supported).
System 7.5 introduces a large number of new features, many of which are based on shareware applications that Apple bought and included in the new system.[28] On the newer PowerPC machines, System 7.5 may have stability problems, partly due to a new memory manager (which can be turned off)[citation needed] and issues with the handling of errors in the PowerPC code (all PowerPC exceptions map to Type 11). These issues do not affect 68k-architecture machines. System 7.5 is contemporary with Apple's failed Copland effort as well as the release of Windows 95.
Stability improved in PowerPC-based Macs with Mac OS 7.6, which dropped the "System" moniker because a more trademarkable name was needed in order to license the OS to the growing market of third-party Macintosh clone manufacturers. Mac OS 7.6 required 32-bit-clean ROMs, and so it dropped support for every Mac with a 68000 processor, as well as the Mac II, Mac IIx, Mac IIcx, and Mac SE/30.
Mac OS 8 was released on July 26, 1997, the same month Steve Jobs became the de facto CEO of Apple. It was mainly released to keep the Mac OS moving forward during a difficult time for Apple. Initially planned as Mac OS 7.7, it was renumbered "8" to exploit a legal loophole and accomplish Jobs's goal of terminating third-party manufacturers' licenses to System 7 and shutting down the Macintosh clone market.[29]
Mac OS 8 added a number of features from the abandoned Copland project, while leaving the underlying operating system unchanged. A multi-threaded Finder was included; files could now be copied in the background. The GUI was changed in appearance to a new shaded greyscale look named Platinum, and the ability to change appearance themes (also known as skins) was added via a new control panel (though Platinum was the only theme shipped). This capability was provided by a new "appearance" API layer within the OS, one of the few significant changes.
Apple sold 1.2 million copies of Mac OS 8 in its first two weeks of availability and 3 million within six months. In light of Apple's financial difficulties at the time, there was a large grassroots movement among Mac users to upgrade and "help save Apple". Even some pirate groups refused to redistribute the OS.[30]
Mac OS 8.1 introduced an updated version of the Hierarchical File System named HFS+, which fixed many limitations of the earlier system and was used in subsequent versions of macOS until macOS High Sierra, when it was replaced with the Apple File System. There are some other interface changes, such as separating network features from printing, and some improvements to application switching. However, in underlying technical respects, Mac OS 8 is not very different from System 7.
Mac OS 8.5 focuses on speed and stability, with most 68k code replaced by modern code native to the PowerPC. It also improved the appearance of the user interface, although the theming feature was cut late in development.
Mac OS 9, the last major revision of the Classic Mac OS, was released on October 23, 1999.[7] It is generally a steady evolution from Mac OS 8. Early development releases of Mac OS 9 were numbered 8.7.
Mac OS 9 added improved support for AirPort wireless networking. It introduced an early implementation of multi-user support: though not a true multi-user operating system, Mac OS 9 does allow multiple desktop users to have their own data and system settings. An improved Sherlock search engine added several new search plug-ins. Mac OS 9 also provides much improved memory implementation and management. AppleScript was improved to allow TCP/IP and networking control. Mac OS 9 also makes the first use of the centralized Apple Software Update to find and install OS and hardware updates.
Other new features included on-the-fly file encryption software with code signing and Keychain technologies, Remote Networking and File Server packages, and a much improved list of USB drivers.
Mac OS 9 also added some transitional technologies to help application developers adopt some Mac OS X features before the new OS was introduced to the public, easing the transition. These included new APIs for the file system and the bundling of the Carbon library that apps could link against instead of the traditional API libraries; apps adapted to do this could run natively on Mac OS X as well. Other changes were made beginning with the Mac OS 9.1 update to allow it to be launched in the Classic Environment within Mac OS X.
The final update to the Classic Mac OS was version 9.2.2, released on December 5, 2001.[31]
macOS (originally Mac OS X and later OS X)[32] is Apple's current Mac operating system, which officially succeeded the Classic Mac OS in 2001. Although it was originally marketed as simply "version 10" of Mac OS, it has a history that is largely independent of the earlier Mac OS releases.
The first version, Mac OS X Server 1.0, released in 1999, retains the "Platinum" appearance from the Classic Mac OS and even resembles OPENSTEP in places. The first consumer version, Mac OS X 10.0, released on March 24, 2001, introduced the new Aqua user interface.
Apple shortened the name to "OS X" in 2011 and then changed it to "macOS" in 2016 to align with the branding of Apple's other operating systems.
The macOS architectural legacy is the successor to Mac OS 9 and the Classic Mac OS legacy. However, unlike the Classic Mac OS, it is a Unix-based operating system[33] built on NeXTSTEP and technology developed at NeXT from the late 1980s until early 1997, when Apple purchased the company and its CEO Steve Jobs returned to Apple.[34] macOS also makes use of the BSD codebase and the XNU kernel,[35] and its core set of components is based upon Apple's open source Darwin operating system.
Users of the Classic Mac OS generally upgraded to Mac OS X, but in its early years it was criticized as more difficult and less user-friendly than the original Mac OS, for lacking certain features that had not yet been reimplemented in the new OS, for being slower on the same hardware (especially older hardware), and for incompatibilities with the older OS.[36] Because drivers (for printers, scanners, tablets, etc.) written for the older Mac OS were not compatible with Mac OS X, because program support in the Classic Environment used to run the older operating system's programs on Mac OS X was inconsistent, and because Mac OS X did not support Apple computers released before late 1997, some Macintosh users continued using the older Classic Mac OS for a few years after the original release of Mac OS X. Steve Jobs encouraged people to upgrade to Mac OS X by staging a mock funeral for Mac OS 9 at WWDC 2002.[37]
PowerPC versions of Mac OS X up to and including Mac OS X 10.4 Tiger include a compatibility layer for running older Mac applications, the Classic Environment. Originally codenamed the "blue box", the environment runs a nearly complete Mac OS 9 operating system, version 9.1 or later, as a Mac OS X application. This allows applications that have not been ported to the Carbon API to run on Mac OS X. This is reasonably seamless, though "classic" applications retain their original Mac OS 9 appearance and do not gain the Mac OS X "Aqua" appearance.
Early New World ROM PowerPC-based Macs shipped with Mac OS 9.2 as well as Mac OS X. Mac OS 9.2 had to be installed by the user; it was not installed by default on hardware revisions released after Mac OS X 10.4. Most well-written "classic" Mac OS applications function properly under this environment, but compatibility is assured only if the software was written to be unaware of the actual hardware and to interact solely with the operating system. The Classic Environment is not available on Intel-based Mac systems or the latest Apple silicon Macs due to the incompatibility of Mac OS 9 with both the x86 and ARM hardware.
Third-party Macintosh emulators, such as vMac, Basilisk II, and Executor, eventually made it possible to run the Classic Mac OS on Intel-based PCs. These emulators were restricted to emulating the 68k series of processors, and as such most could not run versions of the Mac OS that succeeded 8.1, which required PowerPC processors. Most also required a Mac ROM image or a hardware interface supporting a real Mac ROM chip; those requiring an image are of dubious legal standing, as the ROM image may infringe on Apple's intellectual property.
A notable exception was the Executor commercial software product from Abacus Research & Development, the only product that used 100% reverse-engineered code without the use of Apple technology. It ran extremely quickly but never achieved more than a minor subset of functionality. Few programs were completely compatible, and many were extremely crash-prone if they ran at all. Executor filled a niche market for porting 68k Mac applications to x86 platforms; development ceased in 2002, and the source code was released by the author in late 2008.[38] Emulators using Mac ROM images offered near-complete Mac OS compatibility, and later versions offered excellent performance as x86 processor performance rapidly increased.
Apple included its own Mac 68k emulator that ran seamlessly on all PowerPC-based versions of the Classic Mac OS.[39] Apple also sold a Mac 68k emulator for SPARC-based (Solaris) and PA-RISC-based (HP-UX) systems called Macintosh Application Environment (MAE), which could run variants of System 7.x inside an X11 window.
As of 2021, the most capable PowerPC emulator is QEMU.[40] In comparison with 68k-emulator development, PowerPC emulation is more complex and requires more CPU power. The emulator is capable of running Classic Mac OS and OS X at full speed with networking and sound in most cases.[41] QEMU has official support for Classic Mac OS versions 9.0 through 9.2 and Mac OS X 10.0 up to and including 10.5.[42] QEMU has several advantages over other PowerPC emulators, namely supporting a wide range of platforms, from Linux to Mac and Windows, on current CPU architectures.[42]
Another PowerPC emulator is SheepShaver, which has been around since 1998 for BeOS on the PowerPC platform; in 2002 it was open-sourced, and efforts began to port it to other platforms. Originally it was not designed for use on x86 platforms and required an actual PowerPC processor present in the machine it was running on, similar to a hypervisor. Although it provides PowerPC processor support, it can run only up to Mac OS 9.0.4 because it does not emulate a memory management unit.
Other examples include ShapeShifter (by the same developer that created SheepShaver), Fusion, PearPC, and iFusion. The latter ran Classic Mac OS with a PowerPC "coprocessor" accelerator card. Using this method has been said to equal or better the speed of a Macintosh with the same processor, especially with respect to the 68k series, due to real Macs running in MMU trap mode, hampering performance.[citation needed]
Apple's initial version of Rosetta is a PowerPC emulator allowing Intel-based Macs to run PowerPC Mac OS X applications, but it is unable to run non-Carbon Classic Mac OS (9.2.2 or earlier) applications.[43] Rosetta was available for all Intel releases of OS X until version 10.7 Lion.
https://en.wikipedia.org/wiki/Classic_Mac_OS
There are a number of Unix-like operating systems based on or descended from the Berkeley Software Distribution (BSD) series of Unix variants. The three most notable descendants in current use are FreeBSD, OpenBSD, and NetBSD, which are all derived from 386BSD and 4.4BSD-Lite by various routes. Both NetBSD and FreeBSD started life in 1993, initially derived from 386BSD, but in 1994 migrated to a 4.4BSD-Lite code base. OpenBSD was forked from NetBSD in 1995. Other notable derivatives include DragonFly BSD, which was forked from FreeBSD 4.8.
Most of the current BSD operating systems are open source and available for download, free of charge, under the BSD License. They also generally use a monolithic kernel architecture, apart from DragonFly BSD, which features a hybrid kernel. The various open source BSD projects generally develop the kernel and userland programs and libraries together, the source code being managed using a single central source repository.
In the past, BSD was also used as a basis for several proprietary versions of UNIX, such as Sun's SunOS, Sequent's Dynix, NeXT's NeXTSTEP, DEC's Ultrix, and OSF/1 AXP (which became the now-discontinued Tru64 UNIX).
FreeBSD aims to make an operating system usable for any purpose.[1] It is intended to run a wide variety of applications, be easy to use, contain cutting-edge features, and be highly scalable, including for network servers with very high loads.[2] FreeBSD is free software, and the project prefers the FreeBSD license. However, the project sometimes accepts non-disclosure agreements (NDAs) and includes a limited number of non-free hardware abstraction layer (HAL) modules for specific device drivers in its source tree, to support the hardware of companies who do not provide purely libre drivers (such as HALs to program software-defined radios, so that vendors do not share their non-free algorithms).
To maintain a high level of quality and provide good support for "production quality commercial off-the-shelf (COTS) workstation, server, and high-end embedded systems", FreeBSD focuses on a narrow set of architectures.[3] A significant focus of development since 2000[4] has been fine-grained locking and symmetric multiprocessing (SMP) scalability. From 2007 on, most of the kernel was fine-locked, and scaling improvements started to be seen.[5] Other recent work includes Common Criteria security functionality, such as mandatory access control and security event audit support.
Derivatives:
NetBSD aims to provide a freely redistributable operating system that professionals, hobbyists, and researchers can use in any manner they wish. The main focus is portability, achieved through a clear distinction between machine-dependent and machine-independent code. It runs on a wide variety of 32-bit and 64-bit CPU architectures and hardware platforms, and is intended to interoperate well with other operating systems.
NetBSD places emphasis on correct design, well-written code, stability, and efficiency. Where practical, close compliance with open API and protocol standards is also aimed for. A powerful TCP/IP stack, combined with a small footprint,[10] makes NetBSD well suited to being embedded in networking applications,[11] as well as to reviving vintage hardware.[12]
In June 2008, the NetBSD Foundation moved to a 2-clause BSD license, citing changes at UCB and industry applicability.[13]
Projects spawned by NetBSD include NPF, Rump kernels, busdma, pkgsrc, and NVMM.[14]
Derivatives:
OpenBSD is a security-focused BSD known for its developers' insistence on extensive, ongoing code auditing for security and correct functionality, a "secure by default" philosophy, good documentation, and adherence to strictly open source licensing. The system incorporates numerous security features that are absent or optional in other versions of BSD. The OpenBSD policy on openness extends to hardware documentation and drivers, since without these there can be no trust in the correct operation of the kernel and its security, and vendor software bugs would be hard to resolve.[24]
OpenBSD emphasizes very high standards in all areas. Security policies include disabling all non-essential services and shipping sane initial settings; integrated cryptography (originally made easier due to relaxed Canadian export laws relative to the United States); full public disclosure of all security flaws discovered; thorough auditing of code for bugs and security issues; and various security features, including the W^X page protection technology and heavy use of randomization to mitigate attacks. Coding approaches include an emphasis on searching for similar issues throughout the code base if any code issue is identified. Concerning software freedom, OpenBSD prefers the BSD or ISC license, with the GPL acceptable only for existing software which is impractical to replace, such as the GNU Compiler Collection. NDAs are never considered acceptable. In common with its parent, NetBSD, OpenBSD strives to run on a wide variety of hardware.[25] Where licenses conflict with OpenBSD's philosophy, the OpenBSD team has re-implemented major pieces of software from scratch, and these re-implementations have often become the standard used within other versions of BSD. Examples include the pf packet filter, new privilege separation techniques used to safeguard tools such as tcpdump and tmux, much of the OpenSSH codebase, and the replacement of GPL-licensed tools such as diff, grep, and pkg-config with ISC- or BSD-licensed equivalents.
OpenBSD prominently notes the success of its security approach on its website home page. As of July 2024[update], only two vulnerabilities had ever been found in its default install (an OpenSSH vulnerability found in 2002, and a remote network vulnerability found in 2007) in a period of almost 22 years. According to OpenBSD expert Michael W. Lucas, OpenBSD "is widely regarded as the most secure operating system available anywhere, under any licensing terms."[26]
OpenBSD has spawned numerous child projects such as OpenSSH, OpenNTPD, OpenBGPD, OpenSMTPD, PF, CARP, and LibreSSL. Many of these are designed to replace restricted alternatives.
Derivatives:
DragonFly BSD aims to be inherently easy to understand and develop for multi-processor infrastructures. The main goal of the project, forked from FreeBSD 4.8, is to radically change the kernel architecture, introducing microkernel-like message passing that will enhance scaling and reliability on symmetric multiprocessing (SMP) platforms while also being applicable to NUMA and clustered systems. The long-term goal is to provide a transparent single system image in clustered environments. DragonFly BSD originally supported both the IA-32 and x86-64 platforms; however, support for IA-32 was dropped in version 4.0.[35][36] Matthew Dillon, the founder of DragonFly BSD, believes supporting fewer platforms makes it easier for a project to do a proper, ground-up symmetric multiprocessing implementation.[37]
In September 2005, the BSD Certification Group, after advertising on a number of mailing lists, surveyed 4,330 BSD users, 3,958 of whom took the survey in English, to assess the relative popularity of the various BSD operating systems. About 77% of respondents used FreeBSD, 33% used OpenBSD, 16% used NetBSD, 2.6% used Dragonfly, and 6.6% used other (potentially non-BSD) systems. Other languages offered were Brazilian and European Portuguese, German, Italian, and Polish. Note that there was no control group or pre-screening of the survey takers. Those who checked "Other" were asked to specify that operating system.[38]
Because survey takers were permitted to select more than one answer, the percentages shown in the graph, which are relative to the total number of survey participants, add up to more than 100%. If a survey taker filled in more than one choice for "other", this is still counted as only one vote for "other" on this chart.[38]
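A quick check of the quoted figures shows why the chart exceeds 100%: in a pick-many survey, the per-option percentages are each computed against the whole respondent pool, so their sum is not a partition of respondents.

```python
def multiselect_total(shares):
    """Sum of per-option percentages in a pick-many survey."""
    return round(sum(shares.values()), 1)

# Figures from the 2005 BSD Certification Group survey quoted above:
usage = {"FreeBSD": 77, "OpenBSD": 33, "NetBSD": 16,
         "DragonFly": 2.6, "Other": 6.6}
assert multiselect_total(usage) == 135.2  # well past 100%
```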
Another attempt to profile worldwide BSD usage is the *BSDstats Project, whose primary goal is to demonstrate to hardware vendors the penetration of BSD and viability of hardware drivers for the operating system. The project collects data monthly from any BSD system administrators willing to participate, and currently records the BSD market share of participating FreeBSD, OpenBSD, NetBSD, DragonflyBSD, Debian GNU/kFreeBSD, TrueOS, and MirBSD systems.[39]
In 2020, a new independent project was introduced to collect statistics with the goal of significantly increasing the number of observed parameters.[40][41]
DistroWatch, well known in the Linux community and often used as a rough guide to free operating system popularity, publishes page hits for each of the Linux distributions and other operating systems it covers. As of 27 March 2020, using a data span of the last six months it placed FreeBSD in 21st place with 452 hits per day, GhostBSD in 51st place with 243 hits, TrueOS in 54th place with 182 hits per day, DragonflyBSD in 75th place with 180 hits, OpenBSD in 80th place with 169 hits per day and NetBSD in 109th place with 105 hits per day.[42]
The names FreeBSD and OpenBSD are references to software freedom: both in cost and open source.[43] NetBSD's name is a tribute to the Internet, which brought the original developers together.[44]
The first BSD mascot was the BSD daemon, named after a common type of Unix software program, a daemon. FreeBSD still uses the image, a red cartoon daemon named Beastie wielding a pitchfork, as its mascot today. In 2005, after a competition, a stylized version of Beastie's head designed and drawn by Anton Gural was chosen as the FreeBSD logo.[45] The FreeBSD slogan is "The Power to Serve."
The NetBSD flag, designed in 2004 by Grant Bissett, is inspired by the original NetBSD logo,[46] designed in 1994 by Shawn Mueller, which portrayed a number of BSD daemons raising a flag on top of a mound of computer equipment, based on the World War II photograph Raising the Flag on Iwo Jima. The Board of Directors of The NetBSD Foundation believed this was too complicated, too hard to reproduce, and carried negative cultural ramifications, and was thus not a suitable image for NetBSD in the corporate world; the new, simpler flag design replaced it.[47] The NetBSD slogan is "Of course it runs NetBSD", referring to the operating system's portability.
Originally, OpenBSD used the BSD daemon as a mascot, sometimes with an added halo as a distinguishing mark, but it later replaced the daemon with Puffy. Although Puffy is usually referred to as a pufferfish, the spikes in the cartoon images give him a closer likeness to the porcupinefish. The logo is a reference to the fish's defensive capabilities and to the Blowfish cryptography algorithm used in OpenSSH. OpenBSD also has a number of slogans, including "Secure by default", which was used in the first OpenBSD song, "E-railed", and "Free, Functional & Secure",[48] and OpenBSD has released at least one original song with every release since 3.0.[49]
The DragonFly BSD logo, designed by Joe Angrisano, is a dragonfly named Fred.[50] A number of unofficial logos[51] by various authors also show the dragonfly or stylized versions of it. DragonFly BSD considers itself to be "the logical continuation of the FreeBSD 4.x series."[52] FireflyBSD has a similar logo, a firefly, showing its close relationship to DragonFly BSD. In fact, the FireflyBSD website states that proceeds from sales will go to the development of DragonFly BSD, suggesting that the two may in fact be very closely related.
PicoBSD's slogan is "For the little BSD in all of us," and its logo includes a version of FreeBSD's Beastie as a child,[53] showing its close connection to FreeBSD and the minimal amount of code needed to run as a Live CD.
A number of BSD OSes use stylized versions of their respective names for logos. These include TrueOS, GhostBSD, DesktopBSD, ClosedBSD,[54] and MicroBSD.[55] TrueOS's slogan is "Personal computing, served up BSD style!"; GhostBSD's, "A simple, secure BSD served on a Desktop."; DesktopBSD's, "A Step Towards BSD on the Desktop."; and MicroBSD's, "The small secure unix like OS."
MirOS's site collects a variety of BSD mascots and Tux, the Linux mascot, together, illustrating the project's aim of supporting both BSD and Linux kernels. MirOS's slogan is "a wonderful operating system for a world of peace."[56]
https://en.wikipedia.org/wiki/Comparison_of_BSD_operating_systems
The following is a list of Mac software – notable computer applications for current macOS operating systems.
For software designed for the Classic Mac OS, see List of old Macintosh software.
This section lists bitmap graphics editors and vector graphics editors.
macOS includes the built-in XProtect antimalware as part of Gatekeeper.
The software listed in this section is antivirus software and malware removal software.
This section lists software for file archiving, backup and restore, data compression, and data recovery.
https://en.wikipedia.org/wiki/List_of_Mac_software
Mac operating systems were developed by Apple Inc. in a succession of two major series.
In 1984, Apple debuted the operating system that is now known as the classic Mac OS with its release of the original Macintosh System Software. The system, rebranded Mac OS in 1997, was pre-installed on every Macintosh until 2002 and offered on Macintosh clones briefly in the 1990s. It was noted for its ease of use, and also criticized for its lack of modern technologies compared to its competitors.[1][2]
The current Mac operating system is macOS, originally named Mac OS X until 2012 and then OS X until 2016.[3] It was developed between 1997 and 2001 after Apple's purchase of NeXT. It brought an entirely new architecture based on NeXTSTEP, a Unix system, that eliminated many of the technical challenges that the classic Mac OS faced, such as problems with memory management. The current macOS is pre-installed with every Mac and receives a major update annually.[4] It is the basis of Apple's current system software for its other devices – iOS, iPadOS, watchOS, and tvOS.[5]
Prior to the introduction of Mac OS X, Apple experimented with several other concepts, releasing different products designed to bring the Macintosh interface or applications to Unix-like systems or vice versa: A/UX, MAE, and MkLinux. Apple's effort to expand upon and develop a replacement for its classic Mac OS in the 1990s led to a few cancelled projects, code-named Star Trek, Taligent, and Copland.
Although the classic Mac OS and macOS (Mac OS X) have different architectures, they share a common set of GUI principles, including a menu bar across the top of the screen; the Finder shell, featuring a desktop metaphor that represents files and applications using icons and relates concepts like directories and file deletion to real-world objects like folders and a trash can; and overlapping windows for multitasking.
Before the arrival of the Macintosh in 1984, Apple's history of operating systems began with its Apple II computers in 1977, which run Apple DOS, ProDOS, and GS/OS; the Apple III in 1980, which runs Apple SOS; and the Lisa in 1983, which runs Lisa OS and later MacWorks XL, a Macintosh emulator. Apple developed the Newton OS for its Newton personal digital assistant from 1993 to 1997.
Apple launched several new operating systems based on the core of macOS, including iOS in 2007 for its iPhone, iPad, and iPod Touch mobile devices and in 2017 for its HomePod smart speaker; watchOS in 2015 for the Apple Watch; and tvOS in 2015 for the Apple TV set-top box.
The classic Mac OS is the original Macintosh operating system, introduced in 1984 alongside the first Macintosh; it remained in primary use on Macs until the release of Mac OS X in 2001.[6][7]
Apple released the original Macintosh on January 24, 1984; its early system software is partially based on Lisa OS, and inspired by the Alto computer, which former Apple CEO Steve Jobs previewed at Xerox PARC.[6] It was originally named "System Software", or simply "System"; Apple rebranded it as "Mac OS" in 1996, due in part to its Macintosh clone program, which ended one year later.[8]
Classic Mac OS is characterized by its monolithic design. Initial versions of the System Software run one application at a time. System 5 introduced cooperative multitasking. System 7 supports 32-bit memory addressing and virtual memory, allowing larger programs. Later updates to System 7 enabled the transition to the PowerPC architecture. The system was considered user-friendly, but its architectural limitations were critiqued, such as limited memory management, lack of protected memory and access controls, and susceptibility to conflicts among extensions.[2]
Nine major versions of the classic Mac OS were released. The name "Classic", which now signifies the system as a whole, is a reference to a compatibility layer that helped ease the transition to Mac OS X.[9]
The system was launched as Mac OS X, was renamed OS X in 2012,[10] and has been named macOS since 2016; it is the current Mac operating system and officially succeeded the classic Mac OS in 2001.
The system was originally marketed as simply "version 10" of Mac OS, but it has a history that is largely independent of the classic Mac OS. It is a Unix-based operating system[11][12] built on NeXTSTEP and other NeXT technology from the late 1980s until early 1997, when Apple purchased the company and its CEO Steve Jobs returned to Apple.[13] Precursors to Mac OS X include OPENSTEP, Apple's Rhapsody project, and the Mac OS X Public Beta.
macOS is based on Apple's open source Darwin operating system, which is based on the XNU kernel and BSD.[14]
macOS is the basis for some of Apple's other operating systems, including iPhone OS/iOS, iPadOS, watchOS, tvOS, and visionOS.
The first version of the system was released on March 24, 2001, supporting the Aqua user interface. Since then, several more versions adding newer features and technologies have been released. Since 2011, new releases have been offered annually.[4]
macOS 10.16's version number was updated to 11.0 in the third beta of macOS Big Sur, which was labeled 11.0 Beta 3 rather than 10.16 Beta 3.
An early server computing version of the system was released in 1999 as a technology preview. It was followed by several more official server-based releases. Since 2011, server functionality has instead been offered as an add-on for the desktop system.[15]
The Apple Real-time Operating System Environment (A/ROSE) is a small embedded operating system which runs on the Macintosh Coprocessor Platform, an expansion card for the Macintosh. It is a single "overdesigned" hardware platform on which third-party vendors could build practically any product, reducing the otherwise heavy workload of developing a NuBus-based expansion card. The first version of the system was ready for use in February 1988.[16]
In 1988, Apple released its first UNIX-based OS, A/UX, a UNIX operating system with the Mac OS look and feel. It was not very competitive for its time, due in part to the crowded UNIX market and Macintosh hardware lacking the high-end design features present on workstation-class computers. Most of its sales were to the U.S. government, which required the POSIX compliance that the Mac OS lacked.[17]
The Macintosh Application Environment (MAE) is a software package introduced by Apple in 1994 that allows certain Unix-based computer workstations to run Macintosh applications. MAE uses the X Window System to emulate a Macintosh Finder-style graphical user interface. The last version, MAE 3.0, is compatible with System 7.5.3. MAE was published for Sun Microsystems SPARCstation and Hewlett-Packard systems. It was discontinued on May 14, 1998.[18]
Announced at the 1996 Worldwide Developers Conference (WWDC), MkLinux is an open source operating system, started by the OSF Research Institute and Apple in February 1996, to port Linux to the PowerPC platform and thus to Macintosh computers. In mid-1998, the community-led MkLinux Developers Association took over development of the operating system. MkLinux is short for "Microkernel Linux", which refers to its adaptation of the monolithic Linux kernel to run as a server hosted atop the Mach microkernel version 3.0.[19]
The Star Trek project (as in "to boldly go where no Mac has gone before") was a secret prototype begun in 1992 to port the classic Mac OS to Intel-compatible x86 personal computers. In partnership with Apple and with support from Intel, the project was instigated by Novell, which was looking to integrate its DR-DOS with the Mac OS GUI as a mutual response to the monopoly of Microsoft's Windows 3.0 and MS-DOS. A team consisting of four engineers from Apple and four from Novell got the Macintosh Finder and some basic applications, such as QuickTime, running smoothly. The project was canceled one year later, in early 1993, but some of the work was reused when porting the Mac OS to PowerPC.[20][21]
Taligent (a portmanteau of "talent" and "intelligent") is an object-oriented operating system and the company producing it. Started as the Pink project within Apple to provide a replacement for the classic Mac OS, it was later spun off into a joint venture with IBM as part of the AIM alliance, with the purpose of building a platform to compete with Microsoft Cairo and NeXTSTEP. The development effort never came together, and the project has been cited as an example of a project death march. Apple pulled out of the project in 1995, before the code had been delivered.[22]
Copland was a project at Apple to create an updated version of the classic Mac OS. It was to have introduced protected memory, preemptive multitasking, and new underlying operating system features, yet still be compatible with existing Mac software. Apple originally planned a follow-up release, Gershwin, to add multithreading and other advanced features. New features were added more rapidly than they could be completed, and the completion date slipped into the future with no sign of a release. In 1996, Apple canceled the project outright and sought a suitable third-party replacement. Copland development ended in August 1996, and in December 1996 Apple announced that it was buying NeXT for its NeXTSTEP operating system.[23]
|
https://en.wikipedia.org/wiki/Mac_operating_systems
|
Jini (/ˈdʒiːni/), also called Apache River, is a network architecture for the construction of distributed systems in the form of modular co-operating services.[2] JavaSpaces is a part of Jini.
Originally developed by Sun Microsystems, Jini was released under the Apache License 2.0.[3] Responsibility for Jini was transferred to Apache in 2007[4] under the project name "River",[5] but the project was retired in early 2022 due to lack of activity.[4]
Sun Microsystems introduced Jini in July 1998.[2] In November 1998, Sun announced that a number of firms were supporting Jini.
The Jini team at Sun stated that Jini is not an acronym. Ken Arnold has joked that it means "Jini Is Not Initials", making it a recursive anti-acronym,[6] but it has always been just Jini. The word 'jini' means "the devil" in Swahili; this is borrowed from the Arabic word for a mythological spirit, the same word rendered in English as 'genie' (via the Latin genius).
Jini provides the infrastructure for a service-object-oriented architecture (SOOA).
Locating services is done through a lookup service.[7] Services try to contact a lookup service (LUS), either by unicast interaction, when the actual location of the lookup service is known, or by dynamic multicast discovery. The lookup service returns an object called the service registrar, which services use to register themselves so they can be found by clients. Clients can use the lookup service to retrieve a proxy object to the service; a call to the proxy is translated into a service request, performed on the service, and its result returned to the client. This strategy is more convenient than Java remote method invocation, which requires the client to know the location of the remote service in advance.
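The register-then-lookup flow described above can be sketched in a few lines. Jini itself is Java-based; the Python classes and method names below are illustrative stand-ins for the pattern, not the actual Jini API:

```python
# Illustrative sketch of the lookup-service pattern: services register
# proxies under an interface name; clients retrieve a proxy and call it
# as if it were the service. Names here are hypothetical, not Jini's.

class LookupService:
    """A registry mapping service interface names to proxy objects."""
    def __init__(self):
        self._registry = {}

    def register(self, interface_name, proxy):
        # Service side: publish a proxy so clients can find it later.
        self._registry[interface_name] = proxy

    def lookup(self, interface_name):
        # Client side: retrieve the proxy; calls on the proxy would be
        # forwarded to the remote service by the proxy itself.
        return self._registry.get(interface_name)

class PrinterProxy:
    """Stand-in for a downloaded proxy that forwards calls to a service."""
    def print_document(self, text):
        return f"printed: {text}"

lus = LookupService()
lus.register("PrinterService", PrinterProxy())   # service registers itself
printer = lus.lookup("PrinterService")           # client discovers it
result = printer.print_document("hello")
```

The key point the sketch preserves is that the client never needs the service's network location, only the lookup service and the interface name.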
Jini uses a lookup service to broker communication between the client and service. This appears to be a centralized model (though the communication between client and service can be seen as decentralized) that does not scale well to very large systems. However, the lookup service can be horizontally scaled by running multiple instances that listen to the same multicast group.[citation needed]
|
https://en.wikipedia.org/wiki/Jini
|
Network management is the process of administering and managing computer networks. Services provided by this discipline include fault analysis, performance management, provisioning of networks and maintaining quality of service. Network management software is used by network administrators to help perform these functions.
A small number of accessory methods exist to support network and network device management. Network management allows IT professionals to monitor network components within large network areas. Access methods include SNMP, the command-line interface (CLI), custom XML, CMIP, Windows Management Instrumentation (WMI), Transaction Language 1 (TL1), CORBA, NETCONF, RESTCONF and the Java Management Extensions (JMX).
Schemas include the Structure of Management Information (SMI), YANG, WBEM, the Common Information Model (CIM Schema), and MTOSI, amongst others.
Effective network management can provide positive strategic impacts. For example, in the case of developing an infrastructure, providing participants with some interactive space allows them to collaborate with each other, thereby promoting overall benefits. At the same time, the value of network management to the strategic network is also affected by the relationship between participants. Active participation, interaction and collaboration can make them more trusting of each other and enhance cohesion.[1]
|
https://en.wikipedia.org/wiki/Network_management
|
Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behavior. Devices that typically support SNMP include cable modems, routers, network switches, servers, workstations, printers, and more.[1]
SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems, organized in a management information base (MIB), which describes the system status and configuration. These variables can then be remotely queried (and, in some circumstances, manipulated) by managing applications.
Three significant versions of SNMP have been developed and deployed. SNMPv1 is the original version of the protocol. More recent versions, SNMPv2c and SNMPv3, feature improvements in performance, flexibility and security.
SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.[2]
In typical uses of SNMP, one or more administrative computers called managers have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes a software component called an agent which reports information via SNMP to the manager.
An SNMP-managed network consists of three key components:
A managed device is a network node that implements an SNMP interface allowing unidirectional (read-only) or bidirectional (read and write) access to node-specific information. Managed devices exchange node-specific information with the NMSs. Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, routers, access servers, switches, cable modems, bridges, hubs, IP telephones, IP video cameras, computer hosts, and printers.
An agent is a network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form.
A network management station (NMS) executes applications that monitor and control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network.
SNMP agents expose management data on the managed systems as variables. The protocol also permits active management tasks, such as configuration changes, through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. SNMP itself does not define which variables a managed system should offer; rather, it uses an extensible design that allows applications to define their own hierarchies. These hierarchies are described in a management information base (MIB). MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OIDs). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by Structure of Management Information Version 2.0 (SMIv2, RFC 2578), a subset of ASN.1.
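As a sketch of how OID-keyed variables and MIB traversal work, consider a toy agent whose MIB is a plain dictionary. The OID layout follows the standard MIB-2 system group, but the stored values are invented for illustration:

```python
# A toy MIB: variables keyed by OID tuples, with SNMP-style get and
# get-next operations. OIDs follow the MIB-2 system group layout;
# the values are invented.

mib = {
    (1, 3, 6, 1, 2, 1, 1, 1, 0): "Example router",   # sysDescr.0
    (1, 3, 6, 1, 2, 1, 1, 3, 0): 123456,             # sysUpTime.0
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "core-rtr-1",       # sysName.0
}

def snmp_get(oid):
    """Return the value bound to an exact OID, like a GetRequest."""
    return mib.get(oid)

def snmp_get_next(oid):
    # GetNextRequest returns the first variable whose OID sorts
    # lexicographically after the requested OID; this is how an entire
    # MIB subtree can be walked without knowing its contents upfront.
    for candidate in sorted(mib):
        if candidate > oid:
            return candidate, mib[candidate]
    return None  # end of MIB

# Walking from the subtree root visits every variable in order:
oid, walked = (1, 3, 6, 1, 2, 1), []
while True:
    nxt = snmp_get_next(oid)
    if nxt is None:
        break
    oid, value = nxt
    walked.append((oid, value))
```

The lexicographic-successor rule is what real `snmpwalk`-style tools rely on; the dictionary here merely stands in for a device subsystem's actual data.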
SNMP operates in the application layer of the Internet protocol suite. All SNMP messages are transported via the User Datagram Protocol (UDP). The SNMP agent receives requests on UDP port 161. The manager may send requests from any available source port to port 161 on the agent. The agent response is sent back to the source port on the manager. The manager receives notifications (Traps and InformRequests) on port 162. The agent may generate notifications from any available port. When used with Transport Layer Security or Datagram Transport Layer Security, requests are received on port 10161 and notifications are sent to port 10162.[3]
SNMPv1 specifies five core protocol data units (PDUs). Two other PDUs, GetBulkRequest and InformRequest, were added in SNMPv2, and the Report PDU was added in SNMPv3. All SNMP PDUs are constructed as follows:
The seven SNMP PDU types, as identified by the PDU-type field, are as follows:
RFC 1157 specifies that an SNMP implementation must accept a message of at least 484 bytes in length. In practice, SNMP implementations accept longer messages.[8]: 1870 If implemented correctly, an SNMP message is discarded if decoding of the message fails, and thus malformed SNMP requests are ignored. A successfully decoded SNMP request is then authenticated using the community string. If the authentication fails, a trap is generated indicating an authentication failure and the message is dropped.[8]: 1871
SNMPv1 and SNMPv2c use communities to establish trust between managers and agents. Most agents support three community names, one each for read-only, read-write and trap. These three community strings control different types of activities. The read-only community applies to get requests. The read-write community string applies to set requests. The trap community string applies to receipt of traps. SNMPv3 also uses community strings, but allows for secure authentication and communication between SNMP manager and agent.[9]
In practice, SNMP implementations often support multiple versions: typically SNMPv1, SNMPv2c, and SNMPv3.[10][11]
SNMP version 1 (SNMPv1) is the initial implementation of the SNMP protocol. The design of SNMPv1 was done in the 1980s by a group of collaborators who viewed the officially sponsored OSI/IETF/NSF (National Science Foundation) effort (HEMS/CMIS/CMIP) as both unimplementable in the computing platforms of the time as well as potentially unworkable. SNMP was approved based on a belief that it was an interim protocol needed for taking steps towards large-scale deployment of the Internet and its commercialization.
The first Requests for Comments (RFCs) for SNMP, now known as SNMPv1, appeared in 1988:
In 1990, these documents were superseded by:
In 1991, RFC 1156 (MIB-1) was replaced by the more often used:
SNMPv1 is widely used and is the de facto network management protocol in the Internet community.[12]
SNMPv1 may be carried by transport layer protocols such as the User Datagram Protocol (UDP), the OSI Connectionless-mode Network Service (CLNS), the AppleTalk Datagram Delivery Protocol (DDP), and the Novell Internetwork Packet Exchange (IPX).
Version 1 has been criticized for its poor security.[13] The specification does, in fact, allow room for custom authentication to be used, but widely used implementations "support only a trivial authentication service that identifies all SNMP messages as authentic SNMP messages."[14] The security of the messages, therefore, becomes dependent on the security of the channels over which the messages are sent. For example, an organization may consider its internal network to be sufficiently secure that no encryption is necessary for its SNMP messages. In such cases, the community name, which is transmitted in cleartext, tends to be viewed as a de facto password, in spite of the original specification.
SNMPv2, defined by RFC 1441 and RFC 1452, revises version 1 and includes improvements in the areas of performance, security and manager-to-manager communications. It introduced GetBulkRequest, an alternative to iterative GetNextRequests for retrieving large amounts of management data in a single request. The new party-based security system introduced in SNMPv2, viewed by many as overly complex, was not widely adopted.[13] This version of SNMP reached the Proposed Standard level of maturity, but was deemed obsolete by later versions.[15]
Community-Based Simple Network Management Protocol version 2, or SNMPv2c, is defined in RFC 1901–RFC 1908. SNMPv2c comprises SNMPv2 without the controversial new SNMPv2 security model, using instead the simple community-based security scheme of SNMPv1. This version is one of relatively few standards to meet the IETF's Draft Standard maturity level, and was widely considered the de facto SNMPv2 standard.[15] It was later restated as part of SNMPv3.[16]
User-Based Simple Network Management Protocol version 2, or SNMPv2u, is defined in RFC 1909–RFC 1910. This is a compromise that attempts to offer greater security than SNMPv1, but without incurring the high complexity of SNMPv2. A variant of this was commercialized as SNMP v2*, and the mechanism was eventually adopted as one of two security frameworks in SNMPv3.[17]
SNMP version 2 introduces the option for 64-bit data counters. Version 1 was designed only with 32-bit counters, which can store integer values from zero to 4.29 billion (precisely 4,294,967,295). A 32-bit version 1 counter cannot store the maximum speed of a 10 gigabit or larger interface, expressed in bits per second. Similarly, a 32-bit counter tracking statistics for a 10 gigabit or larger interface can roll over back to zero in less than one minute, which may be a shorter interval than the counter is polled at. This would result in lost or invalid data due to the undetected value rollover, and corruption of trend-tracking data.
The 64-bit version 2 counter can store values from zero to 18.4 quintillion (precisely 18,446,744,073,709,551,615) and so is currently unlikely to experience a counter rollover between polling events. For example, 1.6 terabit Ethernet is predicted to become available by 2025. A 64-bit counter incrementing at a rate of 1.6 trillion bits per second would be able to retain information for such an interface without rolling over for 133 days.
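The rollover intervals above are straightforward arithmetic, and can be checked directly (the link rates are the ones cited in the text; the counters here count bits):

```python
# Time until a free-running counter wraps, given its width and how
# fast it increments.

def seconds_to_rollover(counter_bits, increments_per_second):
    return (2 ** counter_bits) / increments_per_second

# A 32-bit counter incremented at 10 Gbit/s wraps in well under a
# minute; a 64-bit counter incremented at 1.6 Tbit/s lasts ~133 days.
t32 = seconds_to_rollover(32, 10e9)
t64_days = seconds_to_rollover(64, 1.6e12) / 86400
```

This is why an interface counter must be polled more often than its rollover interval for rate calculations to remain valid.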
SNMPv2c is incompatible with SNMPv1 in two key areas: message formats and protocol operations. SNMPv2c messages use different header and protocol data unit (PDU) formats than SNMPv1 messages. SNMPv2c also uses two protocol operations that are not specified in SNMPv1. To overcome incompatibility, RFC 3584 defines two SNMPv1/v2c coexistence strategies: proxy agents and bilingual network-management systems.
An SNMPv2 agent can act as a proxy agent on behalf of SNMPv1-managed devices. When an SNMPv2 NMS issues a command intended for an SNMPv1 agent, it sends it to the SNMPv2 proxy agent instead. The proxy agent forwards Get, GetNext, and Set messages to the SNMPv1 agent unchanged. GetBulk messages are converted by the proxy agent to GetNext messages and then forwarded to the SNMPv1 agent. Additionally, the proxy agent receives and maps SNMPv1 trap messages to SNMPv2 trap messages and then forwards them to the NMS.
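The GetBulk-to-GetNext conversion performed by such a proxy can be sketched against a toy SNMPv1 agent; the OID table and row values below are invented for illustration:

```python
# Sketch of an SNMPv2-to-v1 proxy translating one GetBulk request into
# repeated GetNext operations. The "agent" is a toy OID->value table
# modeled on an ifInOctets column; values are invented.

v1_mib = {
    (1, 3, 6, 1, 2, 1, 2, 2, 1, 10, 1): 1000,  # ifInOctets.1
    (1, 3, 6, 1, 2, 1, 2, 2, 1, 10, 2): 2000,  # ifInOctets.2
    (1, 3, 6, 1, 2, 1, 2, 2, 1, 10, 3): 3000,  # ifInOctets.3
}

def v1_get_next(oid):
    """One GetNext against the toy v1 agent: lexicographic successor."""
    later = [o for o in sorted(v1_mib) if o > oid]
    return (later[0], v1_mib[later[0]]) if later else None

def proxy_get_bulk(start_oid, max_repetitions):
    """Emulate GetBulk by issuing up to max_repetitions GetNext calls."""
    results, oid = [], start_oid
    for _ in range(max_repetitions):
        nxt = v1_get_next(oid)
        if nxt is None:
            break  # walked off the end of the agent's MIB
        oid, value = nxt
        results.append((oid, value))
    return results

rows = proxy_get_bulk((1, 3, 6, 1, 2, 1, 2, 2, 1, 10), 2)
```

A real proxy does this over the wire, but the control flow is the same: one v2 request fans out into a bounded series of v1 requests.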
Bilingual SNMPv2 network-management systems support both SNMPv1 and SNMPv2. To support this dual-management environment, a management application examines information stored in a local database to determine whether the agent supports SNMPv1 or SNMPv2. Based on the information in the database, the NMS communicates with the agent using the appropriate version of SNMP.
Although SNMPv3 makes no changes to the protocol aside from the addition of cryptographic security, it looks very different due to new textual conventions, concepts, and terminology.[1] The most visible change was to define a secure version of SNMP, by adding security and remote configuration enhancements to SNMP.[18] The security aspect is addressed by offering both strong authentication and data encryption for privacy. For the administration aspect, SNMPv3 focuses on two parts, namely notification originators and proxy forwarders. The changes also facilitate remote configuration and administration of the SNMP entities, as well as addressing issues related to large-scale deployment, accounting, and fault management.
Features and enhancements included:
Security was one of the biggest weaknesses of SNMP until v3. Authentication in SNMP versions 1 and 2 amounts to nothing more than a password (community string) sent in clear text between a manager and agent.[1] Each SNMPv3 message contains security parameters that are encoded as an octet string. The meaning of these security parameters depends on the security model being used.[20] The security approach in v3 targets:[21]
v3 also defines the USM and VACM, which were later followed by a transport security model (TSM) that provided support for SNMPv3 over SSH and SNMPv3 over TLS and DTLS.
As of 2004[update], the IETF recognizes Simple Network Management Protocol version 3, as defined by RFC 3411–RFC 3418[22] (also known as STD0062), as the current standard version of SNMP. The IETF has designated SNMPv3 a full Internet standard,[23] the highest maturity level for an RFC. It considers earlier versions to be obsolete (designating them variously "Historic" or "Obsolete").[15]
SNMP's powerful write capabilities, which would allow the configuration of network devices, are not being fully utilized by many vendors, partly because of a lack of security in SNMP versions before SNMPv3, and partly because many devices simply are not capable of being configured via individual MIB object changes.
Some SNMP values (especially tabular values) require specific knowledge of table indexing schemes, and these index values are not necessarily consistent across platforms. This can cause correlation issues when fetching information from multiple devices that may not employ the same table indexing scheme (for example fetching disk utilization metrics, where a specific disk identifier is different across platforms.)[24]
Some major equipment vendors tend to over-extend their proprietary command line interface (CLI) centric configuration and control systems.[25][failed verification]
In February 2002 the Carnegie Mellon Software Engineering Institute (CM-SEI) Computer Emergency Response Team Coordination Center (CERT-CC) issued an advisory on SNMPv1,[26] after the Oulu University Secure Programming Group conducted a thorough analysis of SNMP message handling. Most SNMP implementations, regardless of which version of the protocol they support, use the same program code for decoding protocol data units (PDUs), and problems were identified in this code. Other problems were found with decoding SNMP trap messages received by the SNMP management station or requests received by the SNMP agent on the network device. Many vendors had to issue patches for their SNMP implementations.[8]: 1875
Because SNMP is designed to allow administrators to monitor and configure network devices remotely it can also be used to penetrate a network. A significant number of software tools can scan the entire network using SNMP, therefore mistakes in the configuration of the read-write mode can make a network susceptible to attacks.[27]: 52
In 2001, Cisco released information indicating that, even in read-only mode, the SNMP implementation of Cisco IOS is vulnerable to certain denial-of-service attacks. These security issues can be fixed through an IOS upgrade.[28]
If SNMP is not used in a network it should be disabled in network devices. When configuring SNMP read-only mode, close attention should be paid to the configuration of access control and the IP addresses from which SNMP messages are accepted. If the SNMP servers are identified by their IP addresses, SNMP is allowed to respond only to those addresses, and SNMP messages from other IP addresses are denied. However, IP address spoofing remains a security concern.[27]: 54
SNMP is available in different versions, and each version has its own security issues. SNMP v1 sends passwords in plaintext over the network. Therefore, passwords can be read with packet sniffing. SNMP v2 allows password hashing with MD5, but this has to be configured. Virtually all network management software supports SNMP v1, but not necessarily SNMP v2 or v3. SNMP v2 was specifically developed to provide data security, that is authentication, privacy and authorization, but only SNMP version 2c gained the endorsement of the Internet Engineering Task Force (IETF), while versions 2u and 2* failed to gain IETF approval due to security issues. SNMP v3 uses MD5, the Secure Hash Algorithm (SHA) and keyed algorithms to offer protection against unauthorized data modification and spoofing attacks. If a higher level of security is needed, the Data Encryption Standard (DES) can optionally be used in cipher block chaining mode. SNMP v3 is implemented on Cisco IOS since release 12.0(3)T.[27]: 52
SNMPv3 may be subject to brute force and dictionary attacks for guessing the authentication keys, or encryption keys, if these keys are generated from short (weak) passwords or passwords that can be found in a dictionary. SNMPv3 allows both providing random, uniformly distributed cryptographic keys and generating cryptographic keys from a password supplied by the user. The risk of guessing authentication strings from hash values transmitted over the network depends on the cryptographic hash function used and the length of the hash value. SNMPv3 uses the HMAC-SHA-2 authentication protocol for the User-based Security Model (USM).[29] SNMP does not use a more secure challenge-handshake authentication protocol. SNMPv3 (like other SNMP protocol versions) is a stateless protocol, and it has been designed with a minimal amount of interaction between the agent and the manager. Introducing a challenge-response handshake for each command would impose a burden on the agent (and possibly on the network itself) that the protocol designers deemed excessive and unacceptable.[citation needed]
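The password-derived keys mentioned above come from a key-localization scheme: the password is stretched and hashed, then hashed again together with the agent's engine ID, so the same password yields a different key on every agent. The sketch below is modeled on the RFC 3414 procedure using SHA-1; treat it as illustrative rather than a vetted implementation, and note that the example engine IDs are arbitrary:

```python
import hashlib

def password_to_localized_key(password: bytes, engine_id: bytes) -> bytes:
    # Stretch the password to one megabyte, then hash it to obtain an
    # intermediate key. The stretching adds computational cost but no
    # entropy, which is why short or dictionary passwords stay weak.
    stretched = (password * (1_048_576 // len(password) + 1))[:1_048_576]
    ku = hashlib.sha1(stretched).digest()
    # "Localize" the key to a single agent by folding in its engine ID.
    return hashlib.sha1(ku + engine_id + ku).digest()

engine_a = bytes.fromhex("000000000000000000000002")
engine_b = bytes.fromhex("000000000000000000000003")
key_a = password_to_localized_key(b"maplesyrup", engine_a)
key_b = password_to_localized_key(b"maplesyrup", engine_b)
```

Localization limits the damage from one compromised agent, since its key cannot be replayed against another agent even when both share a password.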
The security deficiencies of all SNMP versions can be mitigated by IPsec authentication and confidentiality mechanisms.[citation needed] SNMP may also be carried securely over Datagram Transport Layer Security (DTLS).[10]
Many SNMP implementations include a type of automatic discovery where a new network component, such as a switch or router, is discovered and polled automatically. In SNMPv1 and SNMPv2c this is done through a community string that is transmitted in clear text to other devices.[10] Clear-text passwords are a significant security risk. Once the community string is known outside the organization, it could become the target for an attack. To alert administrators of attempts to glean community strings, SNMP can be configured to pass community-name authentication failure traps.[27]: 54 If SNMPv2 is used, the issue can be avoided by enabling password encryption on the SNMP agents of network devices.
The common default configurations for community strings are "public" for read-only access and "private" for read-write.[8]: 1874 Because of these well-known defaults, SNMP topped the list of the SANS Institute's Common Default Configuration Issues and was number ten on the SANS Top 10 Most Critical Internet Security Threats for the year 2000.[30] System and network administrators frequently do not change these configurations.[8]: 1874
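Auditing a device inventory for these well-known defaults is trivial to automate; a minimal sketch (the function name and sample strings are invented):

```python
# Flag configured community strings that match the well-known defaults
# "public" and "private", case-insensitively. Illustrative only.

DEFAULT_COMMUNITIES = {"public", "private"}

def insecure_communities(configured):
    """Return the configured community strings that are known defaults."""
    return sorted(c for c in configured if c.lower() in DEFAULT_COMMUNITIES)

flagged = insecure_communities(["public", "N0c-r34d-0nly", "Private"])
```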
Whether it runs over TCP or UDP, SNMPv1 and v2 are vulnerable to IP spoofing attacks. With spoofing, attackers may bypass device access lists in agents that are implemented to restrict SNMP access. SNMPv3 security mechanisms such as USM or TSM can prevent spoofing attacks.
|
https://en.wikipedia.org/wiki/Simple_Network_Management_Protocol
|
API testingis a type ofsoftware testingthat involves testingapplication programming interfaces(APIs) directly and as part ofintegration testingto determine if they meet expectations for functionality, reliability, performance, andsecurity.[1]Since APIs lack aGUI, API testing is performed at themessage layer.[2]API testing is now considered critical for automating testing because APIs serve as the primary interface toapplication logicand becauseGUI testsare difficult to maintain with the short release cycles and frequent changes commonly used withAgile software developmentandDevOps.[3][4]
API testing involves testing APIs directly (in isolation) and as part of the end-to-end transactions exercised during integration testing.[1]BeyondRESTful APIs, these transactions include multiple types of endpoints such asweb services,ESBs,databases,mainframes,web UIs, andERPs.API testingis performed on APIs that the development team produces as well as APIs that the team consumes within their application (including third-party APIs).[5]
API testing is used to determine whether APIs return the correct response (in the expected format) for a broad range of feasible requests, react properly to edge cases such as failures and unexpected/extreme inputs, deliver responses in an acceptable amount of time, and respond securely to potential security attacks.[1][4] Service virtualization is used in conjunction with API testing to isolate the services under test as well as expand test environment access by simulating APIs/services that are not accessible for testing.[6]
API testing commonly includes testing REST APIs or SOAP web services with JSON or XML message payloads being sent over HTTP, HTTPS, JMS, and MQ.[2][7] It can also include message formats such as SWIFT, FIX, EDI and similar fixed-length formats, CSV, ISO 8583 and Protocol Buffers being sent over transports/protocols such as TCP/IP, ISO 8583, MQTT, FIX, RMI, SMTP, and TIBCO Rendezvous.[8][9]
API testing is recognised as being more suitable for test automation and continuous testing (especially the automation used with Agile software development and DevOps) than GUI testing.[3][4] Reasons cited include:
For these reasons, it is recommended that teams increase their level of API testing while decreasing their reliance on GUI testing. API testing is recommended for the vast majority of test automation efforts and as much edge testing as possible. GUI testing is then reserved for validating typical use cases at the system level, mobile testing, and usability testing.[3][4][10]
There are several types of tests that can be performed on APIs. Some of these include smoke testing, functional testing, security testing, penetration testing, and validation testing.
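As a sketch of a functional API test at the message layer (the endpoint, user record, and payload shapes here are invented for illustration, and a plain handler function stands in for a real HTTP endpoint), a test checks both the expected response format and clean edge-case behaviour:

```python
import json

# Hypothetical handler standing in for a real HTTP endpoint.
def get_user(user_id):
    users = {1: {"id": 1, "name": "Ada"}}
    if not isinstance(user_id, int) or user_id < 1:
        return 400, json.dumps({"error": "invalid id"})
    if user_id not in users:
        return 404, json.dumps({"error": "not found"})
    return 200, json.dumps(users[user_id])

# Functional test: correct response in the expected (JSON) format.
status, body = get_user(1)
assert status == 200 and json.loads(body)["name"] == "Ada"

# Edge cases: unexpected and extreme inputs must fail cleanly, not crash.
assert get_user(0)[0] == 400
assert get_user(10**9)[0] == 404
```

Because tests like these assert on status codes and message bodies rather than on screen layout, they survive UI changes and run quickly in a continuous-integration pipeline.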
|
https://en.wikipedia.org/wiki/API_testing
|
An API writer is a technical writer who writes documents that describe an application programming interface (API). The primary audience includes programmers, developers, system architects, and system designers.
An API is a library consisting of interfaces, functions, classes, structures, enumerations, etc. for building a software application. It is used by developers to interact with and extend the software. An API for a given programming language or system may consist of system-defined and user-defined constructs. As the number and complexity of these constructs increases, it becomes very tedious for developers to remember all of the functions and the parameters defined. Hence, API writers play a key role in building software applications.
Due to the technical subject matter, API writers must understand application source code well enough to extract the information that API documents require. API writers often use tooling that extracts software documentation placed by programmers in the source code in a structured manner, preserving the relationships between the comments and the programming constructs they document.
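A minimal sketch of this kind of extraction, using Python's standard inspect module (the function, its parameters, and its docstring are invented for illustration; real tooling such as documentation generators does the same thing at scale):

```python
import inspect

# Hypothetical function whose structured docstring a documentation
# tool could extract; the names and parameters are invented.
def connect(host, port=5432):
    """Open a connection to *host*.

    :param host: server name
    :param port: TCP port (default 5432)
    """

# The docstring travels with the function object, so tooling can pair
# each comment with the construct it documents.
doc = inspect.getdoc(connect)
assert doc is not None and "TCP port" in doc
```

The same structured-comment idea underlies tools like Javadoc and Doxygen, which read annotations out of source files and emit linked reference pages.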
API writers must also understand the software product and document the new features or changes as part of the new software release. The schedule of software releases varies from organization to organization. API writers need to understand the software life cycle well and integrate themselves into the systems development life cycle (SDLC).
API writers in the United States generally follow The Chicago Manual of Style for grammar and punctuation.[citation needed]
API writers typically possess a mix of programming and language skills; many API writers have backgrounds in programming or technical writing.
Expert API/software development kit (SDK) writers can easily become programming writers.
The API writing process is typically split between analyzing and understanding the source code, planning, writing, and reviewing. It is often the case that the analytical, planning, and writing stages do not occur in a strictly linear fashion.
The writing and evaluation criteria vary between organizations. Some of the most effective API documents are written by those who are adequately capable of understanding the workings of a particular application, so that they can relate the software to the users or the various component constructs to the overall purpose of the program. API writers may also be responsible for authoring end-user product documentation.
While reference documentation may be auto-generated to ensure completeness, documentation that helps developers get started should be written by a professional API writer and reviewed by subject matter experts.[1] This helps ensure that developers understand key concepts and can get started quickly.
API writers produce documents that include:
|
https://en.wikipedia.org/wiki/API_writer
|
WebAR, previously known as the Augmented Web, is a web technology that allows for augmented reality functionality within a web browser. It is a combination of HTML, Web Audio, WebGL, and WebRTC.[1] Since the 2020s it has been better known as web-based augmented reality, or WebAR, referring to the use of augmented reality elements in browsers.
It was the focus of a Birds of a Feather meeting at ISMAR 2012 and is now the focus of the W3C Augmented Web Community Group.[2]
Browser-based augmented reality for smartphones has a number of features that distinguish it from similar content in dedicated apps.
Uses of WebAR range from virtual guides that help students navigate a campus to virtual film posters:
Taking AR to the web may be the best option for securing the technology's future. By freeing smartphone users from having to install numerous apps, WebAR can make augmented reality far more accessible to them and more beneficial for business.
Further development of WebAR may be accelerated by widespread social acceptance of headsets, which can offer a whole other level of AR experience: instant access to information, with contextually relevant content appearing as the person's real-world surroundings change.[5]
|
https://en.wikipedia.org/wiki/Augmented_web
|
In computer science, a calling convention is an implementation-level (low-level) scheme for how subroutines or functions receive parameters from their caller and how they return a result.[1] When some code calls a function, design choices have been made for where and how parameters are passed to that function, and where and how results are returned from it, with these transfers typically done via certain registers or within a stack frame on the call stack. There are design choices for how the tasks of preparing for a function call and restoring the environment after the function has completed are divided between the caller and the callee. A calling convention specifies the way every function should be called. The correct calling convention must be used for every function call to allow the correct and reliable execution of the whole program.
Calling conventions are usually considered part of the application binary interface (ABI). They may be considered a contract between the caller and the called function.[1]
The names or meanings of the parameters and return values are defined in the application programming interface (API, as opposed to ABI), which is a separate though related concept to the ABI and calling convention. The names of members within passed structures and objects would also be considered part of the API, and not the ABI. Sometimes APIs do include keywords to specify the calling convention for functions.
Calling conventions do not typically include information on handling lifespan of dynamically-allocated structures and objects. Other supplementary documentation may state where the responsibility for freeing up allocated memory lies.
Calling conventions are unlikely to specify the layout of items within structures and objects, such as byte ordering or structure packing.
For some languages, the calling convention includes details of error or exception handling (e.g. Go, Java), and for others it does not (e.g. C++).
For remote procedure calls, there is an analogous concept called marshalling.
Calling conventions may be related to a particular programming language's evaluation strategy, but most often are not considered part of it (or vice versa), as the evaluation strategy is usually defined on a higher abstraction level and seen as a part of the language rather than as a low-level implementation detail of a particular language's compiler.
Calling conventions may differ in:
Sometimes multiple calling conventions appear on a single platform; a given platform and language implementation may offer a choice of calling conventions. Reasons for this include performance, adaptation of conventions of other popular languages, and restrictions or conventions imposed by various "computing platforms".
Many architectures have only one widely used calling convention, often suggested by the architect. For RISCs including SPARC, MIPS, and RISC-V, register names based on this calling convention are often used. For example, MIPS registers $4 through $7 have "ABI names" $a0 through $a3, reflecting their use for parameter passing in the standard calling convention. (RISC CPUs have many equivalent general-purpose registers, so there is typically no hardware reason for giving them names other than numbers.)
The calling convention of a given program's language may differ from the calling convention of the underlying platform, OS, or of some library being linked to. For example, on 32-bit Windows, operating system calls have the stdcall calling convention, whereas many C programs that run there use the cdecl calling convention. To accommodate these differences in calling convention, compilers often permit keywords that specify the calling convention for a given function. The function declarations will then include additional platform-specific keywords that indicate the calling convention to be used. When handled correctly, the compiler will generate code to call functions in the appropriate manner.
Some languages allow the calling convention for a function to be explicitly specified with that function; other languages will have some calling convention but it will be hidden from the users of that language, and therefore will not typically be a consideration for the programmer.
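Foreign-function interfaces face the same requirement: the binding must invoke each function with the platform's convention. As a sketch, Python's ctypes distinguishes CDLL (the standard C convention, cdecl on 32-bit x86) from WinDLL (stdcall, Windows only); this example assumes a POSIX system where the C library's symbols are visible in the current process:

```python
import ctypes

# Load the current process's symbols; assumes a POSIX system where the
# C library is already linked in (on Windows one would load a DLL, and
# WinDLL would select the stdcall convention instead of cdecl).
libc = ctypes.CDLL(None)

# Declaring argument and result types lets ctypes marshal values the
# way the C calling convention expects them.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]
assert libc.abs(-42) == 42
```

Calling a stdcall function through CDLL (or vice versa on 32-bit Windows) corrupts the stack, which is precisely why the correct convention must be used for every call.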
The 32-bit version of the x86 architecture is used with many different calling conventions. Due to the small number of architectural registers, and a historical focus on simplicity and small code size, many x86 calling conventions pass arguments on the stack. The return value (or a pointer to it) is returned in a register. Some conventions use registers for the first few parameters, which may improve performance, especially for short and simple leaf routines invoked very frequently (i.e. routines that do not call other routines).
Example call:
Typical callee structure: (some or all of the instructions below, except ret, may be optimized away in simple procedures). Some conventions leave the parameter space allocated, using plain ret instead of ret imm16. In that case, the caller could add esp, 12 in this example, or otherwise deal with the change to ESP.
The 64-bit version of the x86 architecture, known as x86-64, AMD64, and Intel 64, has two calling sequences in common use. One calling sequence, defined by Microsoft, is used on Windows; the other, specified in the AMD64 System V ABI, is used by Unix-like systems and, with some changes, by OpenVMS. As x86-64 has more general-purpose registers than 32-bit x86, both conventions pass some arguments in registers.
The standard 32-bit ARM calling convention allocates the 16 general-purpose registers as:
If the type of value returned is too large to fit in r0 to r3, or whose size cannot be determined statically at compile time, then the caller must allocate space for that value at run time, and pass a pointer to that space in r0.
Subroutines must preserve the contents of r4 to r11 and the stack pointer (perhaps by saving them to the stack in the function prologue, then using them as scratch space, then restoring them from the stack in the function epilogue). In particular, subroutines that call other subroutines must save the return address in the link register r14 to the stack before calling those other subroutines. However, such subroutines do not need to return that value to r14; they merely need to load that value into r15, the program counter, to return.
The ARM calling convention mandates using a full-descending stack. In addition, the stack pointer must always be 4-byte aligned, and must always be 8-byte aligned at a function call with a public interface.[3]
This calling convention causes a "typical" ARM subroutine to:
The 64-bit ARM (AArch64) calling convention allocates the 31 general-purpose registers as:[4]
All registers starting with x have a corresponding 32-bit register prefixed with w. Thus, a 32-bit x0 is called w0.
Similarly, the 32 floating-point registers are allocated as:[5]
RISC-V has a defined calling convention with two flavors, with or without floating point.[6] It passes arguments in registers whenever possible.
The POWER, PowerPC, and Power ISA architectures have a large number of registers, so most functions can pass all arguments in registers for single-level calls. Additional arguments are passed on the stack, and space for register-based arguments is also always allocated on the stack as a convenience to the called function in case multi-level calls are used (recursive or otherwise) and the registers must be saved. This is also of use in variadic functions, such as printf(), where the function's arguments need to be accessed as an array. A single calling convention is used for all procedural languages.
Branch-and-link instructions store the return address in a special link register separate from the general-purpose registers; a routine returns to its caller with a branch instruction that uses the link register as the destination address. Leaf routines do not need to save or restore the link register. Non-leaf routines must save the return address before making a call to another routine and restore it before returning: the address is saved by using the Move From Special Purpose Register instruction to move the link register to a general-purpose register and, if necessary, then saving that register to the stack; it is restored by loading the saved value into a general-purpose register (if it was saved to the stack) and then using the Move To Special Purpose Register instruction to move it back to the link register.
The O32[7] ABI is the most commonly used ABI, owing to its status as the original System V ABI for MIPS.[8] It is strictly stack-based, with only four registers $a0-$a3 available to pass arguments. This perceived slowness, along with an antique floating-point model with only 16 registers, has encouraged the proliferation of many other calling conventions. The ABI took shape in 1990 and was last updated in 1994. It is defined only for 32-bit MIPS, but GCC has created a 64-bit variation called O64.[9]
For 64-bit, the N64 ABI (not related to Nintendo 64) by Silicon Graphics is most commonly used. The most important improvement is that eight registers are now available for argument passing; it also increases the number of floating-point registers to 32. There is also an ILP32 version called N32, which uses 32-bit pointers for smaller code, analogous to the x32 ABI. Both run under the 64-bit mode of the CPU.[9]
A few attempts have been made to replace O32 with a 32-bit ABI that resembles N32 more. A 1995 conference came up with MIPS EABI, for which the 32-bit version was quite similar.[10] EABI inspired MIPS Technologies to propose a more radical "NUBI" ABI that additionally reuses argument registers for the return value.[11] MIPS EABI is supported by GCC but not LLVM; neither supports NUBI.
For all of O32 and N32/N64, the return address is stored in the $ra register. This is set automatically by the JAL (jump and link) or JALR (jump and link register) instructions. The stack grows downwards.
The SPARC architecture, unlike most RISC architectures, is built on register windows. There are 24 accessible registers in each register window: 8 are the "in" registers (%i0-%i7), 8 are the "local" registers (%l0-%l7), and 8 are the "out" registers (%o0-%o7). The "in" registers are used to pass arguments to the function being called, and any additional arguments need to be pushed onto the stack. However, space is always allocated by the called function to handle a potential register-window overflow, local variables, and (on 32-bit SPARC) returning a struct by value. To call a function, one places the arguments for the function to be called in the "out" registers; when the function is called, the "out" registers become the "in" registers and the called function accesses the arguments in its "in" registers. When the called function completes, it places the return value in the first "in" register, which becomes the first "out" register when the called function returns.
The System V ABI,[12] which most modern Unix-like systems follow, passes the first six arguments in "in" registers %i0 through %i5, reserving %i6 for the frame pointer and %i7 for the return address.
The IBM System/360 is another architecture without a hardware stack. The examples below illustrate the calling convention used by OS/360 and successors prior to the introduction of 64-bit z/Architecture; other operating systems for System/360 might have different calling conventions.
Calling program:
Called program:
Standard entry sequence:
Standard return sequence:
Notes:
In the System/390 ABI[13] and the z/Architecture ABI,[14] used in Linux:
Additional arguments are passed on the stack.
Note: "preserved" refers to callee saving; the same goes for "guaranteed".
The most common calling convention for the Motorola 68000 series is:[15][16][17][18]
The IBM 1130 was a small 16-bit word-addressable machine. It had only six registers plus condition indicators, and no stack. The registers are the Instruction Address Register (IAR), Accumulator (ACC), Accumulator Extension (EXT), and three index registers X1-X3. The calling program is responsible for saving ACC, EXT, X1, and X2.[19] There are two pseudo-operations for calling subroutines: CALL, to code non-relocatable subroutines directly linked with the main program, and LIBF, to call relocatable library subroutines through a transfer vector.[20] Both pseudo-ops resolve to a Branch and Store IAR (BSI) machine instruction that stores the address of the next instruction at its effective address (EA) and branches to EA+1.
Arguments follow the BSI (usually these are one-word addresses of arguments); the called routine must know how many arguments to expect so that it can skip over them on return. Alternatively, arguments can be passed in registers. Function routines return the result in ACC for real arguments, or in a memory location referred to as the Real Number Pseudo-Accumulator (FAC). Arguments and the return address are addressed using an offset to the IAR value stored in the first location of the subroutine.
Subroutines in the IBM 1130, CDC 6600, and PDP-8 (all three computers were introduced in 1965) store the return address in the first location of a subroutine.[21]
Threaded code places all the responsibility for setting up for and cleaning up after a function call on the called code. The calling code does nothing but list the subroutines to be called. This puts all the function setup and clean-up code in one place—the prologue and epilogue of the function—rather than in the many places that function is called. This makes threaded code the most compact calling convention.
Threaded code passes all arguments on the stack. All return values are returned on the stack. This makes naive implementations slower than calling conventions that keep more values in registers. However, threaded code implementations that cache several of the top stack values in registers—in particular, the return address—are usually faster than subroutine calling conventions that always push and pop the return address to the stack.[22][23][24]
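The idea can be sketched in a few lines of Python (a toy stand-in for a real threaded-code interpreter, which would dispatch on routine addresses rather than function objects): the "program" is nothing but a list of routines, and each routine takes its inputs from a shared stack and pushes its result back, so callers do no per-call setup or clean-up.

```python
# Minimal threaded-code sketch: the thread is just a list of routines.
stack = []

def lit(n):
    # Returns a routine that pushes the literal n.
    def push():
        stack.append(n)
    return push

def add():
    b, a = stack.pop(), stack.pop()
    stack.append(a + b)

def mul():
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

# Thread computing (2 + 3) * 4: no argument passing, no frames.
thread = [lit(2), lit(3), add, lit(4), mul]
for routine in thread:
    routine()
assert stack == [20]
```

All setup and clean-up lives inside each routine, which is why the call sites themselves can be so compact.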
The default calling convention for programs written in the PL/I language passes all arguments by reference, although other conventions may optionally be specified. The arguments are handled differently for different compilers and platforms, but typically the argument addresses are passed via an argument list in memory. A final, hidden address may be passed pointing to an area to contain the return value. Because of the wide variety of data types supported by PL/I, a data descriptor may also be passed to define, for example, the lengths of character or bit strings, the dimension and bounds of arrays (dope vectors), or the layout and contents of a data structure. Dummy arguments are created for arguments which are constants or which do not agree with the type of argument the called procedure expects.
|
https://en.wikipedia.org/wiki/Calling_convention
|
The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) designed to facilitate the communication of systems that are deployed on diverse platforms. CORBA enables collaboration between systems on different operating systems, programming languages, and computing hardware. CORBA uses an object-oriented model, although the systems that use CORBA do not themselves have to be object-oriented. CORBA is an example of the distributed object paradigm.
While briefly popular in the mid to late 1990s, CORBA's complexity, inconsistency, and high licensing costs have relegated it to being a niche technology.[1]
CORBA enables communication between software written in different languages and running on different computers. Implementation details from specific operating systems, programming languages, and hardware platforms are all removed from the responsibility of developers who use CORBA. CORBA normalizes the method-call semantics between application objects residing either in the same address-space (application) or in remote address-spaces (same host, or remote host on a network). Version 1.0 was released in October 1991.
CORBA uses an interface definition language (IDL) to specify the interfaces that objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, C++11, COBOL, Java, Lisp, PL/I, Object Pascal, Python, Ruby, and Smalltalk. Non-standard mappings exist for C#, Erlang, Perl, Tcl, and Visual Basic, implemented by object request brokers (ORBs) written for those languages. Versions of IDL have changed significantly, with annotations replacing some pragmas.
The CORBA specification dictates there shall be an ORB through which an application would interact with other objects. This is how it is implemented in practice:
Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping requires the programmer to learn datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is easier to use but requires heavy use of the STL. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features.
In order to build a system that uses or implements a CORBA-based distributed object interface, a developer must either obtain or write the IDL code that defines the object-oriented interface to the logic the system will use or implement. Typically, an ORB implementation includes a tool called an IDL compiler that translates the IDL interface into the target language for use in that part of the system. A traditional compiler then compiles the generated code to create the linkable-object files for use in the application. This diagram illustrates how the generated code is used within the CORBA infrastructure:
This figure illustrates the high-level paradigm for remote interprocess communications using CORBA. The CORBA specification further addresses data typing, exceptions, network protocols, communication timeouts, etc. For example: normally the server side has the Portable Object Adapter (POA) that redirects calls either to the local servants or (to balance the load) to the other servers. The CORBA specification (and thus this figure) leaves various aspects of the distributed system for the application to define, including object lifetimes (although reference-counting semantics are available to applications), redundancy/fail-over, memory management, dynamic load balancing, and application-oriented models such as the separation between display/data/control semantics (e.g. see model–view–controller), etc.
In addition to providing users with a language- and platform-neutral remote procedure call (RPC) specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models.
This table presents the history of CORBA standard versions.[2][3][4]
Note that IDL changes have progressed with annotations (e.g. @unit, @topic) replacing some pragmas.
A servant is the invocation target containing methods for handling the remote method invocations. In the newer CORBA versions, the remote object (on the server side) is split into the object (that is exposed to remote invocations) and the servant (to which the former part forwards the method calls). There can be one servant per remote object, or the same servant can support several (possibly all) objects associated with the given Portable Object Adapter. The servant for each object can be set or found "once and forever" (servant activation) or dynamically chosen each time the method on that object is invoked (servant location). Both servant locator and servant activator can forward the calls to another server. In total, this system provides a very powerful means to balance the load, distributing requests between several machines. In object-oriented languages, both the remote object and its servant are objects from the viewpoint of object-oriented programming.
Incarnation is the act of associating a servant with a CORBA object so that it may service requests. Incarnation provides a concrete servant form for the virtual CORBA object. Activation and deactivation refer only to CORBA objects, while the terms incarnation and etherealization refer to servants. However, the lifetimes of objects and servants are independent. You always incarnate a servant before calling activate_object(), but the reverse is also possible: create_reference() activates an object without incarnating a servant, and servant incarnation is later done on demand with a Servant Manager.
The Portable Object Adapter (POA) is the CORBA object responsible for splitting the server-side remote invocation handler into the remote object and its servant. The object is exposed for the remote invocations, while the servant contains the methods that actually handle the requests. The servant for each object can be chosen either statically (once) or dynamically (for each remote invocation), in both cases allowing the call forwarding to another server.
On the server side, the POAs form a tree-like structure, where each POA is responsible for one or more objects being served. The branches of this tree can be independently activated/deactivated, have the different code for the servant location or activation and the different request handling policies.
The following describes some of the most significant ways that CORBA can be used to facilitate communication among distributed objects.
This reference is either acquired through a stringified Uniform Resource Locator (URL), a NameService lookup (similar to the Domain Name System (DNS)), or passed in as a method parameter during a call.
Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success, or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.
The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of objects-by-reference and data-by-value provides the means to enforce strong data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem space.
Apart from remote objects, CORBA and RMI-IIOP define the concept of objects by value (OBV) and valuetypes. The code inside the methods of valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must either be known a priori by both sides or dynamically downloaded from the sender. To make this possible, the record defining an OBV contains a Code Base, which is a space-separated list of URLs from which this code should be downloaded. The OBV can also have remote methods.
The CORBA Component Model (CCM) is an addition to the family of CORBA definitions.[5] It was introduced with CORBA 3, and it describes a standard application framework for CORBA components. Though not dependent on the language-dependent Enterprise JavaBeans (EJB), it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.
The CCM has a component container in which software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence, and transaction processing. These are the services most used by any distributed system, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.
Portable interceptors are the "hooks" used by CORBA and RMI-IIOP to mediate the most important functions of the CORBA system. The CORBA standard defines the following types of interceptors:
Interceptors can attach specific information to the messages being sent and IORs being created. This information can later be read by the corresponding interceptor on the remote side. Interceptors can also throw forwarding exceptions, redirecting a request to another target.
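The request/reply flow can be sketched as follows. The class and field names here are invented for illustration (this is not the actual CORBA PortableInterceptor API): a client-side interceptor sees each request on the way out, attaches extra context, and sees the matching reply on the way in.

```python
# Toy sketch of interceptor-style hooks around an invocation.
class Request:
    def __init__(self, op):
        self.op = op
        self.context = {}

class ClientInterceptor:
    def send_request(self, req):
        # Attach extra context to the outgoing message.
        req.context["trace-id"] = "abc123"

    def receive_reply(self, req, reply):
        # Read back what was attached; may also rewrite the reply.
        return dict(reply, seen=req.context["trace-id"])

def invoke(req, interceptors):
    for i in interceptors:
        i.send_request(req)
    reply = {"result": req.op.upper()}   # stand-in for the remote call
    for i in reversed(interceptors):
        reply = i.receive_reply(req, reply)
    return reply

reply = invoke(Request("ping"), [ClientInterceptor()])
assert reply == {"result": "PING", "seen": "abc123"}
```

Real ORB interceptors work the same way in spirit, mediating every request and reply without the application code being aware of them.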
The General Inter-ORB Protocol (GIOP) is an abstract protocol by which object request brokers (ORBs) communicate. Standards associated with the protocol are maintained by the Object Management Group (OMG). The GIOP architecture provides several concrete protocols, including:
Each standard CORBA exception includes a minor code to designate the subcategory of the exception. Minor exception codes are of type unsigned long and consist of a 20-bit "Vendor Minor Codeset ID" (VMCID), which occupies the high order 20 bits, and the minor code proper which occupies the low order 12 bits.
Minor codes for the standard exceptions are prefaced by the VMCID assigned to the OMG, defined as the unsigned long constant CORBA::OMGVMCID, which has the VMCID allocated to the OMG occupying the high-order 20 bits. The minor exception codes associated with the standard exceptions that are found in Table 3-13 on page 3-58 are OR-ed with OMGVMCID to get the minor code value that is returned in the ex_body structure (see Section 3.17.1, "Standard Exception Definitions", on page 3-52 and Section 3.17.2, "Standard Minor Exception Codes", on page 3-58).
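The bit layout can be sketched directly: the high-order 20 bits hold the VMCID and the low-order 12 bits the minor code proper. The OMGVMCID value below is the commonly published constant from the CORBA specification; treat it as illustrative.

```python
# Minor-code layout: 20-bit VMCID (high) | 12-bit minor code (low).
OMGVMCID = 0x4F4D0000   # VMCID allocated to the OMG, as commonly published

def make_minor(vmcid, minor):
    assert 0 <= minor < (1 << 12)   # minor code must fit in 12 bits
    return vmcid | minor

def split_minor(code):
    return code & 0xFFFFF000, code & 0x00000FFF

# e.g. standard minor code 2, OR-ed with the OMG VMCID:
code = make_minor(OMGVMCID, 2)
assert split_minor(code) == (OMGVMCID, 2)
```

Splitting a received minor code this way tells a client both which vendor defined the subcategory and which subcategory it is.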
Within a vendor-assigned space, the assignment of values to minor codes is left to the vendor. Vendors may request allocation of VMCIDs by sending email to tagrequest@omg.org. A list of currently assigned VMCIDs can be found on the OMG website at: https://www.omg.org/cgi-bin/doc?vendor-tags
The VMCIDs 0 and 0xfffff are reserved for experimental use. The VMCID OMGVMCID (Section 3.17.1, "Standard Exception Definitions", on page 3-52) and 1 through 0xf are reserved for OMG use.
The Common Object Request Broker: Architecture and Specification (CORBA 2.3)
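The bit layout described above can be sketched in a few lines. The OMGVMCID value below (0x4f4d0000, the OMG-assigned VMCID in the high-order 20 bits) matches the constant defined in the CORBA specification; the helper function names are illustrative.

```python
# A minor exception code packs a 20-bit VMCID into the high-order bits of
# a 32-bit unsigned long and a 12-bit minor code into the low-order bits.
OMGVMCID = 0x4F4D0 << 12          # VMCID occupies the high-order 20 bits

def make_minor_code(vmcid_base, minor):
    """OR a 12-bit minor code into a VMCID base, as the spec prescribes."""
    assert 0 <= minor < (1 << 12), "minor code must fit in 12 bits"
    return vmcid_base | minor

def split_minor_code(code):
    """Recover (VMCID, minor) from a packed minor code value."""
    return code >> 12, code & 0xFFF

code = make_minor_code(OMGVMCID, 2)     # e.g. standard minor code 2
assert split_minor_code(code) == (0x4F4D0, 2)
```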
Corba Location (CorbaLoc) refers to a stringified object reference for a CORBA object that looks similar to a URL.
All CORBA products must support two OMG-defined URLs: "corbaloc:" and "corbaname:". The purpose of these is to provide a human readable and editable way to specify a location where an IOR can be obtained.
An example of a corbaloc reference is: corbaloc::160.45.110.41:38693/StandardNS/NameServer-POA/_root
A CORBA product may optionally support the "http:", "ftp:", and "file:" formats. The semantics of these is that they provide details of how to download a stringified IOR (or, recursively, download another URL that will eventually provide a stringified IOR). Some ORBs also support additional formats that are proprietary to that ORB.
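The common IIOP form of a corbaloc URL, corbaloc:iiop:&lt;version&gt;@&lt;host&gt;:&lt;port&gt;/&lt;object-key&gt;, can be pulled apart as sketched below. This is a simplified illustration of the structure, not a complete implementation of the corbaloc grammar (it ignores address lists, character escaping, and the "corbaname:" form), and the function name is an assumption, not any ORB's API.

```python
def parse_corbaloc(url):
    """Split a simple IIOP-style corbaloc URL into its parts."""
    scheme, _, rest = url.partition(":")
    if scheme != "corbaloc":
        raise ValueError("not a corbaloc URL")
    addr, _, key = rest.partition("/")          # object key follows the first "/"
    proto, _, endpoint = addr.partition(":")
    proto = proto or "iiop"                     # an empty protocol defaults to iiop
    version, at, hostport = endpoint.rpartition("@")
    version = version if at else "1.0"          # the GIOP version is optional
    host, _, port = hostport.partition(":")
    return {"protocol": proto, "version": version, "host": host,
            "port": int(port) if port else 2809,  # 2809: registered CORBA loc port
            "key": key}

info = parse_corbaloc("corbaloc:iiop:1.2@example.com:2809/NameService")
assert info["host"] == "example.com" and info["key"] == "NameService"
```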
CORBA's benefits include language- and OS-independence, freedom from technology-linked implementations, strong data-typing, high level of tunability, and freedom from the details of distributed data transfers.
While CORBA shaped much of the way code was written and software was constructed, it has also been the subject of criticism.[8]
Much of the criticism of CORBA stems from poor implementations of the standard and not deficiencies of the standard itself. Some of the failures of the standard itself were due to the process by which the CORBA specification was created and the compromises inherent in the politics and business of writing a common standard sourced by many competing implementors.
|
https://en.wikipedia.org/wiki/Common_Object_Request_Broker_Architecture
|
Application virtualization software refers to both application virtual machines and the software responsible for implementing them. Application virtual machines are typically used to allow application bytecode to run portably on many different computer architectures and operating systems. The application is usually run on the computer using an interpreter or just-in-time compilation (JIT). There are often several implementations of a given virtual machine, each covering a different set of functions.
The table here summarizes elements for which the virtual machine designs are intended to be efficient, not the list of abilities present in any implementation.
Virtual machine instructions process data in local variables using a main model of computation, typically that of a stack machine, register machine, or random access machine (often called the memory machine). Use of these three methods is motivated by different tradeoffs in virtual machines versus physical machines, such as ease of interpreting, compiling, and verifying for security.
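The difference between the first two models can be illustrated with two toy evaluators, both computing (2 + 3) * 4. The instruction sets are invented for illustration: the stack machine takes operands implicitly from a stack, while the register machine names its operands explicitly.

```python
def run_stack(program):
    """Evaluate a stack-machine program; operands live on an implicit stack."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def run_register(program):
    """Evaluate a register-machine program; every operand is named."""
    regs = {}
    for op, dst, *src in program:
        if op == "load":                      # load an immediate value
            regs[dst] = src[0]
        elif op == "add":
            regs[dst] = regs[src[0]] + regs[src[1]]
        elif op == "mul":
            regs[dst] = regs[src[0]] * regs[src[1]]
    return regs["r0"]

stack_prog = [("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)]
reg_prog = [("load", "r1", 2), ("load", "r2", 3), ("add", "r0", "r1", "r2"),
            ("load", "r1", 4), ("mul", "r0", "r0", "r1")]
assert run_stack(stack_prog) == run_register(reg_prog) == 20
```

The stack form needs no operand names (compact, easy to verify), while the register form mirrors physical CPUs and can be easier to compile efficiently, which is the tradeoff mentioned above.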
Memory management in these portable virtual machines is addressed at a higher level of abstraction than in physical machines. Some virtual machines, such as the popular Java virtual machine (JVM), are involved with addresses in such a way as to require safe automatic memory management, by allowing the virtual machine to trace pointer references and disallowing machine instructions from manually constructing pointers to memory. Other virtual machines, such as LLVM, are more like traditional physical machines, allowing direct use and manipulation of pointers. Common Intermediate Language (CIL) offers a hybrid in between, allowing both controlled use of memory (like the JVM, which permits safe automatic memory management) and an 'unsafe' mode that allows direct pointer manipulation in ways that can violate type boundaries and permissions.
Code security generally refers to the ability of the portable virtual machine to run code while offering it only a prescribed set of abilities. For example, the virtual machine might only allow the code access to a certain set of functions or data. The same controls over pointers that make automatic memory management possible and allow the virtual machine to ensure type-safe data access are used to ensure that a code fragment is allowed access only to certain elements of memory and cannot bypass the virtual machine itself. Other security mechanisms are then layered on top, such as code verifiers, stack verifiers, and other methods.
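The "prescribed set of abilities" idea can be sketched with a minimal load-time verifier that rejects any instruction outside an allowed list, so the host bounds what loaded code may do before it runs. The instruction format and names here are invented for illustration.

```python
# Only pure computation is permitted; no I/O or raw-memory instructions.
ALLOWED_OPS = {"push", "add", "mul"}

def verify(program):
    """Reject a program outright if it uses any non-whitelisted instruction."""
    for op, *_ in program:
        if op not in ALLOWED_OPS:
            raise ValueError(f"instruction {op!r} is not permitted")
    return program

verify([("push", 1), ("push", 2), ("add",)])  # accepted: pure arithmetic
try:
    verify([("syscall", "open")])             # rejected before it can run
except ValueError:
    pass
```

Real verifiers (e.g. JVM bytecode verification) also check types and stack discipline; this sketch shows only the ability-restriction aspect.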
An interpreter allows programs made of virtual instructions to be loaded and run immediately, without a potentially costly compilation into native machine instructions. Any virtual machine that can be run can be interpreted, so the column designation here refers to whether the design includes provisions for efficient interpreting (for common usage).
Just-in-time compilation (JIT) refers to a method of compiling to native instructions at the latest possible time, usually immediately before or during the running of the program. The challenge of JIT is more one of implementation than of virtual machine design; however, modern designs have begun to make considerations to help efficiency. The simplest JIT methods simply compile a code fragment in the manner of an offline compiler. However, more complex methods are often employed, which specialize compiled code fragments to parameters known only at runtime (see Adaptive optimization).
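The runtime-specialization idea can be shown in miniature. Real JITs emit native machine code; this sketch only mimics the effect with closures, and all names are illustrative: once a parameter is known at runtime, a version of the function specialized to it can replace the generic path.

```python
def interpret_power(base, exponent):
    """Generic path: the exponent is re-examined on every call."""
    result = 1
    for _ in range(exponent):
        result *= base
    return result

def specialize_power(exponent):
    """'Compile' a function specialized to an exponent known only at runtime."""
    if exponent == 2:
        return lambda base: base * base       # fast path: no loop at all
    return lambda base: interpret_power(base, exponent)

square = specialize_power(2)
assert square(7) == interpret_power(7, 2) == 49
```

An adaptive optimizer would additionally observe which parameter values are hot before paying the cost of specialization, as the passage above notes.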
Ahead-of-time compilation (AOT) refers to the more classic method of using a precompiler to generate a set of native instructions which do not change during the runtime of the program. Because aggressive compiling and optimizing can take time, a precompiled program may launch faster than one which relies on JIT alone for execution. JVM implementations have mitigated this startup cost by initially interpreting, to speed launch times, until native code fragments can be generated by JIT.
Shared libraries are a facility to reuse segments of native code across multiple running programs. In modern operating systems, this generally means using virtual memory to share the memory pages containing a shared library across different processes, which are protected from each other via memory protection. Notably, aggressive JIT methods such as adaptive optimization often produce code fragments unsuitable for sharing across processes or successive runs of the program, requiring that a tradeoff be made between the efficiencies of precompiled and shared code and the advantages of adaptively specialized code. For example, several design provisions of CIL are present to allow for efficient shared libraries, possibly at the cost of more specialized JIT code. The JVM implementation on OS X uses a Java Shared Archive[3] to provide some of the benefits of shared libraries.
In addition to the portable virtual machines described above, virtual machines are often used as an execution model for individual scripting languages, usually by an interpreter. This table lists specific virtual machine implementations, both of the above portable virtual machines, and of scripting language virtual machines.
|
https://en.wikipedia.org/wiki/Comparison_of_application_virtual_machines
|