Nearables (also nearable technology) is a term for a type of smart object, coined by Estimote Inc. The term describes everyday items that have small, wireless computing devices attached to them. These devices can be equipped with a variety of sensors and work as transmitters that broadcast digital data, usually via the Bluetooth Smart protocol. The objects provide mobile devices within their range with information about their location, state, and immediate surroundings. The word 'nearables' is a reference to wearable technology – electronic devices worn as part of clothing or jewelry.[1]

The term 'nearables' was first introduced by Estimote Inc. in 2014 as part of a marketing campaign accompanying the launch of its next generation of Bluetooth Smart beacons.[2] In Estimote's usage, nearables were an implementation of the iBeacon standard that provided orientation, temperature, and motion information, enabling functionality for Internet of Things applications.[3]

Nearables are a further development of the Internet of Things (also referred to as the Internet of Everything): a vision of a wide, global network of interconnected devices that uses the existing Internet infrastructure to provide services beyond standard machine-to-machine communications. Although the term Internet of Things was coined by Kevin Ashton in 1999,[4] the idea can be traced to the late 1980s, when Mark Weiser introduced the idea of ubiquitous computing.[5] Location-based services emerged in the 1990s with the widespread adoption of mobile phones and the development of location- and proximity-based technologies such as GPS and RFID. This, in turn, led to the first attempts at wireless proximity marketing in the 2000s, with early versions of the Bluetooth, NFC and Wi-Fi standards as the predominant technologies. However, it was not until 2013, when Apple Inc. announced the iBeacon protocol for Bluetooth Smart-enabled devices, that the idea of creating smart objects by attaching wireless beacons to them started gaining traction.[6]

In August 2014 Estimote Inc. launched Estimote Stickers, a new generation of small Bluetooth Smart-based beacons. The term 'nearables' was inspired by the wearable computers gaining popularity in 2013 and 2014, such as the Pebble smartwatch and Google Glass. Originally, nearables were described as smart, connected objects that broadcast data about their location, motion and temperature.[7]

In the first interpretation, nearables are not devices themselves: any object (or living being, such as a human or animal) can become a nearable once a wireless electronic sensor is attached to it and starts broadcasting data to nearby mobile devices. Thanks to the continued miniaturization of sensor technology, a single transmitter can carry a whole set of sensors, for example an accelerometer, thermometer, ambient light sensor, humidity sensor or magnetometer. In the second interpretation, nearables are the devices themselves: part of a potentially vast array of smart, interconnected objects, usually deployed in a smart home environment, whose self-learning software lets them act intuitively on the needs of the individuals around them.

The first examples of nearables were objects tagged with Bluetooth Smart beacons that carried an accelerometer and a temperature sensor and broadcast their signal over a range of approximately 50 meters.
They can communicate with mobile applications installed on devices with Bluetooth 4.0 that support the Bluetooth Smart protocol on the software side. At the moment of their launch, this meant mainly iOS 7 and high-end Android mobile devices. To create a nearable, one must attach an electronic device, working as both a sensor and a transmitter, to an object. Since the only limitation is the size of the device, both items and living beings can act as nearables. The most often cited examples, however, involve retail and home automation environments.[8]
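On the receiving side, a nearable is just a BLE advertiser: a mobile or desktop app discovers it by scanning for advertisements and reading the vendor-specific payload. The following is a minimal sketch, assuming the cross-platform Python library bleak; the payload layout is vendor-specific, so the raw bytes are printed rather than decoded.

```python
# Minimal sketch: listen for BLE advertisements from nearby beacon-tagged
# objects and print any manufacturer-specific payload they carry.
# Assumes the `bleak` library (pip install bleak).
import asyncio
from bleak import BleakScanner

def on_advertisement(device, advertisement_data):
    # manufacturer_data maps a 16-bit company ID to the raw payload bytes
    for company_id, payload in advertisement_data.manufacturer_data.items():
        print(f"{device.address}  rssi={advertisement_data.rssi:>4} dBm  "
              f"company=0x{company_id:04X}  payload={payload.hex()}")

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(10.0)   # scan for ten seconds
    await scanner.stop()

asyncio.run(main())
```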
https://en.wikipedia.org/wiki/Nearables
A beacon is an intentionally conspicuous device designed to attract attention to a specific location. A common example is the lighthouse, which draws attention to a fixed point that can be used to navigate around obstacles or into port. More modern examples include a variety of radio beacons that can be read on radio direction finders in all weather, and radar transponders that appear on radar displays. Beacons can also be combined with semaphoric or other indicators to provide important information, such as the status of an airport, by the colour and rotational pattern of its airport beacon, or of pending weather as indicated on a weather beacon mounted at the top of a tall building or similar site. When used in such fashion, beacons can be considered a form of optical telegraphy.

Beacons help guide navigators to their destinations. Types of navigational beacons include radar reflectors, radio beacons, and sonic and visual signals. Visual beacons range from small, single-pile structures to large lighthouses or light stations and can be located on land or on water. Lighted beacons are called lights; unlighted beacons are called daybeacons. Aerodrome beacons are used to indicate the locations of airports and helipads.[1] In the United States, a series of beacons was constructed across the country in the 1920s and 1930s to help guide pilots delivering air mail. They were placed about 25 miles apart and included large concrete arrows with accompanying lights to illuminate them.[2] Handheld beacons are also employed in aircraft marshalling, used by the marshal to deliver instructions to aircraft crews as they move around an active airport, heliport or aircraft carrier.[citation needed]

Historically, beacons were fires lit at well-known locations on hills or high places, used either as lighthouses for navigation at sea, or for signalling over land that enemy troops were approaching, in order to alert defenses. As signals, beacons are an ancient form of optical telegraph and were part of a relay league. Systems of this kind have existed for centuries over much of the world. The ancient Greeks called them phryctoriae, and beacons figure on several occasions on the column of Trajan. In imperial China, sentinels on and near the Great Wall of China used a sophisticated system of daytime smoke and nighttime flame to send signals along long chains of beacon towers.[3] Legend has it that King You of Zhou repeatedly ordered the beacon towers lit as a trick to amuse his often melancholy concubine. When enemies led by the Marquess of Shen really did arrive at the wall, the towers were lit but no defenders came, leading to King You's death and the collapse of the Western Zhou dynasty.[3][4][5] China's system of beacon towers did not exist before the Han dynasty. Thucydides wrote that during the Peloponnesian War, the Peloponnesians in Corcyra were informed by night-time beacon signals of the approach of sixty Athenian vessels from Lefkada.[6] In the 10th century, during the Arab–Byzantine wars, the Byzantine Empire used a beacon system to transmit messages from the border with the Abbasid Caliphate, across Anatolia to the imperial palace in the Byzantine capital, Constantinople.
It was devised by Leo the Mathematician for Emperor Theophilos, but was either abolished or radically curtailed by Theophilos' son and successor, Michael III.[7] Beacons were later used in Greece as well, while the surviving parts of the beacon system in Anatolia seem to have been reactivated in the 12th century by Emperor Manuel I Komnenos.[7]

In the Nordic countries, hill forts and beacon networks were important for warning against invasions.[8] In Sweden and Finland, these beacons, known as vårdkasar or böte, formed an extensive coastal warning system from the Late Iron Age through the Middle Ages. Beacons were strategically placed on high ground for visibility and constructed from tar-rich wood to ensure bright flames. They were mentioned in medieval laws such as Upplandslagen and described by the Swedish writer Olaus Magnus in 1555 as tools for mobilising armed defenders during crises.[8] In Finland, similar beacons called vainovalkeat ("persecution fires") or vartiotulet ("guard fires") warned settlements of raids.[9]

In Wales, the Brecon Beacons were named for beacons used to warn of approaching English raiders. In England, the most famous examples are the beacons used in Elizabethan England to warn of the approaching Spanish Armada. Many hills in England were named Beacon Hill after such beacons. In England the authority to erect beacons originally lay with the King and was later delegated to the Lord High Admiral. The money due for the maintenance of beacons was called Beaconagium and was levied by the sheriff of each county.[10] In the Scottish Borders country, a system of beacon fires was at one time established to warn of incursions by the English; Hume and Eggerstone castles and Soltra Edge were part of this network.[11] In Spain, the border of Granada in the territory of the Crown of Castile had a complex beacon network to warn against Moorish raiders and military campaigns.[12] As the border progressively advanced during the Reconquista, defensive lines of castles, towers and fortifications, visually connected to one another, were built across the Spanish landscape and served as fortified beacons; examples include the route of the Vinalopó castles and the distribution of castles in Jaén.

In later centuries, advances in technology such as the telegraph rendered beacon systems obsolete for rapid communication.[13] The use of such beacons transitioned from practical communication to symbolic and ceremonial roles,[14] with the lighting of beacons repurposed to mark significant national events. Beacons were lit across the United Kingdom to celebrate Queen Victoria's Diamond Jubilee in 1897 and Queen Elizabeth II's Platinum Jubilee in 2022,[15] and to commemorate events such as the 70th anniversary of VE Day and the 80th anniversary of the D-Day landings in 2024.[14] South Korea maintains a daily ceremonial beacon lighting at Namsan Beacon Mound in Seoul, where visitors witness a reenactment of the traditional bongsu ceremony, which historically signaled emergencies.[16]

Infrared strobes and other infrared beacons have increasingly been used in modern combat when operating at night, as they can only be seen through night vision goggles. As a result, they are often used to mark friendly positions as a form of IFF, preventing friendly fire and improving coordination.
Soldiers typically affix them to their helmets or other gear so they are easily visible to others using night vision, including other infantry, ground vehicles, and aerial platforms (drones, helicopters, planes, etc.).[17] Passive markers include IR patches, which reflect infrared light, and chemlights. The earliest such beacons were often IR chemlights taped to helmets. Over time, more sophisticated options emerged: electronically powered infrared strobes with dedicated mounting solutions for helmets or load-bearing equipment. These strobes may have settings that allow constant-on operation or strobing of IR light, hence the name.[18] Advancements in near-peer technology, however, present a risk: if friendly units can see a strobe with night vision, so can enemies with night-vision capabilities. As a result, some in the American military have stressed that efforts should be made to improve training in light discipline (IR and visible) and other means of reducing a unit's visible signature.[17]

Vehicular beacons are rotating or flashing lights affixed to the top of a vehicle to attract the attention of surrounding vehicles and pedestrians. Emergency vehicles such as fire engines, ambulances, police cars, tow trucks, construction vehicles, and snow-removal vehicles carry beacon lights. The color of the lamps varies by jurisdiction; typical colors are blue and/or red for police, fire, and medical-emergency vehicles; amber for hazards (slow-moving vehicles, wide loads, tow trucks, security personnel, construction vehicles, etc.); green for volunteer firefighters or medical personnel; and violet for funerary vehicles. Beacons may be constructed with halogen bulbs similar to those used in vehicle headlamps, xenon flashtubes, or LEDs.[19] Incandescent and xenon light sources require the vehicle's engine to continue running to ensure that the battery is not depleted when the lights are used for a prolonged period. The low power consumption of LEDs allows the vehicle's engine to remain turned off while the lights operate.

Beacons have also allegedly been abused by shipwreckers: an illicit fire at a wrong position would be used to direct a ship onto shoals or beaches, so that its cargo could be looted after the ship sank or ran aground. There are, however, no historically substantiated occurrences of such intentional shipwrecking.

In wireless networks, a beacon is a type of frame sent by the access point (or Wi-Fi router) to indicate that it is on. Bluetooth-based beacons periodically send out a data packet that software can use to identify the beacon's location; this is typically used by indoor navigation and positioning applications.[20] Beaconing is the process that allows a network to self-repair network problems: the stations on the network notify the other stations on the ring when they are not receiving transmissions. Beaconing is used in Token Ring and FDDI networks.

In Aeschylus' tragedy Agamemnon,[21] a chain of eight beacons staffed by so-called lampadóphoroi informs Clytemnestra in Argos, within a single night, that Troy has fallen under the control of her husband, King Agamemnon, after its famous ten-year siege. In J. R. R. Tolkien's high fantasy novel The Lord of the Rings, a series of beacons alerts the entire realm of Gondor when the kingdom is under attack.
These beacon posts were staffed by messengers who would carry word of their lighting to either Rohan or Belfalas.[22] In Peter Jackson's film adaptation of the novel, the beacons serve as a connection between the two realms of Rohan and Gondor, alerting one another directly when they require military aid, as opposed to relying on messengers as in the novel.

The Beacon was an influential Caribbean magazine published in Trinidad in the 1930s. New Beacon Books, the first Caribbean publishing house in England, founded in London in 1966, was named after the Beacon journal.[23]

Beacons are sometimes used in retail to send digital coupons or invitations to customers passing by.[24][25]

An infrared beacon (IR beacon) transmits a modulated light beam in the infrared spectrum, which can be identified easily and positively. A line of sight clear of obstacles between the transmitter and the receiver is essential. IR beacons have a number of applications in robotics and in combat identification (CID). Infrared beacons are the key infrastructure for the Universal Traffic Management System (UTMS) in Japan. They perform two-way communication with travelling vehicles based on highly directional infrared communication technology and have a vehicle-detecting capability that provides more accurate traffic information.[26]

A sonar beacon is an underwater device which transmits sonic or ultrasonic signals for the purpose of providing bearing information. The most common type is a rugged, watertight sonar transmitter attached to a submarine and capable of operating independently of the boat's electrical system. It can be used in emergencies to guide salvage vessels to the location of a disabled submarine.[27]
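The wireless-network sense of the word described above can be made concrete: a Wi-Fi access point announces itself with periodic 802.11 beacon frames that any nearby radio can observe. A minimal sketch, assuming a Linux host with an interface already in monitor mode (the name "wlan0mon" is a placeholder) and the scapy packet library:

```python
# Minimal sketch: print the SSID and channel advertised in 802.11 beacon
# frames. Assumes monitor mode and scapy (pip install scapy); run as root.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11Beacon, Dot11Elt

def show_beacon(pkt):
    if pkt.haslayer(Dot11Beacon):
        ssid = pkt[Dot11Elt].info.decode(errors="replace")  # first tagged field is the SSID
        stats = pkt[Dot11Beacon].network_stats()            # scapy helper: channel, rates, crypto
        print(f"AP {pkt.addr2}  SSID={ssid!r}  channel={stats.get('channel')}")

sniff(iface="wlan0mon", prn=show_beacon, store=False)
```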
https://en.wikipedia.org/wiki/Beacons
Proximity marketing is the localized wireless distribution of advertising content associated with a particular place. Transmissions can be received by individuals in that location who wish to receive them and have the necessary equipment to do so. Distribution may be via a traditional localized broadcast, or more commonly is specifically targeted to devices known to be in a particular area; the location of a device may be determined in several ways. Communications may be further targeted to specific groups within a given location; for example, content in tourist hot spots may only be distributed to devices registered outside the local area. Communications may be both time- and place-specific, e.g. content at a conference venue may depend on the event in progress. Uses of proximity marketing include distribution of media at concerts, information (such as web links on local facilities), gaming and social applications, and advertising.

Bluetooth, a short-range wireless system supported by many mobile devices, is one transmission medium used for proximity marketing. The process of Bluetooth-based proximity marketing involves setting up Bluetooth "broadcasting" equipment at a particular location and then sending information – text, images, audio or video – to Bluetooth-enabled devices within range of the broadcast server. These devices are often referred to as beacons. Other standard data exchange formats such as vCard can also be used. This form of proximity marketing is also referred to as close range marketing. It used to be the case that, due to security fears or a desire to save battery life, many users kept their Bluetooth devices off, or on but not set to be 'discoverable'. Because of this, regions where Bluetooth proximity marketing is in operation are often accompanied by advertising via traditional media – such as posters, television screens or field marketing teams – suggesting that people make their Bluetooth handsets 'discoverable' in order to receive free content; this is often referred to as a "call to action". A 'discoverable' Bluetooth device within range of the server is automatically sent a message asking whether the user would like to receive the free content. Current mobile phones usually have Bluetooth switched on by default,[1] and some users leave Bluetooth switched on for easy connection with car kits and headsets.

There are systems capable of detecting certain signals periodically emitted by any electronic device equipped with Wi-Fi or Bluetooth technology, and of subsequently using the gathered information to detect the position or presence of, and/or flows of information to and from, those devices, in statistical or aggregate form. This technology is used in a manner equivalent to other systems, such as radio-frequency identification (RFID), which serve to locate devices within a controlled environment; it works in conjunction with signals from Wi-Fi emitters (also called wireless tags) and receiving antennas in different locations, so that the movements and presence of Wi-Fi-equipped devices can be analyzed in terms of arrival time, length of visit per zone, paths of movement, general flows, etc. The continuously increasing use of smartphones and tablets has fueled a boom in Wi-Fi tracking technology, especially in the retail environment. Such technology can be used by managers of a physical business to ascertain how many devices are present in a given area, and to observe or optimize business marketing and management. Technically, such technology is based on two main models:
1. Re-use of standard access point (AP) technologies with a captive portal, already deployed in numerous locations (airports, malls, shops, etc.). 2. Use of antennas that detect signals in the 2.4 or 5 GHz frequency bands, positioning the detected devices within strategic areas in order to assign a unique identifier to every mobile device detected in those locations; corresponding HTML5, iOS and Android SDKs integrated into an app or website then allow proximity interaction with users through their mobile devices.

The first option is weaker at detecting and sending messages to the public, because AP devices were created for purposes other than wireless tracking and extract information only from select devices (smartphones or tablets that have previously connected to the AP in question). In practice, depending on the environment, only about 10–20% of visitors access the captive portal when they visit a point of sale. The second option analyzes all signals detected within the bands used by Wi-Fi and Bluetooth technology, offering a higher detection ratio of total visitors (about 60–70%) and extracting behavior patterns that allow a unique identifier to be assigned each time a device is detected. Such identifiers are not linked to any data present on the device, nor to any information from the device manufacturer, so no relation to any particular user of the device can be made. Unlike in the first case, visitor anonymity is total. Assigning the same unique identifier to tracking information obtained from the antennas and from app and web APIs remains a challenge; doing so provides both online and offline behaviour information with which to optimize proximity communication campaigns in a non-intrusive way.

Near-field communication (NFC) tags are embedded in NFC smart posters, smart products or smart books. The tag contains an RFID chip with an embedded command – for example, to open the mobile browser on a given page or offer. Any NFC-enabled phone can activate the tag by being placed in close proximity to it. The information can be anything from product details and special accommodation deals to information on local restaurants. The German drugstore chain Budnikowsky launched the first NFC-enabled smart poster in October 2011, allowing train commuters to tap their phones on the poster to shop and find more information. In November 2011, Atria Books/Simon & Schuster launched The Impulse Economy, the first NFC-enabled smart book. In the UK, NFC is being adopted by most of the outdoor poster contractors; Clear Channel has installed over 25,000 Adshel posters with NFC tags (and QR codes for Apple phones). Retailers are also looking at NFC because it offers a cost-effective method by which consumers can engage with brands without requiring integration into the retailers' IT systems – a barrier for many new technologies such as BLE. A number of retailers have already started using NFC to enhance the shopping experience, including Casino in France and Vic in Holland. Proximity marketing using NFC technology has been widely adopted in Japan; it uses 'pull' rather than 'push' marketing, allowing consumers to choose where and when they receive marketing messages. A number of NFC-enabled phones are entering the market, spurred by NFC mobile wallet trials globally. NFC wallets include Google Wallet and ISIS (mobile payment system).
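The "embedded command" on a smart-poster tag is typically an NDEF URI record that the phone dereferences on tap. The sketch below shows how such a record is laid out, following the NFC Forum URI record type definition; the URL is a placeholder and the prefix table is abbreviated.

```python
# Minimal sketch: build the NDEF URI record a smart poster might carry,
# per the NFC Forum URI RTD. The URL below is a placeholder.
URI_PREFIXES = {0x00: "", 0x01: "http://www.", 0x02: "https://www.",
                0x03: "http://", 0x04: "https://"}  # abbreviated table

def ndef_uri_record(url: str) -> bytes:
    # Pick the longest prefix the table can abbreviate
    code, prefix = max(
        ((c, p) for c, p in URI_PREFIXES.items() if p and url.startswith(p)),
        key=lambda cp: len(cp[1]), default=(0x00, ""))
    payload = bytes([code]) + url[len(prefix):].encode("utf-8")
    header = 0xD1          # MB=1, ME=1, SR=1 (short record), TNF=1 (well-known)
    return bytes([header, 0x01, len(payload)]) + b"U" + payload

tag_bytes = ndef_uri_record("https://www.example.com/offer")
print(tag_bytes.hex())
```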
While mobile payment is the driver for NFC, proximity marketing is an immediate beneficiary in the market. Apple did not include this technology in its initial smartphone models; it added NFC with the iPhone 6 and iPhone 6 Plus.

Proximity marketing via SMS relies on GSM 03.41, which defines the Short Message Service – Cell Broadcast (SMS-CB). SMS-CB allows messages (such as advertising or public information) to be broadcast to all mobile users in a specified geographical area. In the Philippines, GSM-based proximity broadcast systems are used by select government agencies to disseminate information on government-run, community-based programs, taking advantage of the medium's reach and popularity (the Philippines has the world's highest SMS traffic). It is also used for a commercial service known as Proxima SMS. Bluewater, a super-regional shopping centre in the UK, has a GSM-based system supplied by NTL to supplement its GSM coverage for calls; it also allows each customer with a mobile phone to be tracked through the centre – which shops they go into and for how long. The system enables special-offer texts to be sent to the phone.
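For a sense of what one cell-broadcast transmission looks like on the wire, the sketch below assembles a single SMS-CB page as described in GSM 03.41: a six-octet header (serial number, message identifier, data coding scheme, page parameter) followed by 82 octets of content, here packed in the GSM 7-bit default alphabet. All field values are illustrative, and the text is assumed to use only characters whose GSM codes match ASCII.

```python
# Minimal sketch: one 88-octet SMS-CB page per GSM 03.41 (illustrative values).
def gsm7_pack(text: str) -> bytes:
    """Pack 7-bit characters into octets (8 septets -> 7 octets)."""
    septets = [ord(c) & 0x7F for c in text]
    out = bytearray()
    for i, s in enumerate(septets):
        shift = i % 8
        if shift == 0:
            out.append(s)
        else:
            out[-1] |= (s << (8 - shift)) & 0xFF   # low bits finish previous octet
            if shift != 7:
                out.append(s >> shift)             # high bits start a new octet
    return bytes(out)

def cbs_page(serial: int, message_id: int, dcs: int, page: int, pages: int,
             text: str) -> bytes:
    header = bytes([serial >> 8, serial & 0xFF,        # serial number
                    message_id >> 8, message_id & 0xFF,
                    dcs,                               # data coding scheme
                    (page << 4) | pages])              # page x of n
    content = gsm7_pack(text.ljust(93, "\r"))          # 93 septets -> 82 octets
    return header + content

page = cbs_page(serial=0x0000, message_id=50, dcs=0x01, page=1, pages=1,
                text="20% off today at the food court")
assert len(page) == 88
print(page.hex())
```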
https://en.wikipedia.org/wiki/Proximity_marketing
AirTag is a tracking device developed by Apple.[1] AirTag is designed to act as a key finder, helping people find personal objects such as keys, bags, apparel, small electronic devices and vehicles. To locate lost or stolen items, AirTags use Apple's crowdsourced Find My network, estimated in early 2021 to consist of approximately one billion devices worldwide that detect and anonymously report emitted Bluetooth signals.[2] AirTags are compatible with any iPhone, iPad, or iPod Touch capable of running iOS/iPadOS 14.5 or later, including the iPhone 6S or later (including the first-, second- and third-generation iPhone SE). Using the built-in U1 chip on the iPhone 11 or later (except iPhone SE and iPhone 16e models), users can locate items more precisely using ultra-wideband (UWB) technology.

AirTag was announced on April 20, 2021,[3][4] made available for pre-order on April 23, and released on April 30. The product was rumored to be under development in April 2019.[5] In February 2020, it was reported that Asahi Kasei was prepared to supply Apple with tens of millions of ultra-wideband (UWB) parts for the rumored AirTag in the second and third quarters of 2020, though the shipment was ultimately delayed.[6] On April 2, 2020, a YouTube video on the Apple Support[7] page also confirmed the AirTag.[8] In Apple's iOS 14.0 release, code was discovered describing the reusable and removable battery that would be used in the AirTag.[9][10] In March 2021, Macworld stated that the iOS 14.5 beta's Find My user interface included "Items" and "Accessories" features meant for AirTag support for a user's "backpack, luggage, headphones" and other objects.[11] AppleInsider noted that the beta included persistent safety warnings about "unauthorized AirTags" in a user's vicinity.[12] In May 2024, Bloomberg reported that Apple was preparing a new version of the AirTag, codenamed B589.[13][14]

AirTags can be interacted with using the Find My app. Users may trigger the AirTag to play a sound from the app. iPhones equipped with the U1 chip can use Precision Finding to provide the direction to and precise distance from an AirTag; this feature utilizes ultra-wideband.[15] AirTags are not satellite navigation devices: they are located on a map within the Find My app using Bluetooth signals reported by other, anonymous iOS and iPadOS devices out in the world. To help prevent unwanted tracking, an iOS/iPadOS device will alert its owner if someone else's AirTag seems to be traveling with them, rather than with the AirTag's owner, for too long.[16] If an AirTag is out of range of any Apple device for a random duration of between 8 and 24 hours,[17] it will begin to beep, alerting a person that an AirTag may have been placed among their possessions.[18] Users can mark an AirTag as lost and provide a phone number and a message. Any iPhone user can see this phone number and message with the "Identify Found Item" feature within the Find My app, which utilizes near-field communication (NFC) technology.
Additionally, Android and Windows 10 Mobile phones with NFC can identify an AirTag with a tap, which redirects to a website containing the message and phone number.[15][19]

AirTag requires an Apple Account and iOS or iPadOS 14.5 or later.[20] It uses a replaceable CR2032 button cell with one year of battery life (though some batteries with child-resistant bitterant coatings cannot be used due to the design of the AirTag's battery terminals).[21] The maximum range of Bluetooth tracking is estimated to be around 30 meters.[22][23] The AirTag carries an IP67 water- and dust-resistance rating: it can withstand 30 minutes of water immersion under standard laboratory conditions. Each Apple Account is limited to 32 AirTags.[9] Apple does not provide a way for users to force an AirTag to carry out a firmware update.[24] Firmware updates may happen automatically whenever an AirTag is in Bluetooth range of the paired iPhone (running iOS 14.5 or later) and both devices have sufficient battery.[25]

AirTags have become extremely popular among travelers for tracking checked luggage on flights, empowering them when luggage is lost by carriers.[41][42][43][44] In response, Lufthansa stated that AirTags were not permissible in luggage checked with the carrier.[45][46][47][48] The carrier backtracked after a risk assessment by German authorities, following widespread criticism and accusations that it was seeking to avoid accountability.[49][50] The Federal Aviation Administration has ruled that storing AirTags in checked luggage is permitted and not a safety hazard, as they contain less than 100 mg of lithium.[51]

AirTags have been used to track stolen property and assist police in recovering it for return to its rightful owners.[52] In February 2023, a North Carolina family discovered that their car had been stolen; in coordination with local police, they used an AirTag placed in the vehicle to locate the car and recover their property.[53] Police were reportedly elated at the ease with which they were able to arrest the criminals and recover the property thanks to the AirTags.[54]

Despite Apple's inclusion of technologies to help prevent unwanted tracking or stalking, The Washington Post found that it was "frighteningly easy" to bypass the systems put in place, and the device has been described as "a gift to stalkers".[55] Concerns included the built-in audible alarm taking three days to sound (since reduced to 8–24 hours[56]), and the fact that most Americans had Android devices, which would not receive the alerts about nearby AirTags that iPhone devices receive.[57] AirTags cannot have most of their components replaced correctly, but AirTags with their speakers forcibly removed have been found being used to track people.
The AirTag cannot detect this modification, making it harder for people to discover that an AirTag has been stalking them.[58] AirTags with their speakers removed have been found for sale on sites such as eBay and Etsy.[59] In January 2022, BBC News spoke to six women who stated that they had found unregistered AirTags inside belongings such as cars and bags.[60] In late 2021, Apple released an app called Tracker Detect on the Google Play Store to help users of Android 9 or later discover unknown AirTags near them that are in a "lost" state and potentially being used for malicious tracking.[61][62] The app does not run in the background.[63] In February 2022, Apple added a warning for users setting up their AirTag, notifying them that using the device to track people is illegal and that it is meant only for tracking personal belongings.[64] The AirTag chirps within 8 to 24 hours of being separated from the device it is paired with.[56]

The National Post in Canada reported that AirTags were placed on vehicles at shopping malls and parking lots without the drivers' knowledge, in order to track them to their homes, where the vehicles would be stolen.[65] In response, Apple announced just before WWDC 2021 that it had begun rolling out updates allowing anyone with an NFC-capable phone to tap an unwanted AirTag for instructions on how to disable it, and that it had decreased the delay before the audible alert sounds, once the AirTag is separated from its owner, from three days to a random time between 8 and 24 hours.[66] Users who set their AirTags to lost mode are prompted to provide a contact phone number for finders to call. In September 2021, security researcher Brian Krebs, citing fellow security researcher Bobby Rauch,[67] reported that the phone number field will in fact accept any type of input, including arbitrary computer code, opening up the potential use of AirTags as Trojan horse devices.[68]

Tile, the manufacturer of a similar product, criticized Apple for using technologies and designs similar to Tile's trackers.[16] Spokespeople for Tile testified before the United States Congress that Apple was supporting "anti-competitive practices",[69] claiming that Apple had done this in the past, and that they think it is "entirely appropriate for Congress to take a closer look at Apple's business practices".[70]

AirTags do not have holes or other mechanical features that would allow them to be positively attached or affixed to the item being tracked; solutions include adhesives (glue, tape) and purpose-built accessories. The polyurethane AirTag Loop is the least expensive solution sold by Apple; it costs the same as a single AirTag and has been criticized as an "accessory tax".[71]

Other, similar trackers claiming compatibility with Apple's Find My network are available. While compatible only with Apple devices, they do not use Apple's proprietary ultra-wideband (UWB) technology and are not as accurate at finding direction and distance, though they claim to be reliable;[72] however, the consumer magazine Which? reported of AirTags that "it's unlikely to be much help if you've lost it somewhere really rural. We've found that the Find My network ... has the widest coverage, but it's still reliant on someone with an iOS device walking past your tracker."[73] AirTags and equivalents using Apple's Find My network are not supported by the Android operating system; while Android devices can scan a nearby AirTag using near-field communication (NFC) and detect unknown AirTags, they cannot pair with one or use advanced tracking.
There are trackers for Android, including Tile, the Samsung Galaxy SmartTag, and the Chipolo One.[74] Various trackers use other networks, including Google's Find My Device and proprietary networks. As of 2025, their accuracy in finding and locating distant objects is mostly reported to be relatively poor, while trackers using Apple's network are accurate.[73]
https://en.wikipedia.org/wiki/AirTag
Beacons are small devices that enable relatively accurate location within a narrow range. Beacons periodically transmit small amounts of data within a range of approximately 70 meters, and are often used for indoor location technology.[1] Compared with devices based on the Global Positioning System (GPS), beacons provide more accurate location information and can be used indoors. Various types of beacons exist; they can be classified by beacon protocol, power source and location technology.

In 2013, Apple announced iBeacon, the first beacon protocol on the market. iBeacon works with Apple's iOS and Google's Android. A beacon using the iBeacon protocol transmits an identifier built around a 16-byte proximity UUID, accompanied by 2-byte 'major' and 'minor' values, which an installed mobile app can recognize and respond to.[2]

Google announced Eddystone in July 2015, renaming its earlier UriBeacon. Beacons supporting Eddystone can transmit three different frame types, which work with both iOS and Android: Eddystone-UID, Eddystone-URL and Eddystone-TLM.[3] A single beacon can transmit one, two or all three frame types.

Radius Networks announced AltBeacon in July 2014. This open-source beacon protocol was designed to overcome the issue of protocols favouring one vendor over another.[4]

The Web & Information Systems Engineering lab (WISE) at the Vrije Universiteit Brussel (VUB) announced SemBeacon in September 2023. It is an open-source[5] beacon protocol and ontology based on AltBeacon and Eddystone-URL, designed to create interoperable applications that do not require a local database.[6]

Tecno-World (Pitius Tec S.L., manufacturer ID 0x015C) announced GeoBeacon in July 2017. This open-source beacon protocol was designed for use in geocaching applications owing to its very compact data format.[7]

In general, beacons draw on one of three types of power source. Most beacons use Bluetooth technology to communicate with other devices and convey location information; apart from Bluetooth, however, several other location technologies exist. The majority of beacon location devices rely on Bluetooth Low Energy (BLE) technology. Compared with 'classic' Bluetooth technology, BLE consumes less power, has a lower range, and transmits less data; it is designed for periodic transfers of very small amounts of data. In July 2015, the Wi-Fi Alliance announced Wi-Fi Aware. Similar to BLE, Wi-Fi Aware has lower power consumption than regular Wi-Fi and is designed for indoor location purposes. Whereas most beacon vendors focus on just one technology, some vendors combine multiple location technologies.
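To make the iBeacon identifier layout concrete, the sketch below parses the manufacturer-specific data of an iBeacon advertisement: Apple's company ID 0x004C (little-endian), subtype 0x02, length 0x15, then the 16-byte proximity UUID, 2-byte major, 2-byte minor and a 1-byte calibrated transmit power. The sample payload at the bottom is fabricated for illustration.

```python
# Minimal sketch: parse iBeacon fields out of BLE manufacturer-specific data.
import struct
import uuid

def parse_ibeacon(mfr_data: bytes):
    if len(mfr_data) != 25 or mfr_data[:4] != b"\x4c\x00\x02\x15":
        return None                         # not an iBeacon advertisement
    proximity_uuid = uuid.UUID(bytes=mfr_data[4:20])
    major, minor = struct.unpack(">HH", mfr_data[20:24])
    tx_power = struct.unpack("b", mfr_data[24:25])[0]   # signed dBm at 1 m
    return proximity_uuid, major, minor, tx_power

# Fabricated example payload
sample = (b"\x4c\x00\x02\x15"
          + uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
          + b"\x00\x01" + b"\x00\x02" + b"\xc5")
print(parse_ibeacon(sample))   # (UUID, major=1, minor=2, tx_power=-59)
```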
https://en.wikipedia.org/wiki/Types_of_beacons
Mobile location analytics (MLA) is a type of customer intelligence and refers to technology for retailers, including the development of aggregate reports used to reduce waiting times at checkouts, improve store layouts, and understand consumer shopping patterns. The reports are generated by recognizing the Wi-Fi or Bluetooth addresses of cell phones as they interact with store networks.[1] By observing the movement of devices, retailers can gather data to help them optimize floor-plan layouts, advertisement placement and checkout-lane staffing.

MLA products work by capturing a device's MAC address, the unique 12-hexadecimal-digit (48-bit) number assigned to a specific hardware interface. This number can be detected by Wi-Fi or Bluetooth sensors; a device has separate MAC addresses for Wi-Fi and Bluetooth. Beacons are also used for MLA purposes and work over Bluetooth; through this technology they can also send push notifications. Recently, companies have started combining Wi-Fi and Bluetooth to improve the accuracy and reliability of MLA devices. The technology works as people walk through stores: the tracking companies detect each wireless signal and assign the device a random number, then monitor that number as it moves and analyze patterns in the data. Because the automatically transmitted MAC address is used, customers need not be logged into the shop's Wi-Fi or website. This feature is highlighted in the alternative term "offline tracking" (as opposed to online tracking[2]), which is sometimes used for MLA, e.g. in Germany.[3]

A number of industries can benefit from MLA services, including retail, real estate, energy, insurance, manufacturing, healthcare, government, planning, and public safety. For example, retail businesses can compare sales revenue and evaluate marketing-campaign effectiveness, and businesses can determine where to open stores and distribute their products. MLA is also beneficial in emergencies: hospitals can determine demand for new vaccines or make sense of sudden disease outbreaks. This is possible because every information system, desktop solution, or mobile app can take advantage of location.[4]

Physical stores have tools to collect data on their shoppers by monitoring their movement and their pauses. Video monitoring can provide up to 10,000 data points per store visitor, allowing stores to develop heat maps and place the items they want to sell in high-traffic areas. If a mobile device is stolen, it can be found by police by tracing its mobile number. According to one study, brick-and-mortar stores account for 93% of sales,[5] so the retail store remains a critical focus. In-store analytics has become more like online-store analytics: using MLA, stores can see where shoppers go and where they linger, detect whether they are shopping alone or with friends or children, and match shopping behaviour to the weather. One company with small stores located in malls found that the space just inside the entry was a dead zone, so it moved popular items further inside the store. Another store could not tell which display sold more effectively because it had duplicate inventory; about to remove the wall displays, it decided first to check traffic with a mobile location analytics company. After creating a heat map, it made the floor displays smaller and easier for customers to walk through to reach the wall displays.
Stores want analytics to see whether displays erected at the ends of aisles eat away at sales of the same item stacked halfway down the aisle, or whether they contribute additional sales.[6] MLA-based counts can help ensure stores are staffed appropriately for the traffic at all times of day, and help businesses draw correlations between transaction data and traffic.

The privacy agreement comes at a time when brick-and-mortar retailers are eager to have access to the kind of information about consumer behavior available to web retailers like Amazon. The move also reflects how the industry is responding to public concern over the collection of personal data. As companies use data in increasingly robust ways, such as targeting online ads or tracking physical location, they are realizing the need to give users more control over how that data is used.[7] Although mobile location analytics products do not record personally identifiable information about specific customers, they have generated concerns about customer data integration and consumer privacy.[8] Several MLA companies have worked with United States Senator Charles Schumer and the Future of Privacy Forum to develop a smartphone-tracking code of conduct. Under the voluntary code of conduct, MLA vendors and retailers will inform customers when they are being tracked and allow individual customers to opt out.

Nowadays most Wi-Fi router and wireless access point vendors provide an API for listening to the MAC addresses in device signals in order to identify smartphones; this is the basis of Wi-Fi tracking and analytics systems. However, almost all new smartphones emit more than one MAC address when they are not connected to Wi-Fi. An iPhone can produce many different, randomized MAC addresses during a 30–40 minute visit to a venue, because the MAC address changes each time the screen is touched and the phone wakes from sleep mode. Wi-Fi tracking vendors that try to detect unassociated devices are therefore dealing with data based on these randomized MAC addresses; only when a smartphone connects to the Wi-Fi access point for free internet access (becoming associated) can its true MAC address be detected. But few customers use the free internet service – in general, fewer than 10–20%.
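Randomized addresses can usually be told apart from burned-in ones: a randomized MAC sets the locally administered bit (bit 1 of the first octet), whereas a globally unique, manufacturer-assigned address leaves it clear. A minimal sketch of that check, together with the kind of salted hashing an analytics vendor might apply so raw addresses are never stored (the salt value is a placeholder):

```python
# Minimal sketch: classify a MAC address as locally administered (likely
# randomized) vs. globally unique, and derive a non-reversible identifier.
import hashlib

def is_locally_administered(mac: str) -> bool:
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x02)      # U/L bit set -> locally administered

def anonymized_id(mac: str, salt: bytes = b"rotate-this-salt-daily") -> str:
    # Hash with a salt so the stored ID cannot be mapped back to the MAC.
    return hashlib.sha256(salt + mac.lower().encode()).hexdigest()[:16]

print(is_locally_administered("da:a1:19:00:00:01"))  # True  (0xDA has bit 1 set)
print(is_locally_administered("3c:22:fb:00:00:01"))  # False (vendor-assigned)
print(anonymized_id("3c:22:fb:00:00:01"))
```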
https://en.wikipedia.org/wiki/Mobile_location_analytics
Bluetooth beacons are hardware transmitters – a class of Bluetooth Low Energy (LE) devices that broadcast their identifier to nearby portable electronic devices. The technology enables smartphones, tablets and other devices to perform actions when in close proximity to a beacon.

Bluetooth beacons use Bluetooth Low Energy proximity sensing to transmit a universally unique identifier[1] picked up by a compatible app or operating system. The identifier, and several bytes sent with it, can be used to determine the device's physical location,[2] track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification. One application is distributing messages at a specific point of interest, for example a store, a bus stop, a room or a more specific location such as a piece of furniture or a vending machine. This is similar to the previously used GPS-based geopush technology, but with much lower impact on battery life and much greater precision. Another application is an indoor positioning system,[3][4][5] which helps smartphones determine their approximate location or context. With the help of a Bluetooth beacon, a smartphone's software can approximate its position relative to a beacon in a store. Brick-and-mortar retail stores use the beacons for mobile commerce, offering customers special deals through mobile marketing,[6] and can enable mobile payments through point-of-sale systems. Bluetooth beacons differ from some other location-based technologies in that the broadcasting device (beacon) is only a one-way transmitter to the receiving smartphone or other device, and requires a specific app installed on the device to interact with the beacons; thus only the installed app, not the beacon transmitter, can track users. Bluetooth beacon transmitters come in a variety of form factors, including small coin-cell devices, USB sticks, and generic Bluetooth 4.0-capable USB dongles.[7]

The development of the "short-link" radio technology later named Bluetooth was initiated in 1989 by Dr. Nils Rydbeck, CTO at Ericsson Mobile in Lund, and Dr. Johan Ullman. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, SE 8902098-6, issued 1989-06-12, and SE 9202239, issued 1992-07-24. Since its creation the Bluetooth standard has gone through many generations, each adding different features. Bluetooth 1.2 allowed faster speeds of up to ≈700 kbit/s; Bluetooth 2.0 improved this to 3 Mbit/s; Bluetooth 2.1 improved device-pairing speed and security; and Bluetooth 3.0 again improved transfer speed, to up to 24 Mbit/s. In 2010 Bluetooth 4.0 (Low Energy) was released, with its main focus being reduced power consumption. Before Bluetooth 4.0, the majority of Bluetooth connections were two-way: both devices listen and talk to each other. Although two-way communication is still possible with Bluetooth 4.0, one-way communication is also possible, allowing a Bluetooth device to transmit information without listening for it. These one-way "beacons" do not require a paired connection like previous Bluetooth devices, which opens up new applications. Bluetooth beacons operate using the Bluetooth 4.0 Low Energy standard, so battery-powered devices are possible; battery life varies by manufacturer. The Bluetooth LE protocol is significantly more power-efficient than Bluetooth Classic.
Several chipset makers, including Texas Instruments[8] and Nordic Semiconductor, now supply chipsets optimized for iBeacon use. Power consumption depends on the iBeacon configuration parameters of advertising interval and transmit power. Battery life can range between 1 and 48 months. Apple's recommended setting of a 100 ms advertising interval with a coin-cell battery provides 1–3 months of life, which increases to 2–3 years as the advertising interval is increased to 900 ms.[9] The battery consumption of the phones themselves must also be taken into account when deploying beacon-enabled apps. A recent report has shown that older phones tend to draw more battery power in the vicinity of iBeacons, while newer phones can be more efficient in the same environment.[10] In addition to the time the phone spends scanning, the number of scans and the number of beacons in the vicinity are also significant factors in battery drain. An energy-efficient iBeacon application needs to consider these aspects in order to strike a good balance between app responsiveness and battery consumption. Bluetooth beacons can also come in the form of USB dongles. These small USB beacons can be powered by a standard USB port, which makes them ideal for long-term permanent installations.

Bluetooth beacons can be used to send a packet of information containing a universally unique identifier (UUID). This UUID is used to trigger events specific to that beacon. In the case of Apple's iBeacon, the UUID is recognized by an app on the user's device, which triggers an event. The event is fully customizable by the app developer; for advertising, it might be a push notification containing an ad. With a UID-based system, however, the user's device must connect to an online server capable of interpreting the beacon's UUID: once the UUID is sent to the server, the appropriate message or action is returned to the user's device. Other methods of advertising are also possible with beacons. URIBeacon and Google's Eddystone allow a URI transmission mode that, unlike iBeacon's UID, does not require an outside server for recognition: the beacon transmits a URI, which could be a link to a web page, and the user sees that URI directly on their phone.[11]

Beacons can be associated with the art pieces in a museum to encourage further interaction. For example, a notification can be sent to a user's mobile device when the user is in proximity to a particular art piece; if the user indicates further interest, a specific app can be installed to interact with the encountered piece.[12] In general, a native app is needed for a mobile device to interact with a beacon using the iBeacon protocol, whereas if Eddystone is employed, the user can interact with the art piece through a physical-web URL broadcast by the beacon.

Indoor positioning with beacons falls into three categories: implementations with many beacons per room, implementations with one beacon per room, and implementations with a few beacons per building. Indoor navigation with Bluetooth is still in its infancy, but attempts have been made to find a working solution. With multiple beacons per room, trilateration can be used to estimate a user's position to within about 2 meters.[13] Bluetooth beacons are capable of transmitting their Received Signal Strength Indicator (RSSI) value in addition to other data.
The RSSI value a beacon advertises is calibrated by the manufacturer to be the signal strength of the beacon at a known distance, typically one meter. Using the known output signal strength of the beacon and the signal strength observed by the receiving device, an approximation can be made of the distance between the beacon and the device. However, this approximation is not very reliable, so for more accurate position tracking other methods are preferred. Since the release of Bluetooth 4.0 in 2010, many studies have been conducted using Bluetooth beacons for tracking. A few methods have been tested to find the best way of combining RSSI values for tracking. Neural networks have been proposed as a good way of reducing estimation error.[13] A stigmergic approach has also been tested; this method uses an intensity map to estimate a user's location.[14] Bluetooth LE specification 5.1 added further, more precise methods for position determination using multiple beacons. With only one beacon per room, a user can use their known room position in conjunction with a virtual map of all the rooms in a building to navigate it. A building with many separate rooms may need a different beacon configuration for navigation. With one beacon in each room, a user can use an app to know which room they are in, and a simple shortest-path algorithm can be used to give them the best route to the room they are looking for. This configuration requires a digital map of the building, but attempts have been made to make this map creation easier.[15] Beacons can be used in conjunction with pedestrian dead reckoning techniques to add checkpoints to a large open space.[16] PDR uses a known last location in conjunction with direction and speed information to estimate a person's location as they walk through a building. Using Bluetooth beacons as checkpoints, the user's location can be recalculated to reduce accumulated error. In this way a few Bluetooth beacons can be used to cover a large area like a mall. Using the device-tracking capabilities of Bluetooth beacons, in-home patient monitoring is possible: a person's movements and activities can be tracked in their home.[17] Bluetooth beacons are a good alternative to in-house cameras due to their increased level of privacy. Additionally, Bluetooth beacons can be used in hospitals or other workplaces to ensure workers meet certain standards. For example, a beacon may be placed at a hand sanitizer dispenser in a hospital; the beacons can help ensure employees are using the station regularly. One use of beacons is as a "key finder", where a beacon is attached to, for example, a keyring, and a smartphone app can be used to track the last time the device came in range. Another similar use is to track pets, objects (e.g. baggage) or people. The precision and range of BLE don't match GPS, but beacons are significantly less expensive. Several commercial and free solutions exist, which are based on proximity detection, not precise positioning. For example, Nivea launched its "kid-tracker" campaign in Brazil in 2014.[18] In mid-2013, Apple introduced iBeacons, and experts wrote about how it was designed to help the retail industry by simplifying payments and enabling on-site offers.
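The ranging approach described above is usually written as the log-distance path-loss model: RSSI = P_1m - 10·n·log10(d), where P_1m is the calibrated signal strength at one meter and n is an environment-dependent exponent. The sketch below inverts that formula and adds a deliberately crude position fix from several beacons; the exponent value is an assumption (about 2 in free space, 2-4 indoors), and real systems use least-squares trilateration or filtering rather than the weighted centroid shown here.

```python
def rssi_to_distance(rssi: float, measured_power: float, n: float = 2.0) -> float:
    """Estimate beacon distance in meters from the log-distance path-loss
    model rssi = measured_power - 10*n*log10(d), where measured_power is
    the calibrated 1 m RSSI the beacon advertises."""
    return 10 ** ((measured_power - rssi) / (10 * n))

def rough_position(beacons):
    """Very rough 2-D fix from (x, y, distance) triples via a weighted
    centroid: beacons that appear closer get more weight. Shown only to
    illustrate the idea; production systems solve a least-squares
    trilateration problem or run a filter over many samples."""
    wsum = xsum = ysum = 0.0
    for x, y, d in beacons:
        w = 1.0 / max(d, 0.1) ** 2
        xsum += w * x
        ysum += w * y
        wsum += w
    return xsum / wsum, ysum / wsum

d = rssi_to_distance(rssi=-75, measured_power=-59)  # ~6.3 m
print(round(d, 1))
print(rough_position([(0, 0, 2.0), (5, 0, 4.5), (0, 5, 4.5)]))
```

The unreliability noted above comes from multipath fading and body shadowing, which make the observed RSSI fluctuate far more than the model assumes.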
On December 6, 2013, Apple activated iBeacons across its 254 US retail stores.[19] McDonald's has used the devices to give special offers to consumers in its fast-food stores.[6] As of May 2014, hardware iBeacons could be purchased for as little as $5 per device to more than $30 per device.[20] These iBeacons ship with varying default settings for transmit power and iBeacon advertisement frequency. Some hardware iBeacons advertise at as low as 1 Hz while others can be as fast as 10 Hz.[21] AltBeacon is an open-source alternative to iBeacon created by Radius Networks.[22] URIBeacons differ from iBeacons and AltBeacons because, rather than broadcasting an identifier, they send a URL which can be understood immediately.[22] Eddystone is Google's standard for Bluetooth beacons. It supports three types of packets: Eddystone-UID, Eddystone-URL, and Eddystone-TLM.[11] Eddystone-UID functions in a very similar way to Apple's iBeacon; however, it supports additional telemetry data through Eddystone-TLM. The telemetry information is sent along with the UID data and includes battery voltage, beacon temperature, number of packets sent since last startup, and beacon uptime.[11] Using the Eddystone protocol, Google built the now-discontinued[23] Google Nearby, which allowed Android users to receive beacon notifications without an app. Although the near-field communication (NFC) environment is very different and has many non-overlapping applications, it is still compared with iBeacons.
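As a companion to the telemetry fields just listed, the following sketch decodes an unencrypted Eddystone-TLM service-data frame per the published Eddystone specification (frame type 0x20, version 0x00, then big-endian battery voltage in millivolts, a signed 8.8 fixed-point temperature, an advertising PDU count, and uptime in 0.1 s units); the sample values are invented.

```python
import struct

def parse_eddystone_tlm(frame: bytes):
    """Decode an (unencrypted) Eddystone-TLM service-data frame.

    Per the published spec: frame type 0x20, version 0x00, then battery
    voltage (uint16, mV), beacon temperature (signed 8.8 fixed point,
    degrees C), advertising PDU count since reboot (uint32), and uptime
    (uint32, 0.1 s resolution); multi-byte fields are big-endian.
    """
    if len(frame) < 14 or frame[0] != 0x20 or frame[1] != 0x00:
        return None
    vbatt_mv, temp_raw, adv_cnt, sec_cnt = struct.unpack(">HhII", frame[2:14])
    return {
        "battery_v": vbatt_mv / 1000.0,
        "temperature_c": temp_raw / 256.0,  # 8.8 fixed point
        "adv_count": adv_cnt,
        "uptime_s": sec_cnt / 10.0,
    }

# Example: 3.005 V, 21.5 C, 123456 PDUs, one day of uptime (made-up values)
sample = bytes([0x20, 0x00]) + struct.pack(">HhII", 3005, int(21.5 * 256),
                                           123456, 864000)
print(parse_eddystone_tlm(sample))
```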
https://en.wikipedia.org/wiki/Bluetooth_low_energy_beacon
Cabir (also known as Caribe, SymbOS/Cabir, Symbian/Cabir and EPOC.cabir) is the name of a computer worm developed in 2004[1] that is designed to infect mobile phones running Symbian OS. It is believed to be the first computer worm that can infect mobile phones.[2] When a phone is infected with Cabir, the message "Caribe" is displayed on the phone's display every time the phone is turned on. The worm then attempts to spread to other phones in the area using wireless Bluetooth signals. Several firms subsequently released tools to remove the worm, the first of which was the Australian business TSG Pacific.[3] The worm can attack and replicate on Bluetooth-enabled Series 60 phones. It tries to send itself to all Bluetooth-enabled devices that support the "Object Push Profile", which can also be non-Symbian phones, desktop computers or even printers. The worm spreads as a .sis file installed in the Apps directory. Cabir does not spread if the user does not accept the file transfer or does not agree to the installation, though some older phones would keep displaying popups as Cabir re-sent itself, rendering the UI useless until "yes" was clicked. Cabir is the first mobile malware ever discovered.[4] While the worm is considered harmless because it replicates but does not perform any other activity, it will result in shortened battery life on portable devices due to constant scanning for other Bluetooth-enabled devices. Cabir was named by the employees of Kaspersky Lab after their colleague Elena Kabirova.[5] Mabir, a variant of Cabir, is capable of spreading not only via Bluetooth but also via MMS. By sending out copies of itself as a .sis file over cellular networks, it can affect even users who are outside the roughly 10 m range of Bluetooth.
https://en.wikipedia.org/wiki/Cabir_(computer_worm)
BlackArch is a penetration testing distribution based on Arch Linux that provides a large number of security tools. It is an open-source distro created specifically for penetration testers and security researchers. The repository contains more than 2800 tools that can be installed individually or in groups. BlackArch Linux is compatible with existing Arch Linux installations.[1][2] BlackArch is similar in usage to both Parrot OS and Kali Linux when fully installed, with a major difference being that BlackArch is based on Arch Linux instead of Debian. BlackArch provides only the Xfce desktop environment in the "Slim ISO" but provides multiple preconfigured window managers in the "Full ISO". Similar to Kali Linux and Parrot OS, BlackArch is distributed as an ISO image that can be run as a live system.[1] BlackArch can also be installed as an unofficial user repository on any current Arch Linux installation.[3] BlackArch currently contains 2817 packages and tools, along with their dependencies.[4] BlackArch is developed by a small number of cyber security specialists and researchers who add the packages as well as the dependencies needed to run these tools. The distribution groups its tools into categories (counted as of 15 April 2024).[4]
https://en.wikipedia.org/wiki/BlackArch
Bluesnarfing is the unauthorized access of information from a wireless device through a Bluetooth connection, often between phones, desktops, laptops, and PDAs (personal digital assistants).[1] This allows access to calendars, contact lists, emails and text messages, and on some phones, users can copy pictures and private videos. Both Bluesnarfing and Bluejacking exploit others' Bluetooth connections without their knowledge. While Bluejacking is essentially harmless as it only transmits data to the target device, Bluesnarfing is the theft of information from the target device.[2] For a Bluesnarfing attack to succeed, the attacker generally needs to be within a maximum range of 10 meters of the target device. In some cases, though, attackers can initiate a Bluesnarfing attack from a greater distance.[3] Bluesnarfing exploits vulnerabilities in the OBject EXchange protocol used for Bluetooth device communication; hackers use tools like Bluediving to detect susceptible devices. Once a vulnerable device is identified, hackers establish a connection and employ Bluesnarfing tools to extract data. These tools, available on the dark web or developed by hackers, enable attackers to access sensitive information from compromised devices.[3] Any device with its Bluetooth connection turned on and set to "discoverable" (able to be found by other Bluetooth devices in range) may be susceptible to Bluejacking, and possibly to Bluesnarfing if there is a vulnerability in the vendor's software. By turning off this feature, the potential victim can be safer from the possibility of being Bluesnarfed, although a device that is set to "hidden" may still be Bluesnarfable by guessing the device's MAC address via a brute force attack. As with all brute force attacks, the main obstacle to this approach is the sheer number of possible MAC addresses. Bluetooth uses a 48-bit unique MAC address, of which the first 24 bits are common to a manufacturer.[4] The remaining 24 bits have approximately 16.8 million possible combinations, requiring an average of 8.4 million attempts to guess by brute force. Attacks on wireless systems have increased along with the popularity of wireless networks. Attackers often search for rogue access points, or unauthorized wireless devices installed in an organization's network, that allow an attacker to circumvent network security. Rogue access points and unsecured wireless networks are often detected through war driving, the practice of using an automobile or other means of transportation to search for wireless signals over a large area. Bluesnarfing is an attack to access information from wireless devices that transmit using the Bluetooth protocol. With mobile devices, this type of attack is often used to target the international mobile equipment identity (IMEI). Access to this unique piece of data enables attackers to divert incoming calls and messages to another device without the user's knowledge. Bluetooth vendors advise customers with vulnerable Bluetooth devices to either turn them off in areas regarded as unsafe or set them to undiscoverable.[5] This Bluetooth setting allows users to keep their Bluetooth on so that compatible Bluetooth products can be used but other Bluetooth devices cannot discover them. Because Bluesnarfing is an invasion of privacy, it is illegal in many countries. Bluesniping has emerged as a specific form of Bluesnarfing that is effective at longer ranges than normally possible.
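The brute-force arithmetic above is easy to reproduce. The sketch below computes the keyspace left once the manufacturer's 24-bit prefix (OUI) is known, and the expected number of guesses; the probe rate used to translate that into wall-clock time is a purely hypothetical figure for illustration.

```python
# Keyspace arithmetic behind the brute-force claim: with the 24-bit OUI
# (manufacturer prefix) known, only the lower 24 bits of the 48-bit
# Bluetooth MAC address must be guessed.
suffix_bits = 24
keyspace = 2 ** suffix_bits        # 16,777,216 possible addresses
average_attempts = keyspace // 2   # 8,388,608 guesses on average

# At a hypothetical probe rate of 100 addresses per second, an
# exhaustive sweep is impractical in the field:
rate_per_s = 100
print(f"keyspace: {keyspace:,}")
print(f"average guesses: {average_attempts:,}")
print(f"average time: {average_attempts / rate_per_s / 3600:.0f} hours")
```

At that assumed rate the average search takes roughly a day per device, which is why hidden mode, while not cryptographically strong, is still a meaningful obstacle.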
According to Wired magazine, Bluesniping surfaced at the Black Hat Briefings and DEF CON hacker conferences of 2004, where it was shown on the G4techTV show The Screen Savers.[6] For example, a "rifle" with a directional antenna, Linux-powered embedded PC, and Bluetooth module mounted on a Ruger 10/22 folding stock has been used for long-range Bluesnarfing.[7] In the TV series Person of Interest, Bluesnarfing (often mistakenly referred to in the show as Bluejacking, and at other times as forced pairing or phone cloning) is a common element used to spy on and track the people the main characters are trying to save or stop.
https://en.wikipedia.org/wiki/Bluesnarfing
AirDrop is a file-sharing service in Apple's iOS, macOS, iPadOS and visionOS operating systems that operates over a wireless ad hoc network. AirDrop was introduced in Mac OS X Lion (10.7) and iOS 7,[1] and can transfer files among supported Mac computers and iOS devices by means of close-range wireless communication.[1] This communication takes place over Apple Wireless Direct Link 'Action Frames' and 'Data Frames' using generated link-local IPv6 addresses instead of the Wi-Fi chip's fixed MAC address.[2] Prior to OS X Yosemite (10.10), and under OS X Lion, Mountain Lion, and Mavericks (10.7–10.9, respectively), the AirDrop protocol in macOS was different from the AirDrop protocol of iOS, and the two were therefore not interoperable.[3] OS X Yosemite and later support the iOS AirDrop protocol, which is used for transfers between a Mac and an iOS device as well as between two 2012 or newer Mac computers, and which uses both Wi-Fi and Bluetooth.[4][5] Legacy mode for the old AirDrop protocol (which only uses Wi-Fi) between a 2012 or older Mac computer (or a computer running OS X Lion through OS X Mavericks) and another Mac computer was also available until macOS Mojave.[5][6] Apple does not disclose a limit on the size of the file which AirDrop can transfer. However, some Apple users have indicated that very large files are almost impossible to transfer, with a high probability of failure.[citation needed] On iOS 7 and later, AirDrop can be accessed by either tapping on Settings > General > AirDrop,[7] or via the Control Center.[8] Both Wi-Fi and Bluetooth are automatically switched on when AirDrop is enabled, as both are utilized.[8] NFC can also be utilized to initiate a transfer in iOS 17 or later. Options for controlling AirDrop discovery by other devices include Receiving Off, Contacts Only, and Everyone.[8] In iOS 16.2 or later, the Everyone option reverts to Contacts Only after 10 minutes. If an application implements AirDrop support, it is available through the share button. AirDrop is subject to a number of restrictions on iOS, such as the inability to share music or videos from the native apps.[9] On Macs running OS X 10.7 and greater, AirDrop is available in the Finder window sidebar.[10] On Macs running OS X 10.8.1 or later, it can also be accessed through the menu option Go → AirDrop or by pressing ⇧ Shift+⌘ Cmd+R.[11] AirDrop must be selected in a Finder window sidebar to be able to transfer files. Furthermore, files are not automatically accepted; instead, a prompt asks the receiver to accept or decline the file being sent. On iOS devices, AirDrop requires iOS 7 or later.[8] AirDrop can be enabled unofficially on the iPad (3rd generation) by jailbreaking the device and installing "AirDrop Enabler 7.0+" from Cydia; this procedure is not endorsed by Apple. On Macs, AirDrop requires OS X Yosemite (10.10) or later.[5] To transfer files between a Mac and an iPhone, iPad or iPod touch, the iOS device must run iOS 8 or later[8] and the Mac must run OS X Yosemite (10.10) or later.[5][12] Bluetooth and Wi-Fi have to be turned on for both the Mac and the iOS device, though the two devices are not required to be connected to the same Wi-Fi network.
AirDrop uses TLS encryption over a direct, Apple-created peer-to-peer Wi-Fi connection for transferring files.[13] The Wi-Fi radios of the source and target devices communicate directly without using an Internet connection or Wi-Fi access point.[13] The technical details of AirDrop and the proprietary peer-to-peer Wi-Fi protocol called Apple Wireless Direct Link (AWDL) have been reverse engineered,[14] and the resulting open-source implementations have been published as OWL[15] and OpenDrop.[16] During the initial handshake, devices exchange full SHA-256 hashes of users' phone numbers and email addresses, which might be used by attackers to infer the phone numbers and, in some cases, the email addresses themselves.[17] In 2024, the Beijing Municipal Bureau of Justice claimed that, following complaints from the public about "anonymous dissemination of inappropriate messages" in public places using AirDrop, a forensic institute in Beijing was commissioned to analyze the iPhone's encrypted device logs. A rainbow table correlating phone numbers and email accounts was created during the investigation and has "effectively assisted the police in identifying several suspects" involved in such cases.[18][19][20] Researchers at the Technische Universität Darmstadt stated that Apple knew that AirDrop users could be identified and tracked as early as 2019 and did not implement a proposed fix in 2021.[21] Following the 2022 Beijing Sitong Bridge protest, users in China used AirDrop to distribute similar protest posters and slogans.[22][23] Apple reportedly limited the AirDrop function in China just weeks before the 2022 COVID-19 protests in China.[24][25][26] The AirDrop restrictions triggered a hunger strike at Apple's headquarters.[27] There have been numerous reported cases where iOS device users with AirDrop privacy set to "Everyone" have received unwanted files from nearby strangers; the phenomenon has been termed "cyber-flashing".[28][29] As of iOS 16.1.1, Apple silently replaced the "Everyone" mode with "Everyone for 10 minutes", at first for users in China, which automatically reverts to Contacts Only after the time elapses. After this was discovered, Apple stated that the feature was intended to reduce unsolicited content, and it became available worldwide with iOS 16.2.[30] Apple did not comment on the timing of the change or why it was initially limited to China, with reports suggesting that the limitation was implemented due to the Beijing Sitong Bridge protest.[31][32] In March 2022, a flight between Seattle and Orlando was held on the runway at Orlando International Airport until police decided a hijack threat was "not credible", after a 10-year-old child on board the plane airdropped a threat to another passenger, who alerted the crew.[33] In May 2022, an AnadoluJet flight between Israel and Turkey was deboarded after Israeli users used AirDrop to share pictures of a Turkish airline crash, leading to at least one injury to a passenger. After a search of the luggage, the flight was reboarded and resumed its trip some hours later.[34] In July 2022, an 18-year-old Spanish man flying from Rome to Alicante airdropped pictures of skulls and a generic threat in Amharic to some of the passengers before takeoff. As the crew was informed and the captain asked for police intervention, the flight left with a two-hour delay, and the young man was charged with raising a false alarm.[35] In late August 2022, a man on an airplane that was taxiing for takeoff airdropped nude photos of himself to others on the Southwest Airlines flight from Houston to Cabo San Lucas.
When a passenger reported the unsolicited photos to the flight crew, the pilot announced that if the activity did not stop he would return to the gate, which would ruin the passengers' vacations, and the activity stopped.[36][37]
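Returning to the handshake weakness described earlier: the reason hashed contact identifiers leak is that phone numbers have very little entropy, so an attacker can hash the entire numbering space ahead of time (a rainbow table) or on demand. The sketch below demonstrates the idea; the normalization applied before hashing is an assumption, as AirDrop's exact canonical identifier format is not reproduced here.

```python
import hashlib

def contact_hash(phone: str) -> bytes:
    """Hypothetical stand-in for the SHA-256 contact hash exchanged during
    the AirDrop handshake (real identifiers are normalized before hashing;
    the exact canonical form used here is an assumption)."""
    return hashlib.sha256(phone.encode()).digest()

def recover_phone(target: bytes, prefix: str = "+1555", digits: int = 7):
    """Brute-force a leaked hash over a small numbering plan. Because the
    keyspace is only 10**digits, the whole space can be enumerated (or
    precomputed into a lookup table) cheaply."""
    for n in range(10 ** digits):
        candidate = prefix + str(n).zfill(digits)
        if contact_hash(candidate) == target:
            return candidate
    return None

leaked = contact_hash("+15551234567")  # what an attacker would observe
print(recover_phone(leaked))           # recovers +15551234567
```

Ten million SHA-256 computations take seconds to minutes on commodity hardware, which is why hashing alone does not anonymize low-entropy identifiers like phone numbers.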
https://en.wikipedia.org/wiki/AirDrop
Connected health is a socio-technical model for healthcare management and delivery[1] that uses technology to provide healthcare services remotely. Connected health, also known as technology-enabled care (TEC), aims to maximize healthcare resources and provide increased, flexible opportunities for consumers to engage with clinicians and better self-manage their care.[2] It uses readily available consumer technologies to deliver patient care outside of the hospital or doctor's office. Connected health encompasses programs in telehealth, remote care (such as home care and remote patient monitoring), and disease and lifestyle management. It often leverages existing technologies, such as connected devices using cellular networks, and is associated with efforts to improve chronic care. However, there is an increasing blur between software capabilities and healthcare needs, whereby technologists are now providing the solutions to support consumer wellness and provide the connectivity between patient data, information and decisions. This calls for new techniques to guide connected health solutions, such as "design thinking", to support software developers in clearly identifying healthcare requirements, and to extend and enrich traditional software requirements gathering techniques.[3] The United States and European Union are two dominant markets for the use of connected health in home care services, in part due to the high availability of telephone and Internet service as compared to other parts of the world.[citation needed] Proponents of connected health believe that technology can transform healthcare delivery and address inefficiencies, especially in the areas of workflow management, chronic disease management and patient compliance, in the US and global healthcare systems.[citation needed] Connected health has its roots in telemedicine and its more recent relative, telehealth. The first telemedicine programs were primarily undertaken to address healthcare access and/or provider shortages. Connected health is distinguished from telemedicine in several respects.[citation needed] Connected health is the umbrella term arrived at to lessen the confusion over the definitions of telemedicine, telehealth and mHealth.[4] It is considered the new lexicon for the term telemedicine.[5] The technology view of connected health focuses more on the connection methods between clients and the health care professional. An alternative view is that of a socio-technical perspective, in which connected health is considered a combination of people, processes and technology.
In 2015, connected health was defined as patient-centred care resulting from process-driven health care delivery undertaken by health care professionals, patients and/or carers who are supported by the use of technology (software and/or hardware).[6] Two "core platforms" are emphasized in connected health, self-care and remote care, with programs primarily focused on monitoring and feedback for the chronically ill, the elderly, and those patients located at an untenable distance from primary or specialty providers.[citation needed] Programs designed to improve patient-provider communication within an individual medical practice, for example the use of email to communicate with patients between office visits, also fall within the purview of connected health.[citation needed] There are also lifestyle coaching programs, in which an individual receives healthcare information to facilitate behavior change, either to improve their fitness and/or general well-being (see wellness) or to reduce or eliminate the impact of a particular behavior that presents a risk to their health status.[7] Some of the most common types of connected health programs in operation today are remote monitoring, communication and coaching programs. The Center for Connected Health is implementing a range of programs in high-risk, chronic and remotely located populations.[citation needed] Inherent in the concept of connected health is flexibility in terms of technological approaches to care delivery and specific program objectives. For instance, remote monitoring programs might use a combination of cell phone and smartphone technology, online communications or biosensors, and may aim to increase patient-provider communication, involve patients in their care through regular feedback, or improve upon a health outcome measure in a defined patient population or individual. Digital pen technology, global positioning, videoconferencing and environmental sensors are all playing a role in connected health.[citation needed] Rising costs, increases in chronic diseases, geographic dispersion of families, growing provider shortages, ethnic disparities in care, better survival rates among patients fighting serious diseases, an aging U.S. population and longer lifespans are all factors pointing to a need for better ways of delivering healthcare, and proponents of connected health view it as a critical component of that change.[8][9][10] Direct-to-consumer advertising is a demonstrated contributor to the rise in consumer demand, as is the mass availability of inexpensive technology and the ubiquity of the Internet, cell phones and PDAs.[11][12] Connected health experts such as Joseph C. Kvedar believe that consumer engagement in healthcare is on its way to becoming a major force for change.[citation needed] In summary, connected health has arisen from: 1) a desire on the part of individual physicians and healthcare organizations to provide better access, quality and efficiency of care; 2) dynamics of the healthcare economy (such as rising costs and changing demographics); and 3) consumerism in health care and a drive towards patient-centric healthcare. Together, these factors are providing impetus for connected healthcare in the United States and many other industrialised nations and forcing innovation both from within and outside the system.[citation needed] While connected health is yet emerging, there is evidence of its benefit.
For example, in a program implemented by the Center for Connected Health and Partners Home Care, over 500 heart failure patients have now been monitored remotely through the collection of vital signs, including heart rate, blood pressure and weight, using simple devices in the patient's home. The information is sent daily to a home health nurse, who can identify early warning signs, notify the patient's primary care physician, and intervene to avert potential health crises. A pilot of this program demonstrated reduced hospitalizations.[13] Another initiative at the Center for Connected Health uses cellular telephone technology and a "smart" pill bottle to detect when a patient has not taken their scheduled medication. A signal is then sent that lights up an ambient orb device in the patient's home to remind them to take their medication. Connected health programs appear to be operated and funded primarily by home care agencies and large healthcare systems.[citation needed] However, insurers and employers are increasingly interested in connected health for its potential to reduce direct and indirect healthcare costs. In 2007, EMC Corporation launched the first employer-sponsored connected health program, then in the beta phase of implementation, aimed at improving outcomes and the cost of care for patients with high blood pressure.[14] A number of government agencies are also involved in connected health. Personal health records, or PHRs (see personal health record), are essentially medical records controlled and maintained by the healthcare consumer. PHRs intersect with connected health in that they attempt to increase the involvement of healthcare consumers in their care.[16] By contrast, electronic medical records, or EMRs (see electronic medical record), are digital medical records or medical records systems maintained by hospitals or medical practices and are not part of connected health delivery.
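The daily-review logic of a remote-monitoring program like the heart-failure example above can be pictured as a small rules engine over incoming vital signs. The sketch below is illustrative only: the field names and thresholds are assumptions for the example, not the actual clinical rules of that program.

```python
# Illustrative daily-review rules for a home heart-failure monitor.
# Thresholds and field names are made up for the sketch.
ALERT_RULES = [
    ("weight gain > 1.5 kg vs baseline",
     lambda r, base: r["weight_kg"] - base["weight_kg"] > 1.5),
    ("resting heart rate > 100 bpm",
     lambda r, base: r["heart_rate"] > 100),
    ("systolic BP > 150 mmHg",
     lambda r, base: r["bp_systolic"] > 150),
]

def review(reading, baseline):
    """Return the early-warning flags a home-health nurse would triage."""
    return [name for name, rule in ALERT_RULES if rule(reading, baseline)]

baseline = {"weight_kg": 82.0, "heart_rate": 78, "bp_systolic": 130}
today = {"weight_kg": 84.1, "heart_rate": 96, "bp_systolic": 152}
print(review(today, baseline))  # weight-gain and blood-pressure flags fire
```

In a real deployment the flagged readings, not the raw stream, are what reach the nurse, which is what makes a single clinician able to watch hundreds of patients.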
https://en.wikipedia.org/wiki/Connected_Health
eHealth describes healthcare services which are supported by digital processes, communication or technology, such as electronic prescribing, telehealth, or electronic health records (EHRs). The term "eHealth" originated in the 1990s,[1] initially conceived as "Internet medicine", but has since evolved to cover a broader range of technologies and innovations aimed at enhancing healthcare delivery and accessibility. According to the World Health Organization (WHO), eHealth encompasses not only internet-based healthcare services but also modern advancements such as artificial intelligence, mHealth (mobile health), and telehealth, which collectively aim to improve accessibility and efficiency in healthcare delivery.[2] Usage of the term varies widely. A study in 2005 found 51 unique definitions of eHealth, reflecting its diverse applications and interpretations.[3] While some argue that it is interchangeable with health informatics as a broad term covering electronic/digital processes in health,[4] others use it in the narrower sense of healthcare practice specifically facilitated by the Internet.[5][6][7] It also includes health applications and links on mobile phones, referred to as mHealth or m-Health.[8] Key components of eHealth include electronic health records (EHRs), telemedicine, health information exchange, mobile health applications, wearable devices, and online health information. For example, diabetes monitoring apps allow patients to track health metrics in real time, bridging the gap between home and clinical care.[2] These technologies enable healthcare providers, patients, and other stakeholders to access, manage, and exchange health information more effectively, leading to improved communication, decision-making, and overall healthcare outcomes. The term can encompass a range of services or systems at the intersection of medicine/healthcare and information technology. Several authors have noted the variable usage of the term, from being specific to the use of the Internet in healthcare to covering any use of computers in healthcare.[16] Various authors have considered the evolution of the term and its usage and how this maps to changes in health informatics and healthcare generally.[1][17][18] The name eHealth has to some extent been superseded by the term digital health, which covers technology in healthcare more generally but is also seen as covering Internet-related technologies.[19] Oh et al., in a 2005 systematic review of the term's usage, offered a definition of eHealth as a set of technological themes in health today, based more specifically on commerce, activities, stakeholders, outcomes, locations, or perspectives.[3] One thing that all sources seem to agree on is that e-health initiatives do not originate with the patient, though the patient may be a member of a patient organization that seeks to do this, as in the e-Patient movement. eHealth literacy is defined as "the ability to seek, find, understand and appraise health information from electronic sources and apply the knowledge gained to addressing or solving a health problem."[20] This concept encompasses six types of literacy: traditional (literacy and numeracy), information, media, health, computer, and scientific. Of these, media and computer literacies are unique to the Internet context.
eHealth media literacy includes awareness of media bias, the ability to discern both explicit and implicit meanings from media messages, and the capability to derive accurate information from digital content. While eHealth literacy involves the ability to use technology, it is extremely important to have the skills to critically evaluate online health information; this makes media literacy a critical part of successfully using eHealth.[21] Having the composite skills of eHealth literacy allows health consumers to achieve positive outcomes from using the Internet for health purposes. eHealth literacy has the potential both to protect consumers from harm and to empower them to fully participate in informed health-related decision making. People with high levels of eHealth literacy are also more aware of the risk of encountering unreliable information on the Internet.[22] On the other hand, the extension of digital resources to the health domain in the form of eHealth literacy can also create new gaps between health consumers.[23] eHealth literacy hinges not on mere access to technology, but rather on the skill to apply the accessed knowledge.[20] The efficiency of eHealth also relies heavily on the efficiency and ease of use of the technology employed by the patient. The population of elderly people surpassed the number of children for the first time in history in 2018. A more multi-faceted approach is necessary for this age group, because they are more susceptible to chronic disease, contraindications of medication, and other age-related setbacks like forgetfulness. eHealth offers services that can be very helpful in all of these scenarios, making an elderly patient's quality of life substantially better with proper use.[24] One of the factors hindering the widespread acceptance of e-health tools is concern about privacy, particularly regarding EPRs (electronic patient records). The main concern has to do with the confidentiality of the data, as well as with non-confidential data that may still be vulnerable to unauthorized access. Each medical practice has its own jargon and diagnostic tools, so to standardize the exchange of information, various coding schemes may be used in combination with international medical standards. Systems that deal with these transfers are often referred to as Health Information Exchange (HIE). Of the forms of e-health already mentioned, there are roughly two types: front-end data exchange and back-end exchange.[25] Front-end exchange typically involves the patient, while back-end exchange does not. A common example of a rather simple front-end exchange is a patient taking a photo of a healing wound with a mobile phone and sending it via email to the family doctor for review. Such an action may avoid the cost of an expensive visit to the hospital. A common example of a back-end exchange is when a patient on vacation visits a doctor, who may then request access to the patient's health records, such as medicine prescriptions, x-ray photographs, or blood test results. Such an action may reveal allergies or other prior conditions that are relevant to the visit.
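One way to picture such an exchange is a record being split into shareable clinical facts and protected personal identifiers before transmission, a distinction made explicit just below. In this sketch the field names are assumptions, and real encryption is stood in for by a hash so the example stays self-contained; an actual HIE message would encrypt the private envelope and sign the whole message.

```python
import json
from hashlib import sha256

# Illustrative record: diagnostic codes are shareable, identifying
# fields are private. All field names here are assumptions.
record = {
    "icd10": ["E11.9", "I10"],       # coded diagnoses: may travel in clear
    "name": "Jane Doe",               # private
    "date_of_birth": "1955-03-14",    # private
}

PRIVATE_FIELDS = {"name", "date_of_birth"}

def split_for_exchange(rec):
    """Separate clear-text clinical content from the private envelope.
    The hash is a stand-in for an encrypted, integrity-checkable blob."""
    clear = {k: v for k, v in rec.items() if k not in PRIVATE_FIELDS}
    private = {k: v for k, v in rec.items() if k in PRIVATE_FIELDS}
    blob = json.dumps(private, sort_keys=True).encode()
    return {"clear": clear, "protected_digest": sha256(blob).hexdigest()}

print(split_for_exchange(record))
```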
Successful e-health initiatives such as e-Diabetes have shown that for data exchange to be facilitated either at the front end or the back end, a common thesaurus is needed for terms of reference.[8][26] Various medical practices in chronic patient care (such as for diabetic patients) already have a well-defined set of terms and actions, which makes standard communication exchange easier, whether the exchange is initiated by the patient or the caregiver. In general, explanatory diagnostic information (such as the standard ICD-10) may be exchanged insecurely, while private information (such as personal information about the patient) must be secured. E-health manages both flows of information, while ensuring the quality of the data exchange. Patients living with long-term conditions (also called chronic conditions) often acquire, over time, a high level of knowledge about the processes involved in their own care, and often develop a routine for coping with their condition. For these types of routine patients, front-end e-health solutions tend to be relatively easy to implement. E-mental health is frequently used to refer to internet-based interventions and support for mental health conditions.[27] However, it can also refer to the use of information and communication technologies more broadly, including social media, landline and mobile phones.[28][29] These services can range from providing information to offering peer support, computer-based programs, virtual applications, games, and real-time interaction with trained clinicians.[21] Additionally, services can be delivered through telephones and interactive voice response (IVR).[30] Mental disorders, including alcohol and drug use disorders, mood disorders such as depression, dementia, schizophrenia, and anxiety disorders, can all be addressed through e-mental health services.[31][page needed] The majority of e-mental health interventions have focused on the treatment of depression and anxiety.[32] There are also e-mental health programs available for other interventions, such as smoking cessation,[33] gambling,[34] and post-disaster mental health.[35] E-mental health has a number of advantages, such as being low cost, easily accessible and providing anonymity to users.[36] However, there are also a number of disadvantages, such as concerns regarding treatment credibility, user privacy and confidentiality.[37] Online security involves the implementation of appropriate safeguards to protect user privacy and confidentiality. This includes appropriate collection and handling of user data, the protection of data from unauthorized access and modification, and the safe storage of data.[38] Technical difficulties are another potential disadvantage. As with almost all forms of technology, there will be unintended difficulties or malfunctions, including on tablets, computers, and wireless medical devices. eHealth is also very dependent on the patient having functional Wi-Fi, an issue that often cannot be fixed without an expert.[4] E-mental health has been gaining momentum in academic research as well as in practical arenas[39] in a wide variety of disciplines such as psychology, clinical social work, family and marriage therapy, and mental health counseling.
Testifying to this momentum, the e-mental health movement has its own international organization, the International Society for Mental Health Online.[40] However, e-mental health implementation in clinical practice and healthcare systems remains limited and fragmented.[41][42] There are at least five programs currently available to treat anxiety and depression. Several programs have been identified by the UK National Institute for Health and Care Excellence as cost-effective for use in primary care.[30] These include Fearfighter,[43] a text-based cognitive behavioral therapy (CBT) program to treat people with phobias, and Beating the Blues,[44] an interactive text, cartoon and video CBT program for anxiety and depression. Two programs have been supported for use in primary care by the Australian Government.[45] The first is Anxiety Online,[46] a text-based program for the anxiety, depressive and eating disorders, and the second is THIS WAY UP,[47] a set of interactive text, cartoon and video programs for the anxiety and depressive disorders. Another is iFightDepression,[48] a multilingual, free-to-use, web-based tool for self-management of less severe forms of depression, for use under the guidance of a GP or psychotherapist. There are a number of online programs relating to smoking cessation. QuitCoach[49] is a personalised quit plan based on the user's responses to questions about giving up smoking, tailored individually each time the user logs into the site. Freedom From Smoking[50] takes users through lessons that are grouped into modules providing information and assignments to complete. The modules guide participants through steps such as preparing to quit smoking, stopping smoking and preventing relapse. Other internet programs have been developed specifically as part of research into treatment for specific disorders. For example, an online self-directed therapy for problem gambling was developed specifically to test this method of treatment.[34] All participants were given access to a website. The treatment group was provided with behavioural and cognitive strategies to reduce or quit gambling. This was presented in the form of a workbook which encouraged participants to self-monitor their gambling by maintaining an online log of gambling and gambling urges. Participants could also use a smartphone application to collect self-monitoring information. Finally, participants could choose to receive motivational email or text reminders of their progress and goals. An internet-based intervention was also developed for use after Hurricane Ike in 2009.[35] During this study, 1,249 disaster-affected adults were randomly recruited to take part. Participants were given a structured interview, then invited to access the web intervention using a unique password, with access to the website provided for a four-month period. As participants accessed the site, they were randomly assigned either to the intervention or to a comparison condition; those assigned to the intervention were provided with modules consisting of information regarding effective coping strategies to manage mental health and health-risk behaviour. eHealth programs have also been found to be effective in treating borderline personality disorder (BPD).[51] Cybermedicine is the use of the Internet to deliver medical services, such as medical consultations and drug prescriptions. It is the successor to telemedicine, wherein doctors would consult and treat patients remotely via telephone or fax.
Cybermedicine is already being used in small projects where images are transmitted from a primary care setting to a medical specialist, who comments on the case and suggests which intervention might benefit the patient. A field that lends itself to this approach is dermatology, where images of an eruption are communicated to a hospital specialist, who determines if referral is necessary. The field has also expanded to include online "ask the doctor" services that allow patients direct, paid access to consultations (of varying degrees of depth) with medical professionals (examples include Bundoo.com, Teladoc, and Ask The Doctor). A CyberDoctor,[52] known in the UK as a CyberPhysician,[53] is a medical professional who does consultation via the internet, treating virtual patients whom they may never meet face to face. This is a new area of medicine which has been utilized by the armed forces and teaching hospitals, offering online consultation to patients before they decide to travel for unique medical treatment offered only at a particular medical facility.[52] Self-monitoring is the use of sensors or tools which are readily available to the general public to track and record personal data. The sensors are usually wearable devices, and the tools are digitally available through mobile device applications. Self-monitoring devices were created for the purpose of making personal data instantly available to the individual for analysis. At present, fitness and health monitoring are the most popular applications for self-monitoring devices.[54] The biggest benefit of self-monitoring devices is the elimination of the need for third-party hospitals to run tests, which are both expensive and lengthy. These devices are an important advancement in the field of personal health management. Self-monitoring devices, like fitness trackers, have also been shown to help manage chronic diseases, providing users with real-time data that supports ongoing care and better disease management.[55] Self-monitoring healthcare devices exist in many forms. An example is the Nike+ FuelBand, which is a modified version of the original pedometer.[54] This device is worn on the wrist and allows one to set a personal goal for a daily energy burn. It records the calories burned and the number of steps taken each day, while simultaneously functioning as a watch. To add to the ease of the user interface, it includes both numeric and visual indicators of whether or not the individual has achieved the daily goal. Finally, it syncs with an iPhone app which allows for tracking and sharing of personal records and achievements.[56] Other monitoring devices have more medical relevance. A well-known device of this type is the blood glucose monitor. The use of this device is restricted to diabetic patients and allows users to measure the blood glucose levels in their body. It is extremely quantitative and the results are available instantaneously.[57] However, this device is not as independent a self-monitoring device as the Nike+ FuelBand, because it requires some patient education before use. Users need to be able to make connections between their glucose levels and the effects of diet and exercise, and must also understand how treatment should be adjusted based on the results. In other words, the results are not just static measurements. The demand for self-monitoring health devices is growing rapidly, as wireless health technologies have become especially popular in the last few years.
In fact, it was expected that by 2016, self-monitoring health devices would account for 80% of wireless medical devices.[58] The key selling point for these devices is the mobility of information for consumers. The accessibility of mobile devices such as smartphones and tablets has increased significantly within the past decade, making it easier for users to access real-time information from a number of peripheral devices. There are still many future improvements for self-monitoring healthcare devices. Although most of these wearable devices have been excellent at providing direct data to the individual user, the biggest remaining task is how to use this data effectively. Although the blood glucose monitor allows the user to take action based on the results, measurements such as pulse rate, EKG signals, and calories do not necessarily serve to actively guide an individual's personal healthcare management. Consumers are interested in qualitative feedback in addition to the quantitative measurements recorded by the devices.[59] Integrating self-monitoring devices with healthcare providers can help close this gap by allowing healthcare professionals to track their patients' data remotely, which in turn allows for more personalized care and timely interventions.[55] The COVID-19 pandemic made it extremely difficult for vast numbers of people to receive adequate healthcare in person. Elderly citizens and people with chronic health conditions were at greater risk than the average healthy person, and were therefore more adversely affected than most. The switch from in-person to telehealth appointments and interventions was necessary to reduce the risks of spreading and/or contracting the disease.[60] The forced use of telehealth during the pandemic highlighted its strengths and weaknesses, which accelerated the development of this medium. User feedback on eHealth during the COVID-19 pandemic was very positive, and consequently many patients and healthcare providers reported that they would continue to use this method of healthcare after the pandemic.[61] eHealth in general, and telemedicine in particular, is a vital resource for remote regions of emerging and developing countries, but it is often difficult to establish because of the lack of communications infrastructure.[62] For example, in Benin, hospitals can often become inaccessible due to flooding during the rainy season,[63] and across Africa, the low population density, along with severe weather conditions and the difficult financial situation in many African states, has meant that the majority of African people are badly disadvantaged in medical care. Telemedicine in Nepal is becoming a popular tool to improve health care delivery and to cope with the country's difficult terrain.[64] In many regions there is not only a significant lack of facilities and trained health professionals, but also no access to eHealth, because there is no internet access in remote villages, or even a reliable electricity supply.[65] Approximately 13 percent of people who live in Kenya have health insurance. A majority of the total health expenditure in sub-Saharan Africa is paid out-of-pocket, which forces millions into poverty yearly. A Kenyan service by the name of M-PESA may offer a solution to this problem.
This mobile platform provides full transparency of patients' needs and allows access to medical products and the ability to efficiently manage their funding.[66] Internet connectivity, and the benefits of eHealth, can be brought to these regions using satellite broadband technology; satellite is often the only solution where terrestrial access is limited or of poor quality, and it can provide a fast connection over a vast coverage area.[65] While eHealth has become an indispensable facet of healthcare in the past five years, there are still barriers preventing it from reaching its full potential. Knowledge of the socio-economic performance of eHealth is limited, and findings from evaluations are often challenging to transfer to other settings. Socio-economic evaluations of some narrow types of mHealth can rely on health-economic methodologies, but larger-scale eHealth may have too many variables, and tortuous, intangible cause-and-effect links may need a wider approach.[67] There are no international guidelines for the usage of eHealth, owing to many variables such as ignorance on the matter, infrastructure issues, the quality of healthcare professionals and the lack of healthcare plans. The effectiveness of eHealth is also dependent on the patient's condition; some researchers believe that online healthcare may be most effective as a supplement to in-person care.[66]
https://en.wikipedia.org/wiki/EHealth
Telehealth is the distribution of health-related services and information via electronic information and telecommunication technologies.[1] It allows long-distance patient and clinician contact, care, advice, reminders, education, intervention, monitoring, and remote admissions.[2][3] Telemedicine is sometimes used as a synonym, or is used in a more limited sense to describe remote clinical services, such as diagnosis and monitoring. When rural settings, lack of transport, a lack of mobility, conditions due to outbreaks, epidemics or pandemics, decreased funding, or a lack of staff restrict access to care, telehealth may bridge the gap[4] and can even improve retention in treatment,[5] as well as provide distance learning; meetings, supervision, and presentations between practitioners; online information and health data management; and healthcare system integration.[6] Telehealth could include two clinicians discussing a case over video conference; a robotic surgery occurring through remote access; physical therapy done via digital monitoring instruments, live feed and application combinations; tests being forwarded between facilities for interpretation by a higher-level specialist; home monitoring through continuous sending of patient health data; a client-to-practitioner online conference; or even videophone interpretation during a consult.[1][2][6] Telehealth is sometimes discussed interchangeably with telemedicine, the latter being the more common term. The Health Resources and Services Administration distinguishes telehealth from telemedicine in its scope, defining telemedicine only as describing remote clinical services, such as diagnosis and monitoring, while telehealth includes preventative, promotive, and curative care delivery.[1] This includes the above-mentioned non-clinical applications, like administration and provider education.[2][3] The United States Department of Health and Human Services states that the term telehealth includes "non-clinical services, such as provider training, administrative meetings, and continuing medical education", and that the term telemedicine means "remote clinical services".[7] The World Health Organization uses telemedicine to describe all aspects of health care, including preventive care.[8] The American Telemedicine Association uses the terms telemedicine and telehealth interchangeably, although it acknowledges that telehealth is sometimes used more broadly for remote health not involving active clinical treatments.[9] eHealth is another related term, used particularly in the U.K.
and Europe, as an umbrella term that includes telehealth, electronic medical records, and other components of health information technology.[10] Telehealth requires good Internet access by participants, usually in the form of a strong, reliable broadband connection, and broadband mobile communication technology of at least the fourth generation (4G) or long-term evolution (LTE) standard, to overcome issues with video stability and bandwidth restrictions.[11][12][13] As broadband infrastructure has improved, telehealth usage has become more widely feasible.[1][2] Healthcare providers often begin telehealth with a needs assessment which identifies hardships that can be improved by telehealth, such as travel time, costs or time off work.[1][2] Collaborators such as technology companies can ease the transition.[1] Delivery can come within four distinct domains: live video (synchronous), store-and-forward (asynchronous), remote patient monitoring, and mobile health.[14] Audio-based telemedicine, primarily through telephone consultations, has been studied as a tool for managing chronic conditions. A systematic review of 40 randomized controlled trials found that audio-based care was generally comparable to in-person or video care, though with low to very low certainty of evidence.[15] Store-and-forward telemedicine involves acquiring medical data (like medical images, biosignals etc.) and then transmitting this data to a doctor or medical specialist at a convenient time for assessment offline.[9] It does not require the presence of both parties at the same time.[16] Dermatology (cf. teledermatology), radiology, and pathology are common specialties that are conducive to asynchronous telemedicine. A properly structured medical record, preferably in electronic form, should be a component of this transfer. The 'store-and-forward' process requires the clinician to rely on a history report and audio/video information in lieu of a physical examination.[9] Remote monitoring, also known as self-monitoring or testing, enables medical professionals to monitor a patient remotely using various technological devices. This method is primarily used for managing chronic diseases or specific conditions, such as heart disease, diabetes mellitus, or asthma.
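The store-and-forward pattern described above is, at heart, a work queue that decouples acquisition from review: the sending clinic uploads whenever connectivity allows, and the specialist drains the queue at a convenient time. A minimal sketch, with illustrative names and priorities:

```python
import queue
from dataclasses import dataclass, field

@dataclass(order=True)
class StudyForReview:
    """One queued item in a store-and-forward exchange. Only priority
    participates in ordering (lower = more urgent)."""
    priority: int
    patient_id: str = field(compare=False)
    modality: str = field(compare=False)     # e.g. "dermatology photo"
    payload_ref: str = field(compare=False)  # pointer to the image/biosignal

inbox: "queue.PriorityQueue[StudyForReview]" = queue.PriorityQueue()

# Acquisition side: the clinic uploads studies as connectivity allows.
inbox.put(StudyForReview(2, "pt-0042", "dermatology photo", "img/0042.png"))
inbox.put(StudyForReview(1, "pt-0007", "radiology x-ray", "img/0007.dcm"))

# Review side: the specialist works through the queue offline, urgent first.
while not inbox.empty():
    study = inbox.get()
    print(f"reviewing {study.modality} for {study.patient_id}")
```

Because neither side waits on the other, the same structure tolerates the intermittent connectivity that rural deployments face.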
Remote monitoring services can provide comparable health outcomes to traditional in-person patient encounters, supply greater satisfaction to patients, and may be cost-effective.[17] Examples include home-based nocturnal dialysis[18] and improved joint management.[19] Electronic consultations are possible through interactive telemedicine services, which provide real-time interactions between patient and provider.[16] Videoconferencing has been used in a wide range of clinical disciplines and settings for various purposes, including management, diagnosis, counseling, and monitoring of patients.[20] Videotelephony comprises the technologies for the reception and transmission of audio-video signals by users at different locations, for communication between people in real time.[21] At the dawn of the technology, videotelephony also included image phones, which would exchange still images between units every few seconds over conventional POTS-type telephone lines, essentially the same as slow-scan TV systems.[citation needed] Currently, videotelephony is particularly useful to the deaf and speech-impaired, who can use it with sign language and also with a video relay service, as well as to those with mobility issues or those who are located in distant places and are in need of telemedical or tele-educational services.[22][23] Common daily emergency telemedicine is performed by SAMU regulator physicians in France, Spain, Chile, and Brazil. Aircraft and maritime emergencies are also handled by SAMU centres in Paris, Lisbon and Toulouse.[24] A recent study identified three major barriers to the adoption of telemedicine in emergency and critical care units. Emergency telehealth is also gaining acceptance in the United States. There are several modalities currently being practiced, including but not limited to TeleTriage, TeleMSE, and ePPE. An example of telehealth in the field is when EMS arrives on the scene of an incident and is able to take an EKG that is then sent directly to a physician at the hospital to be read, allowing for instant care and management.[26] Telenursing refers to the use of telecommunications and information technology to provide nursing services in health care whenever a large physical distance exists between patient and nurse, or between any number of nurses. As a field, it is part of telehealth, and it has many points of contact with other medical and non-medical applications, such as telediagnosis, teleconsultation, and telemonitoring. Telenursing is achieving significant growth rates in many countries due to several factors: the preoccupation with reducing the costs of health care, an increase in the aging and chronically ill population, and the increase in coverage of health care to distant, rural, small or sparsely populated regions. Among its benefits, telenursing may help solve increasing shortages of nurses, reduce distances and travel time, and keep patients out of hospital. A greater degree of job satisfaction has been registered among telenurses.[27] In Australia, during January 2014, Melbourne tech startup Small World Social collaborated with the Australian Breastfeeding Association to create the first hands-free breastfeeding Google Glass application for new mothers.[28] The application, named Google Glass Breastfeeding app trial, allows mothers to nurse their baby while viewing instructions about common breastfeeding issues (latching on, posture, etc.)
or call a lactation consultant via a secure Google Hangout,[29] who can view the issue through the mother's Google Glass camera.[30] The trial was successfully concluded in Melbourne in April 2014, and 100% of participants were breastfeeding confidently.[31][32][33][34]

Palliative care is an interdisciplinary medical caregiving approach aimed at optimizing quality of life and mitigating suffering among people with serious, complex, and often terminal illnesses. In the past, palliative care was a disease-specific approach, but today the World Health Organization (WHO) takes a broader approach, suggesting that palliative care should be applied as early as possible to any chronic and fatal illness. As in many aspects of health care, telehealth is increasingly being used in palliative care[35] and is often referred to as telepalliative care.[36] The types of technology applied in telepalliative care are typically telecommunication technologies, such as video conferencing or messaging for follow-up, or digital symptom assessments through digital questionnaires generating alerts to health care professionals.[37] Telepalliative care has been shown to be a feasible approach to delivering palliative care among patients, caregivers and health care professionals.[38][37][39] Telepalliative care can provide an added support system that enables patients to remain at home through self-reporting of symptoms and tailoring of care to specific patients.[39] Studies have shown that the use of telehealth in palliative care is mostly well received by patients, and that telepalliative care may improve access to health care professionals at home and enhance feelings of security and safety among patients receiving palliative care.[38] Further, telepalliative care may enable more efficient utilization of healthcare resources, promote collaboration between different levels of healthcare, and make healthcare professionals more responsive to changes in patients' condition.[37]

Challenging aspects of the use of telehealth in palliative care have also been described. Generally, palliative care is a diverse medical specialty, involving interdisciplinary professionals from different professional traditions and cultures, delivering care to a heterogeneous cohort of patients with diverse diseases, conditions and symptoms. This makes it a challenge to develop telehealth that is suitable for all patients and in all contexts of palliative care. Some of the barriers to telepalliative care relate to inflexible reporting of complex and fluctuating symptoms and circumstances using electronic questionnaires.[39] Further, palliative care emphasizes a holistic approach that should address existential, spiritual and mental distress related to serious illness.[40] However, few studies have included the self-reporting of existential or spiritual concerns, emotions, and well-being.[39] Healthcare professionals may also be uncomfortable providing emotional or psychological care remotely.[37] Palliative care has been characterized as high-touch rather than high-tech, limiting the interest in applying technological advancements when developing interventions.[41] To optimize the advantages and minimize the challenges of telehealth in home-based palliative care, future research should include users in the design and development process.
Understanding the potential of telehealth to support therapeutic relationships between patients and health care professionals, and being aware of the possible difficulties and tensions it may create, are critical to its successful and acceptable use.[37][39]

Telepharmacy is the delivery of pharmaceutical care via telecommunications to patients in locations where they may not have direct contact with a pharmacist. It is an instance of the wider phenomenon of telemedicine, as implemented in the field of pharmacy. Telepharmacy services include drug therapy monitoring, patient counseling, prior authorization and refill authorization for prescription drugs, and monitoring of formulary compliance with the aid of teleconferencing or videoconferencing. Remote dispensing of medications by automated packaging and labeling systems can also be thought of as an instance of telepharmacy. Telepharmacy services can be delivered at retail pharmacy sites or through hospitals, nursing homes, or other medical care facilities. This approach allows patients in remote or underserved areas to receive pharmacy services that would otherwise be unavailable to them, enhancing access to care and ensuring continuity in medication management.[42] Health outcomes appear similar when pharmacy services are delivered by telepharmacy compared to traditional service delivery.[43]

The term can also refer to the use of videoconferencing in pharmacy for other purposes, such as providing education, training, and management services to pharmacists and pharmacy staff remotely.[44]

Telepsychiatry, or telemental health, refers to the use of telecommunications technology (mostly videoconferencing and phone calls) to deliver psychiatric care remotely for people with mental health conditions. It is a branch of telemedicine.[45][46]

Telepsychiatry can be effective in treating people with mental health conditions. In the short term it can be as acceptable and effective as face-to-face care.[47] Research also suggests comparable therapeutic factors, such as changes in problematic thinking or behaviour.[48]

It can improve access to mental health services for some, but might also represent a barrier for those lacking access to a suitable device, the internet or the necessary digital skills. Factors such as poverty that are associated with lack of internet access are also associated with greater risk of mental health problems, making digital exclusion an important problem for telemental health services.[47]

Teledentistry is the use of information technology and telecommunications for dental care, consultation, education, and public awareness, in the same manner as telehealth and telemedicine.

Tele-audiology (or teleaudiology) is the utilization of telehealth to provide audiological services and may include the full scope of audiological practice. This term was first used by Gregg Givens in 1999 in reference to a system being developed at East Carolina University in North Carolina, US.[50]

Teleneurology describes the use of mobile technology to provide neurological care remotely, including care for stroke, movement disorders such as Parkinson's disease, and seizure disorders (e.g., epilepsy). Teleneurology offers the opportunity to improve health care access for billions around the globe, from those living in urban locations to those in remote, rural locations. Evidence shows that individuals with Parkinson's disease prefer a personal connection with a remote specialist to care from their local clinician.
Such home care is convenient but requires access to and familiarity with the Internet.[51][52] A 2017 randomized controlled trial of "virtual house calls" (video visits) with individuals diagnosed with Parkinson's disease found that patients preferred the remote specialist to their local clinician after one year.[52] Teleneurology for patients with Parkinson's disease has been found to be cheaper than in-person visits, by reducing transportation costs and travel time.[53][54] A systematic review by Ray Dorsey et al.[51] describes both the limitations and the potential benefits of teleneurology in improving care for patients with chronic neurological conditions, especially in low-income countries. White, well-educated and technologically savvy people are the biggest consumers of telehealth services for Parkinson's disease in the US, as compared to ethnic minorities.[53][54]

Telemedicine in neurosurgery was historically used primarily for follow-up visits by patients who had to travel far to undergo surgery.[55] In the last decade, telemedicine has also been used for remote ICU rounding, as well as for prompt evaluation of acute ischemic stroke and administration of IV alteplase in conjunction with neurology.[56][57] From the onset of the COVID-19 pandemic, there was a rapid surge in the use of telemedicine across all divisions of neurosurgery: vascular, oncology, spine, and functional neurosurgery. It has gained popularity not only for follow-up visits, but also for seeing new patients and following established patients, regardless of whether they underwent surgery.[58][59] Telemedicine is not limited to direct patient care: a number of new research groups and companies are focused on using telemedicine for clinical trials involving patients with neurosurgical diagnoses.

Teleneuropsychology is the use of telehealth/videoconference technology for the remote administration of neuropsychological tests. Neuropsychological tests are used to evaluate the cognitive status of individuals with known or suspected brain disorders and provide a profile of cognitive strengths and weaknesses. Through a series of studies, there is growing support in the literature showing that remote videoconference-based administration of many standard neuropsychological tests yields findings similar to traditional in-person evaluations, thereby establishing the basis for the reliability and validity of teleneuropsychological assessment.[60][61][62][63][64][65][66]

Telenutrition refers to the use of videoconferencing or telephony to provide online consultation by a nutritionist or dietician. Patients or clients upload their vital statistics, diet logs, food pictures, etc., to a telenutrition portal, which the nutritionist or dietician then uses to analyze their current health condition. The nutritionist or dietician can then set goals for their respective clients/patients and monitor their progress regularly through follow-up consultations. Telenutrition portals can help people seek remote consultation for themselves and/or their family. This can be extremely helpful for elderly or bedridden patients, who can consult their dietician from the comfort of their homes. Telenutrition was shown to be feasible, and the majority of patients trusted nutritional televisits in place of the follow-up visits that were scheduled but could not be provided during the lockdown of the COVID-19 pandemic.[67]

Telerehabilitation (or e-rehabilitation[68][69]) is the delivery of rehabilitation services over telecommunication networks and the Internet.
Most types of services fall into two categories: clinical assessment (of the patient's functional abilities in his or her environment) and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are neuropsychology, speech-language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic because of a disability or because of travel time. It also allows experts in rehabilitation to engage in clinical consultation at a distance.

Most telerehabilitation is highly visual. As of 2014, the most commonly used media were webcams, videoconferencing, phone lines, videophones, and webpages containing rich web applications. The visual nature of telerehabilitation technology limits the types of rehabilitation services that can be provided. It is most widely used for neuropsychological rehabilitation; fitting of rehabilitation equipment such as wheelchairs, braces, or artificial limbs; and speech-language pathology. Rich web applications for neuropsychological rehabilitation (cognitive rehabilitation) of cognitive impairment (from many etiologies) were first introduced in 2001. This endeavor has expanded into a teletherapy application for cognitive skills enhancement programs for school children. Tele-audiology (hearing assessments) is a growing application. Physical therapy and psychology interventions delivered via telehealth may result in outcomes similar to those delivered in person for a range of health conditions.[70]

Two important areas of telerehabilitation research are (1) demonstrating equivalence of assessment and therapy to in-person assessment and therapy, and (2) building new data collection systems to digitize information that a therapist can use in practice. Ground-breaking research in telehaptics (the sense of touch) and virtual reality may broaden the scope of telerehabilitation practice in the future.

In the United States, the National Institute on Disability and Rehabilitation Research (NIDRR)[71] supports research and development in telerehabilitation. NIDRR's grantees include the Rehabilitation Engineering and Research Center (RERC) at the University of Pittsburgh, the Rehabilitation Institute of Chicago, the State University of New York at Buffalo, and the National Rehabilitation Hospital in Washington, D.C. Other federal funders of research are the Veterans Health Administration, the Health Services Research Administration in the US Department of Health and Human Services, and the Department of Defense.[72] Outside the United States, research is also conducted in Australia and Europe.

Only a few health insurers in the United States, and about half of Medicaid programs,[73] reimburse for telerehabilitation services. If research shows that teleassessments and teletherapy are equivalent to clinical encounters, it is more likely that insurers and Medicare will cover telerehabilitation services. In India, the Indian Association of Chartered Physiotherapists (IACP) provides telerehabilitation facilities. With the support and collaboration of local clinics, private practitioners and IACP members, IACP runs the facility, named Telemedicine. IACP maintains an internet-based list of its members on its website, through which patients can make online appointments.

Telemedicine can be utilized to improve the efficiency and effectiveness of care delivery in a trauma environment.
Examples include:

Telemedicine for trauma triage: using telemedicine, trauma specialists can interact with personnel on the scene of a mass casualty or disaster situation via the internet, using mobile devices, to determine the severity of injuries. They can provide clinical assessments and determine whether those injured must be evacuated for necessary care. Remote trauma specialists can provide the same quality of clinical assessment and plan of care as a trauma specialist located physically with the patient.[74]

Telemedicine for intensive care unit (ICU) rounds: telemedicine is also being used in some trauma ICUs to reduce the spread of infections. Rounds are usually conducted at hospitals across the country by a team of approximately ten or more people, including attending physicians, fellows, residents, and other clinicians. This group usually moves from bed to bed in a unit, discussing each patient. This aids the transition of care for patients from the night shift to the morning shift, but also serves as an educational experience for residents new to the team. A newer approach features the team conducting rounds from a conference room using a video-conferencing system. The trauma attending, residents, fellows, nurses, nurse practitioners, and pharmacists are able to watch a live video stream from the patient's bedside. They can see the vital signs on the monitor, view the settings on the respiratory ventilator, and/or view the patient's wounds. Video-conferencing allows remote viewers to conduct two-way communication with clinicians at the bedside.[75]

Telemedicine for trauma education: some trauma centers deliver trauma education lectures to hospitals and health care providers worldwide using videoconferencing technology. Each lecture provides fundamental principles, first-hand knowledge, and evidence-based methods for critical analysis of established clinical practice standards, and comparisons to newer advanced alternatives. The various sites collaborate and share their perspectives based on location, available staff, and available resources.[76]

Telemedicine in the trauma operating room: trauma surgeons are able to observe and consult on cases from a remote location using videoconferencing. This capability allows the attending to view the residents in real time. The remote surgeon can control the camera (pan, tilt, and zoom) to get the best angle of the procedure, while at the same time providing expertise in order to deliver the best possible care to the patient.[77]

ECGs, or electrocardiograms, can be transmitted over telephone and wireless links. Willem Einthoven, the inventor of the ECG, actually performed tests with the transmission of ECGs via telephone lines, because the hospital did not allow him to move patients outside the hospital to his laboratory for testing of his new device. In 1906, Einthoven devised a way to transmit the data from the hospital directly to his lab.[78][79]

One of the oldest known telecardiology systems for teletransmission of ECGs was established in Gwalior, India, in 1975 at GR Medical College by Ajai Shanker, S. Makhija and P.K. Mantri, using an indigenous technique for the first time in India. This system enabled wireless transmission of ECGs from a moving ICU van or a patient's home to the central station in the ICU of the department of medicine. Wireless transmission used frequency modulation, which eliminated noise; transmission was also done through telephone lines. The ECG output was connected to the telephone input using a modulator that converted the ECG into high-frequency sound, and at the other end a demodulator reconverted the sound into an ECG with good gain accuracy. The ECG was converted to sound waves with a frequency varying from 500 Hz to 2500 Hz, with 1500 Hz at baseline.
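The quoted figures describe a straightforward frequency-modulation scheme: the ECG voltage shifts an audio tone around the 1500 Hz baseline while staying inside the 500–2500 Hz band that telephone circuits carry well. The sketch below is a hypothetical digital rendering of that idea in Python, not the actual Gwalior implementation, which was analog hardware.

```python
import math

F_BASE, F_DEV = 1500.0, 1000.0   # baseline tone and maximum deviation (Hz)
SAMPLE_RATE = 8000               # telephone-grade audio sampling rate

def fm_encode(ecg: list[float]) -> list[float]:
    """Map normalized ECG samples in [-1, 1] to an audio tone whose
    instantaneous frequency tracks the signal (500-2500 Hz)."""
    audio, phase = [], 0.0
    for x in ecg:
        freq = F_BASE + F_DEV * max(-1.0, min(1.0, x))
        phase += 2.0 * math.pi * freq / SAMPLE_RATE   # phase-continuous FM
        audio.append(math.sin(phase))
    return audio

# A flat trace maps to a steady 1500 Hz tone; excursions raise or lower it.
tone = fm_encode([0.0, 0.2, 0.9, 0.1, -0.3])
```

A demodulator at the receiving end performs the inverse mapping, recovering the trace from the tone's frequency, just as the original system's demodulator did.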
This system was also used to monitor patients with pacemakers in remote areas; the central control unit at the ICU was able to correctly interpret arrhythmia. This technique helped medical aid reach remote areas.[80]

In addition, electronic stethoscopes can be used as recording devices, which is helpful for purposes of telecardiology. There are many examples of successful telecardiology services worldwide. In Pakistan, three pilot projects in telemedicine were initiated by the Ministry of IT & Telecom, Government of Pakistan (MoIT), through the Electronic Government Directorate, in collaboration with Oratier Technologies (a pioneering Pakistani company dealing with healthcare and HMIS) and PakDataCom (a bandwidth provider). Three hub stations were linked via the Pak Sat-I communications satellite, and four districts were linked with another hub. A 312 kbit/s link was also established with remote sites, and 1 Mbit/s bandwidth was provided at each hub. Three hubs were established: the Mayo Hospital (the largest hospital in Asia), JPMC Karachi, and Holy Family Rawalpindi. Twelve remote sites were connected, and an average of 1,500 patients were treated per month per hub. The project was still running smoothly after two years.[81]

Wireless ambulatory ECG technology, moving beyond earlier ambulatory ECG devices such as the Holter monitor, now includes smartphones and Apple Watches, which can perform at-home cardiac monitoring and send the data to a physician via the Internet.[82]

Teleradiology is the ability to send radiographic images (X-ray, CT, MR, PET/CT, SPECT/CT, MG, US, etc.) from one location to another.[83] For this process to be implemented, three essential components are required: an image-sending station, a transmission network, and a receiving-image review station. The most typical implementation is two computers connected via the Internet. The computer at the receiving end must have a high-quality display screen that has been tested and cleared for clinical purposes. Sometimes the receiving computer will have a printer for convenience.

The teleradiology process begins at the image-sending station, which requires the radiographic image and a modem or other network connection. The image is scanned and then sent via the network connection to the receiving computer. Today's high-speed broadband Internet enables the use of new technologies for teleradiology: the image reviewer can now access distant servers in order to view an exam. Reviewers therefore do not need dedicated workstations to view the images; a standard personal computer (PC) and a digital subscriber line (DSL) connection are enough to reach a central server, no particular software is necessary on the PC, and the images can be reached from anywhere in the world. Teleradiology is the most popular use of telemedicine and accounts for at least 50% of all telemedicine usage.
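Reduced to its essentials, the sending station pushes an image plus an identifier across the network to the review station. The Python sketch below is a hypothetical illustration of that single step; the endpoint and field names are invented, and production teleradiology normally relies on the DICOM protocol and its transfer services rather than plain HTTP.

```python
import pathlib
import requests  # widely used third-party HTTP client

# Hypothetical endpoint of the receiving-image review station.
REVIEW_STATION_URL = "https://review-station.example.org/studies"

def send_study(image_path: str, patient_id: str) -> int:
    """Transmit one radiographic image from the image-sending station."""
    name = pathlib.Path(image_path).name
    data = pathlib.Path(image_path).read_bytes()
    resp = requests.post(
        REVIEW_STATION_URL,
        files={"image": (name, data)},
        data={"patient_id": patient_id},
        timeout=30,
    )
    return resp.status_code  # success means the reviewer can open the exam

# send_study("chest_xray.dcm", "P-1042")  # requires a reachable station
```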
Telepathology is the practice of pathology at a distance. It uses telecommunications technology to facilitate the transfer of image-rich pathology data between distant locations for the purposes of diagnosis, education, and research.[84][85] The performance of telepathology requires that a pathologist select the video images for analysis and the rendering of diagnoses. The use of "television microscopy", the forerunner of telepathology, did not require that a pathologist have physical or virtual "hands-on" involvement in the selection of microscopic fields of view for analysis and diagnosis.

A pathologist, Ronald S. Weinstein, M.D., coined the term "telepathology" in 1986. In an editorial in a medical journal, Weinstein outlined the actions that would be needed to create remote pathology diagnostic services.[86] He and his collaborators published the first scientific paper on robotic telepathology.[87] Weinstein was also granted the first U.S. patents for robotic telepathology systems and telepathology diagnostic networks.[88] Weinstein is known to many as the "father of telepathology".[89] In Norway, Eide and Nordrum implemented the first sustainable clinical telepathology service in 1989,[90] and it is still in operation decades later. A number of clinical telepathology services have benefited many thousands of patients in North America, Europe, and Asia.

Telepathology has been successfully used for many applications, including the rendering of histopathology tissue diagnoses at a distance, education, and research. Although digital pathology imaging, including virtual microscopy, is the mode of choice for telepathology services in developed countries, analog telepathology imaging is still used for patient services in some developing countries.
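Virtual microscopy works because a digitized slide is stored as a multi-resolution image pyramid from which a viewer fetches only small regions on demand, so a gigapixel scan never has to cross the network whole. A sketch of the access pattern using the open-source OpenSlide library follows; the file name and coordinates are placeholders.

```python
import openslide  # open-source whole-slide image library

# Hypothetical whole-slide scan; such files are typically gigapixels.
slide = openslide.OpenSlide("biopsy.svs")

print(slide.dimensions)    # full-resolution (width, height) in pixels
print(slide.level_count)   # number of precomputed magnification levels

# Fetch one 512x512 tile at full resolution, anchored at (x, y); a remote
# pathologist panning a virtual slide triggers a stream of reads like this.
tile = slide.read_region((10_000, 10_000), 0, (512, 512))
tile.convert("RGB").save("tile.png")
slide.close()
```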
Teledermatology allows dermatology consultations over a distance using audio, visual and data communication, and has been found to improve efficiency, access to specialty care, and patient satisfaction.[91][92] Applications comprise health care management, such as diagnosis, consultation and treatment, as well as (continuing medical) education.[93][94][95] The dermatologists Perednia and Brown were the first to coin the term "teledermatology", in 1995, when they described the value of a teledermatologic service in a rural area underserved by dermatologists.[96]

Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Today, applications of teleophthalmology encompass access to eye specialists for patients in remote areas; ophthalmic disease screening, diagnosis and monitoring; and distance learning. Teleophthalmology may help reduce disparities by providing remote, low-cost screening tests, such as diabetic retinopathy screening, to low-income and uninsured patients.[97][98] In Mizoram, India, a hilly area with poor roads, teleophthalmology provided care to over 10,000 patients between 2011 and 2015. These patients were examined by ophthalmic assistants locally, but surgery was scheduled only after the patient images had been viewed online by eye surgeons in a hospital 6–12 hours away. Instead of an average of five trips for, say, a cataract procedure, only one was required, for the surgery itself, as even post-operative care such as removal of stitches and appointments for glasses was done locally. There were large cost savings in travel as well.[99]

In the United States, some companies allow patients to complete an online visual exam and within 24 hours receive a prescription from an optometrist valid for eyeglasses, contact lenses, or both. Some US states, such as Indiana, have attempted to ban these companies from doing business.[100]

Remote surgery (also known as telesurgery) is the ability for a doctor to perform surgery on a patient even though they are not physically in the same location, using a robotic teleoperator system controlled by the surgeon. It is a form of telepresence. Remote surgery combines elements of robotics, cutting-edge telecommunications such as high-speed data connections, telehaptics and elements of management information systems, and the remote operator system may give tactile feedback to the surgeon. While the field of robotic surgery is fairly well established, most of these robots are controlled by surgeons at the location of the surgery. Remote surgery is essentially remote work for surgeons, where the physical distance between the surgeon and the patient is immaterial. It promises to allow the expertise of specialized surgeons to be available to patients worldwide, without the need for patients to travel beyond their local hospital.[101] A critical limiting factor is the speed, latency and reliability of the communication system between the surgeon and the patient, though trans-Atlantic surgeries have been demonstrated.
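The latency constraint can be made concrete with a back-of-the-envelope calculation: even before routing, queueing, and processing delays, signal propagation alone puts a floor under the round-trip time between surgeon and robot. A small illustrative computation, with the distance a rough stand-in for a trans-Atlantic link:

```python
# Propagation-only round trip; real links add routing and processing delay.
distance_km = 6_000               # rough trans-Atlantic fibre path length
speed_in_fibre_km_s = 200_000     # light travels at ~2/3 c in optical fibre

round_trip_ms = 2 * distance_km / speed_in_fibre_km_s * 1000
print(f"propagation round trip: {round_trip_ms:.0f} ms")  # ~60 ms
```

Measured end-to-end delays are higher once encoding and network equipment are included, which is why latency, rather than bandwidth, is usually cited as the limiting factor for telesurgery.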
Telemedicine has been used globally to increase access to abortion care, specifically medical abortion, in environments where few abortion care providers exist or abortion is legally restricted. Clinicians are able to virtually provide counseling, review screening tests, observe the administration of an abortion medication, and directly mail abortion pills to people.[102] In 2004, Women on Web (WoW), Amsterdam, started offering online consultations, mostly to people living in areas where abortion was legally restricted, informing them how to safely use medical abortion drugs to end a pregnancy.[102] People contact the Women on Web service online; physicians review any necessary lab results or ultrasounds, mail mifepristone and misoprostol pills to people, then follow up through online communication.[103] In the United States, medical abortion was introduced as a telehealth service in Iowa by Planned Parenthood of the Heartland in 2008, to allow a patient at one health facility to communicate via secure video with a health provider at another facility.[104] In this model, a person seeking abortion care must come to a health facility. An abortion care provider communicates with the person located at another site using clinic-to-clinic videoconferencing to provide medical abortion after screening tests and consultation with clinic staff. In 2018, the website Aid Access was launched by the founder of Women on Web, Rebecca Gomperts. It offers a similar service to Women on Web in the United States, but the medications are prescribed through an Indian pharmacy, then mailed to the United States. The TelAbortion study conducted by Gynuity Health Projects, with special approval from the U.S. Food and Drug Administration (FDA), aims to increase access to medical abortion care without requiring an in-person visit to a clinic.[105][106][104] This model was expanded during the COVID-19 pandemic and as of March 2020 existed in 13 U.S. states, having enrolled over 730 people in the study.[107][106] The person receives counseling and instruction from an abortion care provider via videoconference from a location of their choice. The medications necessary for the abortion, mifepristone and misoprostol, are mailed directly to the person, who then has a follow-up video consultation in 7–14 days. A systematic review of telemedicine abortion found the practice to be safe, effective, efficient, and satisfactory.[102]

In the United States, eighteen states require the clinician to be physically present during the administration of medications for abortion, which effectively bans telehealth for medication abortion: five states explicitly ban telemedicine for medication abortion, while thirteen states require the prescriber (usually required to be a physician) to be physically present with the patient.[108][109] In the UK, the Royal College of Obstetricians and Gynaecologists approved a no-test protocol for medication abortion, with mifepristone available through a minimal-contact pick-up or by mail.[110]

Telemedicine can facilitate specialty care delivered by primary care physicians, according to a controlled study of the treatment of hepatitis C.[111] Various specialties are contributing to telemedicine, in varying degrees. Other specialist conditions for which telemedicine has been used include perinatal mental health.[112]

In light of the COVID-19 pandemic, primary care physicians relied on telehealth to continue to provide care in outpatient settings.[113] The transition to virtual health has been beneficial in providing patients access to care (especially care that does not require a physical exam, e.g. medication changes or minor health updates) while avoiding putting patients at risk of COVID-19. This included providing services to pediatric patients during the pandemic, where issues of last-minute cancellation and rescheduling were frequently related to a lack of technical ability and engagement, two factors often understudied in the literature.[114]

Telemedicine has also been beneficial in facilitating medical education for students while still allowing for adequate social distancing during the COVID-19 pandemic. Many medical schools shifted to alternative forms of virtual curriculum and were still able to engage in meaningful telehealth encounters with patients.[115][116]

Medication-assisted treatment (MAT) is the treatment of opioid use disorder (OUD) with medications, often in combination with behavioral therapy.[117] As a response to the COVID-19 pandemic, the Drug Enforcement Administration has allowed the use of telemedicine to start or maintain people with OUD on buprenorphine (trade name Suboxone) without the need for an initial in-person examination.[118] On March 31, 2020, QuickMD became the first national TeleMAT service in the United States to provide medication-assisted treatment with Suboxone online, without the need for an in-person visit, with others announcing plans to follow soon.[119]

Telehealth is a modern form of health care delivery. Telehealth breaks away from traditional health care delivery by using modern telecommunication systems, including wireless communication methods.[120][121] Traditional health care is legislated through policy to ensure the safety of medical practitioners and patients. Consequently, since telehealth is a new form of health care delivery that is now gathering momentum in the health sector, many organizations have started to legislate its use into policy.[121] In New Zealand, the Medical Council has a statement about telehealth on its website.
This illustrates that the Medical Council has foreseen the importance that telehealth will have for the health system and has started to introduce telehealth legislation to practitioners, along with the government.[122]

Traditionally, telehealth services were used for specialist treatment. However, there has been a paradigm shift, and telehealth is no longer considered a specialist service.[123] This development has eliminated many access barriers, as medical professionals and patients are able to use wireless communication technologies to deliver health care.[124] This is evident in rural communities. Rural residents typically have to travel longer distances to access healthcare than their urban counterparts, due to physician shortages and healthcare facility closures in these areas.[125][126] Telehealth reduces this barrier, as health professionals are able to conduct medical consultations through the use of wireless communication technologies. However, this process depends on both parties having internet access and a sufficient comfort level with technology, which poses barriers for many low-income and rural communities.[124][127][128][129]

Telehealth allows the patient to be monitored between physician office visits, which can improve patient health. Telehealth also allows patients to access expertise that is not available in their local area. This remote patient monitoring ability enables patients to stay at home longer and helps avoid unnecessary hospital time. In the long term, this could potentially result in less burdening of the healthcare system and lower consumption of resources.[1][130]

During the COVID-19 pandemic, there were large increases in the use of telemedicine for primary care visits within the United States, increasing from an average of 1.4 million visits in Q2 of 2018 and 2019 to 35 million visits in Q2 2020, according to data from IQVIA.[131] The telehealth market was expected to grow at 40% a year in 2021. Use of telemedicine by general practitioners in the UK rose from 20–30% pre-COVID to almost 80% by the beginning of 2021, and more than 70% of practitioners and patients were satisfied with this.[132] Boris Johnson was said to have "piled pressure on GPs to offer more in-person consultations", supporting a campaign largely orchestrated by the Daily Mail. The Royal College of General Practitioners said that a patient "right" to have face-to-face appointments if they wished was "undeliverable".[133]

The technological advancement of wireless communication devices is a major development in telehealth.[134] It allows patients to self-monitor their health conditions and not rely as much on health care professionals. Furthermore, patients are more willing to stay on their treatment plans, as they are more invested and included in the process through shared decision-making.[135][136] Technological advancement also means that health care professionals are able to use better technologies to treat patients, for example in maternal care[137] and surgery. A 2023 study published in the Journal of the American College of Surgeons showed telemedicine making a positive impact, with expectations exceeded for those physicians and patients who had consulted online for surgeries.[138] Technological developments in telehealth are essential to improve health care, especially the delivery of healthcare services, as resources are finite and the population is ageing and living longer.[134][135][136]
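On the clinician's side, self-monitoring of this kind typically reduces to screening a stream of home readings against agreed limits and escalating only the exceptions. A minimal, hypothetical Python sketch of such a rule follows; the measurement names and thresholds are placeholders, not clinical guidance.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    patient_id: str
    kind: str        # e.g. "systolic_bp", "glucose_mg_dl"
    value: float

# Placeholder acceptable ranges agreed between patient and care team.
LIMITS = {"systolic_bp": (90, 160), "glucose_mg_dl": (70, 180)}

def triage(readings: list[Reading]) -> list[str]:
    """Return alert messages for readings outside their agreed range."""
    alerts = []
    for r in readings:
        low, high = LIMITS.get(r.kind, (float("-inf"), float("inf")))
        if not low <= r.value <= high:
            alerts.append(f"{r.patient_id}: {r.kind}={r.value} "
                          f"outside [{low}, {high}]")
    return alerts

print(triage([Reading("P-7", "systolic_bp", 178),
              Reading("P-7", "glucose_mg_dl", 110)]))
```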
Restrictive licensure laws in the United States require a practitioner to obtain a full license to deliver telemedicine care across state lines. Typically, states with restrictive licensure laws also have several exceptions (varying from state to state) that may release an out-of-state practitioner from the additional burden of obtaining such a license. A number of states require practitioners who seek compensation for frequently delivering interstate care to acquire a full license. If a practitioner serves several states, obtaining a license in each state can be an expensive and time-consuming proposition. Even if the practitioner never practices medicine face-to-face with a patient in another state, he or she must still meet a variety of other individual state requirements, including paying substantial licensure fees, passing additional oral and written examinations, and traveling for interviews. In 2008, the U.S. passed the Ryan Haight Act, which required a face-to-face or valid telemedicine consultation prior to receiving a prescription.[139]

State medical licensing boards have sometimes opposed telemedicine; for example, in 2012 electronic consultations were illegal in Idaho, and an Idaho-licensed general practitioner was punished by the board for prescribing an antibiotic, triggering reviews of her licensure and board certifications across the country.[140] Subsequently, in 2015 the state legislature legalized electronic consultations.[140]

In 2015, Teladoc filed suit against the Texas Medical Board over a rule that required in-person consultations initially; the judge refused to dismiss the case, noting that antitrust laws apply to state medical boards.[141]

Telehealth allows multiple, varying disciplines to merge and deliver a potentially more uniform level of care, using technology. As telehealth spreads into mainstream healthcare, it challenges notions of traditional healthcare delivery. Some populations experience better quality, access and more personalized health care.[142][143]

Telehealth can also increase health promotion efforts. These efforts can now be more personalised to the target population, and professionals can extend their help into homes or private and safe environments in which patients or individuals can practice, ask questions and gain health information.[130][136][144] Health promotion using telehealth has become increasingly popular in underdeveloped countries, where physical resources are very scarce. There has been a particular push toward mHealth applications, as many areas, even underdeveloped ones, have mobile phone and smartphone coverage.[145][146][147]

A 2015 article reviewing research on the use of a mobile health application in the United Kingdom[148] describes how a home-based application helped patients manage and monitor their health and symptoms independently.
The mobile health application allows people to rapidly self-report their symptoms: 95% of patients were able to report their daily symptoms in less than 100 seconds, compared with the 5 minutes (plus commuting) taken by nurses in hospitals to measure vital signs.[149] Online applications allow patients to remain at home while keeping track of the progression of their chronic illnesses. The downside of using mHealth applications is that not everyone, especially in developing countries, has daily access to the internet or electronic devices.[150]

In developed countries, health promotion efforts using telehealth have been met with some success. The Australian hands-free breastfeeding Google Glass application reported promising results in 2014. This application, made in collaboration with the Australian Breastfeeding Association and a tech startup called Small World Social, helped new mothers learn how to breastfeed.[151][152][153] Breastfeeding is beneficial to infant and maternal health and is recommended by the World Health Organisation and health organisations all over the world.[154][155] Widespread breastfeeding could prevent 820,000 infant deaths globally, but the practice is often stopped prematurely, or intentions to breastfeed are disrupted, due to lack of social support, know-how or other factors.[155] This application gave mothers hands-free information on breastfeeding and instructions on how to breastfeed, and also had an option to call a lactation consultant over Google Hangout. When the trial ended, all participants were reported to be confident in breastfeeding.[153]

A scientific review indicates that, in general, outcomes of telemedicine are, or can be, as good as those of in-person care, with health care use staying similar.[156]

Advantages of the non-exclusive adoption of already existing telemedicine technologies, such as smartphone videotelephony, may include reduced infection risks,[158] increased control of disease during epidemic conditions,[159] improved access to care,[160] reduced stress and exposure to other pathogens during illness for better recovery,[161][162] reduced time[163] and labor costs, more efficient and accessible matching of patients who have particular symptoms with clinicians who specialize in them, and reduced travel. Disadvantages may include privacy breaches (e.g. due to software backdoors and vulnerabilities or the sale of data), dependence on Internet access,[158] and, depending on various factors, increased health care use.[additional citation(s) needed]

Theoretically, the whole health system could benefit from telehealth. There are indications that telehealth consumes fewer resources and requires fewer people to operate it, with shorter training periods needed to implement initiatives.[14] Commenters have suggested that lawmakers may fear that making telehealth widely accessible, without any other measures, would lead to patients using unnecessary health care services.[160] Telemedicine could also be used for connected networks between health care professionals.[164]

Telemedicine can also eliminate the possible transmission of infectious diseases or parasites between patients and medical staff. This is particularly an issue where MRSA is a concern. Additionally, some patients who feel uncomfortable in a doctor's office may do better remotely. For example, white coat syndrome may be avoided. Patients who are home-bound and would otherwise require an ambulance to move them to a clinic are also a consideration.
However, whether the standard of health care quality is increasing is debatable, with some literature refuting such claims.[143][165][166] Research has reported that clinicians find the process difficult and complex to deal with.[165][167] Furthermore, there are concerns around informed consent, legality and legislative issues. A recent study also highlighted that the swift and large-scale implementation of telehealth across the United Kingdom's NHS Allied Health Professional (AHP) services might increase disparities in health care access for vulnerable populations with limited digital literacy.[168] Although health care may become more affordable with the help of technology, whether this care will be "good" is the issue.[143] Many studies indicate high satisfaction with telemedicine among patients.[169] Among the factors associated with trust in telemedicine, the use of known and user-friendly video services and confidence in the data protection policies were the two variables contributing most.[170]

Major problems with increasing adoption include technically challenged staff, resistance to change or habit,[161] and the age of patients. Focused policy could eliminate several barriers.[171]

A review lists a number of potentially good practices and pitfalls, recommending the use of "virtual handshakes" for confirming identity, obtaining consent for conducting a remote consultation rather than a conventional meeting, and professional standardized norms for protecting patient privacy and confidentiality.[172] It also found that the COVID-19 pandemic substantially increased the voluntary adoption of telephone or video consultation, and it suggests that telemedicine technology "is a key factor in delivery of health care in the future".[172]

Technology's growing involvement in health care has led to continuous improvement in access, efficiency and quality of care, but numerous challenges lie in addressing the barriers that keep the geriatric population from benefiting from this new technology.[173] With the COVID-19 pandemic, rapid implementation of telehealth in geriatric outpatient clinics occurred. Although time efficiency greatly improved and access increased for geriatric patients lacking transportation to the clinic, complications arose during and after implementation, with many appointments requiring rescheduling due to language barriers, poor connections, hearing impairment, or inability to perform assessments.[174] Studies also show that patients and their family members often prefer in-person visits. Although benefits were seen in being able to see a provider sooner, in high-quality audio and video, and in functionality allowing family participation during visits, patients and families still noted a preference for in-person visits, due to difficulty in using the service.[175] New improvements are being made to ease the complications of telehealth for geriatric patients, including integrated captions on video calls for the hearing impaired, virtual interpreters who attend the calls when there are language differences, government-assisted internet services, and increased training for medical providers and patients on telehealth use.

Due to its digital nature, it is often assumed that telehealth saves the health system money. However, the evidence to support this is varied.
When conducting economic evaluations of telehealth services, the evaluators need to be aware of potential outcomes and extraclinical benefits of the telehealth service.[176] Economic viability relies on the funding model within the country being examined (public vs private), consumers' willingness to pay, and the expected remuneration of the clinicians or commercial entities providing the services (examples of research on these topics come from teledermoscopy in Australia).[177][178][179]

In a UK telehealth trial done in 2011, it was reported that the cost of health care could be dramatically reduced with the use of telehealth monitoring. The usual cost of in vitro fertilisation (IVF) per cycle would be around $15,000; with telehealth it was reduced to $800 per patient.[180] In Alaska, the Federal Health Care Access Network, which connects 3,000 healthcare providers to communities, has engaged in 160,000 telehealth consultations since 2001 and saved the state $8.5 million in travel costs for Medicaid patients alone.[181]

Digital interventions for mental health conditions seem to be cost-effective compared to no intervention or non-therapeutic responses such as monitoring. However, compared to in-person therapy or medication, their added value is currently uncertain.[182]

Telemedicine can be beneficial to patients in isolated communities and remote regions, who can receive care from doctors or specialists far away without having to travel to visit them.[183] Recent developments in mobile collaboration technology can allow healthcare professionals in multiple locations to share information and discuss patient issues as if they were in the same place.[184] Remote patient monitoring through mobile technology can reduce the need for outpatient visits and enable remote prescription verification and drug administration oversight, potentially significantly reducing the overall cost of medical care.[185] It may also be preferable for patients with limited mobility, for example, patients with Parkinson's disease.[51] Telemedicine can also facilitate medical education by allowing workers to observe experts in their fields and share best practices more easily.[186]

Remote surgery and types of videoconferencing for sharing expertise (e.g. ad hoc assistance) have been and could be used to support doctors in Ukraine during the 2022 Russian invasion of Ukraine.[187]

While many branches of medicine have long wanted to fully embrace telehealth, there are certain risks and barriers which bar the full amalgamation of telehealth into best practice. For a start, it is dubious whether a practitioner can fully leave the "hands-on" experience behind.[143] Although it is predicted that telehealth will replace many consultations and other health interactions, it cannot yet fully replace a physical examination; this is particularly so in diagnostics, rehabilitation and mental health.[143] To minimise safety issues, researchers have suggested not offering remote consultations for some conditions (breathing problems, new psychosis, or acute chest pain, for example), when a parent is very concerned about a child, when a condition has not resolved as expected or has worsened, or to people who might struggle to understand or be understood (such as those with limited English or learning difficulties).[191][192]

The benefits posed by telehealth challenge the normative means of healthcare delivery set in both legislation and practice.
Therefore, the growing prominence of telehealth is starting to underscore the need for updated regulations, guidelines and legislation which reflect the current and future trends of healthcare practices.[2][143] Telehealth enables timely and flexible care to patients wherever they may be; although this is a benefit, it also poses threats to privacy, safety, medical licensing and reimbursement. When a clinician and patient are in different locations, it is difficult to determine which laws apply to the context.[193] Once healthcare crosses borders, different state bodies are involved in regulating and maintaining the level of care that is warranted to the patient or telehealth consumer. As it stands, telehealth is complex, with many grey areas when put into practice, especially as it crosses borders, and this effectively limits its potential benefits.[2][143]

An example of these limitations is the current American reimbursement infrastructure, where Medicare will reimburse for telehealth services only when a patient is living in an area where specialists are in shortage, or in particular rural counties, and only when the service is delivered at a medical facility rather than a patient's home. The site that the practitioner is in, however, is unrestricted. Medicare will only reimburse live video (synchronous) services, not store-and-forward, mHealth or remote patient monitoring (if it does not involve live video). Some insurers currently reimburse telehealth, but not all yet, so providers and patients must go to the extra effort of finding the correct insurers before continuing. Again in America, states generally tend to require that clinicians be licensed to practice in the state where the patient is located, so they may have to hold a license for an area in which they do not themselves live.[140]

More specific and widely reaching laws, legislation and regulations will have to evolve with the technology. They will have to be fully agreed upon: for example, will all clinicians need full licensing in every community to which they provide telehealth services, or could there be a limited-use telehealth licence? Would the limited-use licence cover all potential telehealth interventions, or only some? Who would be responsible if an emergency occurred and the practitioner could not provide immediate help? Would someone else have to be in the room with the patient during every consultation?
Which state, city or country would the law apply in when a breach or malpractice occurred?[143][194]

A major legal prompt in telehealth thus far has been the set of issues surrounding online prescribing, and whether an appropriate clinician–patient relationship can be established online to make prescribing safe, making this an area that requires particular scrutiny.[142] It may be required that the practitioner and patient involved meet in person at least once before online prescribing can occur, or that at least a live video conference take place, rather than just impersonal questionnaires or surveys to determine need.[195]

Telehealth has some potential for facilitating self-management techniques in health care, but for patients to benefit from it, the appropriate contact with, and relationship between, doctor and patient must be established first.[196] This would start with an online consultation, providing patients with techniques and tools that help them participate in healthy behaviors, and initiating a collaborative partnership between health care professionals and patient.[197] Self-management strategies fall into a broader category called patient activation, which is defined as a "patients' willingness and ability to take independent actions to manage their health".[198] It can be achieved by increasing patients' knowledge of, and confidence in, coping with and managing their own disease through a "regular assessment of progress [...] and problem-solving support".[197] Teaching patients about their conditions and ways to cope with chronic illnesses allows them to be knowledgeable about their disease and willing to manage it, improving their everyday life. Without a focus on the doctor–patient relationship and on the patient's understanding, telehealth cannot improve the quality of life of patients, despite the benefit of allowing them to do their medical check-ups from the comfort of their home.

The downsides of telemedicine include the cost of telecommunication and data management equipment and of technical training for the medical personnel who will employ it. Virtual medical treatment also entails potentially decreased human interaction between medical professionals and patients, an increased risk of error when medical services are delivered in the absence of a registered professional, and an increased risk that protected health information may be compromised through electronic storage and transmission.[199] There is also a concern that telemedicine may actually decrease time efficiency, owing to the difficulties of assessing and treating patients through virtual interactions; for example, it has been estimated that a teledermatology consultation can take up to thirty minutes, whereas fifteen minutes is typical for a traditional consultation.[200] Additionally, potentially poor quality of transmitted records, such as images or patient progress reports, and decreased access to relevant clinical information are quality-assurance risks that can compromise the quality and continuity of patient care for the reporting doctor.[201] Other obstacles to the implementation of telemedicine include unclear legal regulation of some telemedical practices and difficulty claiming reimbursement from insurers or government programs in some fields.[44] Some medical organizations have issued position statements on the correct use of telemedicine in their field.[202][203][204][205]

Another disadvantage of telemedicine is the inability to start treatment immediately.
For example, a patient with a bacterial infection might be given an antibiotic hypodermic injection in the clinic, and observed for any reaction, before that antibiotic is prescribed in pill form.

Equitability is also a concern. Many families and individuals in the United States and other countries do not have internet access in their homes, or lack the electronic devices, such as a laptop or smartphone, needed to access services.[citation needed]

Informed consent is another issue. Telehealth carries the possibility of technical problems, such as transmission errors, security breaches, or storage issues, that can impact the system's ability to communicate. It may be wise to obtain informed consent in person first, as well as to have backup options for when technical issues occur. In person, a patient can see who is involved in their care (namely, themselves and their clinician in a consult), but online others are involved, such as the technology providers. Consent may therefore need to involve disclosure of everyone involved in the transmission of the information and of the security measures that will keep that information private, and any legal malpractice case may need to involve all of those parties, as opposed to just the practitioner, as would usually be the case.[142][194][195]

The rate of adoption of telehealth services in any jurisdiction is frequently influenced by factors such as the adequacy and cost of existing conventional health services in meeting patient needs; the policies of governments and/or insurers with respect to coverage and payment for telehealth services; and medical licensing requirements that may inhibit or deter the provision of telehealth second opinions or primary consultations by physicians.

Projections for the growth of the telehealth market are optimistic, and much of this optimism is predicated upon the increasing demand for remote medical care. According to one survey, nearly three-quarters of U.S. consumers say they would use telehealth.[206] At present, several major companies, along with a bevy of startups, are working to develop a leading presence in the field.

In the UK, the Government's Care Services minister, Paul Burstow, stated that telehealth and telecare would be extended between 2012 and 2017 to reach three million people. In the United States, telemedicine companies are collaborating with health insurers and other telemedicine providers to expand market share and patient access to telemedicine consultations. As of 2019, 95% of employers believed their organizations would continue to provide health care benefits over the next five years.[207]

The COVID-19 pandemic drove increased usage of telehealth services in the U.S. The U.S. Centers for Disease Control and Prevention reported a 154% increase in telehealth visits during the last week of March 2020, compared to the same dates in 2019.[208]

From 1999 to 2018, the University Hospital of Zurich (USZ) offered clinical telemedicine and online medical advice on the Internet. A team of doctors answered around 2,500 anonymous inquiries annually, usually within 24 to 48 hours. The team consisted of up to six physicians who were specialists in clinical telemedicine at the USZ, with many years of experience, particularly in internal and general medicine. Over the entire period, 59,360 inquiries were sent and answered.[209] The majority of the users were female and on average 38 years old; however, over the course of time, considerably more men and older people began to use the service.
The diversity of medical queries covered all categories of the International Statistical Classification of Diseases and Related Health Problems (ICD) and correlated with the statistical frequency of diseases in hospitals in Switzerland. Most of the inquiries concerned unclassified symptoms and signs, services related to reproduction, respiratory diseases, skin diseases, health services, diseases of the eye and nervous systems, injuries, and disorders of the female genital tract. As with the Swedish online medical advice service,[210] one-sixth of the requests related to often shameful and stigmatised diseases of the genitals and gastrointestinal tract, sexually transmitted infections, obesity, and mental disorders. By providing an anonymous space where users can talk about such diseases, online telemedical services empower patients, and their health literacy is enhanced through individualized health information. The Clinical Telemedicine and Online Counselling service of the University Hospital of Zurich is currently being revised and will be offered in a new form in the future.[211]

For developing countries, telemedicine and eHealth can be the only means of healthcare provision in remote areas. For example, the difficult financial situation in many African states and the lack of trained health professionals have meant that the majority of people in sub-Saharan Africa are badly disadvantaged in medical care, and in remote areas with low population density, direct healthcare provision is often very poor.[212] However, provision of telemedicine and eHealth from urban centers or from other countries is hampered by the lack of communications infrastructure, with no landline phone or broadband internet connection, little or no mobile connectivity, and often not even a reliable electricity supply.[213]

India has a largely rural population, and rural India is deprived of medical facilities, giving telemedicine space for growth there. The shortage of educational and medical professionals in rural areas is the reason behind the government's drive to use technology to bridge this gap. Remote areas present a number of challenges not only for the service providers but also for the families accessing these services. Since 2018, telemedicine has expanded in India, opening a new avenue for doctor consultations. On 25 March 2020, in the wake of the COVID-19 pandemic, the Ministry of Health and Family Welfare issued India's Telemedicine Practice Guidelines.[214] The Board of Governors tasked by the Health Ministry published an amendment to the Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002 that gave much-needed statutory support for the practice of telemedicine in India. This sector is at an ever-growing stage with high scope for development.[215] In April 2020, the union health ministry launched the eSanjeevani telemedicine service, which operates at two levels: a doctor-to-doctor telemedicine platform and a doctor-to-patient platform. The service crossed five million tele-consultations within a year of its launch, indicating a conducive environment for the acceptability and growth of telemedicine in India.[216]

Sub-Saharan Africa is marked by the massive introduction of new technologies and internet access.[217] Urban areas are facing rapid change and development, and access to the internet and to health care is rapidly improving. Populations in remote areas, however, still lack access to healthcare and modern technologies.
Some people in rural regions must travel between 2 and 6 hours to reach the closest healthcare facility in their country,[218] leaving room for telehealth to grow and reach isolated people in the near future. The Satellite African eHEalth vaLidation (SAHEL) demonstration project has shown how satellite broadband technology can be used to establish telemedicine in such areas. SAHEL was started in 2010 in Kenya and Senegal, providing self-contained, solar-powered internet terminals to rural villages for use by community nurses in collaboration with distant health centers for training, diagnosis, and advice on local health issues.[219] Such methods can have a major impact both on health professionals, who can receive and provide training from remote areas, and on the local population, who can receive care without traveling long distances. Some non-profits provide internet to rural places around the world using mobile VSAT terminals. A VSAT terminal equips a remote region to alert the world when there is a medical emergency, enabling a rapid deployment or response from developed countries.[220] Technologies such as the ones used by MAF allow health professionals in remote clinics to have internet access, making consultations much easier for both patients and doctors.

In 2014, the government of Luxembourg, along with satellite operators and NGOs, established SATMED, a multilayer eHealth platform to improve public health in remote areas of emerging and developing countries, using the Emergency.lu disaster relief satellite platform and the Astra 2G TV satellite.[221] SATMED was first deployed in response to a 2014 report by German Doctors of poor communications in Sierra Leone hampering the fight against Ebola, and SATMED equipment arrived at the Serabu clinic in Sierra Leone in December 2014.[222][223] In June 2015 SATMED was deployed at Maternité Hospital in Ahozonnoude, Benin, to provide remote consultation and monitoring; it is the only effective communication link between Ahozonnoude, the capital, and a third hospital in Allada, since land routes are often inaccessible due to flooding during the rainy season.[224][225]

The development and history of telehealth or telemedicine (terms used interchangeably in the literature) is deeply rooted in the history and development not only of technology but also of society itself. Humans have long sought to relay important messages through torches, optical telegraphy, electroscopes, and wireless transmission. Early forms of telemedicine achieved with telephone and radio have been supplemented with videotelephony, advanced diagnostic methods supported by distributed client/server applications, and telemedical devices that support in-home care.[16] In the 21st century, with the advent of the internet, portable devices and other digital devices are taking a transformative role in healthcare and its delivery.[226]

Although traditional medicine relies on in-person care, the need and desire for remote care has existed since the Roman and pre-Hippocratic periods in antiquity. The elderly and infirm who could not visit temples for medical care sent representatives to convey information on symptoms and bring home a diagnosis as well as treatment.[226] In Africa, villagers would use smoke signals to warn neighboring villages of disease outbreaks.[227] The beginnings of telehealth thus existed in primitive forms of communication and technology.[226] The exact date of origin for telehealth is unknown, but it was known to have been used during the Bubonic Plague.
That version of telehealth was far different from how we know it today. At that time, people communicated by heliograph and bonfire, which were used to notify other groups of people about famine and war.[228] These methods did not yet use modern technology, but they began to spread the idea of connectivity among groups of people who could not be together geographically. As technology developed and wired communication became increasingly commonplace, the ideas surrounding telehealth began emerging.

The earliest telehealth encounter can be traced to Alexander Graham Bell in 1876, when he used his early telephone as a means of getting help from his assistant Mr. Watson after he spilt acid on his trousers. Another instance of early telehealth, specifically telemedicine, was reported in The Lancet in 1879. An anonymous writer described a case where a doctor successfully diagnosed a child over the telephone in the middle of the night.[226] This Lancet issue also discussed the potential of remote patient care to avoid unnecessary house visits, which were part of routine health care during the 1800s.[226][229] Other instances of telehealth during this period came from the American Civil War, during which telegraphs were used to deliver casualty and mortality lists, provide medical care to soldiers,[229] and order further medical supplies.[230]

As the 1900s started, physicians quickly found a use for the telephone, making it a prime communication channel to contact patients and other physicians.[228] Over the next fifty-plus years, the telephone was a staple of medical communication. By the 1930s, radio communication, which had played a key role during World War I, was being used to communicate medical information with remote areas such as Alaska and Australia.[228] During the Vietnam War, radio communication had become more advanced and was used to send medical teams in helicopters to help. This period also saw the Aerial Medical Service (AMS), which used telegraphs, radios, and planes to help care for people who lived in remote areas. From the late 1800s to the early 1900s the early foundations of wireless communication were laid down.[226] Radios provided an easier and near-instantaneous form of communication, and the use of radio to deliver healthcare became accepted for remote areas.[226][130] The Royal Flying Doctor Service of Australia is an example of the early adoption of radios in telehealth.[227]

In 1925 the inventor Hugo Gernsback wrote an article for the magazine Science and Invention which included a prediction of a future where patients could be treated remotely by doctors through a device he called a "teledactyl". His descriptions of the device are similar to what would later become possible with new technology.[231]

When the American National Aeronautics and Space Administration (NASA) began plans to send astronauts into space, the need for telemedicine became clear. In order to monitor astronauts in space, telemedicine capabilities were built into the spacecraft as well as the first spacesuits.[226][130] Additionally, during this period, telehealth and telemedicine were promoted in different countries, especially the United States and Canada.[226] Carrier Sekani Family Services helped pioneer telehealth in British Columbia and Canada, according to its CEO Warner Adam.[232] After the telegraph and telephone started to successfully help physicians treat patients in remote areas, telehealth became more recognized.
Technological advancements occurred when NASA sent men to space. NASA engineers created biomedical telemetry and telecommunications systems,[228] and NASA technology monitored vitals such as blood pressure, heart rate, respiration rate, and temperature. Once created, the technology became the base of telehealth medicine for the public.

Massachusetts General Hospital and Boston's Logan International Airport had a role in the early use of telemedicine, which more or less coincided with NASA's foray into telemedicine through the use of physiologic monitors for astronauts.[233] On October 26, 1960, a plane struck a flock of birds upon takeoff, killing many passengers and leaving a number wounded. Given the extreme difficulty of getting medical personnel from the hospital to the scene, telehealth became the practical solution.[234] This was expanded upon in 1967, when Kenneth Bird at Massachusetts General founded one of the first telemedicine clinics. The clinic addressed the fundamental problem of delivering occupational and emergency health services to employees and travellers at the airport, located three congested miles from the hospital. Clinicians at the hospital would provide consultation services to patients at the airport, with consultations achieved through microwave audio and video links.[226][235] The airport began seeing over a hundred patients a day at its nurse-run clinic, which cared for victims of plane crashes and other accidents, taking vital signs, electrocardiograms, and video images that were sent to Massachusetts General.[236] Over 1,000 patients are documented as having received remote treatment from doctors at MGH using the clinic's two-way audiovisual microwave circuit.[237] One notable story featured a woman who got off a flight in Boston experiencing chest pain. Staff performed a workup at the airport and took her to the telehealth suite, where Raymond Murphy appeared on the television and had a conversation with her; meanwhile, another doctor took notes and the nurses took vital signs and ran any tests that Murphy ordered.[234] At this point, telehealth was becoming more mainstream and more technologically advanced, making it a viable option for patients.

In 1964, the Nebraska Psychiatric Institute began using television links for two-way communication with the Norfolk State Hospital, 112 miles away, for education and consultation between clinicians at the two locations.[235]

In 1972 the Department of Health, Education and Welfare in the United States approved funding for seven telemedicine projects across different states.
This funding was renewed, and two further projects were funded the following year.[226][235]

In March 1972, the San Bernardino County Medical Society officially implemented its Tel-Med program, a system of prerecorded health-related messages, with a log of 50 tapes.[238][239] The nonprofit initiative began in 1971 as a local medical project to ease the doctor shortage in the expanding San Bernardino Valley and improve the public's access to sound medical information.[239][238] It covered subjects ranging from cannabis to vaginitis.[238] In January 1973, in response to the developing "London flu" epidemic hitting California and the country, a tape providing information on the disease was on air within a week after news broke of the flu spreading in the state.[240] That spring, programs were implemented in San Diego and Indianapolis, Indiana, signaling national acceptance of the concept.[239] By 1979, the system offered messages on over 300 different subjects, 200 of which were available in Spanish as well as English, and serviced over 65 million people in 180 cities around the country.[238]

Telehealth projects underway before and during the 1980s would take off but fail to enter mainstream healthcare.[227][130] As a result, this period of telehealth history is called the "maturation" stage, and it made way for sustainable growth.[226] Although state funding in North America was beginning to run low, various hospitals began to launch their own telehealth initiatives.[226] NASA provided an ATS-3 satellite to enable medical care communications for American Red Cross and Pan American Health Organization response teams following the 1985 Mexico City earthquake. The agency then launched its SatelLife/HealthNet programme to increase health service connectivity in developing countries. In 1997, NASA sponsored Yale's Medical Informatics and Technology Applications Consortium project.[130][241]

Florida first experimented with "primitive" telehealth in its prisons during the latter 1980s.[242] Working with doctors Oscar W. Boultinghouse and Michael J. Davis from the early 1990s to 2007, Glenn G. Hammack led the University of Texas Medical Branch (UTMB) development of a pioneering telehealth program in Texas state prisons. The three UTMB alumni would, in 2007, co-found telehealth provider NuPhysician.[243]

The first interactive telemedicine system operating over standard telephone lines, designed to remotely diagnose and treat patients requiring cardiac resuscitation (defibrillation), was developed and launched by an American company, MedPhone Corporation, in 1989. A year later, under the leadership of its president and CEO S. Eric Wachtel, MedPhone introduced a mobile cellular version, the MDPhone. Twelve hospitals in the U.S. served as receiving and treatment centers.[244]

As the expansion of telehealth continued, in 1990 Maritime Health Services (MHS) played a major part in initiating occupational health services at sea. It placed a medical officer aboard a Pacific trawler, allowing round-the-clock communication with a physician. The system that allows for this is called the Medical Consultation Network (MedNet), a video-chatting system with live audio and video so the physician on the other end of the call can see and hear what is happening. MedNet can be used from anywhere, not just aboard ships.[228] Being able to provide on-site visual information gives remote patients access to expert emergency help and medical attention, saving money as well as lives. This has created a demand for at-home monitoring.
At-home care has also become a large part of telehealth. Doctors or nurses now make pre-op and post-op phone calls to check in. There are also companies such as Lifeline, which gives the elderly a button to press in case of an emergency; the button automatically calls for emergency help. If someone has surgery and is then sent home, telehealth allows physicians to see how the patient is progressing without the patient having to stay in the hospital. TeleDiagnostic Systems of San Francisco has created a device that monitors sleep patterns, so people with sleep disorders do not have to stay the night at the hospital.[228] Another at-home device was the Wanderer, which was attached to Alzheimer's patients or people who had dementia so that, when they wandered off, staff were notified and could go after them. All these devices extended healthcare beyond hospitals, meaning more people could be helped efficiently.

The advent of high-speed Internet, and the increasing adoption of ICT in traditional methods of care, spurred advances in telehealth delivery.[14] Increased access to portable devices, like laptops and mobile phones, made telehealth more plausible; the industry then expanded into health promotion, prevention and education.[1][3][130]

In 2002, G. Byron Brooks, a former NASA surgeon and engineer who had also helped manage the UTMB Telemedicine program, co-founded Teladoc in Dallas, Texas, which was then launched in 2005 as the first national telehealth provider.[245] In the 2010s, the integration of smart home telehealth technologies, such as health and wellness devices, software, and integrated IoT, accelerated the industry. Healthcare organizations are increasingly adopting self-tracking and cloud-based technologies and innovative data analytic approaches to accelerate telehealth delivery.[citation needed][246] In 2015, Mercy Health system opened Mercy Virtual, in Chesterfield, Missouri, the world's first medical facility dedicated solely to telemedicine.[247]

Telehealth expanded significantly during the COVID-19 pandemic, becoming a vital means of medical communication. It has been argued that it allows doctors to return to humanizing the patient, forcing them to listen to what people have to say and make a diagnosis from there.[248] Studies have demonstrated high trust in telehealth expressed by patients during the COVID-19 pandemic.[249] Among patients with inflammatory bowel disease, 4 out of 5 considered telemedicine a valuable tool for their management, and 85% wanted to have a telemedicine service at their center; however, only 1 out of 4 believed that it could guarantee the same level of care as an in-person visit.[250] Some researchers claim telehealth creates an environment that encourages greater vulnerability among patients in self-disclosure in the practice of narrative medicine.[248] Telehealth allows video calls and chats from across the world for checking in on patients and speaking to physicians. Universities are now ensuring that medical students graduate with proficient telehealth communication skills.[251] Experts suggest that telehealth has become a vital part of medical care, with more virtual options becoming available. The pandemic era also identified the "potential to significantly improve global health equity" through telehealth and other virtual care technologies.[252]

A retrospective study in 2023 examining 1,589,014 adult primary care patients within an integrated healthcare system in the U.S.
found that patients who initially had a telehealth visit were less likely to receive prescriptions, lab tests, or imaging compared to those who had an in-office visit. However, these same telehealth patients had higher rates of in-person follow-up visits. The study revealed that, out of the 2,357,598 primary care visits, 49.2% were in-office, 31.3% were telephone visits, and 19.5% were video visits. Office visits led to higher rates of prescriptions (46.8%), lab tests (41.4%), and imaging (20.5%) compared to video (38.4%, 27.4%, and 11.9%, respectively) and telephone visits (34.6%, 22.8%, and 8.7%, respectively). In contrast, patients who had telephone or video visits were more likely to have in-person follow-up visits, with 7.6% of telephone, 6.2% of video, and only 1.3% of office visit patients returning for primary care follow-up. Furthermore, rates of emergency department visits and hospitalizations were higher for those who had telemedicine visits, though these differences were minimal. The study's limitations include the inability to generalize findings to healthcare settings without telemedicine services or to patients without insurance or a primary care provider. The reasons for increased in-person healthcare utilization were also not captured, and long-term follow-up was not conducted.[253]

Media related to Telemedicine at Wikimedia Commons
https://en.wikipedia.org/wiki/Telehealth
Telehealth is the distribution of health-related services and information via electronic information and telecommunication technologies.[1] It allows long-distance patient and clinician contact, care, advice, reminders, education, intervention, monitoring, and remote admissions.[2][3] Telemedicine is sometimes used as a synonym, or is used in a more limited sense to describe remote clinical services, such as diagnosis and monitoring. When rural settings, lack of transport, a lack of mobility, conditions due to outbreaks, epidemics or pandemics, decreased funding, or a lack of staff restrict access to care, telehealth may bridge the gap[4] and can even improve retention in treatment,[5] as well as provide distance-learning; meetings, supervision, and presentations between practitioners; online information and health data management; and healthcare system integration.[6] Telehealth could include two clinicians discussing a case over video conference; a robotic surgery occurring through remote access; physical therapy done via digital monitoring instruments, live feed and application combinations; tests being forwarded between facilities for interpretation by a higher specialist; home monitoring through continuous sending of patient health data; client-to-practitioner online conference; or even videophone interpretation during a consult.[1][2][6]

Telehealth is sometimes discussed interchangeably with telemedicine, the latter being more common than the former. The Health Resources and Services Administration distinguishes telehealth from telemedicine in its scope, defining telemedicine only as describing remote clinical services, such as diagnosis and monitoring, while telehealth includes preventative, promotive, and curative care delivery.[1] This includes the above-mentioned non-clinical applications, like administration and provider education.[2][3]

The United States Department of Health and Human Services states that the term telehealth includes "non-clinical services, such as provider training, administrative meetings, and continuing medical education", and that the term telemedicine means "remote clinical services".[7] The World Health Organization uses telemedicine to describe all aspects of health care including preventive care.[8] The American Telemedicine Association uses the terms telemedicine and telehealth interchangeably, although it acknowledges that telehealth is sometimes used more broadly for remote health not involving active clinical treatments.[9]

eHealth is another related term, used particularly in the U.K.
and Europe, as an umbrella term that includes telehealth, electronic medical records, and other components of health information technology.[10]

Telehealth requires good Internet access by participants, usually in the form of a strong, reliable broadband connection, and broadband mobile communication technology of at least the fourth generation (4G) or long-term evolution (LTE) standard to overcome issues with video stability and bandwidth restrictions.[11][12][13] As broadband infrastructure has improved, telehealth usage has become more widely feasible.[1][2]

Healthcare providers often begin telehealth with a needs assessment, which identifies hardships that telehealth can ease, such as travel time, costs or time off work.[1][2] Collaborators such as technology companies can ease the transition.[1]

Delivery can come within four distinct domains: live video (synchronous), store-and-forward (asynchronous), remote patient monitoring, and mobile health.[14] Audio-based telemedicine, primarily through telephone consultations, has been studied as a tool for managing chronic conditions. A systematic review of 40 randomized controlled trials found that audio-based care was generally comparable to in-person or video care, though with low to very low certainty of evidence.[15]

Store-and-forward telemedicine involves acquiring medical data (like medical images, biosignals, etc.) and then transmitting this data to a doctor or medical specialist at a convenient time for assessment offline.[9] It does not require the presence of both parties at the same time.[16] Dermatology (cf. teledermatology), radiology, and pathology are common specialties that are conducive to asynchronous telemedicine. A properly structured medical record, preferably in electronic form, should be a component of this transfer. The 'store-and-forward' process requires the clinician to rely on a history report and audio/video information in lieu of a physical examination.[9]

Remote monitoring, also known as self-monitoring or testing, enables medical professionals to monitor a patient remotely using various technological devices. This method is primarily used for managing chronic diseases or specific conditions, such as heart disease, diabetes mellitus, or asthma.
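As a rough illustration of the remote-monitoring pattern just described, the sketch below shows how home-reported vitals might be checked against clinician-configured thresholds so that only out-of-range readings are escalated. All names, data structures, and threshold values are illustrative assumptions, not a real telehealth API.

```python
# Hypothetical sketch of a remote-monitoring triage step: a home device
# reports vitals, and clinician-defined thresholds decide whether a
# reading is simply logged or flagged for human review. The Reading
# structure, field names, and thresholds are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Reading:
    patient_id: str
    taken_at: datetime
    systolic_bp: int      # mmHg
    heart_rate: int       # beats per minute
    spo2: float           # blood-oxygen saturation, percent

# Example per-patient limits a care team might configure.
THRESHOLDS = {
    "systolic_bp": (90, 160),
    "heart_rate": (50, 110),
    "spo2": (92.0, 100.0),
}

def triage(reading: Reading) -> list[str]:
    """Return a list of out-of-range vitals; an empty list means routine logging."""
    alerts = []
    for field, (low, high) in THRESHOLDS.items():
        value = getattr(reading, field)
        if not low <= value <= high:
            alerts.append(f"{field}={value} outside [{low}, {high}]")
    return alerts

if __name__ == "__main__":
    r = Reading("patient-042", datetime.now(), systolic_bp=172,
                heart_rate=88, spo2=96.5)
    flags = triage(r)
    if flags:
        print("Flag for clinician review:", "; ".join(flags))
    else:
        print("Within configured ranges; stored for trend review.")
```

The point of the pattern is that routine readings are stored for trend review while only exceptions generate work for the care team, which is what makes continuous home monitoring practical at scale.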
Remote monitoring services can provide health outcomes comparable to traditional in-person patient encounters, supply greater satisfaction to patients, and may be cost-effective.[17] Examples include home-based nocturnal dialysis[18] and improved joint management.[19]

Electronic consultations are possible through interactive telemedicine services which provide real-time interactions between patient and provider.[16] Videoconferencing has been used in a wide range of clinical disciplines and settings for various purposes, including management, diagnosis, counseling, and monitoring of patients.[20] Videotelephony comprises the technologies for the reception and transmission of audio-video signals by users at different locations for communication between people in real time.[21] At the dawn of the technology, videotelephony also included image phones which would exchange still images between units every few seconds over conventional POTS-type telephone lines, essentially the same as slow-scan TV systems.[citation needed] Currently, videotelephony is particularly useful to the deaf and speech-impaired, who can use it with sign language and with a video relay service, as well as to those with mobility issues or those located in distant places who are in need of telemedical or tele-educational services.[22][23]

Common daily emergency telemedicine is performed by SAMU Regulator Physicians in France, Spain, Chile, and Brazil. Aircraft and maritime emergencies are also handled by SAMU centres in Paris, Lisbon and Toulouse.[24] A recent study identified three major barriers to the adoption of telemedicine in emergency and critical care units. Emergency telehealth is also gaining acceptance in the United States. There are several modalities currently being practiced that include but are not limited to TeleTriage, TeleMSE, and ePPE. An example of telehealth in the field is when EMS arrives on the scene of an incident and is able to take an EKG that is then sent directly to a physician at the hospital to be read, allowing for instant care and management.[26]

Telenursing refers to the use of telecommunications and information technology to provide nursing services in health care whenever a large physical distance exists between patient and nurse, or between any number of nurses. As a field, it is part of telehealth, and it has many points of contact with other medical and non-medical applications, such as telediagnosis, teleconsultation, and telemonitoring. Telenursing is achieving significant growth rates in many countries due to several factors: the preoccupation with reducing the costs of health care, an increase in the aging and chronically ill population, and the increase in coverage of health care to distant, rural, small or sparsely populated regions. Among its benefits, telenursing may help solve increasing shortages of nurses, reduce distances and travel time, and keep patients out of hospital. A greater degree of job satisfaction has been registered among telenurses.[27]

In Australia, during January 2014, Melbourne tech startup Small World Social collaborated with the Australian Breastfeeding Association to create the first hands-free breastfeeding Google Glass application for new mothers.[28] The application, named Google Glass Breastfeeding app trial, allows mothers to nurse their baby while viewing instructions about common breastfeeding issues (latching on, posture, etc.)
or call a lactation consultant via a secure Google Hangout,[29] who can view the issue through the mother's Google Glass camera.[30] The trial was successfully concluded in Melbourne in April 2014, and 100% of participants were breastfeeding confidently.[31][32][33][34]

Palliative care is an interdisciplinary medical caregiving approach aimed at optimizing quality of life and mitigating suffering among people with serious, complex, and often terminal illnesses. In the past, palliative care was a disease-specific approach, but today the World Health Organization (WHO) takes a broader approach, suggesting that palliative care should be applied as early as possible to any chronic and fatal illness. As in many aspects of health care, telehealth is increasingly being used in palliative care[35] and is often referred to as telepalliative care.[36] The types of technology applied in telepalliative care are typically telecommunication technologies, such as video conferencing or messaging for follow-up, or digital symptom assessments through digital questionnaires generating alerts to health care professionals (a minimal sketch of such alerting appears below).[37] Telepalliative care has been shown to be a feasible approach to delivering palliative care among patients, caregivers and health care professionals.[38][37][39] Telepalliative care can provide an added support system that enables patients to remain at home through self-reporting of symptoms and tailoring of care to specific patients.[39] Studies have shown that the use of telehealth in palliative care is mostly well received by patients, and that telepalliative care may improve access to health care professionals at home and enhance feelings of security and safety among patients receiving palliative care.[38] Further, telepalliative care may enable more efficient utilization of healthcare resources, promote collaboration between different levels of healthcare, and make healthcare professionals more responsive to changes in patients' condition.[37]

Challenging aspects of the use of telehealth in palliative care have also been described. Generally, palliative care is a diverse medical specialty, involving interdisciplinary professionals from different professional traditions and cultures, delivering care to a heterogeneous cohort of patients with diverse diseases, conditions and symptoms. This makes it a challenge to develop telehealth that is suitable for all patients and in all contexts of palliative care. Some of the barriers to telepalliative care relate to inflexible reporting of complex and fluctuating symptoms and circumstances using electronic questionnaires.[39] Further, palliative care emphasizes a holistic approach that should address existential, spiritual and mental distress related to serious illness.[40] However, few studies have included the self-reporting of existential or spiritual concerns, emotions, and well-being.[39] Healthcare professionals may also be uncomfortable providing emotional or psychological care remotely.[37] Palliative care has been characterized as high-touch rather than high-tech, limiting the interest in applying technological advancements when developing interventions.[41] To optimize the advantages and minimize the challenges of the use of telehealth in home-based palliative care, future research should include users in the design and development process.
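The questionnaire-driven alerting mentioned above could, in its simplest form, look like the following sketch: a hypothetical portal where patients rate symptoms on a 0–10 scale and the care team sets escalation limits. The item names, limits, and notification step are illustrative assumptions rather than any specific telepalliative system.

```python
# Illustrative sketch of questionnaire-driven alerting: patients
# self-report symptom intensities on a 0-10 scale, and the system
# notifies the care team when any single symptom or the overall
# burden crosses a configured limit. All values are assumptions.
SINGLE_ITEM_LIMIT = 7    # any one symptom this severe triggers an alert
TOTAL_BURDEN_LIMIT = 30  # combined score suggesting overall deterioration

def assess(responses: dict[str, int]) -> list[str]:
    """Turn one submitted questionnaire into zero or more alert messages."""
    alerts = [f"{symptom} rated {score}/10"
              for symptom, score in responses.items()
              if score >= SINGLE_ITEM_LIMIT]
    if sum(responses.values()) >= TOTAL_BURDEN_LIMIT:
        alerts.append(f"total symptom burden {sum(responses.values())}")
    return alerts

# One day's self-report; in practice this would arrive via a patient portal.
today = {"pain": 8, "nausea": 3, "fatigue": 6, "anxiety": 5, "appetite": 4}
for message in assess(today):
    print("Notify care team:", message)
```

Fixed limits like these are exactly the "inflexible reporting" criticized in the literature cited above, which is one reason involving users in the design process matters.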
Understanding the potential of telehealth to support therapeutic relationships between patients and health care professionals, and being aware of the possible difficulties and tensions it may create, are critical to its successful and acceptable use.[37][39]

Telepharmacy is the delivery of pharmaceutical care via telecommunications to patients in locations where they may not have direct contact with a pharmacist. It is an instance of the wider phenomenon of telemedicine, as implemented in the field of pharmacy. Telepharmacy services include drug therapy monitoring, patient counseling, prior authorization and refill authorization for prescription drugs, and monitoring of formulary compliance with the aid of teleconferencing or videoconferencing. Remote dispensing of medications by automated packaging and labeling systems can also be thought of as an instance of telepharmacy. Telepharmacy services can be delivered at retail pharmacy sites or through hospitals, nursing homes, or other medical care facilities. This approach allows patients in remote or underserved areas to receive pharmacy services that would otherwise be unavailable to them, enhancing access to care and ensuring continuity in medication management.[42] Health outcomes appear similar when pharmacy services are delivered by telepharmacy compared to traditional service delivery.[43] The term can also refer to the use of videoconferencing in pharmacy for other purposes, such as providing education, training, and management services to pharmacists and pharmacy staff remotely.[44]

Telepsychiatry, or telemental health, refers to the use of telecommunications technology (mostly videoconferencing and phone calls) to deliver psychiatric care remotely for people with mental health conditions. It is a branch of telemedicine.[45][46] Telepsychiatry can be effective in treating people with mental health conditions. In the short term it can be as acceptable and effective as face-to-face care.[47] Research also suggests comparable therapeutic factors, such as changes in problematic thinking or behaviour.[48] It can improve access to mental health services for some but might also represent a barrier for those lacking access to a suitable device, the internet or the necessary digital skills. Factors such as poverty that are associated with lack of internet access are also associated with greater risk of mental health problems, making digital exclusion an important problem for telemental health services.[47]

Teledentistry is the use of information technology and telecommunications for dental care, consultation, education, and public awareness in the same manner as telehealth and telemedicine.

Tele-audiology (or teleaudiology) is the utilization of telehealth to provide audiological services and may include the full scope of audiological practice. The term was first used by Gregg Givens in 1999 in reference to a system being developed at East Carolina University in North Carolina, US.[50]

Teleneurology describes the use of mobile technology to provide neurological care remotely, including care for stroke, movement disorders like Parkinson's disease, seizure disorders (e.g., epilepsy), and more. Teleneurology offers the opportunity to improve health care access for billions around the globe, from those living in urban locations to those in remote, rural locations. Evidence shows that individuals with Parkinson's disease prefer personal connection with a remote specialist to their local clinician.
Such home care is convenient but requires access to and familiarity with the Internet.[51][52] A 2017 randomized controlled trial of "virtual house calls", or video visits, with individuals diagnosed with Parkinson's disease demonstrated patient preference for the remote specialist over their local clinician after one year.[52] Teleneurology for patients with Parkinson's disease has been found to be cheaper than in-person visits by reducing transportation and travel time.[53][54] A recent systematic review by Ray Dorsey et al.[51] describes both the limitations and the potential benefits of teleneurology in improving care for patients with chronic neurological conditions, especially in low-income countries. White, well-educated and technologically savvy people are the biggest consumers of telehealth services for Parkinson's disease, as compared to ethnic minorities in the US.[53][54]

Telemedicine in neurosurgery was historically used primarily for follow-up visits by patients who had to travel far to undergo surgery.[55] In the last decade, telemedicine was also used for remote ICU rounding, as well as for prompt evaluation of acute ischemic stroke and administration of IV alteplase in conjunction with neurology.[56][57] From the onset of the COVID-19 pandemic, there was a rapid surge in the use of telemedicine across all divisions of neurosurgery: vascular, oncology, spine, and functional neurosurgery. It has gained popularity not only for follow-up visits but also for seeing new patients or following established patients, regardless of whether they underwent surgery.[58][59] Telemedicine is not limited to direct patient care only; a number of new research groups and companies are focused on using telemedicine for clinical trials involving patients with neurosurgical diagnoses.

Teleneuropsychology is the use of telehealth/videoconference technology for the remote administration of neuropsychological tests. Neuropsychological tests are used to evaluate the cognitive status of individuals with known or suspected brain disorders and provide a profile of cognitive strengths and weaknesses. Through a series of studies, there is growing support in the literature showing that remote videoconference-based administration of many standard neuropsychological tests results in test findings similar to traditional in-person evaluations, thereby establishing the basis for the reliability and validity of teleneuropsychological assessment.[60][61][62][63][64][65][66]

Telenutrition refers to the use of video conferencing or telephony to provide online consultation by a nutritionist or dietician. Patients or clients upload their vital statistics, diet logs, food pictures, and so on to a telenutrition portal, which the nutritionist or dietician then uses to analyze their current health condition. The nutritionist or dietician can then set goals for their respective clients or patients and monitor their progress regularly through follow-up consultations. Telenutrition portals can help people seek remote consultation for themselves and/or their family. This can be extremely helpful for elderly or bedridden patients, who can consult their dietician from the comfort of their homes. Telenutrition was shown to be feasible, and the majority of patients trusted nutritional televisits in place of follow-up visits that were scheduled but could not be provided during the lockdown of the COVID-19 pandemic.[67]

Telerehabilitation (or e-rehabilitation[68][69]) is the delivery of rehabilitation services over telecommunication networks and the Internet.
Most types of services fall into two categories: clinical assessment (the patient's functional abilities in his or her environment) and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are: neuropsychology, speech–language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic because the patient has a disability or because of travel time. Telerehabilitation also allows experts in rehabilitation to engage in clinical consultation at a distance.

Most telerehabilitation is highly visual. As of 2014, the most commonly used mediums are webcams, videoconferencing, phone lines, videophones, and webpages containing rich web applications. The visual nature of telerehabilitation technology limits the types of rehabilitation services that can be provided. It is most widely used for neuropsychological rehabilitation; fitting of rehabilitation equipment such as wheelchairs, braces, or artificial limbs; and in speech-language pathology. Rich web applications for neuropsychological rehabilitation (cognitive rehabilitation) of cognitive impairment (from many etiologies) were first introduced in 2001. This endeavor has expanded as a teletherapy application for cognitive skills enhancement programs for school children. Tele-audiology (hearing assessments) is a growing application. Physical therapy and psychology interventions delivered via telehealth may result in outcomes similar to those delivered in person for a range of health conditions.[70]

Two important areas of telerehabilitation research are (1) demonstrating equivalence of assessment and therapy to in-person assessment and therapy, and (2) building new data collection systems to digitize information that a therapist can use in practice. Ground-breaking research in telehaptics (the sense of touch) and virtual reality may broaden the scope of telerehabilitation practice in the future.

In the United States, the National Institute on Disability and Rehabilitation Research (NIDRR)[71] supports research and the development of telerehabilitation. NIDRR's grantees include the "Rehabilitation Engineering and Research Center" (RERC) at the University of Pittsburgh, the Rehabilitation Institute of Chicago, the State University of New York at Buffalo, and the National Rehabilitation Hospital in Washington, D.C. Other federal funders of research are the Veterans Health Administration, the Health Services Research Administration in the US Department of Health and Human Services, and the Department of Defense.[72] Outside the United States, excellent research is conducted in Australia and Europe.

Only a few health insurers in the United States, and about half of Medicaid programs,[73] reimburse for telerehabilitation services. If research shows that teleassessments and teletherapy are equivalent to clinical encounters, it is more likely that insurers and Medicare will cover telerehabilitation services. In India, the Indian Association of Chartered Physiotherapists (IACP) provides telerehabilitation facilities. With the support and collaboration of local clinics, private practitioners, and IACP members, IACP runs the facility, named Telemedicine. IACP maintains an internet-based list of its members on its website, through which patients can make online appointments.

Telemedicine can be utilized to improve the efficiency and effectiveness of care delivery in a trauma environment.
Examples include:

Telemedicine for trauma triage: using telemedicine, trauma specialists can interact with personnel on the scene of a mass casualty or disaster situation, via the internet using mobile devices, to determine the severity of injuries. They can provide clinical assessments and determine whether those injured must be evacuated for necessary care. Remote trauma specialists can provide the same quality of clinical assessment and plan of care as a trauma specialist located physically with the patient.[74]

Telemedicine for intensive care unit (ICU) rounds: telemedicine is also being used in some trauma ICUs to reduce the spread of infections. Rounds are usually conducted at hospitals across the country by a team of approximately ten or more people, including attending physicians, fellows, residents, and other clinicians. This group usually moves from bed to bed in a unit, discussing each patient. This aids the transition of care for patients from the night shift to the morning shift, but it also serves as an educational experience for new residents on the team. A new approach features the team conducting rounds from a conference room using a video-conferencing system. The trauma attending, residents, fellows, nurses, nurse practitioners, and pharmacists are able to watch a live video stream from the patient's bedside. They can see the vital signs on the monitor, view the settings on the respiratory ventilator, and/or view the patient's wounds. Video-conferencing allows remote viewers to conduct two-way communication with clinicians at the bedside.[75]

Telemedicine for trauma education: some trauma centers are delivering trauma education lectures to hospitals and health care providers worldwide using video conferencing technology. Each lecture provides fundamental principles, first-hand knowledge, and evidence-based methods for critical analysis of established clinical practice standards, and comparisons to newer advanced alternatives. The various sites collaborate and share their perspectives based on location, available staff, and available resources.[76]

Telemedicine in the trauma operating room: trauma surgeons are able to observe and consult on cases from a remote location using video conferencing. This capability allows the attending to view the residents in real time. The remote surgeon has the capability to control the camera (pan, tilt, and zoom) to get the best angle of the procedure, while at the same time providing expertise in order to deliver the best possible care to the patient.[77]

ECGs, or electrocardiographs, can be transmitted using telephone and wireless. Willem Einthoven, the inventor of the ECG, actually did tests with the transmission of ECGs via telephone lines, because the hospital did not allow him to move patients outside the hospital to his laboratory for testing of his new device. In 1906, Einthoven came up with a way to transmit the data from the hospital directly to his lab.[78][79]

One of the oldest known telecardiology systems for teletransmission of ECGs was established in Gwalior, India, in 1975 at GR Medical College by Ajai Shanker, S. Makhija, and P.K. Mantri, using an indigenous technique for the first time in India. This system enabled wireless transmission of the ECG from a moving ICU van or the patient's home to the central station in the ICU of the department of Medicine. Wireless transmission used frequency modulation, which eliminated noise. Transmission was also done through telephone lines.
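The general principle behind this kind of link can be sketched in a few lines: each ECG sample deflects an audible tone away from a baseline carrier, the tone is carried over an ordinary voice channel, and the receiver maps frequency back to voltage. The 1500 Hz baseline and 500–2500 Hz span used below follow the Gwalior system's parameters described in the next paragraph; the sampling rate, code structure, and signal shapes are illustrative assumptions.

```python
# A minimal, hypothetical sketch of audio frequency modulation of an ECG:
# each normalized ECG sample shifts a tone away from the baseline carrier,
# and the receiving demodulator maps frequency back to an ECG level.
import math

AUDIO_RATE = 8000          # samples/s, telephone-grade audio (assumed)
BASELINE_HZ = 1500.0       # carrier frequency at ECG baseline
SPAN_HZ = 1000.0           # +/- deviation, giving a 500-2500 Hz range

def modulate(ecg: list[float]) -> list[float]:
    """FM-encode normalized ECG samples (-1.0..1.0) as an audio waveform."""
    phase, audio = 0.0, []
    for sample in ecg:
        freq = BASELINE_HZ + SPAN_HZ * max(-1.0, min(1.0, sample))
        phase += 2 * math.pi * freq / AUDIO_RATE
        audio.append(math.sin(phase))
    return audio

def frequency_to_voltage(freq_hz: float) -> float:
    """The demodulator's inverse mapping: received tone back to ECG level."""
    return (freq_hz - BASELINE_HZ) / SPAN_HZ

# A flat baseline with one crude "R-wave" spike, encoded for the line.
ecg = [0.0] * 40 + [0.9] * 4 + [0.0] * 40
line_audio = modulate(ecg)
print(len(line_audio), "audio samples;",
      "a 2400 Hz tone maps back to", frequency_to_voltage(2400.0))
```

Frequency modulation makes such a link robust because amplitude noise picked up on the line does not disturb the encoded value, which lives entirely in the tone's frequency; this is the noise elimination the Gwalior designers relied on.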
The ECG output was connected to the telephone input using a modulator that converted the ECG into high-frequency sound. At the other end, a demodulator reconverted the sound into the ECG with good gain accuracy. The ECG was converted to sound waves with a frequency varying from 500 Hz to 2500 Hz, with 1500 Hz at baseline. This system was also used to monitor patients with pacemakers in remote areas. The central control unit at the ICU was able to correctly interpret arrhythmias. This technique helped medical aid reach remote areas.[80] In addition, electronic stethoscopes can be used as recording devices, which is helpful for purposes of telecardiology. There are many examples of successful telecardiology services worldwide.

In Pakistan, three pilot projects in telemedicine were initiated by the Ministry of IT & Telecom, Government of Pakistan (MoIT), through the Electronic Government Directorate in collaboration with Oratier Technologies (a pioneer company within Pakistan dealing with healthcare and HMIS) and PakDataCom (a bandwidth provider). Three hub stations were linked via the Pak Sat-I communications satellite, and four districts were linked with another hub. A 312 kbit/s link was also established with remote sites, and 1 Mbit/s bandwidth was provided at each hub. Three hubs were established: the Mayo Hospital (the largest hospital in Asia), JPMC Karachi, and Holy Family Rawalpindi. Twelve remote sites were connected, and an average of 1,500 patients were treated per month per hub. The project was still running smoothly after two years.[81]

Wireless ambulatory ECG technology, moving beyond previous ambulatory ECG technology such as the Holter monitor, now includes smartphones and Apple Watches, which can perform at-home cardiac monitoring and send the data to a physician via the Internet.[82]

Teleradiology is the ability to send radiographic images (X-rays, CT, MR, PET/CT, SPECT/CT, MG, US...) from one location to another.[83] For this process to be implemented, three essential components are required: an image-sending station, a transmission network, and a receiving-image review station. The most typical implementation is two computers connected via the Internet. The computer at the receiving end will need a high-quality display screen that has been tested and cleared for clinical purposes. Sometimes the receiving computer will have a printer for convenience. The teleradiology process begins at the image-sending station, which requires the radiographic image and a modem or other connection. The image is scanned and then sent via the network connection to the receiving computer.

Today's high-speed broadband-based Internet enables the use of new technologies for teleradiology: the image reviewer can now have access to distant servers in order to view an exam. Therefore, they do not need particular workstations to view the images; a standard personal computer (PC) and digital subscriber line (DSL) connection is enough to reach Keosys' central server. No particular software is necessary on the PC, and the images can be reached from anywhere in the world. Teleradiology is the most popular use for telemedicine and accounts for at least 50% of all telemedicine usage.

Telepathology is the practice of pathology at a distance.
It uses telecommunications technology to facilitate the transfer of image-rich pathology data between distant locations for the purposes of diagnosis, education, and research.[84][85] The performance of telepathology requires that a pathologist select the video images for analysis and render diagnoses. The use of "television microscopy", the forerunner of telepathology, did not require that a pathologist have physical or virtual "hands-on" involvement in the selection of microscopic fields of view for analysis and diagnosis.

A pathologist, Ronald S. Weinstein, M.D., coined the term "telepathology" in 1986. In an editorial in a medical journal, Weinstein outlined the actions that would be needed to create remote pathology diagnostic services.[86] He and his collaborators published the first scientific paper on robotic telepathology.[87] Weinstein was also granted the first U.S. patents for robotic telepathology systems and telepathology diagnostic networks.[88] Weinstein is known to many as the "father of telepathology".[89] In Norway, Eide and Nordrum implemented the first sustainable clinical telepathology service in 1989,[90] which is still in operation decades later. A number of clinical telepathology services have benefited many thousands of patients in North America, Europe, and Asia. Telepathology has been successfully used for many applications, including the rendering of histopathology tissue diagnoses at a distance, education, and research. Although digital pathology imaging, including virtual microscopy, is the mode of choice for telepathology services in developed countries, analog telepathology imaging is still used for patient services in some developing countries.

Teledermatology allows dermatology consultations over a distance using audio, visual and data communication, and has been found to improve efficiency, access to specialty care, and patient satisfaction.[91][92] Applications comprise health care management, such as diagnosis, consultation and treatment, as well as (continuing medical) education.[93][94][95] The dermatologists Perednia and Brown were the first to coin the term teledermatology, in 1995, when they described the value of a teledermatologic service in a rural area underserved by dermatologists.[96]

Teleophthalmology is a branch of telemedicine that delivers eye care through digital medical equipment and telecommunications technology. Today, applications of teleophthalmology encompass access to eye specialists for patients in remote areas, ophthalmic disease screening, diagnosis and monitoring, as well as distance learning. Teleophthalmology may help reduce disparities by providing remote, low-cost screening tests such as diabetic retinopathy screening to low-income and uninsured patients.[97][98] In Mizoram, India, a hilly area with poor roads, teleophthalmology provided care to over 10,000 patients between 2011 and 2015. These patients were examined by ophthalmic assistants locally, but surgery was done by appointment after the patient images were viewed online by eye surgeons in the hospital 6–12 hours away. Instead of an average of five trips for, say, a cataract procedure, only one was required, for the surgery itself, as even post-op care like removal of stitches and appointments for glasses was done locally. There were large cost savings in travel as well.[99]

In the United States, some companies allow patients to complete an online visual exam and within 24 hours receive a prescription from an optometrist valid for eyeglasses, contact lenses, or both.
Some US states, such as Indiana, have attempted to ban these companies from doing business.[100]

Remote surgery (also known as telesurgery) is the ability for a doctor to perform surgery on a patient even though they are not physically in the same location. It is a form of telepresence. Remote surgery combines elements of robotics; cutting-edge telecommunications, such as high-speed data connections; telehaptics; and elements of management information systems. While the field of robotic surgery is fairly well established, most of these robots are controlled by surgeons at the location of the surgery. Remote surgery is essentially remote work for surgeons, where the physical distance between the surgeon and the patient is immaterial. It promises to allow the expertise of specialized surgeons to be available to patients worldwide, without the need for patients to travel beyond their local hospital.[101] In remote surgery, the surgeon operates through a robotic teleoperator system, which may give tactile feedback to the user. A critical limiting factor is the speed, latency and reliability of the communication system between the surgeon and the patient, though trans-Atlantic surgeries have been demonstrated.

Telemedicine has been used globally to increase access to abortion care, specifically medical abortion, in environments where few abortion care providers exist or abortion is legally restricted. Clinicians are able to virtually provide counseling, review screening tests, observe the administration of an abortion medication, and directly mail abortion pills to people.[102] In 2004, Women on Web (WoW), Amsterdam, started offering online consultations, mostly to people living in areas where abortion was legally restricted, informing them how to safely use medical abortion drugs to end a pregnancy.[102] People contact the Women on Web service online; physicians review any necessary lab results or ultrasounds, mail mifepristone and misoprostol pills to people, then follow up through online communication.[103]

In the United States, medical abortion was introduced as a telehealth service in Iowa by Planned Parenthood of the Heartland in 2008 to allow a patient at one health facility to communicate via secure video with a health provider at another facility.[104] In this model a person seeking abortion care must come to a health facility. An abortion care provider communicates with the person located at another site using clinic-to-clinic videoconferencing to provide medical abortion after screening tests and consultation with clinic staff. In 2018, the website Aid Access was launched by the founder of Women on Web, Rebecca Gomperts. It offers a similar service to Women on Web in the United States, but the medications are prescribed through an Indian pharmacy, then mailed to the United States.

The TelAbortion study conducted by Gynuity Health Projects, with special approval from the U.S. Food and Drug Administration (FDA), aims to increase access to medical abortion care without requiring an in-person visit to a clinic.[105][106][104] This model was expanded during the COVID-19 pandemic and as of March 2020 exists in 13 U.S. states, having enrolled over 730 people in the study.[107][106] The person receives counseling and instruction from an abortion care provider via videoconference from a location of their choice.
The medications necessary for the abortion, mifepristone and misoprostol, are mailed directly to the person, who then has a follow-up video consultation in 7–14 days. A systematic review of telemedicine abortion found the practice to be safe, effective, efficient, and satisfactory.[102] In the United States, eighteen states require the clinician to be physically present during the administration of medications for abortion, which effectively bans telehealth for medication abortion: five states explicitly ban telemedicine for medication abortion, while thirteen states require the prescriber (usually required to be a physician) to be physically present with the patient.[108][109] In the UK, the Royal College of Obstetricians and Gynaecologists approved a no-test protocol for medication abortion, with mifepristone available through a minimal-contact pick-up or by mail.[110]

Telemedicine can facilitate specialty care delivered by primary care physicians, according to a controlled study of the treatment of hepatitis C.[111] Various specialties are contributing to telemedicine in varying degrees. Other specialist conditions for which telemedicine has been used include perinatal mental health.[112]

In light of the COVID-19 pandemic, primary care physicians have relied on telehealth to continue to provide care in outpatient settings.[113] The transition to virtual health has been beneficial in giving patients access to care (especially care that does not require a physical exam, e.g. medication changes or minor health updates) while avoiding exposing patients to the risk of COVID-19. This included providing services to pediatric patients during the pandemic, where issues of last-minute cancellation and rescheduling were frequently related to a lack of technical capability and engagement, two factors often understudied in the literature.[114] Telemedicine has also been beneficial in facilitating medical education for students while still allowing for adequate social distancing during the COVID-19 pandemic. Many medical schools have shifted to alternative forms of virtual curriculum and are still able to engage in meaningful telehealth encounters with patients.[115][116]

Medication-assisted treatment (MAT) is the treatment of opioid use disorder (OUD) with medications, often in combination with behavioral therapy.[117] In response to the COVID-19 pandemic, the Drug Enforcement Administration permitted the use of telemedicine to start or maintain people with OUD on buprenorphine (trade name Suboxone) without the need for an initial in-person examination.[118] On March 31, 2020, QuickMD became the first national TeleMAT service in the United States to provide medication-assisted treatment with Suboxone online, without the need for an in-person visit, with others announcing plans to follow soon.[119]

Telehealth is a modern form of health care delivery. Telehealth breaks away from traditional health care delivery by using modern telecommunication systems, including wireless communication methods.[120][121] Traditional health care is legislated through policy to ensure the safety of medical practitioners and patients. Consequently, since telehealth is a new form of health care delivery that is now gathering momentum in the health sector, many organizations have started to legislate its use into policy.[121] In New Zealand, the Medical Council has a statement about telehealth on its website.
This illustrates that the Medical Council has foreseen the importance telehealth will have for the health system and, along with the government, has started to introduce telehealth legislation to practitioners.[122]

Traditionally, telehealth services have been used for specialist treatment. However, there has been a paradigm shift, and telehealth is no longer considered a specialist service.[123] This development has eliminated many access barriers, as medical professionals and patients are able to use wireless communication technologies to deliver health care.[124] This is evident in rural communities. Rural residents typically have to travel longer distances to access healthcare than their urban counterparts, due to physician shortages and healthcare facility closures in these areas.[125][126] Telehealth removes this barrier, as health professionals are able to conduct medical consultations through wireless communication technologies. However, this process depends on both parties having internet access and a sufficient comfort level with technology, which poses barriers for many low-income and rural communities.[124][127][128][129]

Telehealth allows the patient to be monitored between physician office visits, which can improve patient health. Telehealth also allows patients to access expertise that is not available in their local area. This remote patient monitoring ability enables patients to stay at home longer and helps avoid unnecessary hospital time. In the long term, this could potentially result in less burden on the healthcare system and lower consumption of resources.[1][130]

During the COVID-19 pandemic, there were large increases in the use of telemedicine for primary care visits within the United States, rising from an average of 1.4 million visits in Q2 of 2018 and 2019 to 35 million visits in Q2 2020, according to data from IQVIA.[131] The telehealth market was expected to grow at 40% a year in 2021. Use of telemedicine by general practitioners in the UK rose from 20–30% pre-COVID to almost 80% by the beginning of 2021. More than 70% of practitioners and patients were satisfied with this.[132] Boris Johnson was said to have "piled pressure on GPs to offer more in-person consultations", supporting a campaign largely orchestrated by the Daily Mail. The Royal College of General Practitioners said that a patient "right" to face-to-face appointments on request was "undeliverable".[133]

The technological advancement of wireless communication devices is a major development in telehealth.[134] It allows patients to self-monitor their health conditions and to rely less on health care professionals. Furthermore, patients are more willing to stay on their treatment plans, as they are more invested and included in the process through shared decision-making.[135][136] Technological advancement also means that health care professionals are able to use better technologies to treat patients, for example in maternal care[137] and surgery.
A 2023 study published in the Journal of the American College of Surgeons showed telemedicine making a positive impact, with expectations exceeded for those physicians and patients who had consulted online for surgeries.[138] Technological developments in telehealth are essential to improving health care, especially the delivery of healthcare services, as resources are finite and an ageing population is living longer.[134][135][136]

Restrictive licensure laws in the United States require a practitioner to obtain a full license to deliver telemedicine care across state lines. Typically, states with restrictive licensure laws also have several exceptions (varying from state to state) that may release an out-of-state practitioner from the additional burden of obtaining such a license. A number of states require practitioners who seek compensation to frequently deliver interstate care to acquire a full license. If a practitioner serves several states, obtaining a license in each state can be an expensive and time-consuming proposition. Even if the practitioner never practices medicine face-to-face with a patient in another state, he or she must still meet a variety of other individual state requirements, including paying substantial licensure fees, passing additional oral and written examinations, and traveling for interviews. In 2008, the U.S. passed the Ryan Haight Act, which required face-to-face or valid telemedicine consultations prior to receiving a prescription.[139]

State medical licensing boards have sometimes opposed telemedicine; for example, in 2012 electronic consultations were illegal in Idaho, and an Idaho-licensed general practitioner was punished by the board for prescribing an antibiotic, triggering reviews of her licensure and board certifications across the country.[140] Subsequently, in 2015, the state legislature legalized electronic consultations.[140] In 2015, Teladoc filed suit against the Texas Medical Board over a rule that required in-person consultations initially; the judge refused to dismiss the case, noting that antitrust laws apply to state medical boards.[141]

Telehealth allows multiple, varying disciplines to merge and deliver a potentially more uniform level of care, using technology. As telehealth proliferates into mainstream healthcare, it challenges notions of traditional healthcare delivery. Some populations experience better quality, access and more personalized health care.[142][143]

Telehealth can also increase health promotion efforts. These efforts can now be more personalized to the target population, and professionals can extend their help into homes or other private, safe environments in which patients and individuals can practice, ask questions and obtain health information.[130][136][144] Health promotion using telehealth has become increasingly popular in underdeveloped countries, where physical resources are very scarce. There has been a particular push toward mHealth applications, as many areas, even underdeveloped ones, have mobile phone and smartphone coverage.[145][146][147]

In a 2015 article reviewing research on the use of a mobile health application in the United Kingdom,[148] the authors describe how a home-based application helped patients manage and monitor their health and symptoms independently.
The mobile health application allows people to rapidly self-report their symptoms: 95% of patients were able to report their daily symptoms in less than 100 seconds, compared with the 5 minutes (plus commuting) taken to measure vital signs by nurses in hospitals.[149] Online applications allow patients to remain at home while keeping track of the progression of their chronic illnesses. The downside of using mHealth applications is that not everyone, especially in developing countries, has daily access to the internet or to electronic devices.[150]

In developed countries, health promotion efforts using telehealth have been met with some success. The Australian hands-free breastfeeding Google Glass application reported promising results in 2014. This application, made in collaboration with the Australian Breastfeeding Association and a tech startup called Small World Social, helped new mothers learn how to breastfeed.[151][152][153] Breastfeeding is beneficial to infant health and maternal health and is recommended by the World Health Organisation and health organisations all over the world.[154][155] Widespread breastfeeding can prevent 820,000 infant deaths globally, but the practice is often stopped prematurely, or intentions to breastfeed are disrupted, due to lack of social support, know-how or other factors.[155] This application gave mothers hands-free information on breastfeeding and instructions on how to breastfeed, and also had an option to call a lactation consultant over Google Hangouts. When the trial ended, all participants were reported to be confident in breastfeeding.[153]

A scientific review indicates that, in general, outcomes of telemedicine are or can be as good as those of in-person care, with health care use staying similar.[156]

Advantages of the non-exclusive adoption of already existing telemedicine technologies, such as smartphone videotelephony, may include reduced infection risks,[158] increased control of disease during epidemic conditions,[159] improved access to care,[160] reduced stress and exposure to other pathogens[161][162] during illness for better recovery, reduced time[163] and labor costs, more efficient and accessible matching of patients with particular symptoms to clinicians who are experts in them, and reduced travel. Disadvantages may include privacy breaches (e.g. due to software backdoors and vulnerabilities, or the sale of data), dependence on Internet access,[158] and, depending on various factors, increased health care use.[additional citation(s) needed]

Theoretically, the whole health system could benefit from telehealth. There are indications that telehealth consumes fewer resources and requires fewer people to operate it, with shorter training periods needed to implement initiatives.[14] Commenters have suggested that lawmakers may fear that making telehealth widely accessible, without any other measures, would lead to patients using unnecessary health care services.[160] Telemedicine could also be used for connected networks between health care professionals.[164]

Telemedicine can also eliminate the possible transmission of infectious diseases or parasites between patients and medical staff. This is particularly an issue where MRSA is a concern. Additionally, some patients who feel uncomfortable in a doctor's office may do better remotely. For example, white coat syndrome may be avoided. Patients who are home-bound and would otherwise require an ambulance to move them to a clinic are also a consideration.
However, whether or not the standard of health care quality is increasing is debatable, with some literature refuting such claims.[143][165][166] Research has reported that clinicians find the process difficult and complex to deal with.[165][167] Furthermore, there are concerns around informed consent, legality and legislative issues. A recent study also highlighted that the swift, large-scale implementation of telehealth across the United Kingdom's NHS Allied Health Professional (AHP) services might increase disparities in health care access for vulnerable populations with limited digital literacy.[168] Although health care may become more affordable with the help of technology, whether this care will be "good" is the issue.[143] Many studies indicate high satisfaction with telemedicine among patients.[169] Among the factors associated with trust in telemedicine, the use of known and user-friendly video services and confidence in data protection policies were the two variables contributing most.[170]

Major problems with increasing adoption include technically challenged staff, resistance to change or ingrained habits,[161] and the age of patients. Focused policy could eliminate several barriers.[171] A review lists a number of potentially good practices and pitfalls, recommending the use of "virtual handshakes" for confirming identity, obtaining consent for conducting a remote consultation instead of a conventional meeting, and professional standardized norms for protecting patient privacy and confidentiality.[172] It also found that the COVID-19 pandemic substantially increased voluntary adoption of telephone or video consultation and suggests that telemedicine technology "is a key factor in delivery of health care in the future".[172]

Technology's growing involvement in health care has led to continuous improvement in access, efficiency and quality of care, but numerous challenges remain in addressing the barriers that prevent the geriatric population from benefitting from this new technology.[173] With the COVID-19 pandemic, rapid implementation of telehealth in geriatric outpatient clinics occurred. Although time efficiency was greatly improved and access was increased for geriatric patients lacking transportation to the clinic, complications arose during and after implementation, with many appointments requiring rescheduling due to language barriers, poor connections, hearing impairment, or inability to perform assessments.[174] Studies also show that patients and their family members often prefer in-person visits. Although benefits were seen in being able to see a provider sooner, in high-quality audio and video, and in functionality allowing family participation during visits, patients and families still noted a preference for in-person visits, owing to difficulty in using the service.[175] New improvements are now being made to ease the complications of telehealth for geriatric patients, including integrated captions on video calls for the hearing impaired, virtual interpreters who attend calls to bridge language differences, government-assisted internet services, and increased training for medical providers and patients on telehealth use.

Due to its digital nature, it is often assumed that telehealth saves the health system money. However, the evidence to support this is varied.
When conducting economic evaluations of telehealth services, evaluators need to be aware of the potential outcomes and extraclinical benefits of the telehealth service.[176] Economic viability relies on the funding model within the country being examined (public vs private), consumers' willingness to pay, and the expected remuneration of the clinicians or commercial entities providing the services (see, for example, research on these topics from teledermoscopy in Australia).[177][178][179]

In a UK telehealth trial done in 2011, it was reported that the cost of health care could be dramatically reduced with the use of telehealth monitoring. The usual cost of in vitro fertilisation (IVF) per cycle would be around $15,000; with telehealth, it was reduced to $800 per patient.[180] In Alaska, the Federal Health Care Access Network, which connects 3,000 healthcare providers to communities, engaged in 160,000 telehealth consultations from 2001 and saved the state $8.5 million in travel costs for Medicaid patients alone.[181]

Digital interventions for mental health conditions seem to be cost-effective compared with no intervention or non-therapeutic responses such as monitoring. However, compared with in-person therapy or medication, their added value is currently uncertain.[182]

Telemedicine can be beneficial to patients in isolated communities and remote regions, who can receive care from distant doctors or specialists without having to travel to visit them.[183] Recent developments in mobile collaboration technology can allow healthcare professionals in multiple locations to share information and discuss patient issues as if they were in the same place.[184] Remote patient monitoring through mobile technology can reduce the need for outpatient visits and enable remote prescription verification and drug administration oversight, potentially significantly reducing the overall cost of medical care.[185] It may also be preferable for patients with limited mobility, for example patients with Parkinson's disease.[51] Telemedicine can also facilitate medical education by allowing workers to observe experts in their fields and share best practices more easily.[186]

Remote surgery and types of videoconferencing for sharing expertise (e.g. ad hoc assistance) have been and could be used to support doctors in Ukraine during the 2022 Russian invasion of Ukraine.[187]

While many branches of medicine have long wanted to embrace telehealth fully, certain risks and barriers bar the full amalgamation of telehealth into best practice. For a start, it is dubious whether a practitioner can fully leave the "hands-on" experience behind.[143] Although it is predicted that telehealth will replace many consultations and other health interactions, it cannot yet fully replace a physical examination; this is particularly so in diagnostics, rehabilitation and mental health.[143] To minimise safety issues, researchers have suggested not offering remote consultations for some conditions (breathing problems, new psychosis, or acute chest pain, for example), when a parent is very concerned about a child, when a condition has not resolved as expected or has worsened, or to people who might struggle to understand or be understood (such as those with limited English or learning difficulties).[191][192]

The benefits posed by telehealth challenge the normative means of healthcare delivery set in both legislation and practice.
Therefore, the growing prominence of telehealth is starting to underscore the need for updated regulations, guidelines and legislation that reflect current and future trends in healthcare practice.[2][143] Telehealth enables timely and flexible care for patients wherever they may be; although this is a benefit, it also poses threats to privacy, safety, medical licensing and reimbursement. When a clinician and patient are in different locations, it is difficult to determine which laws apply to the context.[193] Once healthcare crosses borders, different state bodies are involved in regulating and maintaining the level of care warranted to the patient or telehealth consumer. As it stands, telehealth is complex, with many grey areas when put into practice, especially as it crosses borders. This effectively limits the potential benefits of telehealth.[2][143]

An example of these limitations is the current American reimbursement infrastructure, under which Medicare will reimburse for telehealth services only when a patient is living in an area where specialists are in shortage, or in particular rural counties, and only when the originating site is a medical facility rather than a patient's home. The site the practitioner is in, however, is unrestricted. Medicare will only reimburse live video (synchronous) services, not store-and-forward, mHealth or remote patient monitoring (if it does not involve live video). Some insurers currently reimburse telehealth, but not all, so providers and patients must go to the extra effort of finding the correct insurers before continuing. Again in America, states generally require that clinicians be licensed to practice in the patient's state; therefore, they can only provide their service if licensed in an area in which they may not live themselves.[140]

More specific and further-reaching laws, legislation and regulations will have to evolve with the technology. They will have to be fully agreed upon: for example, will all clinicians need full licensing in every community to which they provide telehealth services, or could there be a limited-use telehealth licence? Would the limited-use licence cover all potential telehealth interventions, or only some? Who would be responsible if an emergency occurred and the practitioner could not provide immediate help; would someone else have to be in the room with the patient at all times during consultations?
In which state, city or country would the law apply when a breach or malpractice occurred?[143][194]

A major prompt for legal action in telehealth thus far has been issues surrounding online prescribing, and whether an appropriate clinician–patient relationship can be established online to make prescribing safe, making this an area that requires particular scrutiny.[142] It may be required that the practitioner and patient meet in person at least once before online prescribing can occur, or that at least a live video conference take place, rather than just impersonal questionnaires or surveys to determine need.[195]

Telehealth has some potential for facilitating self-management techniques in health care, but for patients to benefit from it, appropriate contact with, and a relationship between, doctor and patient must be established first.[196] This would start with an online consultation, providing patients with techniques and tools that help them participate in healthy behaviors, and initiating a collaborative partnership between health care professionals and the patient.[197] Self-management strategies fall into a broader category called patient activation, which is defined as "patients' willingness and ability to take independent actions to manage their health."[198] It can be achieved by increasing patients' knowledge of, and confidence in, coping with and managing their own disease through "regular assessment of progress [...] and problem-solving support."[197] Teaching patients about their conditions and about ways to cope with chronic illness makes them knowledgeable about their disease and willing to manage it, improving their everyday life. Without a focus on the doctor–patient relationship and on the patient's understanding, telehealth cannot improve patients' quality of life, despite the benefit of allowing them to do their medical check-ups from the comfort of their own home.

The downsides of telemedicine include the cost of telecommunication and data management equipment and of technical training for the medical personnel who will employ it. Virtual medical treatment also entails potentially decreased human interaction between medical professionals and patients, an increased risk of error when medical services are delivered in the absence of a registered professional, and an increased risk that protected health information may be compromised through electronic storage and transmission.[199] There is also a concern that telemedicine may actually decrease time efficiency, owing to the difficulties of assessing and treating patients through virtual interactions; for example, it has been estimated that a teledermatology consultation can take up to thirty minutes, whereas fifteen minutes is typical for a traditional consultation.[200] Additionally, potentially poor quality of transmitted records, such as images or patient progress reports, and decreased access to relevant clinical information are quality assurance risks that can compromise the quality and continuity of patient care for the reporting doctor.[201] Other obstacles to the implementation of telemedicine include unclear legal regulation of some telemedical practices and difficulty claiming reimbursement from insurers or government programs in some fields.[44] Some medical organizations have issued position statements on the correct use of telemedicine in their field.[202][203][204][205]

Another disadvantage of telemedicine is the inability to start treatment immediately.
For example, a patient with a bacterial infection might be given an antibiotic hypodermic injection in the clinic, and observed for any reaction, before that antibiotic is prescribed in pill form. Equitability is also a concern: many families and individuals in the United States and other countries do not have internet access in their homes, or lack the electronic devices, such as a laptop or smartphone, needed to access services.[citation needed]

Informed consent is another issue. Because telehealth carries the possibility of technical problems such as transmission errors, security breaches or storage issues, any of which can impair the system's ability to communicate, it may be wise to obtain informed consent in person first, and to have backup options for when technical issues occur. In person, a patient can see who is involved in their care (namely themselves and their clinician in a consultation), but online there will be others involved, such as the technology providers. Consent may therefore need to involve disclosure of everyone involved in the transmission of the information and of the security measures that keep that information private, and any legal malpractice case may need to involve all of those parties, rather than just the practitioner.[142][194][195]

The rate of adoption of telehealth services in any jurisdiction is frequently influenced by factors such as the adequacy and cost of existing conventional health services in meeting patient needs; the policies of governments and/or insurers with respect to coverage and payment for telehealth services; and medical licensing requirements that may inhibit or deter the provision of telehealth second opinions or primary consultations by physicians.

Projections for the growth of the telehealth market are optimistic, and much of this optimism is predicated upon the increasing demand for remote medical care. According to one survey, nearly three-quarters of U.S. consumers say they would use telehealth.[206] At present, several major companies, along with a bevy of startups, are working to develop a leading presence in the field. In the UK, the Government's Care Services minister, Paul Burstow, stated that telehealth and telecare would be extended over the following five years (2012–2017) to reach three million people. In the United States, telemedicine companies are collaborating with health insurers and other telemedicine providers to expand market share and patient access to telemedicine consultations. As of 2019, 95% of employers believed their organizations would continue to provide health care benefits over the next five years.[207] The COVID-19 pandemic drove increased usage of telehealth services in the U.S.; the U.S. Centers for Disease Control and Prevention reported a 154% increase in telehealth visits during the last week of March 2020, compared with the same dates in 2019.[208]

From 1999 to 2018, the University Hospital of Zurich (USZ) offered clinical telemedicine and online medical advice on the Internet. A team of doctors answered around 2,500 anonymous inquiries annually, usually within 24 to 48 hours. The team consisted of up to six physicians who were specialists in clinical telemedicine at the USZ, with many years of experience, particularly in internal and general medicine. Over the entire period, 59,360 inquiries were sent and answered.[209] The majority of the users were female and on average 38 years old; over time, however, considerably more men and older people began to use the service.
The diversity of medical queries covered all categories of the International Statistical Classification of Diseases and Related Health Problems (ICD) and correlated with the statistical frequency of diseases in hospitals in Switzerland. Most of the inquiries concerned unclassified symptoms and signs, services related to reproduction, respiratory diseases, skin diseases, health services, diseases of the eye and nervous system, injuries, and disorders of the female genital tract. As with the Swedish online medical advice service,[210] one-sixth of the requests related to often shameful and stigmatised conditions: diseases of the genitals and gastrointestinal tract, sexually transmitted infections, obesity and mental disorders. By providing an anonymous space where users can talk about (shameful) diseases, online telemedical services empower patients, and their health literacy is enhanced through individualized health information. The Clinical Telemedicine and Online Counselling service of the University Hospital of Zurich is currently being revised and will be offered in a new form in the future.[211]

For developing countries, telemedicine and eHealth can be the only means of healthcare provision in remote areas. For example, the difficult financial situation in many African states and the lack of trained health professionals have meant that the majority of people in sub-Saharan Africa are badly disadvantaged in medical care, and in remote areas with low population density, direct healthcare provision is often very poor.[212] However, provision of telemedicine and eHealth from urban centers or from other countries is hampered by the lack of communications infrastructure: no landline phone or broadband internet connection, little or no mobile connectivity, and often not even a reliable electricity supply.[213]

India has a broad rural–urban population divide, and rural India lacks medical facilities, giving telemedicine room for growth in India. Poor education and a shortage of medical professionals in rural areas are the reasons behind the government's plan to use technology to bridge this gap. Remote areas present a number of challenges not only for service providers but also for the families accessing these services. Since 2018, telemedicine has expanded in India, opening a new avenue for doctor consultations. On 25 March 2020, in the wake of the COVID-19 pandemic, the Ministry of Health and Family Welfare issued India's Telemedicine Practice Guidelines.[214] The Board of Governors entrusted by the Health Ministry published an amendment to the Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002 that gave much-needed statutory support to the practice of telemedicine in India. This sector is at an ever-growing stage, with high scope for development.[215] In April 2020, the union health ministry launched the eSanjeevani telemedicine service, which operates at two levels: a doctor-to-doctor platform and a doctor-to-patient platform. The service crossed five million tele-consultations within a year of its launch, indicating a conducive environment for the acceptability and growth of telemedicine in India.[216]

Sub-Saharan Africa is marked by the massive introduction of new technologies and internet access.[217] Urban areas are facing rapid change and development, and access to the internet and to health care is improving quickly. Populations in remote areas, however, still lack access to healthcare and modern technologies.
Some people in rural regions must travel between 2 and 6 hours to reach the closest healthcare facilities in their country,[218] leaving room for telehealth to grow and reach isolated people in the near future. The Satellite African eHEalth vaLidation (SAHEL) demonstration project has shown how satellite broadband technology can be used to establish telemedicine in such areas. SAHEL was started in 2010 in Kenya and Senegal, providing self-contained, solar-powered internet terminals to rural villages for use by community nurses collaborating with distant health centers for training, diagnosis and advice on local health issues.[219] Such methods can have a major impact both on health professionals, who can give and receive training from remote areas, and on the local population, who can receive care without traveling long distances.

Some non-profits provide internet access to rural places around the world using mobile VSAT terminals. A VSAT terminal equips remote regions to alert the world when there is a medical emergency, resulting in rapid deployment or response from developed countries.[220] Technologies such as those used by MAF allow health professionals in remote clinics to have internet access, making consultations much easier for both patients and doctors.

In 2014, the government of Luxembourg, along with satellite operators and NGOs, established SATMED, a multilayer eHealth platform to improve public health in remote areas of emerging and developing countries, using the Emergency.lu disaster relief satellite platform and the Astra 2G TV satellite.[221] SATMED was first deployed in response to a 2014 report by German Doctors that poor communications in Sierra Leone were hampering the fight against Ebola, and SATMED equipment arrived in the Serabu clinic in Sierra Leone in December 2014.[222][223] In June 2015, SATMED was deployed at the Maternité Hospital in Ahozonnoude, Benin, to provide remote consultation and monitoring; it is the only effective communication link between Ahozonnoude, the capital, and a third hospital in Allada, since land routes are often inaccessible due to flooding during the rainy season.[224][225]

The development and history of telehealth and telemedicine (terms used interchangeably in the literature) are deeply rooted in the history and development not only of technology but of society itself. Humans have long sought to relay important messages through torches, optical telegraphy, electroscopes, and wireless transmission. Early forms of telemedicine achieved with telephone and radio have been supplemented with videotelephony, advanced diagnostic methods supported by distributed client/server applications, and telemedical devices that support in-home care.[16] In the 21st century, with the advent of the internet, portable devices and other digital devices are taking a transformative role in healthcare and its delivery.[226]

Although traditional medicine relies on in-person care, the need and desire for remote care have existed since the Roman and pre-Hippocratic periods in antiquity. The elderly and infirm who could not visit temples for medical care sent representatives to convey information on symptoms and bring home a diagnosis as well as treatment.[226] In Africa, villagers would use smoke signals to warn neighboring villages of disease outbreaks.[227] The beginnings of telehealth thus existed through primitive forms of communication and technology.[226] The exact date of origin of telehealth is unknown, but it is known to have been used during the Bubonic Plague.
That version of telehealth was far different from how we know it today. At that time, people communicated by heliograph and bonfire, which were used to notify other groups of people about famine and war.[228] These did not yet involve any modern technology, but they began to spread the idea of connectivity among groups of people who geographically could not be together. As technology developed and wired communication became increasingly commonplace, the ideas surrounding telehealth began to emerge.

The earliest telehealth encounter can be traced to Alexander Graham Bell in 1876, when he used his early telephone as a means of getting help from his assistant Mr. Watson after he spilt acid on his trousers. Another instance of early telehealth, specifically telemedicine, was reported in The Lancet in 1879, when an anonymous writer described a case in which a doctor successfully diagnosed a child over the telephone in the middle of the night.[226] The same Lancet issue also discussed the potential of remote patient care to avoid unnecessary house visits, which were part of routine health care during the 1800s.[226][229] Other instances of telehealth during this period came from the American Civil War, during which telegraphs were used to deliver casualty and mortality lists, provide medical care to soldiers,[229] and order further medical supplies.[230]

As the 1900s began, physicians quickly found a use for the telephone, making it a prime communication channel for contacting patients and other physicians.[228] Over the next fifty-plus years, the telephone was a staple of medical communication. Radio communication, which had developed during World War I, played a key role by the 1930s and was used in particular to communicate medical information to remote areas such as Alaska and Australia.[228] During the Vietnam War, radio communication had become more advanced and was used to dispatch medical teams in helicopters. This development also gave rise to the Aerial Medical Service (AMS), which used telegraphs, radios, and planes to help care for people who lived in remote areas. From the late 1800s to the early 1900s, the early foundations of wireless communication were laid down.[226] Radios provided an easier and near-instantaneous form of communication, and the use of radio to deliver healthcare became accepted for remote areas.[226][130] The Royal Flying Doctor Service of Australia is an example of the early adoption of radio in telehealth.[227]

In 1925, the inventor Hugo Gernsback wrote an article for the magazine Science and Invention that included a prediction of a future in which patients could be treated remotely by doctors through a device he called the "teledactyl". His descriptions of the device are similar to what would later become possible with new technology.[231]

When the American National Aeronautics and Space Administration (NASA) began plans to send astronauts into space, the need for telemedicine became clear. In order to monitor their astronauts in space, telemedicine capabilities were built into the spacecraft as well as the first spacesuits.[226][130] Additionally, during this period, telehealth and telemedicine were promoted in various countries, especially the United States and Canada.[226] Carrier Sekani Family Services helped pioneer telehealth in British Columbia and Canada, according to its CEO Warner Adam.[232] After the telegraph and telephone had successfully helped physicians treat patients in remote areas, telehealth became more widely recognized.
Technological advancements followed when NASA sent men to space. NASA engineers created biomedical telemetry and telecommunications systems[228] that monitored vital signs such as blood pressure, heart rate, respiration rate, and temperature. Once created, this technology became the basis of telehealth medicine for the public.

Massachusetts General Hospital and Boston's Logan International Airport had a role in the early use of telemedicine, which more or less coincided with NASA's foray into telemedicine through the use of physiologic monitors for astronauts.[233] On October 26, 1960, a plane struck a flock of birds upon takeoff, killing many passengers and leaving a number wounded. Given the extreme difficulty of getting all the necessary medical personnel out from the hospital, the practical solution became telehealth.[234] This was expanded upon in 1967, when Kenneth Bird at Massachusetts General founded one of the first telemedicine clinics. The clinic addressed the fundamental problem of delivering occupational and emergency health services to employees and travellers at the airport, located three congested miles from the hospital. Clinicians at the hospital provided consultation services to patients at the airport through microwave audio and video links.[226][235] The airport began seeing over a hundred patients a day at its nurse-run clinic, which cared for victims of plane crashes and other accidents, taking vital signs, electrocardiograms, and video images that were sent to Massachusetts General.[236] Over 1,000 patients are documented as having received remote treatment from doctors at MGH using the clinic's two-way audiovisual microwave circuit.[237] One notable story featured a woman who got off a flight in Boston experiencing chest pain. Staff performed a workup at the airport and took her to the telehealth suite, where Raymond Murphy appeared on the television and had a conversation with her. While this was happening, another doctor took notes and the nurses took vitals and ran any tests that Murphy ordered.[234] At this point, telehealth was becoming more mainstream and more technologically advanced, making it a viable option for patients.

In 1964, the Nebraska Psychiatric Institute began using television links to form two-way communication with the Norfolk State Hospital, 112 miles away, for education and consultation between clinicians in the two locations.[235]

In 1972, the Department of Health, Education and Welfare in the United States approved funding for seven telemedicine projects across different states.
This funding was renewed, and two further projects were funded the following year.[226][235]

In March 1972, the San Bernardino County Medical Society officially implemented its Tel-Med program, a system of prerecorded health-related messages, with a log of 50 tapes.[238][239] The nonprofit initiative began in 1971 as a local medical project to ease the doctor shortage in the expanding San Bernardino Valley and improve the public's access to sound medical information.[239][238] It covered subjects ranging from cannabis to vaginitis.[238] In January 1973, in response to the developing "London flu" epidemic hitting California and the country, a tape providing information on the disease was on air within a week after news broke of the flu spreading in the state.[240] That spring, programs were implemented in San Diego and Indianapolis, Indiana, signaling a national acceptance of the concept.[239] By 1979, the system offered messages on over 300 subjects, 200 of which were available in Spanish as well as English, and serviced over 65 million people in 180 cities around the country.[238]

Telehealth projects underway before and during the 1980s would take off but fail to enter mainstream healthcare.[227][130] As a result, this period of telehealth history is called the "maturation" stage, which made way for sustainable growth.[226] Although state funding in North America was beginning to run low, various hospitals began to launch their own telehealth initiatives.[226] NASA provided an ATS-3 satellite to enable medical care communications for American Red Cross and Pan American Health Organization response teams following the 1985 Mexico City earthquake. The agency then launched its SatelLife/HealthNet programme to increase health service connectivity in developing countries. In 1997, NASA sponsored Yale's Medical Informatics and Technology Applications Consortium project.[130][241]

Florida first experimented with "primitive" telehealth in its prisons during the latter 1980s.[242] Working with Doctors Oscar W. Boultinghouse and Michael J. Davis from the early 1990s to 2007, Glenn G. Hammack led the University of Texas Medical Branch (UTMB) development of a pioneering telehealth program in Texas state prisons. The three UTMB alumni would co-found the telehealth provider NuPhysicia in 2007.[243]

The first interactive telemedicine system operating over standard telephone lines, designed to remotely diagnose and treat patients requiring cardiac resuscitation (defibrillation), was developed and launched by an American company, MedPhone Corporation, in 1989. A year later, under the leadership of its president and CEO, S. Eric Wachtel, MedPhone introduced a mobile cellular version, the MDPhone. Twelve hospitals in the U.S. served as receiving and treatment centers.[244]

As the expansion of telehealth continued, in 1990 Maritime Health Services (MHS) played a large part in initiating occupational health services at sea. It placed a medical officer aboard a Pacific trawler, allowing round-the-clock communication with a physician. The system that enables this is called the Medical Consultation Network (MedNet), a video chat system with live audio and video, so the physician on the other end of the call can see and hear what is happening. MedNet can be used from anywhere, not just aboard ships.[228] Being able to provide on-site visual information gives remote patients access to expert emergency help and medical attention, saving money as well as lives. This has created a demand for at-home monitoring.
At-home care has also become a large part of telehealth. Doctors or nurses now make pre-op and post-op phone calls to check in. There are also companies, such as Lifeline, that give the elderly a button to press in case of an emergency; the button automatically calls for emergency help. If someone has surgery and is then sent home, telehealth allows physicians to see how the patient is progressing without the patient having to stay in the hospital. TeleDiagnostic Systems of San Francisco created a device that monitors sleep patterns, so people with sleep disorders do not have to stay overnight at the hospital.[228] Another at-home device was the Wanderer, which was attached to patients with Alzheimer's disease or dementia so that staff would be notified to go after them if they wandered off. All these devices improved healthcare beyond the hospital, meaning that more people could be helped efficiently.

The advent of high-speed Internet, and the increasing adoption of ICT in traditional methods of care, spurred advances in telehealth delivery.[14] Increased access to portable devices, like laptops and mobile phones, made telehealth more feasible; the industry then expanded into health promotion, prevention and education.[1][3][130]

In 2002, G. Byron Brooks, a former NASA surgeon and engineer who had also helped manage the UTMB telemedicine program, co-founded Teladoc in Dallas, Texas; it launched in 2005 as the first national telehealth provider.[245]

In the 2010s, the integration of smart-home telehealth technologies, such as health and wellness devices, software, and integrated IoT, accelerated the industry. Healthcare organizations are increasingly adopting self-tracking and cloud-based technologies, along with innovative data-analytic approaches, to accelerate telehealth delivery.[citation needed][246] In 2015, the Mercy Health system opened Mercy Virtual in Chesterfield, Missouri, the world's first medical facility dedicated solely to telemedicine.[247]

Telehealth expanded significantly during the COVID-19 pandemic, becoming a vital means of medical communication. It has been argued that it allows doctors to return to humanizing the patient, forcing them to listen to what people have to say and to make a diagnosis from there.[248] Studies have demonstrated high trust in telehealth expressed by patients during the COVID-19 pandemic.[249] Among patients with inflammatory bowel disease, four out of five considered telemedicine a valuable tool for their management, and 85% wanted to have a telemedicine service at their center; however, only one in four believed that it could guarantee the same level of care as an in-person visit.[250] Some researchers claim this creates an environment that encourages greater vulnerability among patients in self-disclosure in the practice of narrative medicine.[248] Telehealth allows for Zoom calls and video chats from across the world, checking in on patients and speaking to physicians. Universities are now ensuring that medical students graduate with proficient telehealth communication skills.[251] Experts suggest that telehealth has become a vital part of medical care, with more virtual options becoming available. The pandemic era also identified the "potential to significantly improve global health equity" through telehealth and other virtual care technologies.[252]

A retrospective study in 2023 examining 1,589,014 adult primary care patients within an integrated healthcare system in the U.S.
found that patients who initially had a telehealth visit were less likely to receive prescriptions, lab tests, or imaging than those who had an in-office visit. However, these same telehealth patients had higher rates of in-person follow-up visits. The study revealed that, of the 2,357,598 primary care visits, 49.2% were in-office, 31.3% were telephone visits, and 19.5% were video visits. Office visits led to higher rates of prescriptions (46.8%), lab tests (41.4%), and imaging (20.5%) compared with video visits (38.4%, 27.4%, and 11.9%, respectively) and telephone visits (34.6%, 22.8%, and 8.7%, respectively). Conversely, patients who had telephone or video visits were more likely to have in-person follow-up visits: 7.6% of telephone and 6.2% of video visit patients returned for primary care follow-up, compared with only 1.3% of office visit patients. Furthermore, rates of emergency department visits and hospitalizations were higher for those who had telemedicine visits, though these differences were minimal. The study's limitations include the inability to generalize findings to healthcare settings without telemedicine services or to patients without insurance or a primary care provider. The reasons for increased in-person healthcare utilization were also not captured, and long-term follow-up was not conducted.[253]
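The percentages above are dense in prose form. The following short Python sketch simply re-tabulates the figures quoted from the study and derives one of the comparisons; the data structure and variable names are invented here for illustration and are not taken from the study itself.

```python
# Illustrative re-tabulation of the visit-outcome rates quoted above.
# Figures are copied from the study summary in this article; the layout
# and names are hypothetical, for readability only.

rates = {
    # visit type: (prescription %, lab test %, imaging %, in-person follow-up %)
    "office":    (46.8, 41.4, 20.5, 1.3),
    "video":     (38.4, 27.4, 11.9, 6.2),
    "telephone": (34.6, 22.8,  8.7, 7.6),
}

print(f"{'visit type':<10} {'rx %':>6} {'lab %':>6} {'img %':>6} {'follow-up %':>12}")
for visit, (rx, lab, img, follow_up) in rates.items():
    print(f"{visit:<10} {rx:>6.1f} {lab:>6.1f} {img:>6.1f} {follow_up:>12.1f}")

# One derived comparison: office visits involved about 12 percentage
# points more prescribing than telephone visits (46.8 - 34.6 = 12.2).
gap = rates["office"][0] - rates["telephone"][0]
print(f"office vs telephone prescribing gap: {gap:.1f} percentage points")
```

Laid out this way, the study's pattern is easier to see: each downstream order rate (prescriptions, labs, imaging) falls as the visit becomes less hands-on, while the in-person follow-up rate moves in the opposite direction.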
https://en.wikipedia.org/wiki/Telemedicine
"Health 2.0" is a term introduced in the mid-2000s, as the subset ofhealth caretechnologies mirroring the widerWeb 2.0movement. It has been defined variously as including social media, user-generated content, and cloud-based and mobile technologies. Some Health 2.0 proponents see these technologies as empowering patients to have greater control over their own health care and diminishingmedical paternalism. Critics of the technologies have expressed concerns about possible misinformation and violations of patient privacy. Health 2.0 built on the possibilities for changinghealth care, which started with the introduction ofeHealthin the mid-1990s following the emergence of theWorld Wide Web. In the mid-2000s, following the widespread adoption both of the Internet and of easy to use tools for communication,social networking, andself-publishing, there was spate of media attention to and increasing interest from patients, clinicians, and medical librarians in using these tools for health care and medical purposes.[1][2] Early examples of Health 2.0 were the use of a specific set of Web tools (blogs, email list-servs, online communities,podcasts, search, tagging,Twitter, videos,wikis, and more) by actors in health care including doctors, patients, and scientists, using principles ofopen sourceand user-generated content, and the power of networks and social networks in order to personalize health care, to collaborate, and to promote health education.[3]Possible explanations why health care has generated its own "2.0" term are the availability and proliferation of Health 2.0 applications across health care in general, and the potential for improvingpublic healthin particular.[4] While the "2.0" moniker was originally associated with concepts like collaboration, openness, participation, andsocial networking,[5]in recent years the term "Health 2.0" has evolved to mean the role ofSaasandcloud-based technologies, and their associated applications on multiple devices. Health 2.0 describes the integration of these into much of general clinical and administrative workflow in health care. As of 2014, approximately 3,000 companies were offering products and services matching this definition, withventure capitalfunding in the sector exceeding $2.3 billion in 2013.[6] Public Health 2.0is a movement within public health that aims to make the field more accessible to the general public and more user-driven. The term is used in three senses. 
In the first sense, "Public Health 2.0" is similar to "Health 2.0" and describes the ways in which traditional public health practitioners and institutions are reaching out (or could reach out) to the public through social media and health blogs.[7][8]

In the second sense, "Public Health 2.0" describes public health research that uses data gathered from social networking sites, search engine queries, cell phones, or other technologies.[9] A recent example is the proposal of a statistical framework that uses online user-generated content (from social media or search engine queries) to estimate the impact of an influenza vaccination campaign in the UK.[10]

In the third sense, "Public Health 2.0" is used to describe public health activities that are completely user-driven.[11] An example is the collection and sharing of information about environmental radiation levels after the March 2011 tsunami in Japan.[12] In all cases, Public Health 2.0 draws on ideas from Web 2.0, such as crowdsourcing, information sharing, and user-centered design.[13] While many individual healthcare providers have started making their own personal contributions to "Public Health 2.0" through personal blogs, social profiles, and websites, larger organizations, such as the American Heart Association (AHA) and United Medical Education (UME), have larger teams of employees centered around online-driven health education, research, and training. These private organizations recognize the need for free and easy-to-access health materials, often building libraries of educational articles.[citation needed]

The "traditional" definition of "Health 2.0" focused on technology as an enabler for care collaboration: "The use of social software and light-weight tools to promote collaboration between patients, their caregivers, medical professionals and other stakeholders in health."[14]

In 2011, Indu Subaiya redefined Health 2.0[15] as the use in health care of new cloud, SaaS, mobile, and device technologies. This wider definition allows recognition of what is or is not a Health 2.0 technology. Typically, enterprise-based, customized client-server systems are not, while more open, cloud-based systems fit the definition. However, this line was blurring by 2011–12, as more enterprise vendors started to introduce cloud-based systems and native applications for new devices like smartphones and tablets. In addition, Health 2.0 has several competing terms, each with its own followers, if not exact definitions, including Connected Health, Digital Health, Medicine 2.0, and mHealth. All of these support a goal of wider change to the health care system, using technology-enabled system reform, usually changing the relationship between patient and professional.

In the late 2000s, several commentators used Health 2.0 as a moniker for a wider concept of system reform, seeking a participatory process between patient and clinician: "New concept of health care wherein all the constituents (patients, physicians, providers, and payers) focus on health care value (outcomes/price) and use competition at the medical condition level over the full cycle of care as the catalyst for improving the safety, efficiency, and quality of health care".[16]

Health 2.0 defines the combination of health data and health information with (patient) experience, through the use of ICT, enabling the citizen to become an active and responsible partner in his or her own health and care pathway.[17]

Health 2.0 is participatory healthcare.
Enabled by information, software, and communities that we collect or create, we the patients can be effective partners in our own healthcare, and we the people can participate in reshaping the health system itself.[18] Definitions of Medicine 2.0 appear to be very similar but typically include more scientific and research aspects—Medicine 2.0: "Medicine 2.0 applications, services and tools are Web-based services for health care consumers, caregivers, patients, health professionals, and biomedical researchers, that use Web 2.0 technologies as well as semantic web and virtual reality tools, to enable and facilitate specifically social networking, participation, apomediation, collaboration, and openness within and between these user groups."[19][20] A systematic review by Tom Van de Belt, Lucien Engelen et al., published in JMIR, found 46 unique definitions of Health 2.0.[21] Health 2.0 refers to the use of a diverse set of technologies including Connected Health, electronic medical records, mHealth, telemedicine, and the use of the Internet by patients themselves, such as through blogs, Internet forums, online communities, patient-to-physician communication systems, and other more advanced systems.[22][23] A key concept is that patients themselves should have greater insight into, and control over, information generated about them. Additionally, Health 2.0 relies on the use of modern cloud- and mobile-based technologies. Much of the potential for change from Health 2.0 is facilitated by combining technology-driven trends such as personal health records with social networking—"[which] may lead to a powerful new generation of health applications, where people share parts of their electronic health records with other consumers and 'crowdsource' the collective wisdom of other patients and professionals."[5] Traditional models of medicine had patient records (held on paper or a proprietary computer system) that could be accessed only by a physician or other medical professional. Physicians acted as gatekeepers to this information, telling patients test results when and if they deemed it necessary. Such a model operates relatively well in situations such as acute care, where information about specific blood results would be of little use to a lay person, or in general practice, where results were generally benign. However, in the case of complex chronic diseases, psychiatric disorders, or diseases of unknown etiology, patients were at risk of being left without well-coordinated care, because data about them was stored in a variety of disparate places and in some cases might contain the opinions of healthcare professionals which were not to be shared with the patient. Increasingly, medical ethics deems such actions to be medical paternalism, and they are discouraged in modern medicine.[24][25] A hypothetical example demonstrates the increased engagement of a patient operating in a Health 2.0 setting: a patient goes to see their primary care physician with a presenting complaint, having first ensured their own medical record was up to date via the Internet. The treating physician might make a diagnosis or send for tests, the results of which could be transmitted directly to the patient's electronic medical record. If a second appointment is needed, the patient will have had time to research what the results might mean for them, what diagnoses may be likely, and may have communicated with other patients who have had a similar set of results in the past. On a second visit a referral might be made to a specialist.
The patient might have the opportunity to search for the views of other patients on the best specialist to go to, and in combination with their primary care physician decides whom to see. The specialist gives a diagnosis along with a prognosis and potential options for treatment. The patient has the opportunity to research these treatment options and take a more proactive role in coming to a joint decision with their healthcare provider. They can also choose to submit more data about themselves, such as through a personalized genomics service, to identify any risk factors that might improve or worsen their prognosis. As treatment commences, the patient can track their health outcomes through a data-sharing patient community to determine whether the treatment is having an effect for them, and they can stay up to date on research opportunities and clinical trials for their condition. They also have the social support of communicating with other patients diagnosed with the same condition throughout the world. Partly due to weak definitions, the novelty of the endeavor, and its nature as an entrepreneurial (rather than academic) movement, little empirical evidence exists to explain how much Web 2.0 is being used in general. While it has been estimated that nearly one-third of the 100 million Americans who have looked for health information online say that they or people they know have been significantly helped by what they found,[26] this study considers only the broader use of the Internet for health management. A study examining physician practices has suggested that a segment of 245,000 physicians in the U.S. are using Web 2.0 for their practice, indicating that use is beyond the stage of the early adopter with regard to physicians and Web 2.0.[27] Web 2.0 is commonly associated with technologies such as podcasts, RSS feeds, social bookmarking, weblogs (health blogs), wikis, and other forms of many-to-many publishing; social software; and web application programming interfaces (APIs).[28] The following are examples of uses that have been documented in academic literature. Hughes et al. (2009) argue there are four major tensions represented in the literature on Health/Medicine 2.0.[3] Several criticisms have been raised about the use of Web 2.0 in health care. Firstly, Google has limitations as a diagnostic tool for medical doctors (MDs), as it may be effective only for conditions with unique symptoms and signs that can easily be used as search terms.[31] Studies of its accuracy have returned varying results, and this remains in dispute.[34] Secondly, long-held concerns exist about the effects of patients obtaining information online, such as the idea that patients may delay seeking medical advice[35] or accidentally reveal private medical data.[36][37] Finally, concerns exist about the quality of user-generated content leading to misinformation,[38][39] such as perpetuating the discredited claim that the MMR vaccine may cause autism.[40] In contrast, a 2004 study of a British epilepsy online support group suggested that only 6% of information was factually wrong.[41] In a 2007 Pew Research Center survey of Americans, only 3% reported that online advice had caused them serious harm, while nearly one-third reported that they or their acquaintances had been helped by online health advice.[41]
https://en.wikipedia.org/wiki/Health_2.0
H.810, "E-health multimedia systems, services and applications - Personal health systems", also known as the Continua Design Guidelines (CDG), is anITU-TRecommendation, developed in collaboration with theWorld Health Organization.[1]It specifies standards forConnected healthwas first approved in 2013. In November 2019, version 4 was approved and published. The Guidelines are a set of standards and guidelines developed by theContinua Health Alliance(now part of the Personal Connected Health Alliance) to enable the interoperability of personal connected health devices and systems. These guidelines were established to ensure thatmedical devices, systems, and applications can communicate with each other, exchange data, and provide integration across various healthcare scenarios.[2] The main objectives of the Guidelines are: H.810and the work on connected health is part of the Inter-Agency Collaboration between theITUand theWorld Health OrganizationonDigital health, which is undertaken primarily throughITU-T Study Group 16. The guidelines are reported to have saved lives in a study that monitors the cardial health of survivors of the 2011 earthquake in Japan.[3] This technology-related article is astub. You can help Wikipedia byexpanding it.
https://en.wikipedia.org/wiki/H.810
Safe listening is a framework for health promotion actions to ensure that sound-related recreational activities (such as concerts, nightclubs, and listening to music, broadcasts, or podcasts) do not pose a risk to hearing.[1] While research shows that repeated exposure to any loud sound can cause hearing disorders and other health effects,[2][3][4][5][6][7] safe listening applies specifically to voluntary listening through personal listening systems, personal sound amplification products (PSAPs), or at entertainment venues and events. Safe listening promotes strategies to prevent negative effects, including hearing loss, tinnitus, and hyperacusis. While safe listening does not address exposure to unwanted sounds (which are termed noise) – for example, at work or from other noisy hobbies – it is an essential part of a comprehensive approach to total hearing health.[8] The risk of negative health effects from sound exposure (be it noise or music) is primarily determined by the intensity of the sound (loudness), the duration of the event, and the frequency of exposure.[9] These three factors characterize the overall sound energy level that reaches a person's ears and can be used to calculate a noise dose. They have been used to determine the limits of noise exposure in the workplace. Both regulatory and recommended limits for noise exposure were developed from hearing and noise data obtained in occupational settings, where exposure to loud sounds is frequent and can last for decades.[3][10] Although specific regulations vary across the world, most workplace best practices consider 85 decibels (dB, A-weighted) averaged over eight hours per day as the highest safe exposure level for a 40-year lifetime.[1] Using an exchange rate, typically 3 dB, allowable listening time is halved each time the sound level increases by the selected rate. For example, a sound level as high as 100 dBA can be safely listened to for only 15 minutes each day.[10][11][12] Because of their availability, occupational data have been adapted to determine damage-risk criteria for sound exposures outside of work. In 1974, the US Environmental Protection Agency recommended a 24-hour exposure limit of 70 dBA, taking into account the lack of a "rest period" for the ears when exposures are averaged over 24 hours and can occur every day of the year (workplace exposure limits assume 16 hours of quiet between shifts and two days a week off).[13] In 1995, the World Health Organization (WHO) similarly concluded that 24-hour average exposures at or below 70 dBA pose a negligible risk for hearing loss over a lifetime.[14] Following reports on hearing disorders from listening to music,[15][16][17][18][19] additional recommendations and interventions to prevent adverse effects from sound-related recreational activities appear necessary.[1][20][21]
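The exchange-rate arithmetic above is easy to make concrete. The sketch below is an illustration only, not part of any cited standard; it takes the 85 dBA / 8 h reference and the 3 dB exchange rate from the occupational limits just described, so every 3 dB above 85 dBA halves the daily allowance:

    def allowable_daily_minutes(level_dba, ref_level=85.0, ref_hours=8.0, exchange_rate=3.0):
        # Equal-energy rule: each `exchange_rate` dB above `ref_level`
        # halves the allowance (85 dBA / 8 h reference, 3 dB exchange rate).
        halvings = (level_dba - ref_level) / exchange_rate
        return ref_hours * 60.0 / (2.0 ** halvings)

    # 100 dBA -> 8 h / 2^5 = 15 minutes per day, matching the example in the text.
    for level in (85, 88, 94, 100):
        print(f"{level} dBA: {allowable_daily_minutes(level):.1f} min/day")
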
Several organizations have developed initiatives to promote safe listening habits. The U.S. National Institute on Deafness and Other Communication Disorders (NIDCD) has guidelines for safely listening to personal music players geared toward the "tween" population (children aged 9–13 years).[22] The Dangerous Decibels program promotes the use of "Jolene" mannequins to measure the output of personal listening systems as an educational tool to raise awareness of overexposure to sound through personal listening.[23] This type of mannequin is simple and inexpensive to construct and is often an attention-grabber at schools, health fairs, clinic waiting rooms, etc.[23] The National Acoustic Laboratories (NAL), the research division of Hearing Australia, developed the Know Your Noise initiative,[24] funded by the Australian Government Department of Health. The Know Your Noise website has a Noise Risk Calculator that makes it easy for users to identify and understand their levels of noise exposure (at work and play) and the possible risks for hearing damage. Users can also take an online hearing test to see how well they hear in a noisy background.[24] The WHO launched the Make Listening Safe initiative[25] as part of the celebration of World Hearing Day on 3 March 2015.[1] The initiative's main goal is to ensure that people of all ages can enjoy listening to music and other audio media in a manner that does not create a hearing risk. Noise-induced hearing loss, hyperacusis, and tinnitus have been associated with the frequent use at high volume of devices such as headphones, headsets, earpieces, earbuds, and True Wireless Stereo technologies of any type.[19][20][26][27] Make Listening Safe pursues several aims. In 2019 the World Health Organization published a toolkit for safe listening devices and systems that provides the rationale for the proposed strategies and identifies actions that governments, industry partners, and civil society can take.[29] On 1 November 2023 the WHO launched a Make Listening Safe Campaign (MLSC) in the United Kingdom as a pilot for a strategy to encourage the adoption of safe listening practices among those between the ages of ten and forty. The MLSC UK will run a sequence of short campaigns focused on different themes, starting with avoidable risks among headphone users. It will include an ePetition requesting the government to adopt stronger hearing-safeguarding standards and regulations in line with the WHO/International Telecommunication Union (ITU) recommendations. The plan is to evaluate the effort and later roll it out to the WHO's other 193 member states. It includes an in-person launch event, public education campaigns, policy advocacy, and collaboration with various stakeholders, including governmental bodies, industry players, and healthcare professionals. Make Listening Safe is promoting the development of features in personal listening systems to raise users' awareness of risky listening practices. In this context, the WHO partnered with the International Telecommunication Union (ITU) to develop suitable exposure limits for inclusion in the voluntary H.870 safety standard, "Guidelines for safe listening devices/systems."[30] Experts in the fields of audiology, otology, public health, epidemiology, acoustics, and sound engineering, as well as professional organizations, standardization organizations, manufacturers, and users, are collaborating on this effort.[31] The Make Listening Safe initiative also covers entertainment venues.
Average sound pressure levels (SPL) in nightclubs, discotheques, bars, gyms, and live sports venues can be as high as 112 dB (A-weighted); sound levels at pop concerts may be even higher.[32][33][34][35][36][37][38] Frequent exposure, or even a short exposure, to very high sound pressure levels such as these can be harmful. WHO reviewed existing noise regulations for various entertainment sites – including clubs, bars, concert venues, and sporting arenas[25] – in countries around the world, and released a global Standard for Safe Listening Venues and Events as part of World Hearing Day 2022. Several supporting documents were also released in 2022. Personal listening systems are portable devices – usually an electronic player attached to headphones or earphones – which are designed for listening to various media, such as music or gaming. The output of such systems varies widely. Maximum output levels vary depending upon the specific devices and regional regulatory requirements.[39] Typically, PLS users can choose to limit the volume to between 75 and 105 dB SPL.[19] The ITU and the WHO recommend that PLS be programmed with a monitoring function that sets a weekly sound exposure limit and provides alerts as users reach 100% of their weekly sound allowance. If users acknowledge the alert, they can choose whether or not to reduce the volume. But if the user does not acknowledge the alert, the device will automatically reduce the volume to a predetermined level (based on the mode selected, i.e. 80 or 75 dBA). By conveying exposure information in a way that can be easily understood by end-users, this recommendation aims to make it easier for listeners to manage their exposure and avoid negative effects. The health app on iPhones, Apple Watches, and iPads has incorporated this approach since 2019.[40] These devices feature the opt-in Apple Hearing Study, part of the Research app, which is being conducted in collaboration with the University of Michigan School of Public Health. Data is being shared with the WHO's Make Listening Safe initiative. Preliminary results released in March 2021, one year into the study, indicated that 25% of participants experienced ringing in their ears a few times a week or more, 20% of participants had hearing loss, and 10% had characteristics that are typical in cases of noise-induced hearing loss.[41] Nearly 50% of participants reported that they had not had their hearing tested in at least 10 years. In terms of exposure levels, 25% of the participants experienced high environmental sound exposures.[41] The International Electrotechnical Commission (IEC) published the first edition of the standard IEC 62368-1 on personal audio systems in 2010.[42] It defined safe output levels for PLSs as 85 dB or less, while allowing users to increase the volume to a maximum of 100 dBA. However, when users raise the volume to the maximum level, the standard specifies that an alert should pop up to warn the listener of the potential for hearing problems.[31] The 2018 ITU and WHO standard H.870[30] "Guidelines for safe listening devices/systems" focuses on the management of weekly sound-dose exposure. This standard was based on the EN 50332-3 standard "Sound system equipment: headphones and earphones associated with personal music players – maximum sound pressure level measurement methodology – Part 3: measurement method for sound dose management."
This standard defines a safe listening limit as a weekly sound dose equivalent to 80 dBA for 40 hours/week.[30] The frequent use of personal listening systems among children has raised concerns about the potential risks that might be associated with such exposure.[43] A systematic review and meta-analysis published in 2022 recorded an increased prevalence of risk of hearing loss, compared to 2015 estimates, among young people between 12 and 34 years of age who are exposed to high sound pressure levels (SPL) through the use of headphones and entertainment soundscapes.[44] The authors included articles published between 2000 and 2021 that reported unsafe listening practices. The number of young people who may be at risk of hearing loss worldwide was estimated from the total global estimates of the population aged 12 to 34 years. Thirty-three studies (corresponding to data from 35 records and 19,046 individuals) were included; 17 and 18 records focused on the use of personal listening systems and on noisy entertainment venues, respectively. The pooled prevalence estimate of exposure to unsafe listening on personal listening systems was 23.81% (95% CI 18.99% to 29.42%). The model was adjusted according to the intensity and duration of exposure to identify an estimated prevalence of 48.2%. The estimated global number of young people who may be at risk of hearing loss due to exposure to unsafe listening practices ranged from 0.67 to 1.35 billion.[44] The authors concluded that unsafe listening practices are highly prevalent worldwide and may put over 1 billion young people at risk of hearing loss.[44] There is no agreement on the acceptable risk of noise-induced hearing loss in children, and adult damage-risk criteria may not be suitable for establishing safe listening levels for children due to differences in physiology and the more serious developmental impact of hearing loss early in life.[45][46] One attempt to identify safe levels assumed that the most appropriate exposure limit for recreational noise exposure in children would aim to protect 99% of children from a shift in hearing exceeding 5 dB at 4 kHz after 18 years of noise exposure.[45] Using estimates from the International Organization for Standardization (ISO 1999:2013),[47] the authors calculated that 99% of children who are exposed from birth until the age of 18 years to 8-hour average sound levels (LEX) of 82 dBA would have hearing thresholds about 4.2 dB greater, indicating a shift in hearing ability. By including a 2 dBA margin of safety, which reduces the 8-hour exposure allowance to 80 dBA, the study estimated a hearing change of 2.1 dB or less in 99% of children. To preserve hearing from birth until the age of 18 years, it was recommended that noise exposures be limited to 75 dBA over a 24-hour period.[45] Other researchers recommended that the weekly sound dose be limited to the equivalent of 75 dBA for 40 hours/week for children and for users who are sensitive to intense sound stimulation.[31] Personal sound amplification products are ear-level amplification devices intended for use by persons with normal hearing. The output levels of 27 PSAPs that were commercially available in Europe were analyzed in 2014. All of them had a maximum output level that exceeded 120 dB SPL; 23 (85%) exceeded 125 dB SPL, while 8 (30%) exceeded 130 dB SPL. None of the analyzed products had a level-limiting option.[48] The report triggered the development of several standards for these devices. The ANSI/CTA standard 2051[49] on "Personal Sound Amplification Performance Criteria" followed in 2017.
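Both H.870 above and the PSAP guidance that follows express limits as a weekly sound dose against an 80 dBA / 40 h reference. A minimal sketch of such equal-energy dose accounting, with the 100% alert mentioned earlier, follows; it is illustrative only, not the normative measurement procedure of either standard:

    REF_LEVEL_DBA = 80.0   # reference level of the 80 dBA / 40 h weekly limit
    REF_HOURS = 40.0       # reference duration per week

    def dose_percent(exposures):
        # `exposures` is a list of (level_dba, hours) pairs; the dose is the
        # listener's sound energy divided by the reference energy
        # (equal-energy rule, i.e. a 3 dB exchange rate).
        energy = sum(hours * 10.0 ** (level / 10.0) for level, hours in exposures)
        ref_energy = REF_HOURS * 10.0 ** (REF_LEVEL_DBA / 10.0)
        return 100.0 * energy / ref_energy

    # A week of listening: 10 h at 75 dBA plus 5 h at 92 dBA (about 206%).
    week = [(75.0, 10.0), (92.0, 5.0)]
    dose = dose_percent(week)
    print(f"weekly dose: {dose:.0f}%")
    if dose >= 100.0:
        print("alert: weekly sound allowance reached")  # cf. the 100% alert above
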
The ANSI/CTA standard specified a maximum output sound pressure level of 120 dB SPL. In 2019, the ITU published standard ITU-T H.871,[50] "Safe listening guidelines for personal sound amplifiers". This standard recommends that PSAPs measure the weekly sound dose and adhere to a weekly maximum of less than 80 dBA for 40 hours. PSAPs that cannot measure weekly sound dose should limit the maximum output of the device to 95 dBA. It also recommends that PSAPs provide clear alerts in their user guides, packaging, and advertisements, mentioning the risks of ear damage that can result from using the device and providing information on how to avoid these risks.[31] A technical paper describing how to test the compliance of various personal audio systems and devices with the essential/mandatory and optional features of Recommendation ITU-T H.870 was published in 2021.[51] Both those working in the music industry and those enjoying recreational music at venues and events can be at risk of experiencing hearing disorders.[2][52][53] In 2019, the WHO published a report summarizing regulations for the control of sound exposure in entertainment venues in Belgium, France, and Switzerland.[54] The case studies were published as an initial step towards the development of a WHO regulatory framework for the control of sound exposure in entertainment venues. In 2020, two reports described exposure scenarios and procedures in use during entertainment events. These took into account the safety of those attending an event, those exposed occupationally to the high-intensity music, and those in surrounding neighborhoods.[55][56] Technical solutions and practices for monitoring and on-stage sound are presented, as well as the problems of enforcing environmental noise regulations in an urban environment, with country-specific examples.[56] Several different regulatory approaches have been implemented to manage sound levels and minimize the risk of hearing damage for those attending music venues.[57] A report published in 2020 identified 18 regulations regarding sound levels in entertainment venues – 12 from Europe and the remainder from cities or states in North and South America. Legislative approaches include sound level limitations, real-time sound exposure monitoring, mandatory supply of hearing protection devices, signage and warning requirements, loudspeaker placement restrictions, and ensuring patrons can access quiet zones or rest areas.[57] The effectiveness of these measures in reducing the risk of hearing damage has not been evaluated,[57] but the approaches described above are consistent with the general principles of the hierarchy of controls used to manage exposure to noise in workplaces.[58][59] Patrons of music venues have indicated their preference for lower sound levels[60][61][62] and can be receptive when earplugs are provided or made accessible.[63][64][65] This finding may be region- or country-specific. In 2018, the U.S. Centers for Disease Control and Prevention published the results of a survey of U.S. adults on the use of hearing protection devices during exposure to loud sounds at recreational events.[66] Overall, more than four out of five respondents reported never or seldom wearing hearing protection devices when attending a loud athletic or entertainment event. Adults aged 35 years and older were significantly more likely not to wear hearing protection than were young adults aged 18–24 years. Among adults who frequently enjoy attending sporting events, women were twice as likely as men to seldom or never wear hearing protection.
Adults who were more likely to wear protection had at least some college education or higher household incomes. Adults with hearing impairment, or with a deaf or hard-of-hearing household member, were significantly more likely to wear their protective devices.[66] The challenges in implementing measures to reduce risks to hearing in a wide range of entertainment venues – whether through mandatory or voluntary guidelines, with or without enforcement – are significant. Doing so requires involvement from many different professional groups and buy-in from both venue managers and users.[58][67] The WHO and ITU global Standard for Safe Listening Venues and Events released on World Hearing Day 2022 offers resources to facilitate action. The standard details six features recommended for safe listening venues and events. It can be used by governments to implement legislation, by owners and managers of venues and events to protect their clientele, and by audio engineers and other staff. A 2023 survey showed that U.S. adults acknowledge the risks posed by high sound exposures at concerts and other events. Results indicated an interest in protective actions, such as limiting sound levels, posting warning signs, and wearing hearing protection. Fifty-four percent of the study participants agreed that sound levels at concert venues should be limited to reduce the risk of hearing disorders, 75% agreed that warning signs should be posted when sound levels are likely to exceed safe levels, and 61% of respondents stated that they would wear hearing protection if it were provided when sound levels were likely to exceed safe levels.[68] While establishing effective public and community health interventions, enacting appropriate legislation and regulations, and developing pertinent standards for listening and audio systems are all important in building a societal infrastructure for safe listening, individuals can also take steps to ensure that their personal listening habits minimize their risk of hearing problems.[9] Personal safe listening strategies have been published by several organizations.[22][69][70] Teaching children and young adults about the hazards of overexposure to loud sounds, and how to practice safe listening habits, could help protect their hearing. Good role modeling of listening habits could also prompt healthy listening habits. Health care professionals have the opportunity to educate patients about relevant hearing risks and promote safe listening habits.[9] As part of their health promotion activities, hearing professionals can recommend appropriate hearing protection when necessary and provide information, training, and fit-testing to ensure individuals are adequately, but not overly, protected.[69] Wearing earplugs at concerts has been shown to be an effective way to reduce post-concert temporary hearing changes.[72]
https://en.wikipedia.org/wiki/Safe_listening
An evil twin is a fraudulent Wi-Fi access point that appears to be legitimate but is set up to eavesdrop on wireless communications.[1] This type of attack, also known as a man-in-the-middle attack, may be used to steal the passwords of unsuspecting users, either by monitoring their connections or by phishing, which involves setting up a fraudulent web site and luring people there.[2] The attacker snoops on Internet traffic using a bogus wireless access point. Unwitting web users may be invited to log into the attacker's server, prompting them to enter sensitive information such as usernames and passwords. Often, users are unaware they have been duped until well after the incident has occurred. When users log into unsecured (non-HTTPS) bank or e-mail accounts, the attacker intercepts the transaction, since it is sent through their equipment. The attacker is also able to connect to other networks associated with the users' credentials. Fake access points are set up by configuring a wireless card to act as an access point (known as HostAP). They are hard to trace, since they can be shut off instantly. The counterfeit access point may be given the same SSID and BSSID as a nearby Wi-Fi network. The evil twin can be configured to pass Internet traffic through to the legitimate access point while monitoring the victim's connection,[3] or it can simply say the system is temporarily unavailable after obtaining a username and password.[4][5][6][7] One of the most common attacks using evil twins is the captive portal attack. First, the attacker creates a fake wireless access point with an ESSID similar to that of the legitimate access point. The attacker may then execute a denial-of-service attack on the legitimate access point, causing it to go offline. From then on, clients connect to the fake access point automatically. The clients are then led to a web portal requesting them to enter their password, which can then be misused by the attacker. In July 2024 a man was charged by the Australian Federal Police with running a fake WiFi network to steal the credentials of passengers on at least one commercial flight.[8] An airline had reported that employees had concerns about a suspicious WiFi network identified during a domestic flight.[8]
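Defenders sometimes look for one signature of the setup described above: several radios (BSSIDs) advertising the same network name. Below is a minimal, hypothetical detection sketch using the scapy packet library; it assumes a wireless interface already placed in monitor mode (the name wlan0mon is an assumption), and note that multiple BSSIDs per SSID are also normal for legitimate multi-access-point networks, so this is only a first-pass heuristic:

    from collections import defaultdict
    from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

    seen = defaultdict(set)  # SSID -> set of BSSIDs observed so far

    def watch_beacons(pkt):
        if pkt.haslayer(Dot11Beacon):
            # In a beacon, the first tagged element (ID 0) is the SSID.
            ssid = pkt[Dot11Elt].info.decode(errors="ignore")
            bssid = pkt[Dot11].addr3
            if ssid and bssid not in seen[ssid]:
                seen[ssid].add(bssid)
                if len(seen[ssid]) > 1:
                    # Same network name from a second radio: possibly an evil
                    # twin, possibly just another AP in a legitimate network.
                    print(f"SSID {ssid!r} seen from {len(seen[ssid])} BSSIDs: {sorted(seen[ssid])}")

    # Requires root privileges and a monitor-mode interface.
    sniff(iface="wlan0mon", prn=watch_beacons, store=False)
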
https://en.wikipedia.org/wiki/Evil_twin_(wireless_networks)
A hotspot gateway is a device that provides authentication, authorization and accounting for a wireless network. This can keep malicious users off of a private network even in the event that they are able to break the encryption.[1] A wireless hotspot gateway helps solve guest-user connectivity problems by offering instant Internet access without the need for configuration changes to the client computer or any resident client-side software. This means that even if client configuration such as the network IP address (including gateway IP and DNS) or HTTP proxy settings differs from that of the provided network, the client can still get access to the network instantly with its existing network configuration. Prominent hotspot gateway brands include WiJungle, Nomadix, and Wavertech.
https://en.wikipedia.org/wiki/Hotspot_gateway
IEEE 802.11 is part of the IEEE 802 set of local area network (LAN) technical standards, and specifies the set of medium access control (MAC) and physical layer (PHY) protocols for implementing wireless local area network (WLAN) computer communication. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand and are the world's most widely used wireless computer networking standards. IEEE 802.11 is used in most home and office networks to allow laptops, printers, smartphones, and other devices to communicate with each other and access the Internet without connecting wires. IEEE 802.11 is also a basis for vehicle-based communication networks with IEEE 802.11p. The standards are created and maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had subsequent amendments. While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote the capabilities of their products. As a result, in the marketplace, each revision tends to become its own standard. 802.11x is a shorthand for "any version of 802.11", to avoid confusion with "802.11" used specifically for the original 1997 version. IEEE 802.11 uses various frequencies including, but not limited to, the 2.4 GHz, 5 GHz, 6 GHz, and 60 GHz frequency bands. Although IEEE 802.11 specifications list channels that might be used, the allowed radio frequency spectrum availability varies significantly by regulatory domain. The protocols are typically used in conjunction with IEEE 802.2, are designed to interwork seamlessly with Ethernet, and are very often used to carry Internet Protocol traffic. The 802.11 family consists of a series of half-duplex over-the-air modulation techniques that use the same basic protocol. The 802.11 protocol family employs carrier-sense multiple access with collision avoidance (CSMA/CA), whereby equipment listens to a channel for other users (including non-802.11 users) before transmitting each frame (some use the term "packet", which may be ambiguous: "frame" is more technically correct). 802.11-1997 was the first wireless networking standard in the family, but 802.11b was the first widely accepted one, followed by 802.11a, 802.11g, 802.11n, 802.11ac, and 802.11ax. Other standards in the family (c–f, h, j) are service amendments that are used to extend the current scope of the existing standard; these amendments may also include corrections to a previous specification.[4] 802.11b and 802.11g use the 2.4 GHz ISM band, operating in the United States under Part 15 of the U.S. Federal Communications Commission Rules and Regulations. 802.11n can also use that 2.4 GHz band. Because of this choice of frequency band, 802.11b/g/n equipment may occasionally suffer interference in the 2.4 GHz band from microwave ovens, cordless telephones, and Bluetooth devices. 802.11b and 802.11g control their interference and susceptibility to interference by using direct-sequence spread spectrum (DSSS) and orthogonal frequency-division multiplexing (OFDM) signaling methods, respectively. 802.11a uses the 5 GHz U-NII band which, for much of the world, offers at least 23 non-overlapping, 20-MHz-wide channels. This is an advantage over the 2.4 GHz ISM band, which offers only three non-overlapping, 20-MHz-wide channels, as other adjacent channels overlap (see: list of WLAN channels).
Better or worse performance with higher or lower frequencies (channels) may be realized, depending on the environment. 802.11n and 802.11ax can use either the 2.4 GHz or 5 GHz band; 802.11ac uses only the 5 GHz band. The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US, 802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the FCC Rules and Regulations. Frequencies used by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content or encryption.[5] In 2018, the Wi-Fi Alliance began using a consumer-friendly generation numbering scheme for the publicly used 802.11 protocols. Wi-Fi generations 1–8 use the 802.11b, 802.11a, 802.11g, 802.11n, 802.11ac, 802.11ax, 802.11be and 802.11bn protocols, in that order.[6][7] 802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band[4] for unlicensed use.[8] In 1991 NCR Corporation/AT&T (now Nokia Labs and LSI Corporation) invented a precursor to 802.11 in Nieuwegein, the Netherlands. The inventors initially intended to use the technology for cashier systems. The first wireless products were brought to the market under the name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s. Vic Hayes, who held the chair of IEEE 802.11 for 10 years and has been called the "father of Wi-Fi", was involved in designing the initial 802.11b and 802.11a standards within the IEEE.[9] He, along with Bell Labs engineer Bruce Tuch, approached the IEEE to create a standard.[10] In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold.[11] The major commercial breakthrough came with Apple's adoption of Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort.[12][13][14] One year later IBM followed with its ThinkPad 1300 series in 2000.[15] The original version of the standard IEEE 802.11 was released in 1997 and clarified in 1999, but is now obsolete. It specified two net bit rates of 1 or 2 megabits per second (Mbit/s), plus forward error correction code. It specified three alternative physical layer technologies: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band. Legacy 802.11 with direct-sequence spread spectrum was rapidly supplanted and popularized by 802.11b. 802.11a, published in 1999, uses the same data link layer protocol and frame format as the original standard, but an OFDM-based air interface (physical layer) was added. It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields realistic net achievable throughput in the mid-20 Mbit/s range.[36] It has seen widespread worldwide implementation, particularly within the corporate workspace. Since the 2.4 GHz band is heavily used to the point of being crowded, using the relatively unused 5 GHz band gives 802.11a a significant advantage.
However, this high carrier frequency also brings a disadvantage: the effective overall range of 802.11a is less than that of 802.11b/g. In theory, 802.11a signals are absorbed more readily by walls and other solid objects in their path due to their smaller wavelength and, as a result, cannot penetrate as far as those of 802.11b. In practice, 802.11b typically has a higher range at low speeds (802.11b will reduce speed to 5.5 Mbit/s or even 1 Mbit/s at low signal strengths). 802.11a also suffers from interference,[37] but locally there may be fewer signals to interfere with, resulting in less interference and better throughput. The 802.11b standard has a maximum raw data rate of 11 Mbit/s and uses the same media access method defined in the original standard. 802.11b products appeared on the market in early 2000, since 802.11b is a direct extension of the modulation technique defined in the original standard. The dramatic increase in throughput of 802.11b (compared to the original standard), along with simultaneous substantial price reductions, led to the rapid acceptance of 802.11b as the definitive wireless LAN technology. Devices using 802.11b experience interference from other products operating in the 2.4 GHz band. Devices operating in the 2.4 GHz range include microwave ovens, Bluetooth devices, baby monitors, cordless telephones, and some amateur radio equipment. As unlicensed intentional radiators in this ISM band, they must not interfere with, and must tolerate interference from, primary or secondary allocations (users) of this band, such as amateur radio. In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4 GHz band (like 802.11b) but uses the same OFDM-based transmission scheme as 802.11a. It operates at a maximum physical layer bit rate of 54 Mbit/s, exclusive of forward error correction codes, or about 22 Mbit/s average throughput.[38] 802.11g hardware is fully backward compatible with 802.11b hardware, and is therefore encumbered with legacy issues that reduce throughput by ~21% when compared to 802.11a.[citation needed] The then-proposed 802.11g standard was rapidly adopted in the market starting in January 2003, well before ratification, due to the desire for higher data rates as well as reductions in manufacturing costs.[citation needed] By summer 2003, most dual-band 802.11a/b products became dual-band/tri-mode, supporting a and b/g in a single mobile adapter card or access point. Details of making b and g work well together occupied much of the lingering technical process; in an 802.11g network, however, the activity of an 802.11b participant will reduce the data rate of the overall 802.11g network. Like 802.11b, 802.11g devices also suffer interference from other products operating in the 2.4 GHz band, for example, wireless keyboards. In 2003, task group TGma was authorized to "roll up" many of the amendments to the 1999 version of the 802.11 standard. REVma or 802.11ma, as it was called, created a single document that merged eight amendments (802.11a, b, d, e, g, h, i, j) with the base standard. Upon approval on 8 March 2007, 802.11REVma was renamed to the then-current base standard IEEE 802.11-2007.[39] 802.11n is an amendment that improves upon the previous 802.11 standards; its first draft of certification was published in 2006. The 802.11n standard was retroactively labelled as Wi-Fi 4 by the Wi-Fi Alliance.[40][41] The standard added support for multiple-input multiple-output antennas (MIMO).
802.11n operates on both the 2.4 GHz and the 5 GHz bands. Support for 5 GHz bands is optional. Its net data rate ranges from 54 Mbit/s to 600 Mbit/s. The IEEE approved the amendment, and it was published in October 2009.[42][43] Prior to the final ratification, enterprises were already migrating to 802.11n networks based on the Wi-Fi Alliance's certification of products conforming to a 2007 draft of the 802.11n proposal. Early Intel Wi-Fi cards were not compatible with the final standard. Many rival access points and cards also did not support 5 GHz at all.[citation needed] In May 2007, task group TGmb was authorized to "roll up" many of the amendments to the 2007 version of the 802.11 standard.[44] REVmb or 802.11mb, as it was called, created a single document that merged ten amendments (802.11k, r, y, n, w, p, z, v, u, s) with the 2007 base standard. In addition, much cleanup was done, including a reordering of many of the clauses.[45] Upon publication on 29 March 2012, the new standard was referred to as IEEE 802.11-2012. IEEE 802.11ac-2013 is an amendment to IEEE 802.11, published in December 2013, that builds on 802.11n.[46] The 802.11ac standard was retroactively labelled as Wi-Fi 5 by the Wi-Fi Alliance.[40][41] Changes compared to 802.11n include wider channels (80 or 160 MHz versus 40 MHz) in the 5 GHz band, more spatial streams (up to eight versus four), higher-order modulation (up to 256-QAM vs. 64-QAM), and the addition of multi-user MIMO (MU-MIMO). The Wi-Fi Alliance separated the introduction of ac wireless products into two phases ("waves"), named "Wave 1" and "Wave 2".[47][48] From mid-2013, the alliance started certifying Wave 1 802.11ac products shipped by manufacturers, based on the IEEE 802.11ac Draft 3.0 (the IEEE standard was not finalized until later that year).[49] In 2016 the Wi-Fi Alliance introduced the Wave 2 certification, to provide higher bandwidth and capacity than Wave 1 products. Wave 2 products include additional features like MU-MIMO, 160 MHz channel width support, support for more 5 GHz channels, and four spatial streams (with four antennas, compared to three in Wave 1 and 802.11n, and eight in IEEE's 802.11ax specification).[50][51] IEEE 802.11ad is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. This frequency band has significantly different propagation characteristics than the 2.4 GHz and 5 GHz bands where Wi-Fi networks operate. Products implementing the 802.11ad standard are sold under the WiGig brand name, with a certification program developed by the Wi-Fi Alliance.[52] The peak transmission rate of 802.11ad is 7 Gbit/s.[53] IEEE 802.11ad is a protocol used for very high data rates (about 8 Gbit/s) and for short-range communication (about 1–10 meters).[54] TP-Link announced the world's first 802.11ad router in January 2016.[55] As of 2021, the WiGig standard has been published; it was announced in 2009 and added to the IEEE 802.11 family in December 2012.
IEEE 802.11af, also referred to as "White-Fi" and "Super Wi-Fi",[56] is an amendment, approved in February 2014, that allows WLAN operation in TV white space spectrum in the VHF and UHF bands between 54 and 790 MHz.[57][58] It uses cognitive radio technology to transmit on unused TV channels, with the standard taking measures to limit interference for primary users, such as analog TV, digital TV, and wireless microphones.[58] Access points and stations determine their position using a satellite positioning system such as GPS, and use the Internet to query a geolocation database (GDB) provided by a regional regulatory agency to discover what frequency channels are available for use at a given time and position.[58] The physical layer uses OFDM and is based on 802.11ac.[59] The propagation path loss, as well as the attenuation by materials such as brick and concrete, is lower in the UHF and VHF bands than in the 2.4 GHz and 5 GHz bands, which increases the possible range.[58] The frequency channels are 6 to 8 MHz wide, depending on the regulatory domain.[58] Up to four channels may be bonded in either one or two contiguous blocks.[58] MIMO operation is possible with up to four streams used for either space–time block code (STBC) or multi-user (MU) operation.[58] The achievable data rate per spatial stream is 26.7 Mbit/s for 6 and 7 MHz channels, and 35.6 Mbit/s for 8 MHz channels.[34] With four spatial streams and four bonded channels, the maximum data rate is 426.7 Mbit/s for 6 and 7 MHz channels and 568.9 Mbit/s for 8 MHz channels.[34] IEEE 802.11-2016, which was known as IEEE 802.11 REVmc,[60] is a revision based on IEEE 802.11-2012, incorporating five amendments (11ae, 11aa, 11ad, 11ac, 11af). In addition, existing MAC and PHY functions were enhanced and obsolete features were removed or marked for removal. Some clauses and annexes were renumbered.[61] IEEE 802.11ah, published in 2017,[62] defines a WLAN system operating at sub-1 GHz license-exempt bands. Due to the favorable propagation characteristics of the low-frequency spectra, 802.11ah can provide improved transmission range compared with the conventional 802.11 WLANs operating in the 2.4 GHz and 5 GHz bands. 802.11ah can be used for various purposes, including large-scale sensor networks,[63] extended-range hotspots, and outdoor Wi-Fi for cellular WAN carrier traffic offloading, although the available bandwidth is relatively narrow. The protocol is intended to have power consumption competitive with low-power Bluetooth, at a much wider range.[64] IEEE 802.11ai is an amendment to the 802.11 standard that added new mechanisms for a faster initial link setup time.[65] IEEE 802.11aj is a derivative of 802.11ad for use in the 45 GHz unlicensed spectrum available in some regions of the world (specifically China); it also provides additional capabilities for use in the 60 GHz band.[65] It is alternatively known as China Millimeter Wave (CMMW).[66] IEEE 802.11aq is an amendment to the 802.11 standard that will enable pre-association discovery of services. This extends some of the mechanisms in 802.11u that enabled device discovery to further discover the services running on a device, or provided by a network.[65] IEEE 802.11-2020, which was known as IEEE 802.11 REVmd,[67] is a revision based on IEEE 802.11-2016, incorporating five amendments (11ai, 11ah, 11aj, 11ak, 11aq). In addition, existing MAC and PHY functions were enhanced and obsolete features were removed or marked for removal.
Some clauses and annexes have been added.[68] IEEE 802.11ax is the successor to 802.11ac, marketed as Wi-Fi 6 (2.4 GHz and 5 GHz)[69] and Wi-Fi 6E (6 GHz)[70] by the Wi-Fi Alliance. It is also known as High Efficiency Wi-Fi, for the overall improvements to Wi-Fi 6 clients in dense environments.[71] For an individual client, the maximum improvement in data rate (PHY speed) over the predecessor (802.11ac) is only 39%[d] (for comparison, this improvement was nearly 500%[e][i] for the predecessors).[f] Yet, even with this comparatively minor 39% figure, the goal was to provide 4 times the throughput-per-area[g] of 802.11ac (hence High Efficiency). The motivation behind this goal was the deployment of WLAN in dense environments such as corporate offices, shopping malls, and dense residential apartments.[71] This is achieved by means of a technique called OFDMA, which is essentially multiplexing in the frequency domain (as opposed to spatial multiplexing, as in 802.11ac). This is equivalent to cellular technology applied to Wi-Fi.[71] The IEEE 802.11ax-2021 standard was approved on February 9, 2021.[74][75] IEEE 802.11ay is a standard that is being developed, also called EDMG: Enhanced Directional MultiGigabit PHY. It is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. It will be an extension of the existing 11ad, aimed at extending the throughput, range, and use cases. The main use cases include indoor operation and short-range communications, due to atmospheric oxygen absorption and the inability to penetrate walls. The peak transmission rate of 802.11ay is 40 Gbit/s.[76] The main extensions include channel bonding (2, 3, and 4 channels), MIMO (up to 4 streams), and higher modulation schemes. The expected range is 300–500 m.[77] IEEE 802.11ba Wake-up Radio (WUR) Operation is an amendment to the IEEE 802.11 standard that enables energy-efficient operation for data reception without increasing latency.[78] The target active power consumption to receive a WUR packet is less than 1 milliwatt; supported data rates are 62.5 kbit/s and 250 kbit/s. The WUR PHY uses MC-OOK (multicarrier OOK) to achieve extremely low power consumption.[79] IEEE 802.11bb is a networking protocol standard in the IEEE 802.11 set of protocols that uses infrared light for communications.[80] IEEE 802.11be Extremely High Throughput (EHT) is the potential next amendment to the IEEE 802.11 standard,[81] and will likely be designated as Wi-Fi 7.[82][83] It will build upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4 GHz, 5 GHz, and 6 GHz frequency bands. Across all variations of 802.11, maximum achievable throughputs are given either based on measurements under ideal conditions or as layer-2 data rates. However, this does not apply to typical deployments, in which data is transferred between two endpoints, at least one of which is typically connected to a wired infrastructure while the other is connected to the infrastructure via a wireless link. This means that, typically, data frames pass an 802.11 (WLAN) medium and are converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the application's packet size determines the speed of the data transfer. This means applications that use small packets (e.g., VoIP) create dataflows with high-overhead traffic (i.e., a low goodput).
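A rough way to see this packet-size effect is to compare payload to per-frame overhead. The sketch below is illustrative only; the 80-byte overhead is a hypothetical round figure standing in for MAC/PHY headers and other per-frame costs, not a value taken from the standard:

    def goodput_fraction(payload_bytes, overhead_bytes=80):
        # Fraction of transmitted bytes that is application payload;
        # `overhead_bytes` is a made-up stand-in for per-frame header costs.
        return payload_bytes / (payload_bytes + overhead_bytes)

    # Small VoIP-like frames waste proportionally far more of the airtime
    # than large bulk-transfer frames (about 56% vs. 95% useful here).
    for size in (100, 500, 1500):
        print(f"{size:5d}-byte payload: {goodput_fraction(size):.0%} useful")
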
Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e., the data rate) and the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices.[84][85] The same references apply to the attached graphs that show measurements of UDP throughput. Each represents an average (UDP) throughput (note that the error bars are there but barely visible due to the small variation) of 25 measurements. Each is with a specific packet size (small or large) and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. These figures assume there are no packet errors, which, if occurring, will lower the transmission rate further. 802.11b, 802.11g, and 802.11n-2.4 utilize the 2.400–2.500 GHz spectrum, one of the ISM bands. 802.11a, 802.11n, and 802.11ac use the more heavily regulated 4.915–5.825 GHz band. These are commonly referred to as the "2.4 GHz and 5 GHz bands" in most sales literature. Each spectrum is sub-divided into channels with a center frequency and bandwidth, analogous to how radio and TV broadcast bands are sub-divided. The 2.4 GHz band is divided into 14 channels spaced 5 MHz apart, beginning with channel 1, which is centered on 2.412 GHz. The latter channels have additional restrictions or are unavailable for use in some regulatory domains. The channel numbering of the 5.725–5.875 GHz spectrum is less intuitive due to the differences in regulations between countries. These are discussed in greater detail on the list of WLAN channels. In addition to specifying the channel center frequency, 802.11 also specifies (in Clause 17) a spectral mask defining the permitted power distribution across each channel. The mask requires the signal to be attenuated a minimum of 20 dB from its peak amplitude at ±11 MHz from the center frequency, the point at which a channel is effectively 22 MHz wide. One consequence is that stations can use only every fourth or fifth channel without overlap. Availability of channels is regulated by country, constrained in part by how each country allocates radio spectrum to various services. At one extreme, Japan permits the use of all 14 channels for 802.11b, and channels 1–13 for 802.11g/n-2.4. Other countries such as Spain initially allowed only channels 10 and 11, and France allowed only 10, 11, 12, and 13; however, Europe now allows channels 1 through 13.[86][87] North America and some Central and South American countries allow only channels 1 through 11. Since the spectral mask defines only power output restrictions up to ±11 MHz from the center frequency (to be attenuated by −50 dBr), it is often assumed that the energy of the channel extends no further than these limits. It is more correct to say that the overlapping signal on any channel should be sufficiently attenuated to interfere minimally with a transmitter on any other channel, given the separation between channels. Due to the near–far problem, a transmitter can impact (desensitize) a receiver on a "non-overlapping" channel, but only if it is close to the victim receiver (within a meter) or operating above allowed power levels. Conversely, a sufficiently distant transmitter on an overlapping channel can have little to no significant effect. Confusion often arises over the amount of channel separation required between transmitting devices; the sketch below makes the arithmetic concrete.
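Using the 2.4 GHz numbering given above (channel n centered at 2407 + 5n MHz for channels 1–13, with channel 14 as a special case at 2484 MHz) and the effective 22 MHz channel width, a small sketch can compute center frequencies and flag overlapping pairs:

    def center_mhz(channel):
        # Center frequency of a 2.4 GHz band channel, in MHz.
        if channel == 14:          # special case, permitted only in some domains
            return 2484
        return 2407 + 5 * channel  # channels 1-13: 5 MHz spacing from 2412 MHz

    def overlaps(a, b, width_mhz=22):
        # True if two channels' ~22 MHz-wide signals overlap.
        return abs(center_mhz(a) - center_mhz(b)) < width_mhz

    print(center_mhz(1), center_mhz(6), center_mhz(11))  # 2412 2437 2462
    print(overlaps(1, 6))   # False: 25 MHz apart, the classic non-overlapping set
    print(overlaps(1, 4))   # True: 15 MHz apart, still overlapping
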
802.11b was based on direct-sequence spread spectrum (DSSS) modulation and utilized a channel bandwidth of 22 MHz, resulting in three "non-overlapping" channels (1, 6, and 11). 802.11g was based on OFDM modulation and utilized a channel bandwidth of 20 MHz. This occasionally leads to the belief that four "non-overlapping" channels (1, 5, 9, and 13) exist under 802.11g. However, this is not the case, as per section 17.4.6.3, Channel Numbering of operating channels, of IEEE Std 802.11 (2012), which states, "In a multiple cell network topology, overlapping and/or adjacent cells using different channels can operate simultaneously without interference if the distance between the center frequencies is at least 25 MHz,"[88] and section 18.3.9.3 and Figure 18-13. This does not mean that the technical overlap of the channels implies that overlapping channels should not be used. The amount of inter-channel interference seen on a configuration using channels 1, 5, 9, and 13 (which is permitted in Europe, but not in North America) is barely different from that of a three-channel configuration, but with an entire extra channel.[89][90] However, overlap between channels with narrower spacing (e.g., 1, 4, 7, 11 in North America) may cause unacceptable degradation of signal quality and throughput, particularly when users transmit near the boundaries of AP cells.[91] IEEE uses the phrase regdomain to refer to a legal regulatory region. Different countries define different levels of allowable transmitter power, time that a channel can be occupied, and different available channels.[92] Domain codes are specified for the United States, Canada, ETSI (Europe), Spain, France, Japan, and China. Most Wi-Fi certified devices default to regdomain 0, which means least-common-denominator settings, i.e., the device will not transmit at a power above the allowable power in any nation, nor will it use frequencies that are not permitted in any nation.[citation needed] The regdomain setting is often made difficult or impossible to change so that the end-users do not conflict with local regulatory agencies such as the United States' Federal Communications Commission.[citation needed] The datagrams are called frames. Current 802.11 standards specify frame types for use in the transmission of data as well as the management and control of wireless links. Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, payload, and frame check sequence (FCS). Some frames do not have payloads. The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. This frame control field is subdivided into several sub-fields. The next two bytes are reserved for the Duration/ID field, indicating how long the field's transmission will take, so other devices know when the channel will be available again. This field can take one of three forms: Duration, Contention-Free Period (CFP), and Association ID (AID). An 802.11 frame can have up to four address fields. Each field can carry a MAC address. Address 1 is the receiver, Address 2 is the transmitter, and Address 3 is used for filtering purposes by the receiver. Address 4 is only present in data frames transmitted between access points in an Extended Service Set or between intermediate nodes in a mesh network. The payload or frame body field is variable in size, from 0 to 2304 bytes plus any overhead from security encapsulation, and contains information from higher layers.
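As an illustration of the header layout just described, the sketch below unpacks the two-byte frame control field from the start of a raw frame. The bit layout (protocol version, type, subtype, and flag bits) follows the standard's ordering, but the sample bytes are made up:

    def parse_frame_control(frame: bytes) -> dict:
        # Decode the 2-byte frame control field at the start of the MAC header.
        fc = int.from_bytes(frame[0:2], "little")  # least-significant byte first
        return {
            "protocol_version": fc & 0b11,    # bits 0-1, currently always 0
            "type": (fc >> 2) & 0b11,         # bits 2-3: 0=management, 1=control, 2=data
            "subtype": (fc >> 4) & 0b1111,    # bits 4-7, e.g. 8 = beacon for management
            "to_ds": bool(fc >> 8 & 1),       # frame headed toward the distribution system
            "from_ds": bool(fc >> 9 & 1),     # frame leaving the distribution system
            "retry": bool(fc >> 11 & 1),      # retransmission of an earlier frame
            "protected": bool(fc >> 14 & 1),  # frame body is encrypted
        }

    # Made-up example: 0x80 0x00 is the frame control of a typical beacon
    # (management frame, subtype 8, no flags set).
    print(parse_frame_control(bytes([0x80, 0x00])))
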
The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11 frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for integrity checks of retrieved frames. As frames are about to be sent, the FCS is calculated and appended. When a station receives a frame, it can calculate the FCS of the frame and compare it to the one received. If they match, it is assumed that the frame was not distorted during transmission.[95] Management frames are not always authenticated, and allow for the maintenance, or discontinuance, of communication. Common 802.11 management subtypes include beacon, probe request and response, association request and response, authentication, and deauthentication frames. The body of a management frame consists of frame-subtype-dependent fixed fields followed by a sequence of information elements (IEs). An IE consists of a one-byte element ID, a one-byte length field, and a variable-length data field. Control frames facilitate the exchange of data frames between stations; common 802.11 control frames include request to send (RTS), clear to send (CTS), and acknowledgement (ACK) frames. Data frames carry packets from web pages, files, etc. within the body.[96] The body begins with an IEEE 802.2 header, with the Destination Service Access Point (DSAP) specifying the protocol, followed by a Subnetwork Access Protocol (SNAP) header if the DSAP is hex AA, with the organizationally unique identifier (OUI) and protocol ID (PID) fields specifying the protocol. If the OUI is all zeroes, the protocol ID field is an EtherType value.[97] Almost all 802.11 data frames use 802.2 and SNAP headers, and most use an OUI of 00:00:00 and an EtherType value. Similar to TCP congestion control on the internet, frame loss is built into the operation of 802.11. To select the correct transmission speed or modulation and coding scheme, a rate control algorithm may test different speeds. The actual packet loss rate of access points varies widely for different link conditions; loss rates on production access points range between 10% and 80%, with 30% being a common average.[98] The link layer is expected to recover these lost frames: if the sender does not receive an acknowledgement (ACK) frame, the frame is resent.
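Since the FCS is a conventional CRC-32 and information elements are simple ID/length/value records, both can be illustrated in a few lines of Python. This is a sketch, not a full 802.11 parser; in a real management frame the IEs begin only after the subtype-dependent fixed fields:

    import zlib

    def fcs_ok(frame: bytes) -> bool:
        """Verify the trailing 4-byte FCS (CRC-32, stored little-endian)."""
        body, fcs = frame[:-4], int.from_bytes(frame[-4:], "little")
        return (zlib.crc32(body) & 0xFFFFFFFF) == fcs

    def parse_ies(body: bytes):
        """Yield (element ID, data) pairs from a run of information elements."""
        i = 0
        while i + 2 <= len(body):
            elem_id, length = body[i], body[i + 1]
            yield elem_id, body[i + 2 : i + 2 + length]
            i += 2 + length

    frame = b"example-frame-body"
    frame += zlib.crc32(frame).to_bytes(4, "little")  # append a valid FCS
    print(fcs_ok(frame))                     # True
    print(list(parse_ies(b"\x00\x04Home")))  # [(0, b'Home')]; element ID 0 is the SSID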
Within the IEEE 802.11 Working Group,[57] a number of IEEE Standards Association standards and amendments exist. 802.11F and 802.11T are recommended practices rather than standards and are capitalized as such. 802.11m is used for standard maintenance. 802.11ma was completed for 802.11-2007, 802.11mb for 802.11-2012, 802.11mc for 802.11-2016, 802.11md for 802.11-2020, and 802.11me for 802.11-2024. Both the terms "standard" and "amendment" are used when referring to the different variants of IEEE standards.[100] As far as the IEEE Standards Association is concerned, there is only one current standard; it is denoted by IEEE 802.11 followed by the date published. IEEE 802.11-2024 is the only version currently in publication, superseding previous releases. The standard is updated by means of amendments. Amendments are created by task groups (TG). Both the task group and its finished document are denoted by 802.11 followed by one or two lower-case letters, for example, IEEE 802.11a or IEEE 802.11ax. Updating 802.11 is the responsibility of task group m. In order to create a new version, TGm combines the previous version of the standard and all published amendments. TGm also provides clarification and interpretation to industry on published documents. New versions of the IEEE 802.11 standard were published in 1999, 2007, 2012, 2016, 2020, and 2024.[101][102] Various terms in 802.11 are used to specify aspects of wireless local-area networking operation and may be unfamiliar to some readers. For example, time unit (usually abbreviated TU) is used to indicate a unit of time equal to 1024 microseconds. Numerous time constants are defined in terms of TU (rather than the nearly equal millisecond). Also, the term portal is used to describe an entity that is similar to an 802.1H bridge. A portal provides access to the WLAN for non-802.11 LAN stations. In 2001, a group from the University of California, Berkeley presented a paper describing weaknesses in the 802.11 Wired Equivalent Privacy (WEP) security mechanism defined in the original standard; they were followed by Fluhrer, Mantin, and Shamir's paper titled "Weaknesses in the Key Scheduling Algorithm of RC4". Not long after, Adam Stubblefield and AT&T publicly announced the first verification of the attack. In the attack, they were able to intercept transmissions and gain unauthorized access to wireless networks.[103] The IEEE set up a dedicated task group to create a replacement security solution, 802.11i (previously, this work was handled as part of a broader 802.11e effort to enhance the MAC layer). The Wi-Fi Alliance announced an interim specification called Wi-Fi Protected Access (WPA), based on a subset of the then-current IEEE 802.11i draft; products based on it started to appear in mid-2003. IEEE 802.11i (also known as WPA2) itself was ratified in June 2004, and uses the Advanced Encryption Standard (AES) instead of RC4, which was used in WEP. The modern recommended encryption for the home/consumer space is WPA2 (AES pre-shared key), and for the enterprise space it is WPA2 along with a RADIUS authentication server (or another type of authentication server) and a strong authentication method such as EAP-TLS. In January 2005, the IEEE set up yet another task group, "w", to protect management and broadcast frames, which previously were sent unsecured. Its standard was published in 2009.[104] In December 2011, a security flaw was revealed that affects some wireless routers with a specific implementation of the optional Wi-Fi Protected Setup (WPS) feature. While WPS is not a part of 802.11, the flaw allows an attacker within range of the wireless router to recover the WPS PIN and, with it, the router's 802.11i password in a few hours.[105][106] In late 2014, Apple announced that its iOS 8 mobile operating system would scramble MAC addresses during the pre-association stage to thwart retail footfall tracking made possible by the regular transmission of uniquely identifiable probe requests.[107] Android 8.0 "Oreo" introduced a similar feature, named "MAC randomization".[108] Wi-Fi users may be subjected to a Wi-Fi deauthentication attack to eavesdrop, attack passwords, or force the use of another, usually more expensive, access point.
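The "MAC randomization" countermeasure mentioned above amounts to generating a fresh, locally administered unicast address for probe requests. Below is a minimal sketch of the idea; how any given operating system actually derives its randomized addresses is not specified here:

    import random

    def random_mac() -> str:
        """Generate a random locally administered, unicast MAC address."""
        octets = [random.randrange(256) for _ in range(6)]
        # Set the locally-administered bit and clear the multicast bit
        # in the first octet, as required for such addresses.
        octets[0] = (octets[0] | 0x02) & ~0x01
        return ":".join(f"{o:02x}" for o in octets)

    print(random_mac())  # e.g. "3a:1f:0c:9d:42:7e"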
https://en.wikipedia.org/wiki/IEEE_802.11
Laws regarding "unauthorized access of a computer network" exist in many legal codes, though the wording and meaning differ from one to the next. However, the interpretation of terms like "access" and "authorization" is not clear, and there is no general agreement on whether piggybacking (intentional access of an open Wi-Fi network without harmful intent) falls under this classification.[1] Some jurisdictions prohibit it, some permit it, and in others it is not well defined. For example, a common but untested argument is that the 802.11 and DHCP protocols operate on behalf of the owner, implicitly requesting permission to access the network, which the wireless router then authorizes. (This would not apply if the user has other reason to know that their use is unauthorized, such as a written or unwritten notice.) In addition to laws against unauthorized access on the user side, there are the issues of breach of contract with the Internet service provider on the network owner's side. Many terms of service prohibit bandwidth sharing with others, though others allow it. The Electronic Frontier Foundation maintains a list of ISPs[2] that allow sharing of the Wi-Fi signal. Under Australian law, "unauthorized access, modification or impairment" of data held in a computer system is a federal offence under the Criminal Code Act 1995.[3] The act refers specifically to data as opposed to network resources (connection). In Canadian law, unauthorized access is addressed in the Criminal Code, s 342.1, which provides that "Every one who, fraudulently and without colour of right" obtains "computer services" from an access point is subject to criminal charges.[4] Section 326(1) of the Criminal Code may also be used to address unauthorized access of a computer network: '(1) Every one commits theft who fraudulently, maliciously, or without colour of right', '(b) uses any telecommunication facility or obtains any telecommunication service.' In Morrisburg, Ontario in 2006, a man was arrested under section 326 of the Criminal Code. The arrest was poorly reported, and there does not seem to be any information available regarding a conviction.[5] In September 2016, the European Court of Justice decided in "McFadden" (C-484/14)[6] that "a businessman providing a public wifi network is not responsible for copyright infringement incurred by users. But he can be ordered to protect the network with a password, to prevent copyright infringement".[7] The Electronic Frontier Foundation had lobbied against requiring passwords.[8] Under Hong Kong law, Chapter 200 (Crimes Ordinance), Section 161, "Access to computer with criminal or dishonest intent", criminalizes obtaining access to a computer with such intent. Unauthorized access to a protected system is illegal.[9] On 28 April 2017, the Tokyo District Court ruled that accessing a wireless LAN network without authorization is not a crime, even if the network is protected with a password. The case brought before the court involved a man named Hiroshi Fujita, who was accused of accessing a neighbor's Wi-Fi network without authorization, sending virus-infected emails, and then using the network to steal internet banking information and transfer funds to his own bank account without authorization. Fujita was found guilty of most of what he was accused of and sentenced to 8 years in prison.
Regarding the unauthorized access of wireless networks, prosecutors argued that Wi-Fi passwords fall under the category of "secrets of wireless transmission" (無線通信の秘密) and that therefore obtaining and using passwords without permission of the network operator would constitute unauthorized use of wireless transmission secrets, which is prohibited by law. However, the court found the defendant not guilty on that point, stating in its ruling that Wi-Fi passwords do not fall under that category and that therefore the unauthorized obtainment of passwords and subsequent accessing of protected wireless networks is not a crime.[10] Although Russian criminal law does not explicitly forbid access to another person's network, there is a common judicial practice of qualifying these cases as "unauthorized access to information" (a broadly interpreted concept in Russian law regarding computer crimes) under Article 272 of the Criminal Code. To construct the accusation, the ISP's billing data is treated as the information that has been accessed. In addition, if the defendant has used any program (a network scanner, for example) to access the network, he may also be charged under Article 273 (creation, usage, and distribution of malware). Piggybacking on another person's unsecured wireless network is illegal in Singapore under section 6(1)(a) of the Computer Misuse and Cybersecurity Act.[11][12] The offender is liable to a fine of $10,000, imprisonment for up to 3 years, or both.[11][12] In November 2006, 17-year-old Garyl Tan Jia Luo was arrested for tapping into his neighbour's wireless Internet connection.[13] On 19 December, Tan pleaded guilty to the charge,[14] and on 16 January 2007 he became the first person in Singapore to be convicted of the offense. He was sentenced by the Community Court to 18 months' probation, half of which was to be served at a boys' home. For the remaining nine months, he had to stay indoors from 10:00 pm to 6:00 am. He was also sentenced to 80 hours of community service and banned from using the Internet for 18 months; his parents risked forfeiting a S$5,000 bond if he failed to abide by the ban. Tan was also given the option of enlisting early for National Service; if he did so, he would not have to serve whatever remained of his sentence.[15][16] On 4 January 2007, Lin Zhenghuang was charged with using his neighbour's unsecured wireless network to post a bomb hoax online. In July 2005, Lin had posted a message entitled "Breaking News – Toa Payoh Hit by Bomb Attacks" on a forum managed by HardwareZone. Alarmed by the message, a user reported it to the authorities through the Government of Singapore's eCitizen[17] website. Lin faced an additional 60 charges for having used his notebook computer to repeatedly access the wireless networks of nine people in his neighborhood.[18][19] Lin pleaded guilty to one charge under the Telecommunications Act[20] and another nine under the Computer Misuse Act on 31 January. He apologised for his actions, claiming he had acted out of "stupidness" and not due to any "malicious or evil intent".[19] On 7 February he was sentenced by District Judge Francis Tseng to three months' jail and a S$4,000 fine.
The judge also set sentencing guidelines for future 'mooching' cases, stating that offenders would be liable to fines and not to imprisonment unless offences were "committed in order to facilitate the commission of or to avoid detection for some more serious offence", as was so in Lin's case.[21][22] The Computer Misuse Act 1990, section 1(1), makes a person guilty of an offence if they cause a computer to perform a function with intent to secure unauthorised access to any program or data, knowing at the time that the access is unauthorised.[23] In London in 2005, Gregory Straszkiewicz was the first person to be convicted of a related crime, "dishonestly obtaining an electronic communications service" (under s.125 of the Communications Act 2003). Local residents complained that he was repeatedly trying to gain access to residential networks with a laptop from a car. There was no evidence that he had any other criminal intent.[24] He was fined £500 and given a 12-month conditional discharge.[25] In early 2006, two other individuals were arrested and received an official caution for "dishonestly obtaining electronic communications services with intent to avoid payment".[26][27] There are federal and state laws (in all 50 states) addressing the issue of unauthorized access to wireless networks.[1] The laws vary widely between states. Some criminalize the mere unauthorized access of a network, while others require monetary damages for intentional breaching of security features. The majority of state laws do not specify what is meant by "unauthorized access". Regardless, enforcement is minimal in most states even where it is illegal, and detection is difficult in many cases.[1][28] Some portable devices, such as the Apple iPad and iPod Touch, allow casual use of open Wi-Fi networks as a basic feature, and often identify the presence of specific access points within the vicinity for user geolocation. In St. Petersburg, Florida, in 2005, Benjamin Smith III was arrested and charged with "unauthorized access to a computer network", a third-degree felony in the state of Florida, after using a resident's wireless network from a car parked outside.[29][30] An Illinois man was arrested in January 2006 for piggybacking on a Wi-Fi network. David M. Kauchak was the first person to be charged with "remotely accessing another computer system" in Winnebago County. He had been accessing the Internet through a nonprofit agency's network from a car parked nearby and chatted with the police officer about it. He pleaded guilty and was sentenced to a fine of $250 and one year of court supervision.[31][32] In Sparta, Michigan, in 2007, Sam Peterson was arrested for checking his email each day using a café's wireless Internet access from a car parked nearby. A police officer became suspicious, stating, "I had a feeling a law was being broken, but I didn't know exactly what". When asked, the man explained to the officer what he was doing, as he did not know the act was illegal. The officer found a law against "unauthorized use of computer access", leading to an arrest and charges that could result in a five-year felony and a $10,000 fine. The café owner was not aware of the law, either: "I didn't know it was really illegal, either. If he would have come in [to the coffee shop] it would have been fine." The café did not press charges, but Peterson was eventually sentenced to a $400 fine and 40 hours of community service.[33] This case was featured on The Colbert Report.[34] In 2007, in Palmer, Alaska, 21-year-old Brian Tanner was charged with "theft of services" and had his laptop confiscated after accessing a gaming website at night from the parking lot outside the Palmer Public Library, as he was allowed to do during the day.
The night before, the police had asked him to leave the parking lot, which he had started using because they had previously asked him not to use residential connections. He was not ultimately charged with theft, but could still be charged with trespassing or not obeying a police order. The library director said that Tanner had not broken any rules, and local citizens criticized police for their actions.[35][36][37] In 2003, New Hampshire House Bill 495[38] was proposed, which would have clarified that the duty to secure a wireless network lies with the network owner, instead of criminalizing the automatic access of open networks.[39][40] It was passed by the New Hampshire House in March 2003 but was not signed into law. The current wording of the law provides some affirmative defenses for use of a network that is not explicitly authorized:[41] I. A person is guilty of the computer crime of unauthorized access to a computer or computer network when, knowing that the person is not authorized to do so, he or she knowingly accesses or causes to be accessed any computer or computer network without authorization. It shall be an affirmative defense to a prosecution for unauthorized access to a computer or computer network that certain conditions enumerated in the statute are met. There are additional provisions in the NH law, Section 638:17, Computer Related Offenses, as found by searching NH RSAs in December 2009. They cover actual use of someone else's computer rather than simply "access": II. A person is guilty of the computer crime of theft of computer services when he or she knowingly accesses or causes to be accessed or otherwise uses or causes to be used a computer or computer network with the purpose of obtaining unauthorized computer services. III. A person is guilty of the computer crime of interruption of computer services when the person, without authorization, knowingly or recklessly disrupts or degrades or causes the disruption or degradation of computer services or denies or causes the denial of computer services to an authorized user of a computer or computer network. IV. A person is guilty of the computer crime of misuse of computer or computer network information under certain conditions enumerated in the statute. V. A person is guilty of the computer crime of destruction of computer equipment when he or she, without authorization, knowingly or recklessly tampers with, takes, transfers, conceals, alters, damages, or destroys any equipment used in a computer or computer network, or knowingly or recklessly causes any of the foregoing to occur. VI. A person is guilty of the computer crime of computer contamination if such person knowingly introduces, or causes to be introduced, a computer contaminant into any computer, computer program, or computer network which results in a loss of property or computer services.
New York law is the most permissive.[1] The statute against unauthorized access only applies when the network "is equipped or programmed with any device or coding system, a function of which is to prevent the unauthorized use of said computer or computer system".[42][43][44][45] In other words, the use of a network would only be considered unauthorized and illegal if the network owner had enabled encryption or password protection and the user bypassed this protection, or when the owner has explicitly given notice that use of the network is prohibited, either orally or in writing.[1][46][47] Westchester County passed a law,[48] taking effect in October 2006, that requires any commercial business that stores, utilizes, or otherwise maintains personal information electronically to take some minimum security measures (e.g., a firewall, SSID broadcasting disabled, or using a non-default SSID) in an effort to fight identity theft. Businesses that do not secure their networks in this way face a $500 fine. The law has been criticized as being ineffectual against actual identity thieves and punishing businesses like coffee houses for normal business practices.[49][50][51]
https://en.wikipedia.org/wiki/Legality_of_piggybacking
LinkNYC is an infrastructure project providing free Wi-Fi service in New York City. The office of New York City Mayor Bill de Blasio announced the plan on November 17, 2014, and the installation of the first kiosks, or "Links," started in late 2015. The Links replace the city's network of 9,000 to 13,000 payphones, a contract for which expired in October 2014. The LinkNYC kiosks were devised after the government of New York City held several competitions to replace the payphone system. The most recent competition, in 2014, resulted in the contract being awarded to the CityBridge consortium, which comprises Qualcomm; Titan and Control Group, which now make up Intersection; and Comark. All of the 9.5-foot-tall (2.9 m) Links feature two 55-inch (140 cm) high-definition displays on their sides; Android tablet computers for accessing city maps, directions, and services, and making video calls; two free USB charging stations for smartphones; and a phone allowing free calls to all 50 states and Washington, D.C. The Links also provide the ability to use calling cards to make international calls, and each Link has one button to call 9-1-1 directly. Since 2022, CityBridge has also installed 32-foot-tall (9.8 m) poles under the Link5G brand, which provide both Wi-Fi and 5G service. The project brings free, encrypted, gigabit wireless internet coverage to the five boroughs by converting old payphones into Wi-Fi hotspots where free phone calls can also be made. As of 2020, there were 1,869 Links citywide; eventually, 7,500 Links are planned to be installed in the New York metropolitan area, making the system the world's fastest and most expansive. Intersection has also installed InLinks in cities across the UK. The Links are seen as a model for future city builds as part of smart city data pools and infrastructure. Since the Links' deployment, there have been several concerns about the kiosks' features. Privacy advocates have stated that the data of LinkNYC users can be collected and used to track users' movements throughout the city. There are also concerns that cybercriminals could hijack the Links, or rename their personal wireless networks to the same name as LinkNYC's network, in order to steal LinkNYC users' data. In addition, prior to September 2016, the tablets of the Links could be used to browse the Internet. In summer 2016, concerns arose about the Link tablets' browsers being used for illicit purposes; despite the implementation of content filters on the kiosks, the illicit activities continued, and the browsers were disabled.
In 1999, 13 companies signed a contract that legally obligated them to maintain New York City's payphones for 15 years.[1] In 2000, the city's tens of thousands of payphones were among the 2.2 million payphones spread across the United States.[2] Since then, these payphones' use had been declining with the advent of cellphones.[1] As of July 2012, there were 13,000 phones in over 10,000 individual locations;[1] that number had dropped to 9,133 phones in 7,302 locations by April 2014,[3] at a time when the number of payphones in the United States had declined more than 75 percent, to 500,000.[2] The contract with the 13 payphone operators was set to expire in October 2014, after which time the payphones' futures were unknown.[1][3] In July 2012, the New York City government released a public request for information (RFI), asking for comments about the future uses for these payphones.[1] The RFI presented questions such as "What alternative communications amenities would fill a need?"; "If retained, should the current designs of sidewalk payphone enclosures be substantially revised?"; and "Should the current number of payphones on City sidewalks change, and if so, how?".[1] Through the RFI, the New York City government sought new uses for the payphones, including a combination of "public wireless hotspots, touch-screen wayfinding panels, information kiosks, charging stations for mobile communications devices, [and] electronic community bulletin boards,"[1] all of which eventually became the features of the kiosks that were included in the LinkNYC proposal.[2][4][5] In 2013, a year before the payphone contract was set to expire, there was a competition that sought ideas to further repurpose the network of payphones.[6] The competition, held by the administration of Michael Bloomberg, expanded the idea of the pilot project.[6] There were 125 responses that suggested a Wi-Fi network, but none of these responses elaborated on how that would be accomplished.[7][8] In 2012, the government of New York City installed Wi-Fi routers at 10 payphones in the city (seven in Manhattan, two in Brooklyn, and one in Queens[9]) as part of a pilot project. The Wi-Fi was free of charge and available for use at all times.[6][9] The Wi-Fi signal was detectable from a radius of a few hundred feet (about 100 m). Two of New York City's largest advertising companies, Van Wagner and Titan, who collectively owned more than 9,000 of New York City's 12,000 payphones at the time, paid $2,000 per router,[6] with no monetary input from either the city or taxpayers.[9] While the payphones participating in the Wi-Fi pilot project were poorly marked, the Wi-Fi offered at these payphones was significantly faster than some of the other free public Wi-Fi networks offered elsewhere.[9] The Manhattan neighborhood of Harlem received free Wi-Fi starting in late 2013.[10] Routers were installed in three phases within a 95-block area between 110th Street, Frederick Douglass Boulevard, 138th Street, and Madison Avenue.
Phase 1, from 110th to 120th Streets, finished in 2013; Phase 2, from 121st to 126th Streets, was expected to be complete in February 2014; and Phase 3, the remaining area, was supposed to be finished by May 2014.[10] The network was estimated to serve 80,000 Harlemites, including 13,000 in public housing projects[10] who might otherwise not have had broadband internet access at home.[11][12] At the time, it was dubbed the United States' most expansive "continuous free public Wi-Fi network."[10] On April 30, 2014, the New York City Department of Information Technology and Telecommunications (DOITT) requested proposals for how to convert the city's over 7,000 payphones into a citywide Wi-Fi network.[7][8] A new competition was held, with the winner standing to receive a 12-year contract to maintain up to 10,000 communication points.[7][8][13] The communication points would tentatively have free Wi-Fi service, advertising, and free calls to at least 9-1-1 (the emergency service) or 3-1-1 (the city information hotline).[2][3] The contract would require the operator, or the operating consortium, to pay "$17.5 million or 50 percent of gross revenues, whichever is greater" to the City of New York every year. The communication points could be up to 10 ft 3 in (3.12 m) tall, compared to the 7 ft 6 in (2.29 m) height of the phone booths; however, the advertising space on these points would only be allowed to accommodate up to 21.3 square feet (1.98 m2) of advertisements, roughly half the maximum of 41.6 square feet (3.86 m2) of advertising space allowed on existing phone booths.[3] There would still need to be phone service at these Links because the payphones were still used often: collectively, all of New York City's nearly 12,000 payphones were used 27 million times in 2011, amounting to each phone being used about six times per day.[1] In November 2014, the bid was awarded to the consortium CityBridge, which consists of Qualcomm, Titan, Control Group, and Comark.[2][13][14][15][16] In June 2015, Control Group and Titan announced that they would merge into one company called Intersection. Intersection is led by a group of investors headed by Sidewalk Labs, a subsidiary of Alphabet Inc. that focuses on solving problems unique to urban environments.[17][18][19] Daniel L. Doctoroff, the former CEO of Bloomberg L.P. and former New York City Deputy Mayor for Economic Development and Rebuilding, is the CEO of Sidewalk Labs.[20] CityBridge announced that it would be setting up about 7,000 kiosks, called "Links," at which users could access the LinkNYC Wi-Fi.
Coverage was set to begin by late 2015, starting with about 500 Links in areas that already had payphones, and later expanding to other areas.[21] These Links were to be placed online by the end of the year.[16] The project would require the installation of 400 miles (640 km) of new communication cables.[4] The Links would be built in coordination with borough presidents, business improvement districts, the New York City Council, and New York City community boards.[14] The project is expected to create up to 800 jobs, including 100 to 150 full-time jobs at CityBridge as well as 650 technical support positions.[2][14] Of the LinkNYC plans, New York City Mayor Bill de Blasio said, "With this proposal for the fastest and largest municipal Wi-Fi network in the world – accessible to and free for all New Yorkers and visitors alike – we're taking a critical step toward a more equal, open and connected city – for every New Yorker, in every borough."[14] In December 2014, the network was approved by New York City's Franchise and Concession Review Committee.[22] Installation of two stations on Third Avenue, at 15th and 17th Streets,[23] began on December 28, 2015,[24] followed by other Links on Third Avenue below 58th Street,[25][26] as well as on Eighth Avenue.[26] After some delays, the first Links went online in January 2016.[5][25][27] The public network was announced in February 2016.[28] Locations like St. George, Jamaica, the South Bronx, and Flatbush Avenue were prioritized for LinkNYC kiosk installations, with these places receiving Links by the end of 2016.[28] The vast majority of the payphones were to be demolished and replaced with Links.[2][25][26][28] However, several payphones along West End Avenue on the Upper West Side were preserved rather than being replaced with Links.[2][4][13][29] These payphones are the only remaining fully enclosed payphones in Manhattan.[29][30] The preservation process includes creating new fully enclosed booths for the site, a difficulty because that specific model of phone booth is no longer manufactured.[29] The New York City government and Intersection agreed to preserve these payphones because of their historical value, and because they were a relic of the Upper West Side community, having been featured in the 2002 movie Phone Booth and the 2010 book "The Lonely Phone Booth."[29] By mid-July 2016, the planned roll-out of 500 hubs throughout New York City was to occur,[27] though the actual installation proceeded at a slower rate.[31] As of September 2016, there were 400 hubs in three boroughs,[31] most of which were in Manhattan, although there were at least 25 hubs in the Bronx and several additional hubs in Queens.[32] In November 2016, the first two Links were installed in Brooklyn, with plans to install nine more Links in various places around Brooklyn before year's end.[33] Around this time, Staten Island received its first Links, which were installed in New Dorp.[33] The Links were being installed at an average pace of ten per day throughout the boroughs,[26] with a projected goal of 500 hubs by the end of 2016.[25] By July 2017, there were 920 Links installed across the city.[34] This number had increased to 1,250 by January 2018,[35] and to 1,600 by September 2018.[36] As originally planned, there would be 4,550 hubs by July 2019[37] and 7,500 hubs by 2024,[25][26][28] which would make LinkNYC the largest and fastest public, government-operated Wi-Fi network in the world.[2][7][13][14][23] Slightly more than half, or 52 percent, of the hubs would be in Manhattan, and the rest would be in the outer boroughs.[26]
There would be capacity for up to 10,000 Links within the network, as per the contract.[13][16][28] The total cost of installation is estimated at more than $200 million.[25][26] The eventual network includes 736 Links in the Bronx, 361 of which will have advertising and fast network speeds, as well as over 2,500 in Manhattan, most with advertising and fast network speeds.[38] By December 2019, only 1,774 LinkNYC kiosks had been installed across the city; the kiosks were largely concentrated in wealthy neighborhoods in Manhattan, although Harlem, the South Bronx, and Queens also had several kiosks.[39] CityBridge had installed 1,869 kiosks by May 2020.[40] Most of the kiosks were in Manhattan. CityBridge had only provided three-fifths the number of kiosks that it had been expected to provide by that time.[40][41] New York State Comptroller Thomas DiNapoli released a report in 2021, finding that 86 of the city's 185 ZIP Codes had kiosks; Manhattan was the only borough that had LinkNYC kiosks in the vast majority of its ZIP Codes.[40] In October 2021, CityBridge submitted designs to the New York City Public Design Commission for the installation of 32-foot-tall (9.8 m) poles, capable of transmitting 5G wireless signals, under the Link5G brand.[42] The Public Design Commission initially only approved the construction of Link5G poles in commercial and industrial neighborhoods.[42][43] The first such pole was installed at the intersection of Hunters Point Avenue and 30th Place in Long Island City, Queens, in March 2022 and was used for testing.[44] As part of an agreement with the city government, over 2,000 poles were to be installed in portions of the city that lacked reliable internet service.[41][45] Under the agreement with CityBridge, the city would receive eight percent of the first $200 million in profits from the Link5G project, as well as half of all revenue above $200 million.[41] The first publicly accessible pole was installed in Morris Heights, Bronx, in July 2022.[43][46] By the end of the year, CityBridge had installed 26 Link5G poles citywide.[47] A 2023 study conducted by LinkNYC found that, although nearly half of residents surveyed were unaware of the poles' existence, those who were aware of them were largely supportive of the program.[48] Nonetheless, as additional poles were rolled out across the city in 2023, residents expressed concerns about the Link5G poles' appearance and height; some opponents also cited misinformation related to 5G technology.[49][50][51] Neighborhoods such as the West Village[52][53] and the Upper East Side opposed the Link5G poles.[49][54] Conversely, city officials and businesses supported the installation of the poles.[50] Following a letter from U.S. Representative Jerrold Nadler,[55]
the Federal Communications Commission (FCC) ruled in April 2023 that the poles needed to undergo environmental and historic-preservation reviews.[56][57] All but one of the planned 5G towers on the Upper East Side were canceled in 2024 because they violated historic-district guidelines.[58] As of July 2024, there were 200 Link5G poles citywide, of which only two were transmitting 5G signals.[59] The Links are 9.5 feet (2.9 m) tall and are compliant with the Americans with Disabilities Act of 1990.[14][25] There are two 55-inch (140 cm) high-definition displays on each Link[2][4][24][60] for advertisements[2][15][16][21] and public service announcements.[2][5] There is an integrated Android tablet embedded within each Link, which can be used to access city maps, directions, and services, as well as make video calls;[2][4][5] the tablets formerly also allowed patrons to browse the internet, but these browsers have since been disabled due to abuse (see below).[31] Each Link includes two free USB charging stations for smartphones as well as a phone that allows free calls to all 50 states and to Washington, D.C.[36] The Links allow people to make either phone calls (using the keypad and the headphone jack to the keypad's left) or video calls (using the tablet).[2][4][5][26] Vonage provides this free domestic phone call service as well as the ability to make international calls using calling cards.[20] The Links feature a red 9-1-1 call button between the tablet and the headphone jack,[4][61] and they can be used to call the information helpline 3-1-1.[4][14][61] The Links can be used for completing simple time-specific tasks[35] such as registering to vote.[62] In April 2017, the Links were equipped with another app, Aunt Bertha, which could be utilized to find social services such as food pantries, financial aid, and emergency shelter.[63] The Links sometimes offer eccentric apps, such as an app to call Santa's voice mail that was enabled in December 2017.[35] In October 2019, a video relay service for deaf users was added to the Links.[64] The Wi-Fi technology comes from Ruckus Wireless and is enabled by Qualcomm's Vive 802.11ac Wave 2 4x4 chipsets.[5] The Links' operating system runs on the Qualcomm Snapdragon 600 processor and the Adreno 320 graphics processing unit.[60] The Links' hardware and software can handle future upgrades. The software will be updated until at least 2022, but Qualcomm has promised to maintain the Links for the rest of their service lives.[60] Links are cleaned twice weekly, with LinkNYC staff removing vandalism and dirt from the Links.
Each Link has cameras and over 30 vibration sensors to sense whether the kiosk has been hit by an object.[26][65] A separate set of sensors detects whether the USB ports have been tampered with.[65] If either the vibration sensors or the USB port sensors detect tampering, an alert is displayed at LinkNYC headquarters indicating that the specific part of the Link has been affected.[65] All of the Links have a backup battery power supply that can last for up to 24 hours in case of a long-term power outage.[28] This was added to prevent interruption of phone service, as happened in the aftermath of Hurricane Sandy in 2012, which caused citywide power outages that knocked out the city's payphones (which were connected to New York City's municipal power supply).[2] Antenna Design helped with the overall design of the kiosks,[14][16] which are produced by Comark subsidiary Civiq.[4][66] New York City does not pay for the system because CityBridge oversees the installation, ownership, and operations, and is responsible for building the new optic infrastructure under the streets.[67] CityBridge stated in a press release that the network would be free for all users, and that the service would be funded by advertisements.[2][15][16][21] This advertising will provide revenue for New York City as well as for the partners involved in CityBridge.[67] The advertising is estimated to bring in over $1 billion in revenue over twelve years, with the City of New York receiving over $500 million, or about half of that amount.[25][26] Technically, the LinkNYC network is intended to act as a public internet utility with advertising services.[4] However, in four of the first five years the Links have been active, actual revenue fell short of goals, partially because some local small businesses and non-profits were given advertisement space for free.[68] The Links' advertising screens also display "NYC Fun Facts", one-sentence factoids about New York City, as well as "This Day in New York" facts and historic photographs of the city, which are shown between advertisements.[35][69] In April 2018, some advertising screens started displaying real-time bus arrival information for nearby bus routes, using data from the MTA Bus Time system.[70][71] Other things displayed on Links include headlines from the Associated Press, as well as weather information, comics, contests, and "content collaborations" in which third-party organizations display their own information.[69] As part of the Link Local program, in 2017 LinkNYC began allowing small business owners to advertise on the two LinkNYC kiosks nearest to their stores.[72] The kiosks have also run promotions for black-owned businesses[73] and LGBT sites.[74] Links in some areas, especially lower-income and lower-traffic areas, are expected not to display advertisements because it is not worthwhile for CityBridge to advertise in these areas.[38] Controversially, the Links that lack advertising are expected to exhibit network speeds that may be as slow as one-tenth of the network speeds of advertisement-enabled Links. As of 2014, wealthier neighborhoods in Manhattan, Brooklyn, and Queens were expected to have the most Links with advertisements and fast network speeds, while poorer neighborhoods and Staten Island would get slower Links with no advertising.[38] CityBridge sold fewer advertisements than expected, and it defaulted on $70 million owed to the city in July 2021.[40][75] CityBridge began installing Link5G poles across the city in 2022.
Each pole measures 32 feet (9.8 m) tall, more than three times as high as the original kiosks;[41][45] the FCC had mandated that the poles be at least 19.5 feet (5.9 m) high.[76] In contrast to the kiosks, the Link5G poles were supposed to be installed in neighborhoods without good internet service; 90 percent of the poles were to be placed in the outer boroughs or in Upper Manhattan north of 96th Street.[77] The lower sections of many of the poles have tablets, USB charging ports, 9-1-1 call buttons, and advertising displays, similar to the original kiosks. The upper portion of each pole contains 5G equipment installed by telecommunications companies, which can rent space within the poles from CityBridge.[41] The 5G antennas measure 63 inches (1,600 mm) tall and 21 feet (6.4 m) across. Next to each antenna is a box measuring 38 by 16 by 14 inches (970 by 410 by 360 mm).[45] There are five transmitters atop each pole, measuring at least 29 inches (740 mm) tall.[76] Although Wi-Fi service from the 5G poles is provided free of charge, users have to pay their telecom companies to receive 5G service.[41] The poles also have cameras on them, but the cameras are not operational at all times.[43][77] According to its specifications, the Links' Wi-Fi covers a radius of 150 feet (46 m)[4][5][7][16][21] to 400 feet (120 m).[4][23][26] The Links' Wi-Fi is capable of running at 1 gigabit per second (1,000 megabits per second),[2][16][21][23] more than 100 times faster than the 8.7-megabit-per-second speed of the average public Wi-Fi network in the United States.[23][26] LinkNYC's routers have neither a bandwidth cap nor a time limit for usage, meaning that users can use LinkNYC Wi-Fi for as long as they need to.[26] The free phone calls are also available for unlimited use.[26] The network is only intended for use in public spaces,[26] though this may be subject to change in the future.[4] In the future, the LinkNYC network could also be used to "connect lighting systems, smart meters, traffic networks, connected cameras and other IoT systems,"[60] as well as for utility monitoring and for 5G installations.[4] CityBridge emphasized that it takes security and privacy seriously "and will never sell any personally identifiable information or share with third parties for their own use".[15] Aside from the unsecured network that devices can directly connect to, the Links provide an encrypted network that shields communications from eavesdropping within the network.
There are two types of networks: a private (WPA/WPA2-secured) network called "LinkNYC Private," which is available to iOS devices with iOS 7 and above, and a public network called "LinkNYC Free Public Wi-Fi," which is available to all devices but is only protected by the device's browser.[78][79] Private network users have to accept a network key in order to log onto the LinkNYC Wi-Fi.[65][79] This would make New York City one of the first American municipalities to have a free, encrypted Wi-Fi network,[16] as well as North America's largest.[4] LinkNYC would also be the fastest citywide ISP in the world, with download and upload speeds between 15 and 32 times faster than on free networks at Starbucks, in LaGuardia Airport, and within New York City hotels.[79] Originally, the CityBridge consortium was supposed to include Transit Wireless, which maintains the New York City Subway's wireless system.[14] However, as neither company mentioned the other on their respective websites, one communications writer speculated that the deal had either not been implemented yet or had fallen through. Transit Wireless stated that "those details have not been finalized yet", and CityBridge "promised to let [the writer] know when more information is available."[16] The network is extremely popular; by September 2016, around 450,000 unique users and over 1 million devices connected to the Links in an average week.[80] The Links had been used a total of more than 21 million times by that date.[81] This had risen to over 576,000 unique users by October 4,[62] with 21,000 phone calls made in the previous week alone.[82] By January 2018, the number of calls registered by the LinkNYC system had risen to 200,000 per month, or 50,000 per week on average. There were also 600,000 unique users connecting to the Links' Wi-Fi or cellular services each week.[35] The LinkNYC network exceeded 500,000 average monthly calls, 1 billion total sessions, and 5 million monthly users in September 2018.[36] One writer for the Motherboard website observed that the LinkNYC network also helped connect poor communities, as people from these communities come to congregate at the Links.[83] This stems from the fact that the network provides service to all New Yorkers regardless of income, but it especially helps residents who would otherwise have used their smartphones for internet access over 3G and 4G.[28] The New York City Bureau of Policy and Research published a report in 2015 stating that one-fourth of residents do not have home broadband internet access, including 32 percent of unemployed residents.[12] As of January 2018, the most-dialed number on the LinkNYC network was the helpline for the state's electronic benefit transfer system, which distributes food stamps to low-income residents.[35] The LinkNYC network is seen as only somewhat mitigating this internet inequality, as many poor neighborhoods, like some in the Bronx, will get relatively few Links.[83] LinkNYC is seen as an example of smart city infrastructure in New York City, as it is a technologically advanced system that helps enable technological connectivity.[4][83] The deployment of the Links and the method, process, eventual selection, and ownership of entities involved in the project has come under scrutiny by privacy advocates, who express concerns about the terms of service, the financial model, and the collection of end users' data.[66][84][85][86] These concerns are aggravated by the involvement of Sidewalk Labs, which belongs to Google's holding company, Alphabet Inc.[66]
Google already has the ability to track the majority of all website visits,[87] and LinkNYC could be used to track people's movements.[66] Nick Pinto of the Village Voice, a Lower Manhattan newspaper, wrote: "Google is in the business of taking as much information as it can get away with, from as many sources as possible, until someone steps in to stop it. ... But LinkNYC marks a radical step even for Google. It is an effort to establish a permanent presence across our city, block by block, and to extend its online model to the physical landscape we humans occupy on a daily basis. The company then intends to clone that system and start selling it around the world, government by government, to as many as will buy."[66] In March 2016, the New York Civil Liberties Union (NYCLU), the New York City office of the American Civil Liberties Union, wrote a letter to Mayor de Blasio outlining its privacy concerns.[85][86] In the letter, representatives for the NYCLU wrote that CityBridge could be retaining too much information about LinkNYC users. They also stated that the privacy policy was vague and needed to be clarified. They recommended that the privacy policy be rewritten so that it expressly mentions whether the Links' environmental sensors or cameras are being used by the NYPD for surveillance or by other city systems.[85] In response, LinkNYC updated its privacy policy to make clear that the kiosks do not store users' browsing history or track the websites visited while using LinkNYC's Wi-Fi,[88] a step that the NYCLU commended.[89] In an unrelated incident, Titan, one of the members of CityBridge, was accused of embedding Bluetooth radio transmitters in its phones, which could be used to track phone users' movements without their consent.[67][90] These beacons were later found to have been permitted by the DOITT, but "without any public notice, consultation, or approval", so they were removed in October 2014.[67] Despite the removal of the transmitters, Titan has proposed putting similar tracking devices on Links; if the company decides to go through with the plan, it has to notify the public in advance.[67] In 2018, a New York City College of Technology undergraduate student, Charles Myers, found that LinkNYC had published folders on GitHub titled "LinkNYC Mobile Observation" and "RxLocation". He shared these with The Intercept website, which wrote that the folders indicated that identifiable user data was being collected, including information on the user's coordinates, web browser, operating system, and device details, among other things.
However, LinkNYC disputed these claims and filed a Digital Millennium Copyright Act claim to force GitHub to remove files containing code that Myers had copied from LinkNYC's GitHub account.[91] According to LinkNYC, it does not monitor its kiosks' Wi-Fi, nor does it give information to third parties.[26] However, data will be given to law enforcement officials in situations where LinkNYC is legally obliged.[23][86] Its privacy policy states that it can collect personally identifiable information (PII) from users to give to "service providers, and sub-contractors to the extent reasonably necessary to enable us provide the Services; a third party that acquires CityBridge or a majority of its assets [if CityBridge was acquired by that third party]; a third party with whom we must legally share information about you; you, upon your request; [and] other third parties with your express consent to do so."[92] Non-personally identifiable information can be shared with service providers and advertisers.[2][92] The privacy policy also states that "in the event that we receive a request from a governmental entity to provide it with your Personally [sic] Information, we will take reasonable attempts to notify you of such request, to the extent possible."[26][92] There are also concerns that despite the WPA/WPA2 encryption, hackers may still be able to steal other users' data, especially since the LinkNYC Wi-Fi network has millions of users. To reduce the risk of data theft, LinkNYC is deploying a better encryption system for devices that support Hotspot 2.0.[26][28] Another concern is that hackers could compromise the tablet itself, by redirecting it to a malware site when users enter PII or by adding a keystroke logging program to the tablets.[65] To protect against this, CityBridge has put in place "a series of filters and proxies" that prevents malware from being installed, ends a session when a tablet is detected communicating with a command-and-control server, and resets the entire kiosk after 15 seconds of inactivity.[65][78] The USB ports have been configured so that they can only be used to charge devices. However, the USB ports are still susceptible to physical tampering with skimmers, which may lead to a user's device getting a malware infection while charging; this is guarded against by the more than 30 anti-vandalism sensors on each Link.[65][78] Yet another concern is that a person may carry out a spoofing attack by renaming their personal Wi-Fi network to "LinkNYC." This is potentially dangerous since many electronic devices automatically connect to networks with a given name but do not differentiate between different networks using that name.[65] One reporter for The Verge suggested that to circumvent this, a person could turn off their mobile device's Wi-Fi while in the vicinity of a kiosk, or "forget" the LinkNYC network altogether.[65]
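The spoofing risk described above can be probed for by watching which BSSIDs (access-point MAC addresses) beacon a given network name. Below is a rough sketch using the third-party scapy library; it assumes a wireless card already placed in monitor mode on an interface here called "mon0", and both that interface name and the target SSID string are illustrative assumptions rather than anything specified by LinkNYC:

    from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt  # requires scapy

    TARGET_SSID = b"LinkNYC Free Public Wi-Fi"  # example target network name
    seen_bssids = set()

    def handle(pkt):
        if pkt.haslayer(Dot11Beacon):
            ssid = pkt[Dot11Elt].info   # the first IE in a beacon is the SSID
            bssid = pkt[Dot11].addr3
            if ssid == TARGET_SSID and bssid not in seen_bssids:
                seen_bssids.add(bssid)
                print(f"SSID {ssid!r} advertised by {bssid}")

    # Many unrelated BSSIDs advertising one SSID may indicate spoofing, though
    # legitimate multi-AP networks also share an SSID, so this is only a heuristic.
    sniff(iface="mon0", prn=handle, store=False)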
The cameras on the top of each kiosk's tablet posed a concern in some communities where these cameras face the interiors of buildings. However, as of July 2017, the cameras were not activated.[34] In the summer of 2016, a content filter was set up on the Links to restrict navigation to certain websites, such as pornography sites and other sites with not-safe-for-work (NSFW) content.[93] This was described as a problem especially among the homeless,[94] and at least one video showed a homeless man watching pornography on a LinkNYC tablet.[93] This problem had supposedly been ongoing since at least January 2016.[94] Despite the existence of the filter, Link users still found ways to bypass it.[78][80][95][96] The filters, which consisted of Google SafeSearch as well as a web blocker modeled on the web blockers used by many schools, were intentionally lax to begin with because LinkNYC feared that stricter filters blocking certain keywords would alienate customers.[96] Other challenges included the fact that "stimulating" user-generated content can be found on popular, relatively interactive websites like Tumblr and YouTube; it is hard to block NSFW content on these sites, because doing so would entail blocking an entire website when only a small portion of it hosts NSFW content. In addition, it was hard, if not impossible, for LinkNYC to block new websites with NSFW content, as such websites are constantly being created.[96] A few days after statements by Bronx Borough President Rubén Díaz Jr. and City Councilman Corey Johnson (described below), the web browsers of the tablets embedded into the Links were disabled indefinitely due to concerns over illicit activities such as drug deals and NSFW website browsing.[31][97] LinkNYC cited "lewd acts" as the reason for shutting off the tablets' browsing capabilities.[95] One Murray Hill resident reported that a homeless man "enthusiastically hump[ed]" a Link in her neighborhood while watching pornography.[94] Despite the tablets being disabled, the 9-1-1 capabilities, maps, and phone calls would still be usable, and people can still use LinkNYC Wi-Fi from their own devices.[81][95][97] The disabling of the LinkNYC tablets' browsers stoked fears about further restrictions on the Links. The Independent, a British newspaper, surveyed some homeless New Yorkers and found that while most of these homeless citizens used the kiosks for legitimate reasons (usually not to browse NSFW content), many of the interviewees were scared that LinkNYC might eventually charge money to use the internet via the Links, or that the kiosks might be demolished altogether.[98] The Guardian, another British newspaper, came to a similar conclusion; one of the LinkNYC users they interviewed said that the Links are "very helpful, but of course bad people messed it up for everyone".[99] In a press release, LinkNYC refuted fears that service would be paywalled or eliminated, though it did state that several improvements, including dimming the kiosks and lowering maximum volumes, were being implemented to reduce the kiosks' effect on the surrounding communities.[81] Immediately after the disabling of the tablets' browsing capabilities, reports of loitering near kiosks decreased by more than 80 percent.[62][82] By the next year, such complaints had dropped 96 percent from the pre-September 2016 figure.[100] The tablets' use, as a whole, increased 12 percent, with more unique users accessing maps, phone calls, and 3-1-1.[62][82] There have been scattered complaints in some communities that the LinkNYC towers themselves are a nuisance.
These complaints mainly have to do with loitering, browser access, and kiosk volume, the latter two of which the city has resolved.[34] However, these nuisance complaints are rare citywide; of the 920 kiosks installed citywide by then, there had been only one complaint relating to the kiosk design itself.[34] In September 2016, the Borough President of the Bronx, Rubén Díaz Jr., called on city leaders to take stricter action, saying that "after learning about the inappropriate and over-extended usage of Links throughout the city, in particular in Manhattan, it is time to make adjustments that will allow all of our city residents to use this service safely and comfortably."[80] City Councilman Corey Johnson said that some police officials had called for several Links in Chelsea to be removed because homeless men had been watching NSFW content on these Links while children were nearby.[31][101] Barbara A. Blair, president of the Garment District Alliance, stated that "people are congregating around these Links to the point where they're bringing furniture and building little encampments clustered around them. It's created this really unfortunate and actually deplorable condition."[31] A related problem arising from the tablets' browser access was that even though the tablets were intended to be used for short periods of time, the Links began being "monopolized" almost as soon as they were unveiled.[80] Some people would use the Links for hours at a time.[31] In particular, homeless New Yorkers would sometimes loiter around the Links, using newspaper dispensers and milk crates as "makeshift furniture" on which they could sit while using the Links.[31][83][101] The New York Post characterized the Links as having become "living rooms for vagrants".[102] As a result, LinkNYC staff were working on a way to help ensure that Links would not be monopolized by one or two people.[80][81] Proposed solutions included putting time limits on how long the tablets could be used by any one person.[103] Some people stated that the Links could also be used for loitering and illicit phone calls.[101][104] One Hell's Kitchen bar owner cited concerns about the users of a Link located right outside his bar, including a homeless man who a patron complained was a "creeper" watching animal pornography, as well as several people who made drug deals using the Link's phone capabilities while families were nearby.[104] In Greenpoint, locals alleged that after Links were activated in their neighborhood in July 2017, these particular kiosks became locations for drug deals; however, that particular Link was installed near a known drug den.[100] Intersection, in collaboration with British telecommunications company BT and British advertising agency Primesight, is also planning to install up to 850 Links in the United Kingdom, including in London, beginning in 2017. The LinkUK kiosks, as they will be called, are similar to the LinkNYC kiosks in New York City.
These Links will replace some of London's iconic telephone booths due to their age.[105][106][107] The first hundred Links would be installed in the borough of Camden.[105] The Links will have tablets, but they will lack web browsing capabilities due to the problems that LinkNYC faced in enabling the tablet browsers.[108][109] In early 2016, Intersection announced that it could install about 100 Links in a mid-sized city in the United States, provided that the city won the United States Department of Transportation's Smart City Challenge.[110] Approximately 25 of that city's blocks would get the Links, which would be integrated with Sidewalk Labs' transportation data-analysis initiative, Flow.[110] In summer 2016, the city of Columbus, Ohio, was announced as the winner of the Smart City Challenge.[111] Intersection has proposed installing Links in four Columbus neighborhoods.[112] In July 2017, the city of Hoboken, New Jersey, located across the Hudson River from Manhattan, proposed adding free Wi-Fi kiosks on its busiest pedestrian corridors. The kiosks, which are also a smart-city initiative, are proposed to be installed by Intersection.[113]
https://en.wikipedia.org/wiki/LinkNYC
MobileStar Network was a wireless Internet service provider which first gained notability by deploying Wi-Fi Internet access points in Starbucks coffee shops, American Airlines Admirals Club locations across the United States, and at Hilton Hotels. Founded by Mark Goode and Greg Jackson in 1998, MobileStar was the first wireless ISP to place a Wi-Fi hotspot in an airport, a hotel, or a coffee shop. MobileStar's core value proposition was to provide wireless broadband connectivity for business travelers in all the places they were likely to "sleep, eat, move, or meet." MobileStar's founder, Mark Goode, was the first to coin the now industry-standard expression "hotspot" as a reference to a location equipped with an 802.11 wireless access point. MobileStar's financing was initially provided by Greg Jackson. A predecessor entity, PLANCOM (Public Local Area Network Communications), was disbanded and its intellectual property moved into MobileStar Network. During the Series A financing round, funds were obtained from high-net-worth investors, corporate investors including Proxim and Comdisco, and institutional investors from New York. The Series B investors, who invested $38 million, included the Mayfield Fund[1] and Blueprint Ventures. MobileStar's initial deployments used a frequency-hopping product supplied by Proxim. As reported in the EE Times, "In a move that represents the first use of unlicensed wireless LAN technology in the industrial scientific and medical (ISM) bands to develop a nationwide Internet-access network, Proxim Inc. has teamed up with Dallas-based MobileStar Network Inc. to link its 2.4-GHz unlicensed RangeLAN2 wireless LAN to a national network of Internet access points." However, after the IEEE 802.11b standard was adopted, MobileStar converted its network infrastructure to the 802.11b industry standard. The initial infrastructure was manufactured and financed by Cisco. MobileStar's founders faced many challenges in developing the company: evolving technology standards, fluid business models, no industry-standard billing system, and questions about the competitive value of a site license agreement instead of licensed spectrum. Over time each of these issues was addressed, and the agreement with Starbucks in late 2000 signaled a maturing of the marketplace.[2] American Airlines also entered into an agreement with MobileStar,[3] as did Hilton Hotels.[4] As more laptop vendors included integrated 802.11 wireless connectivity within their laptops, users came to expect broadband connectivity in their residences, workplaces, and in public locations such as airports, coffee shops, and hotels. License-free broadband connectivity exploded with the advent of the iPhone in 2007, further validating the premise that license-free spectrum could open up a large domain of connectivity at a cost far less than licensed spectrum. The rise of voice over IP (VoIP) communications operating in the 2.4 GHz spectrum via the 802.11 standard was another indicator of the power of ubiquitous, low- to no-cost wireless broadband communications. MobileStar Network's demise in 2001 was the result of at least two important factors: the collapse of the private equity markets in mid-2001 and the events of September 11. While MobileStar's investors provided a bridge loan during the mid-2001 time frame, the terrorist attacks in New York and Washington brought a steep decline in business travel, MobileStar Network's initial core market.
MobileStar's investors could not continue to finance the business, and new investors were skittish about investing in a company focused on serving a market that had recently and rapidly collapsed. MobileStar Network ceased operation in October 2001, but its bankrupt assets and contracts were bought by VoiceStream Wireless, and by February 2002 the network was operating as T-Mobile Broadband. T-Mobile Broadband was the first part of VoiceStream to rebrand to the T-Mobile name. It was officially launched as T-Mobile HotSpot in August 2002.[5] Many of the original MobileStar Network employees still work for T-Mobile HotSpot and have been responsible for its expansion.
https://en.wikipedia.org/wiki/MobileStar
The Securing Adolescents From Exploitation-Online Act of 2007 (H.R. 3791) is a U.S. House bill stating that anyone offering an open Wi-Fi Internet connection to the public, who "obtains actual knowledge of any facts or circumstances" in relation to illegal visual media such as child pornography transferred over that connection, must register a report of their knowledge with the National Center for Missing and Exploited Children.[1] The act references US Code sections 2251, 2251A, 2252, 2252A, 2252B, 2260, and 1466A in defining its scope. Anyone failing to report their knowledge faces fines of up to $300,000.[2] It was written by Nick Lampson (D-TX)[1] and introduced in the House of Representatives on October 10, 2007. It was approved (409-2-20) on December 5, 2007, with only Republicans Ron Paul and Paul Broun voting against.[3] Some commentators criticized it as overly broad,[1] but Lampson's spokesman dismissed these interpretations, saying that the act was not intended to cover Americans who had wireless routers at home, but only to target their internet service providers.[4]
https://en.wikipedia.org/wiki/Securing_Adolescents_From_Exploitation-Online_Act
A visitor-based network (VBN) is a computer network intended for mobile users in need of temporary Internet access. A visitor-based network is most commonly established in hotels, airports, convention centers, universities, and business offices. It gives the on-the-go user a quick and painless way to temporarily connect a device to networks and broadband Internet connections. A visitor-based network usually includes hardware (such as VBN gateways, hubs, switches, and/or routers), telecommunications (an Internet connection), and service (subscriber support).[1] Virtually any Internet-based Ethernet LAN can become a visitor-based network by adding a device generally termed a "VBN gateway". The function of a VBN gateway is to provide a necessary layer of management between public users and the Internet router to enable a plug-and-play connection for visitors. Typical VBN gateways provide services and support for billing and management application integrations, such as property management systems (in hotels), credit-card billing interfaces, or RADIUS/LDAP servers for central authentication models. A common criterion for VBN gateways is that they allow users to connect and access the available network services with little or no configuration on their local machines (specifically, modification of their IP address). In order to accomplish this, a layer 2 connection (see OSI model, layer 2: data link layer) is required between the user and the VBN gateway. Aside from the layer 2 (or bridged) network requirement, there are no other restrictions on using a VBN gateway to enable a network. As such, Ethernet, 802.11x, CMTS, and xDSL are all acceptable mediums for distributing networks to use with VBN gateways. In its simplest form, a VBN gateway is a hardware device with a minimum of two network connections: one network connection is considered the subscriber network, and the other the uplink to the Internet. The majority of VBN gateways on the market today use Ethernet interfaces for their connection but, as stated above, any layer 2 connection is acceptable. Generally speaking, there are three models of operation for a VBN: transparent, pay-for-use, and authenticate-for-use. Transparent VBN: A transparent VBN's purpose is to provide network services to users to reduce support and/or IT infrastructure costs. Generally these networks are not concerned with security but rather with fast and easy access. Metro Wi-Fi networks, or free-to-use hotspots, are examples of this type of VBN. Billing VBN: A billing-based VBN is one where users are required to pay to obtain network services. Traditionally these types of VBNs are found in hotel or hotspot (Wi-Fi) networks. Payment services are provided in a variety of methods, most commonly with a credit card merchant account in hotspot environments or integration with a property management system in hotel environments. Authentication VBN: An authenticate-for-use VBN is most commonly found in business environments. In these cases the VBN gateway requires users to authenticate to the gateway in order to be allowed access to network services. Commonly this authentication is achieved via integration with RADIUS or LDAP servers or by implementing access codes which a user is required to enter. While manufacturers offer many different configurations for VBN gateways, a set of common features exists. Even the most basic VBN gateways provide DHCP and Proxy ARP to allow users to connect to the network with no IP address configuration required.
A captive portal is used for a variety of functions, including billing or authentication and acceptance of terms and conditions. Once the user successfully meets the criteria in the captive portal, the VBN gateway allows the user's traffic to be routed through.
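To make the redirect-then-authorize flow concrete, the following is a minimal captive-portal sketch in Python using only the standard library. It is illustrative, not a production design: a real VBN gateway intercepts traffic with layer 2/3 redirect rules rather than serving HTTP directly, and the PORTAL_PATH name and the IP-based authorization set are assumptions invented for this example.

```python
# Minimal captive-portal sketch (illustrative only; a real VBN gateway
# would redirect traffic at the firewall, not run a plain HTTP server).
from http.server import BaseHTTPRequestHandler, HTTPServer

PORTAL_PATH = "/portal"     # hypothetical portal page path
authorized = set()          # client IPs that have accepted the terms

class PortalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        client_ip = self.client_address[0]
        if client_ip in authorized:
            # In a real gateway the request would now be routed upstream;
            # here we just acknowledge the authorized client.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"You are online.\n")
        elif self.path == PORTAL_PATH:
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b'<form method="POST" action="/portal">'
                             b'<button>Accept terms</button></form>')
        else:
            # Unauthenticated traffic is redirected to the portal page.
            self.send_response(302)
            self.send_header("Location", PORTAL_PATH)
            self.end_headers()

    def do_POST(self):
        if self.path == PORTAL_PATH:
            authorized.add(self.client_address[0])  # terms accepted
            self.send_response(302)
            self.send_header("Location", "/")
        else:
            self.send_response(404)
        self.end_headers()

HTTPServer(("", 8080), PortalHandler).serve_forever()
```

Every request from an unauthorized client is answered with an HTTP 302 redirect to the portal page, mirroring the flow described above; once the criteria are met, traffic passes through.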
https://en.wikipedia.org/wiki/Visitor_Based_Network
In computer networking, a wireless access point (WAP), or simply access point (AP), is a networking hardware device that allows other Wi-Fi devices to connect to a wired network or wireless network. As a standalone device, the AP may have a wired or wireless connection to a switch or router, but in a wireless router it can also be an integral component of the networking device itself. A WAP or AP is differentiated from a hotspot, which is a physical location or digital location where Wi-Fi or WAP access is available.[1][2] An AP connects directly to a wired local area network,[3] typically Ethernet, and the AP then provides wireless connections using wireless LAN technology, typically Wi-Fi, for other devices to use that wired connection. APs support the connection of multiple wireless devices through their one wired connection. Many wireless data standards have been introduced for wireless access point and wireless router technology. New standards have been created to accommodate the increasing need for faster wireless connections. Access points can provide backward compatibility with older Wi-Fi protocols, as many devices were manufactured for use with older standards.[3] Some people confuse wireless access points with wireless ad hoc networks. An ad hoc network uses a connection between two or more devices without using a wireless access point; the devices communicate directly. Because setup is easy and does not require an access point, an ad hoc network is used in situations such as a quick data exchange or a multiplayer video game. Due to its peer-to-peer layout, ad hoc Wi-Fi connections are similar to connections available using Bluetooth. Ad hoc connections are generally not recommended for a permanent installation.[1] Internet access via ad hoc networks, using features like Windows' Internet Connection Sharing or dedicated software such as WiFi Direct Access Point, may work well with a small number of devices that are close to each other, but ad hoc networks do not scale well. Internet traffic will converge to the nodes with a direct internet connection, potentially congesting those nodes. For internet-enabled nodes, access points have a clear advantage, with the possibility of having a wired LAN. It is generally recommended that one IEEE 802.11 AP should have, at a maximum, 10–25 clients.[4] However, the actual maximum number of clients that can be supported can vary significantly depending on several factors, such as the type of APs in use, the density of the client environment, and the desired client throughput. The range of communication can also vary significantly, depending on such variables as indoor or outdoor placement, height above ground, nearby obstructions, other electronic devices that might actively interfere with the signal by broadcasting on the same frequency, type of antenna, the current weather, operating radio frequency, and the power output of devices. Network designers can extend the range of APs through the use of repeaters, which amplify a radio signal, and reflectors, which only bounce it. In experimental conditions, wireless networking has operated over distances of several hundred kilometers.[5] Most jurisdictions have only a limited number of frequencies legally available for use by wireless networks. Usually, adjacent APs will use different frequencies (channels) to communicate with their clients in order to avoid interference between the two nearby systems.
Wireless devices can "listen" for data traffic on other frequencies, and can rapidly switch from one frequency to another to achieve better reception. However, the limited number of frequencies becomes problematic in crowded downtown areas with tall buildings using multiple APs. In such an environment, signal overlap becomes an issue causing interference, which results in signal degradation and data errors.[6] Wireless networking lags wired networking in terms of increasing bandwidth and throughput. While (as of 2013) high-density 256-QAM modulation, 3-antenna wireless devices for the consumer market can reach sustained real-world speeds of some 240 Mbit/s at 13 m behind two standing walls (NLOS), depending on their nature, or 360 Mbit/s at 10 m line of sight or 380 Mbit/s at 2 m line of sight (IEEE 802.11ac), or 20 to 25 Mbit/s at 2 m line of sight (IEEE 802.11g), wired hardware of similar cost reaches closer to 1000 Mbit/s up to a specified distance of 100 m with twisted-pair cabling in optimal conditions (Category 5 (known as Cat-5) or better cabling with Gigabit Ethernet). One impediment to increasing the speed of wireless communications comes from Wi-Fi's use of a shared communications medium: two stations in infrastructure mode that are communicating with each other, even over the same AP, must have each and every frame transmitted twice: from the sender to the AP, then from the AP to the receiver. This approximately halves the effective bandwidth, so an AP is only able to use somewhat less than half the actual over-the-air rate for data throughput. Thus a typical 54 Mbit/s wireless connection actually carries TCP/IP data at 20 to 25 Mbit/s. Users of legacy wired networks expect faster speeds, and people using wireless connections keenly want to see the wireless networks catch up. By 2012, 802.11n-based access points and client devices had already taken a fair share of the marketplace, and with the finalization of the 802.11n standard in 2009, inherent problems integrating products from different vendors became less prevalent. Wireless access has special security considerations. Many wired networks base their security on physical access control, trusting all the users on the local network, but if wireless access points are connected to the network, anybody within range of the AP (which typically extends farther than the intended area) can attach to the network. The most common solution is wireless traffic encryption. Modern access points come with built-in encryption. The first-generation encryption scheme, WEP, proved easy to crack; the second and third generation schemes, WPA and WPA2, are considered secure[7] if a strong enough password or passphrase is used. Some APs support hotspot-style authentication using RADIUS and other authentication servers. Opinions about wireless network security vary widely. For example, in a 2008 article for Wired magazine, Bruce Schneier asserted that the net benefits of open Wi-Fi without passwords outweigh the risks,[8] a position supported in 2014 by Peter Eckersley of the Electronic Frontier Foundation.[9] The opposite position was taken by Nick Mediati in an article for PC World, in which he advocates that every wireless access point should be protected with a password.[10]
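The arithmetic behind these figures can be sketched as follows. The 45 percent efficiency factor is an assumed round number chosen to land in the 20-25 Mbit/s range quoted above, not a measured constant; real overheads vary with frame size and channel conditions.

```python
# Rough illustration of why station-to-station traffic through an AP
# carries well under half the nominal air rate. The efficiency factor
# approximates MAC/PHY overhead (headers, ACKs, backoff) and is an
# assumption for this sketch, not a measured constant.
nominal_rate = 54.0                # Mbit/s, 802.11g signalling rate
mac_phy_efficiency = 0.45          # assumed fraction left for payload
one_hop = nominal_rate * mac_phy_efficiency
print(f"sender -> AP: ~{one_hop:.0f} Mbit/s of TCP/IP payload")

# When both endpoints are wireless, every frame crosses the air twice
# (sender -> AP, then AP -> receiver), halving the shared medium.
two_hop = one_hop / 2
print(f"sender -> AP -> receiver: ~{two_hop:.1f} Mbit/s")
```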
https://en.wikipedia.org/wiki/Wireless_Access_Point
A wireless LAN (WLAN) is a wireless computer network that links two or more devices using wireless communication to form a local area network (LAN) within a limited area such as a home, school, computer laboratory, campus, or office building. This gives users the ability to move around within the area and remain connected to the network. Through a gateway, a WLAN can also provide a connection to the wider Internet. Wireless LANs based on the IEEE 802.11 standards are the most widely used computer networks in the world. These are commonly called Wi-Fi, which is a trademark belonging to the Wi-Fi Alliance. They are used for home and small office networks that link together laptop computers, printers, smartphones, Web TVs and gaming devices through a wireless network router, which in turn may link them to the Internet. Hotspots provided by routers at restaurants, coffee shops, hotels, libraries, and airports allow consumers to access the internet with portable wireless devices. Norman Abramson, a professor at the University of Hawaii, developed the world's first wireless computer communication network, ALOHAnet. The system became operational in 1971 and included seven computers deployed over four islands to communicate with the central computer on the island of Oahu without using phone lines.[1] Wireless LAN hardware initially cost so much that it was only used as an alternative to cabled LAN in places where cabling was difficult or impossible. Early development included industry-specific solutions and proprietary protocols, but at the end of the 1990s these were replaced by technical standards, primarily the various versions of IEEE 802.11 (in products using the Wi-Fi brand name). Beginning in 1991, a European alternative known as HiperLAN/1 was pursued by the European Telecommunications Standards Institute (ETSI), with a first version approved in 1996. This was followed by a HiperLAN/2 functional specification with ATM influences,[citation needed] completed in February 2000. Neither European standard achieved the commercial success of 802.11, although much of the work on HiperLAN/2 has survived in the physical specification (PHY) for IEEE 802.11a, which is nearly identical to the PHY of HiperLAN/2. In 2009, 802.11n was added to 802.11. It operates in both the 2.4 GHz and 5 GHz bands at a maximum data transfer rate of 600 Mbit/s. Most newer routers are dual-band and able to utilize both wireless bands. This allows data communications to avoid the crowded 2.4 GHz band, which is also shared with Bluetooth devices and microwave ovens. The 5 GHz band also has more channels than the 2.4 GHz band, permitting a greater number of devices to share the space. Not all WLAN channels are available in all regions. A HomeRF group formed in 1997 to promote a technology aimed at residential use, but it disbanded in January 2003.[2] All components that can connect into a wireless medium in a network are referred to as stations. All stations are equipped with wireless network interface controllers. Wireless stations fall into two categories: wireless access points (WAPs) and clients. WAPs are base stations for the wireless network. They transmit and receive radio frequencies for wireless-enabled devices to communicate with. Wireless clients can be mobile devices such as laptops, personal digital assistants, VoIP phones and other smartphones, or non-portable devices such as desktop computers, printers, and workstations that are equipped with a wireless network interface. The basic service set (BSS) is the set of all stations that can communicate with each other at the PHY layer.
Every BSS has an identification (ID) called the BSSID, which is the MAC address of the access point servicing the BSS. There are two types of BSS: the independent BSS (also referred to as IBSS) and the infrastructure BSS. An independent BSS (IBSS) is an ad hoc network that contains no access points, which means it cannot connect to any other basic service set. In an IBSS the stations are configured in ad hoc (peer-to-peer) mode. An extended service set (ESS) is a set of connected BSSs. Access points in an ESS are connected by a distribution system. Each ESS has an ID called the SSID, which is a 32-byte (maximum) character string. A distribution system (DS) connects access points in an extended service set. The concept of a DS can be used to increase network coverage through roaming between cells. A DS can be wired or wireless. Current wireless distribution systems are mostly based on WDS or mesh protocols,[3] though other systems are in use. IEEE 802.11 has two basic modes of operation: infrastructure and ad hoc mode. In ad hoc mode, mobile units communicate directly peer-to-peer. In infrastructure mode, mobile units communicate through a wireless access point (WAP) that also serves as a bridge to other networks such as a local area network or the Internet. Since wireless communication uses a more open medium for communication in comparison to wired LANs, the 802.11 designers also included encryption mechanisms to secure wireless computer networks: Wired Equivalent Privacy (WEP), no longer considered secure, and Wi-Fi Protected Access (WPA, WPA2, WPA3). Many access points will also offer Wi-Fi Protected Setup, a quick, but no longer considered secure, method of joining a new device to an encrypted network. Most Wi-Fi networks are deployed in infrastructure mode. In infrastructure mode, wireless clients, such as laptops and smartphones, connect to the WAP to join the network. The WAP usually has a wired network connection and may have permanent wireless connections to other WAPs. WAPs are usually fixed and provide service to their client nodes within range. Some networks will have multiple WAPs using the same SSID and security arrangement. In that case, connecting to any WAP on that network joins the client to the network, and the client software will try to choose the WAP that gives the best service, such as the WAP with the strongest signal. An ad hoc network is a network where stations communicate only peer-to-peer (P2P). There is no base station, and no one gives permission to talk. This is accomplished using the Independent Basic Service Set (IBSS). A Wi-Fi Direct network is a different type of wireless network where stations communicate peer-to-peer.[4] In a peer-to-peer network, wireless devices within range of each other can discover and communicate directly without involving central access points. In a Wi-Fi P2P group, the group owner operates as an access point and all other devices are clients. There are two main methods to establish a group owner in a Wi-Fi Direct group. In one approach, the user sets up a P2P group owner manually. This method is also known as autonomous group owner (autonomous GO). In the second method, called negotiation-based group creation, two devices compete based on their group owner intent values. The device with the higher intent value becomes the group owner and the second device becomes a client.
The group owner intent value can depend on whether the wireless device performs a cross-connection between an infrastructure WLAN service and a P2P group, the available power in the wireless device, whether the wireless device is already a group owner in another group, or the received signal strength of the first wireless device. IEEE 802.11 defines the PHY and medium access control (MAC) layers based on carrier-sense multiple access with collision avoidance (CSMA/CA). This is in contrast to Ethernet, which uses carrier-sense multiple access with collision detection (CSMA/CD). The 802.11 specification includes provisions designed to minimize collisions, because mobile units have to contend with the hidden node problem, in which two mobile units may both be in range of a common access point but out of range of each other. A bridge can be used to connect networks, typically of different types. A wireless Ethernet bridge allows the connection of devices on a wired Ethernet network to a wireless network. The bridge acts as the connection point to the wireless LAN. A wireless distribution system (WDS) enables the wireless interconnection of access points in an IEEE 802.11 network. It allows a wireless network to be expanded using multiple access points without the need for a wired backbone to link them, as is traditionally required. The notable advantage of a WDS over some other solutions is that it preserves the MAC addresses of client packets across links between access points.[5] An access point can be either a main, relay, or remote base station. A main base station is typically connected to the wired Ethernet. A relay base station relays data between remote base stations, wireless clients or other relay stations and either a main or another relay base station. A remote base station accepts connections from wireless clients and passes them to relay or main stations. Because data is forwarded wirelessly, consuming wireless bandwidth, throughput in this method is halved for wireless clients not connected to a main base station. Connections between base stations are done at layer 2 and do not involve or require layer-3 IP addresses. WDS capability may also be referred to as repeater mode because it appears to bridge and accept wireless clients at the same time (unlike traditional bridging). All base stations in a WDS must be configured to use the same radio channel and to share WEP keys or WPA keys if they are used. They can be configured with different service set identifiers. WDS also requires every base station to be configured to forward to others in the system, as mentioned above. There are two definitions for wireless LAN roaming: internal roaming, in which a client moves from one access point to another within the same network, and external roaming, in which a client moves onto the WLAN of another wireless Internet service provider. WLAN signals often extend beyond the boundaries of a building and can create coverage where it is unwanted, offering a channel through which non-residents or other unauthorized people could compromise a system and retrieve personal data. To prevent this, it is usually sufficient to enforce the use of authentication, encryption, or a VPN that requires a password for network connectivity.[7]
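A toy simulation can illustrate how CSMA/CA-style contention resolves collisions through random backoff. This is a simplified slotted model under assumed parameters (contention-window bounds of 16-1024 slots, no inter-frame spacings or ACK timing), not the exact 802.11 procedure.

```python
import random

CW_MIN, CW_MAX = 16, 1024   # assumed contention-window bounds, in slots

def transmit_all(n_stations, seed=0):
    """Slots elapsed until every station delivers one frame (toy model)."""
    rng = random.Random(seed)
    cw = {s: CW_MIN for s in range(n_stations)}
    backoff = {s: rng.randrange(cw[s]) for s in range(n_stations)}
    slots = 0
    while backoff:
        slots += 1
        ready = [s for s, b in backoff.items() if b == 0]
        if len(ready) == 1:
            del backoff[ready[0]]          # exactly one sender: success
        elif ready:
            for s in ready:                # simultaneous senders: collision
                cw[s] = min(2 * cw[s], CW_MAX)     # binary exponential backoff
                backoff[s] = rng.randrange(1, cw[s])
        else:
            for s in backoff:              # idle slot: everyone counts down
                backoff[s] -= 1
    return slots

print(transmit_all(2), transmit_all(10))   # more stations -> more slots used
```

Running this with more contending stations produces more collisions, and therefore more doubling of the contention window, before all frames get through; that is the avoidance behavior the paragraph above describes.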
https://en.wikipedia.org/wiki/Wireless_LAN
Wi-Fi Direct is a Wi-Fi standard for wireless connections[1] that allows two devices to establish a direct Wi-Fi connection without an intermediary wireless access point, router, or Internet connection. Wi-Fi Direct is single-hop communication, rather than multi-hop communication like wireless ad hoc networks. The Wi-Fi Direct standard was specified in 2009.[2] It is useful for things such as file transfer, casting and projecting with Miracast, wireless printing,[3] and communicating with one or more devices simultaneously at typical Wi-Fi speeds (IEEE 802.11) without requiring a hotspot or an Internet connection.[4] It is therefore similar to Bluetooth technology but offers a longer range.[3] Only one of the Wi-Fi devices needs to be compliant with Wi-Fi Direct to establish a peer-to-peer connection.[5] Wi-Fi Direct negotiates the link with a Wi-Fi Protected Setup system that assigns each device a limited wireless access point. The "pairing" of Wi-Fi Direct devices can be set up to require the proximity of a near-field communication tag, a Bluetooth signal, or a button press on one or all of the devices. Simultaneous connections also allow one device connected to the Internet via an infrastructure local area network to share that Internet connection with devices connected to it through Wi-Fi Direct.[6] Conventional Wi-Fi networks are typically based on the presence of controller devices known as wireless access points, which normally combine three primary functions. A typical Wi-Fi home network includes laptops, tablets and phones, as well as devices like modern printers, music devices, and televisions. Most Wi-Fi networks are set up in infrastructure mode, where the access point acts as a central hub to which Wi-Fi-capable devices are connected. All communication between devices goes through the access point. In contrast, Wi-Fi Direct devices are able to communicate with each other without requiring a dedicated wireless access point. The Wi-Fi Direct devices negotiate when they first connect to determine which device shall act as an access point.[citation needed] With the increase in the number and type of devices attaching to Wi-Fi systems, the basic model of a simple router with smart computers became increasingly strained. At the same time, the increasing sophistication of the hotspots presented setup problems for users. To address these problems, there have been numerous attempts to simplify certain aspects of the setup task. A common example is the Wi-Fi Protected Setup system included in most access points manufactured since 2007, when the standard was introduced.[7] Wi-Fi Protected Setup allows access points to be set up simply by entering a PIN or other identification into a connection screen, or in some cases, simply by pressing a button. The Protected Setup system uses this information to send data to a computer, handing it the information needed to complete the network setup and connect to the Internet. From the user's point of view, a single click replaces the multi-step, jargon-filled setup experience formerly required. While the Protected Setup model works as intended, it was intended only to simplify the connection between the access point and the devices that would make use of its services, primarily for accessing the Internet. It provides little help within a network, such as finding and setting up printer access from a computer.
To address those roles, a number of different protocols have been developed, including Universal Plug and Play (UPnP), Devices Profile for Web Services (DPWS), and zero-configuration networking (ZeroConf). These protocols allow devices to seek out other devices within the network, query their capabilities, and provide some level of automatic setup. Wi-Fi Direct has become a standard feature in smartphones and portable media players, and in feature phones as well.[8] The process of adding Wi-Fi to smaller devices has accelerated, and it is now possible to find printers, cameras, scanners, and many other common devices with Wi-Fi in addition to other connections, like USB. The widespread adoption of Wi-Fi in new classes of smaller devices made the need for ad hoc networking much more important. Even without a central Wi-Fi hub or router, it would be useful for a laptop computer to be able to wirelessly connect to a local printer. Although the ad hoc mode was created to address this sort of need, the lack of additional information for discovery makes it difficult to use in practice.[9][10] Although systems like UPnP and Bonjour provide many of the needed capabilities and are included in some devices, a single widely supported standard was lacking, and support within existing devices was far from universal. A guest using their smartphone would likely be able to find a hotspot and connect to the Internet with ease, perhaps using Protected Setup to do so. But the same device would find that streaming music to a computer or printing a file might be difficult, or simply not supported between differing brands of hardware. Wi-Fi Direct can provide a wireless connection to peripherals. Wireless mice, keyboards, remote controls, headsets, speakers, displays, and many other functions can be implemented with Wi-Fi Direct. This began with Wi-Fi mouse products and Wi-Fi Direct remote controls that were shipping circa November 2012. File-sharing applications such as SHAREit on Android and BlackBerry 10 devices could use Wi-Fi Direct, supported by most devices running Android version 4.1 (Jelly Bean), introduced in July 2012, and by BlackBerry 10.2. Android version 4.2 (Jelly Bean) included further refinements to Wi-Fi Direct, including persistent permissions enabling two-way transfer of data between multiple devices. The Miracast standard for the wireless connection of devices to displays is based on Wi-Fi Direct.[citation needed] Wi-Fi Direct essentially embeds a software access point ("soft AP") into any device that must support Direct.[9] The soft AP provides a version of Wi-Fi Protected Setup with its push-button or PIN-based setup. When a device enters the range of the Wi-Fi Direct host, it can connect to it, and then gather setup information using a Protected Setup-style transfer.[9] Connection and setup are so simplified that they may replace Bluetooth in some situations.[11] Soft APs can be as simple or as complex as the role requires. A digital picture frame might provide only the most basic services needed to allow digital cameras to connect and upload images. A smartphone that allows data tethering might run a more complex soft AP that adds the ability to route to the Internet. The standard also includes WPA2 security and features to control access within corporate networks.[9] Wi-Fi Direct-certified devices can connect one-to-one or one-to-many, and not all connected products need to be Wi-Fi Direct-certified. One Wi-Fi Direct-enabled device can connect to legacy Wi-Fi-certified devices.
The Wi-Fi Direct certification program is developed and administered by the Wi-Fi Alliance, the industry group that owns the "Wi-Fi" trademark. The specification is available for purchase from the Wi-Fi Alliance.[12] Intel included Wi-Fi Direct on the Centrino 2 platform, in its My WiFi technology, by 2008.[13] Wi-Fi Direct devices can connect to a notebook computer that plays the role of a software access point (AP). The notebook computer can then provide Internet access to the Wi-Fi Direct-enabled devices without a Wi-Fi AP. Marvell Technology Group,[14] Atheros, Broadcom, Intel, Ralink, and Realtek announced their first products in October 2010.[15] Redpine Signals' chipset was Wi-Fi Direct certified in November of the same year.[16] Google announced Wi-Fi Direct support in Android 4.0 in October 2011.[17] While some Android 2.3 devices like the Samsung Galaxy S II had this feature through proprietary operating-system extensions developed by OEMs, the Galaxy Nexus (released November 2011) was the first Android device to ship with Google's implementation of this feature and an API for developers.[citation needed] Ozmo Devices, which developed integrated circuits (chips) designed for Wi-Fi Direct, was acquired by Atmel in 2012.[18][19] Wi-Fi Direct became available with the BlackBerry 10.2 upgrade.[20][21] As of March 2016, no iPhone device implements Wi-Fi Direct; instead, iOS has its own proprietary feature, namely Apple's Multipeer Connectivity.[22] This protocol and others are used in the AirDrop feature, which is used for sharing both trivial-sized content like links and contact cards and large files of any size between Apple devices without the need for network infrastructure. Wi-Fi Direct was meant to bring somewhat similar capabilities to other devices. The Xbox One, released in 2013, supports Wi-Fi Direct.[23] NVIDIA's SHIELD controller uses Wi-Fi Direct to connect to compatible devices. NVIDIA claims a reduction in latency and an increase in throughput over competing Bluetooth controllers.[24]
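The negotiation-based group creation described earlier can be modeled in a few lines. In the real protocol the devices exchange GO Negotiation frames carrying a 4-bit intent value (0-15) and a tie-breaker bit; the Python sketch below is a simplified model of that rule, with the device names and intent choices invented for illustration.

```python
# Toy model of Wi-Fi Direct negotiation-based group formation. Real devices
# exchange GO Negotiation frames; the fields here are illustrative only.
from dataclasses import dataclass
import random

@dataclass
class Device:
    name: str
    go_intent: int      # 0..15; higher = more willing to be group owner

def negotiate_group_owner(a: Device, b: Device):
    """Return (group_owner, client) per the higher-intent-wins rule."""
    if a.go_intent != b.go_intent:
        owner = a if a.go_intent > b.go_intent else b
    else:
        owner = random.choice([a, b])   # tie-breaker bit, modeled as a coin flip
    client = b if owner is a else a
    return owner, client

phone = Device("phone", go_intent=7)      # battery-powered: middling intent
printer = Device("printer", go_intent=2)  # prefers to be a client
owner, client = negotiate_group_owner(phone, printer)
print(f"{owner.name} becomes the group owner (soft AP); {client.name} joins")
```

The winner then brings up its soft AP, and the loser associates with it as an ordinary client, exactly as in the infrastructure-mode description above.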
https://en.wikipedia.org/wiki/Wi-Fi_Direct
The wireless data exchange standard Bluetooth uses a variety of protocols. Core protocols are defined by the trade organization Bluetooth SIG. Additional protocols have been adopted from other standards bodies. This article gives an overview of the core protocols and those adopted protocols that are widely used. The Bluetooth protocol stack is split in two parts: a "controller stack" containing the timing-critical radio interface, and a "host stack" dealing with high-level data. The controller stack is generally implemented in a low-cost silicon device containing the Bluetooth radio and a microprocessor. The host stack is generally implemented as part of an operating system, or as an installable package on top of an operating system. For integrated devices such as Bluetooth headsets, the host stack and controller stack can be run on the same microprocessor to reduce mass production costs; this is known as a hostless system. The ACL link is the normal type of radio link used for general data packets, using a polling TDMA scheme to arbitrate access. It can carry packets of several types, which are distinguished by the number of time slots they occupy and whether forward error correction is applied. A connection must be explicitly set up and accepted between two devices before packets can be transferred. ACL packets are retransmitted automatically if unacknowledged, allowing for correction of a radio link that is subject to interference. For isochronous data, the number of retransmissions can be limited by a flush timeout; but without using L2CAP retransmission and flow control mode or EL2CAP, a higher layer must handle the packet loss. ACL links are disconnected if nothing is received for the supervision timeout period; the default timeout is 20 seconds, but this may be modified by the master. The SCO link is the type of radio link used for voice data. A SCO link is a set of reserved time slots separated by the SCO interval Tsco, which is determined during logical link establishment by the Central device. Each device transmits encoded voice data in the reserved time slot. There are no retransmissions, but forward error correction can be optionally applied. SCO packets may be sent every 1, 2, or 3 time slots. Enhanced SCO (eSCO) links allow greater flexibility in setting up links: they may use retransmissions to achieve reliability, and allow for a wider variety of packet types and for greater intervals between packets than SCO, thus increasing radio availability for other links. The Link Manager Protocol (LMP) is used for control of the radio link between two devices, handling link establishment, querying device abilities, and power control. It is implemented on the controller. The Host Controller Interface (HCI) provides standardized communication between the host stack (e.g., a PC or mobile phone OS) and the controller (the Bluetooth integrated circuit (IC)). This standard allows the host stack or controller IC to be swapped with minimal adaptation. There are several HCI transport layer standards, each using a different hardware interface to transfer the same command, event, and data packets. The most commonly used are USB (in PCs) and UART (in mobile phones and PDAs). In Bluetooth devices with simple functionality (e.g., headsets), the host stack and controller can be implemented on the same microprocessor. In this case the HCI is optional, although it is often implemented as an internal software interface. The Link Layer is the LMP equivalent for Bluetooth Low Energy (LE), but is simpler. It is implemented on the controller and manages advertisement, scanning, connection, and security from a low-level point of view, close to the hardware. L2CAP is used within the Bluetooth protocol stack.
It passes packets to either the Host Controller Interface (HCI) or, on a hostless system, directly to the Link Manager/ACL link. L2CAP's functions include multiplexing data between higher-layer protocols and the segmentation and reassembly of packets. L2CAP is used to communicate over the host ACL link. Its connection is established after the ACL link has been set up. In basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU and 48 bytes as the minimum mandatory supported MTU. In retransmission and flow control modes, L2CAP can be configured for reliable or asynchronous data per channel by performing retransmissions and CRC checks. Reliability in either of these modes is optionally and/or additionally guaranteed by the lower-layer Bluetooth BDR/EDR air interface by configuring the number of retransmissions and the flush timeout (the time after which the radio will flush packets). In-order sequencing is guaranteed by the lower layer. The EL2CAP specification adds an additional enhanced retransmission mode (ERTM) to the core specification, which is an improved version of the retransmission and flow control modes. ERTM is required when using an AMP (Alternate MAC/PHY), such as 802.11abgn. BNEP[1] is used for delivering network packets on top of L2CAP. This protocol is used by the personal area networking (PAN) profile. BNEP performs a similar function to the Subnetwork Access Protocol (SNAP) in wireless LAN. In the protocol stack, BNEP is bound to L2CAP. The Bluetooth protocol RFCOMM is a simple set of transport protocols, made on top of the L2CAP protocol, providing emulated RS-232 serial ports (up to sixty simultaneous connections to a Bluetooth device at a time). The protocol is based on the ETSI standard TS 07.10. RFCOMM is sometimes called serial port emulation. The Bluetooth serial port profile (SPP) is based on this protocol. RFCOMM provides a simple, reliable data stream to the user, similar to TCP. It is used directly by many telephony-related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth. Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM. In the protocol stack, RFCOMM is bound to L2CAP. The Service Discovery Protocol (SDP) is used to allow devices to discover which services each supports, and what parameters to use to connect to them. For example, when connecting a mobile phone to a Bluetooth headset, SDP will be used to determine which Bluetooth profiles are supported by the headset (headset profile, hands-free profile, advanced audio distribution profile, etc.) and the protocol multiplexer settings needed to connect to each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short-form UUID (16 bits rather than the full 128). In the protocol stack, SDP is bound to L2CAP. The Telephony Control Protocol, also referred to as the telephony control protocol specification binary (TCS binary), is used to set up and control speech and data calls between Bluetooth devices. The protocol is based on the ITU-T standard Q.931, with the provisions of Annex D applied, making only the minimum changes necessary for Bluetooth. TCS is used by the intercom (ICP) and cordless telephony (CTP) profiles. The telephony control protocol specification is not called TCP, to avoid confusion with the transmission control protocol (TCP) used for Internet communication. The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel.
The music control buttons on a stereo headset use this protocol to control the music player. In the protocol stack, AVCTP is bound to L2CAP. The Audio/Video Distribution Transport Protocol (AVDTP) is used by the advanced audio distribution profile to stream music to stereo headsets over an L2CAP channel. It is also intended to be used by the video distribution profile. In the protocol stack, AVDTP is bound to L2CAP. Object exchange (OBEX; also termed IrOBEX) is a communications protocol that facilitates the exchange of binary objects between devices. It is maintained by the Infrared Data Association but has also been adopted by the Bluetooth Special Interest Group and the SyncML wing of the Open Mobile Alliance (OMA). In Bluetooth, OBEX is used for many profiles that require simple data exchange (e.g., object push, file transfer, basic imaging, basic printing, phonebook access, etc.). The Attribute Protocol (ATT) is similar in scope to SDP, but specially adapted and simplified for Low Energy Bluetooth. It allows a client to read and/or write certain attributes exposed by the server in a non-complex, low-power-friendly manner. In the protocol stack, ATT is bound to L2CAP. The Security Manager Protocol (SMP) is used by Bluetooth Low Energy implementations for pairing and transport-specific key distribution. In the protocol stack, SMP is bound to L2CAP.
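One concrete detail from the SDP description above: the 16-bit short-form UUIDs are shorthand for full 128-bit UUIDs built on the Bluetooth Base UUID (00000000-0000-1000-8000-00805F9B34FB); the short value occupies the low 16 bits of the first 32-bit field. The expansion can be computed directly:

```python
import uuid

# The Bluetooth Base UUID; official 16-bit service UUIDs are shorthand
# offsets into this namespace (short value shifted into bits 96-111).
BASE_UUID = uuid.UUID("00000000-0000-1000-8000-00805F9B34FB")

def expand_short_uuid(short: int) -> uuid.UUID:
    """Expand a 16-bit (or 32-bit) SDP UUID to its full 128-bit form."""
    return uuid.UUID(int=BASE_UUID.int + (short << 96))

# 0x110B is the assigned number for the Audio Sink service used by A2DP.
print(expand_short_uuid(0x110B))   # 0000110b-0000-1000-8000-00805f9b34fb
```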
https://en.wikipedia.org/wiki/Bluetooth_Protocols
There are two types of Java programming language application programming interfaces (APIs): official APIs, which are part of the Java platform or defined through the Java Community Process, and unofficial APIs developed by third parties. The following is a partial list of application programming interfaces (APIs) for Java; it is necessarily very incomplete, as the number of APIs available for the Java platform is overwhelming. Real-time Java is a catch-all term for a combination of technologies that allows programmers to write programs that meet the demands of real-time systems in the Java programming language. Java's sophisticated memory management, native support for threading and concurrency, type safety, and relative simplicity have created a demand for its use in many domains. Its capabilities have been enhanced to support real-time computational needs. To overcome typical real-time difficulties, the Java Community introduced a specification for real-time Java, JSR 1. A number of implementations of the resulting Real-Time Specification for Java (RTSJ) have emerged, including a reference implementation from TimeSys, IBM's WebSphere Real Time, Sun Microsystems's Java SE Real-Time Systems,[1] Aonix PERC, and JamaicaVM from aicas. The RTSJ addressed the critical issues by mandating a minimal specification for the threading model (only two models, while allowing other models to be plugged into the VM) and by providing areas of memory that are not subject to garbage collection, along with threads that are not preemptible by the garbage collector. These areas are instead managed using region-based memory management. The Real-Time Specification for Java (RTSJ) is a set of interfaces and behavioral refinements that enable real-time computer programming in the Java programming language. RTSJ 1.0 was developed as JSR 1 under the Java Community Process, which approved the new standard in November 2001. RTSJ 2.0 is being developed under JSR 282. A draft version is available at the JSR 282 JCP page.
https://en.wikipedia.org/wiki/List_of_Java_APIs
The Global Positioning System (GPS) is a satellite-based hyperbolic navigation system owned by the United States Space Force and operated by Mission Delta 31.[2][3] It is one of the global navigation satellite systems (GNSS) that provide geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.[4] It does not require the user to transmit any data, and it operates independently of any telephone or Internet reception, though these technologies can enhance the usefulness of the GPS positioning information.[5] It provides critical positioning capabilities to military, civil, and commercial users around the world. Although the United States government created, controls, and maintains the GPS system, it is freely accessible to anyone with a GPS receiver.[6] The GPS project was started by the U.S. Department of Defense in 1973.[7] The first prototype spacecraft was launched in 1978, and the full constellation of 24 satellites became operational in 1993.[7] After Korean Air Lines Flight 007 was shot down when it mistakenly entered Soviet airspace, President Ronald Reagan announced that the GPS system would be made available for civilian use as of September 16, 1983;[8] however, this civilian use was initially limited to an average accuracy of 100 meters (330 ft) by use of Selective Availability (SA), a deliberate error introduced into the GPS data that military receivers could correct for. As civilian GPS usage grew, there was increasing pressure to remove this error. The SA system was temporarily disabled during the Gulf War, as a shortage of military GPS units meant that many US soldiers were using civilian GPS units sent from home. In the 1990s, Differential GPS systems from the US Coast Guard, Federal Aviation Administration, and similar agencies in other countries began to broadcast local GPS corrections, reducing the effect of both SA degradation and atmospheric effects (which military receivers also corrected for). The U.S. military had also developed methods to perform local GPS jamming, meaning that the ability to globally degrade the system was no longer necessary. As a result, United States President Bill Clinton signed a bill ordering that Selective Availability be disabled on May 1, 2000;[9] and, in 2007, the US government announced that the next generation of GPS satellites would not include the feature at all. Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block III satellites and the Next Generation Operational Control System (OCX),[10] which was authorized by the U.S. Congress in 2000. When Selective Availability was discontinued, GPS was accurate to about 5 meters (16 ft).
GPS receivers that use the L5 band have much higher accuracy of 30 centimeters (12 in), while those for high-end applications such as engineering and land surveying are accurate to within 2 cm (3⁄4 in) and can even provide sub-millimeter accuracy with long-term measurements.[9][11][12] Consumer devices such as smartphones can be accurate to 4.9 m (16 ft) or better when used with assistive services like Wi-Fi positioning.[13] As of July 2023, 18 GPS satellites broadcast L5 signals, which are considered pre-operational prior to being broadcast by a full complement of 24 satellites in 2027.[14] The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems,[15] combining ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which originally used 24 satellites, for use by the United States military; it became fully operational in 1993. Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it.[16] The work of Gladys West on the creation of the mathematical geodetic Earth model is credited as instrumental in the development of computational techniques for detecting satellite positions with the precision needed for GPS.[17][18] The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator System, developed in the early 1940s. In 1955, Friedwardt Winterberg proposed a test of general relativity: detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites. Special and general relativity predicted that the clocks on GPS satellites, as observed by those on Earth, run 38 microseconds faster per day than those on the Earth. The design of GPS corrects for this difference, because without doing so, GPS calculated positions would accumulate errors of up to 10 kilometers per day (6 mi/d).[19] When the Soviet Union launched its first artificial satellite (Sputnik 1) in 1957, two American physicists, William Guier and George Weiffenbach at Johns Hopkins University's Applied Physics Laboratory (APL), monitored its radio transmissions.[20] Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The director of the APL gave them access to their UNIVAC I computer to perform the heavy calculations required. Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given the satellite's. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system.[21] In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.[22][23][24] TRANSIT was first successfully tested in 1960.[25] It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required for GPS.[26] In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations,[27] became the first worldwide radio navigation system.
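The relativistic figure quoted above can be checked with a one-line calculation: an uncorrected clock offset of 38 microseconds per day, multiplied by the speed of light, corresponds to roughly 11 km of accumulated ranging error per day, the same order of magnitude as the 10 km/day stated above.

```python
# Back-of-the-envelope check: an uncorrected 38 us/day clock offset maps
# to a ranging error of c * dt per day, on the order of 10 km.
C = 299_792_458            # speed of light, m/s
dt_per_day = 38e-6         # net relativistic clock offset, s/day

range_error_m = C * dt_per_day
print(f"~{range_error_m / 1000:.1f} km of pseudorange error per day")  # ~11.4
```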
Limitations of these systems drove the need for a more universal navigation solution with greater accuracy. Although there were wide needs for accurate navigation in military and civilian sectors, almost none of them was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded.[citation needed] It is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier. Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs.[28] The USAF, with two-thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The U.S. Navy and U.S. Air Force were developing their own technologies in parallel to solve what was essentially the same problem. To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (comparable to the Soviet SS-24 and SS-25), and so the need to fix the launch position had similarity to the SLBM situation. In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN system. A follow-on study, Project 57, was performed in 1963, and it was "in this study that the GPS concept was born". That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS"[29] and promised increased accuracy for U.S. Air Force bombers as well as ICBMs. Updates from the Navy TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory (NRL) continued making advances with its Timation (Time Navigation) satellites, the first launched in 1967, the second in 1969, the third in 1974 carrying the first atomic clock into orbit, and the fourth launched in 1977.[30] Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite, used for geodetic surveying.[31] The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969.[32] With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program. Satellite orbital position errors, induced by variations in the gravity field and radar refraction among others, had to be resolved. A team led by Harold L.
Jury of Pan Am Aerospace Division in Florida used real-time data assimilation and recursive estimation from 1970 to 1973 to do so, reducing systematic and residual errors to a manageable level to permit accurate navigation.[33] During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that the real synthesis that became GPS was created. Later that year, the DNSS program was named Navstar.[34] Navstar is often erroneously considered an acronym for "NAVigation System using Timing And Ranging", but it was never considered as such by the GPS Joint Program Office (TRW may have once advocated for a different navigational system that used that acronym).[35] With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites, Navstar-GPS.[36] Ten "Block I" prototype satellites were launched between 1978 and 1985 (an additional unit was destroyed in a launch failure).[37] The effect of the ionosphere on radio transmission was investigated in a geophysics laboratory of the Air Force Cambridge Research Laboratory, renamed the Air Force Geophysical Research Lab (AFGRL) in 1974. AFGRL developed the Klobuchar model for computing ionospheric corrections to GPS location.[38] Of note is work done by Australian space scientist Elizabeth Essex-Cohen at AFGRL in 1974. She was concerned with the curving of the paths of radio waves (atmospheric refraction) traversing the ionosphere from NavSTAR satellites.[39] After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down by a Soviet interceptor aircraft after straying into prohibited airspace because of navigational errors,[40] in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good.[41] The first Block II satellite was launched on February 14, 1989,[42] and the 24th satellite was launched in 1994. The GPS program cost at this point, not including the cost of the user equipment but including the costs of the satellite launches, has been estimated at US$5 billion (equivalent to $11 billion in 2024).[43] Initially, the highest-quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded, in a policy known as Selective Availability. This changed on May 1, 2000, when U.S. President Bill Clinton signed a policy directive to turn off Selective Availability and provide the same accuracy to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, in view of the widespread growth of differential GPS services operated by private industry to improve civilian accuracy. Moreover, the U.S. military was developing technologies to deny GPS service to potential adversaries on a regional basis.[44] Selective Availability was removed from the GPS architecture beginning with GPS III. Since its deployment, the U.S. has implemented several improvements to the GPS service, including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S.
Department of Defense through a series ofsatellite acquisitionsto meet the growing needs of the military, civilians, and the commercial market.

As of early 2015, high-quality Standard Positioning Service (SPS) GPS receivers provided horizontal accuracy of better than 3.5 meters (11 ft),[9]although many factors such as receiver and antenna quality and atmospheric issues can affect this accuracy.

GPS is owned and operated by the United States government as a national resource. The Department of Defense is the steward of GPS. TheInteragency GPS Executive Board (IGEB)oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning the GPS and related systems.[45]The executive committee is chaired jointly by the Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, theJoint Chiefs of StaffandNASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison.

The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis" and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses". USA-203(Block IIR-M) is unhealthy.[50]For a more complete list, seeList of GPS satellites.

On February 10, 1993, theNational Aeronautic Associationselected the GPS Team as winners of the 1992Robert J. Collier Trophy, the US's most prestigious aviation award. This team combined researchers from the Naval Research Laboratory, the U.S. Air Force, theAerospace Corporation,Rockwell InternationalCorporation, andIBMFederal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago".

Two GPS developers, Ivan A. Getting and Bradford W. Parkinson, received theNational Academy of EngineeringCharles Stark Draper Prizefor 2003. GPS developerRoger L. Eastonreceived theNational Medal of Technologyon February 13, 2006.[70]Francis X. Kane(Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, March 2, 2010, for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B.

In 1998, GPS technology was inducted into theSpace FoundationSpace Technology Hall of Fame.[71]On October 4, 2011, theInternational Astronautical Federation(IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member the American Institute of Aeronautics and Astronautics (AIAA).
The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity.[72]On December 6, 2018, Gladys West was inducted into the Air Force Space and Missile Pioneers Hall of Fame in recognition of her work on an extremely accurate geodetic Earth model, which was ultimately used to determine the orbit of the GPS constellation.[73][74]On February 12, 2019, four founding members of the project were awarded the Queen Elizabeth Prize for Engineering, with the chair of the awarding board stating: "Engineering is the foundation of civilisation; ...They've re-written, in a major way, the infrastructure of our world."[75]

The GPS satellites carry very stableatomic clocksthat are synchronized with one another and with the reference atomic clocks at the ground control stations; any drift of the clocks aboard the satellites from the reference time maintained on the ground stations is corrected regularly.[76]Since the speed ofradio waves(speed of light)[77]is constant and independent of the satellite speed, the time delay between when the satellite transmits a signal and the ground station receives it is proportional to the distance from the satellite to the ground station. With the distance information collected from multiple ground stations, the location coordinates of any satellite at any time can be calculated with great precision.

Each GPS satellite carries an accurate record of its own position and time,[78]and broadcasts that data continuously. Based on data received from multiple GPSsatellites, an end user's GPS receiver can calculate its ownfour-dimensional positioninspacetime. However, at a minimum, four satellites must be in view of the receiver for it to compute the four unknown quantities (three position coordinates and the deviation of its own clock from satellite time).[79]

Each GPS satellite continually broadcasts a signal (carrier wavewithmodulation) that includes the time of transmission (TOT) according to the satellite's on-board clock and the satellite's position at that time. Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite signals. From the TOAs and the TOTs, the receiver forms fourtime of flight(TOF) values, which (given the speed of light) are approximately equivalent to the receiver-satellite ranges plus the offset between the receiver clock and GPS time multiplied by the speed of light; these biased ranges are called pseudoranges. The receiver then computes its three-dimensional position and clock deviation from the four TOFs.

In practice the receiver position (in three-dimensionalCartesian coordinateswith origin at the Earth's center) and the offset of the receiver clock relative to GPS time are computed simultaneously, using thenavigation equationsto process the TOFs.

The receiver's Earth-centered solution location is usually converted tolatitude,longitudeand height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to thegeoid, which is essentially mean sea level. These coordinates may be displayed, such as on amoving map display, or recorded or used by some other system, such as a vehicle guidance system.

Although usually not formed explicitly in the receiver processing, the conceptual time differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds to ahyperboloidof revolution (seeMultilateration). The line connecting the two satellites involved (and its extensions) forms the axis of the hyperboloid.
The receiver is located at the point where three hyperboloids intersect.[80][81]

It is sometimes incorrectly said that the user location is at the intersection of three spheres. While simpler to visualize, this is the case only if the receiver has a clock synchronized with the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range differences). There are marked performance benefits to the user carrying a clock synchronized with the satellites. Foremost is that only three satellites are needed to compute a position solution. If it were an essential part of the GPS concept that all users needed to carry a synchronized clock, a smaller number of satellites could be deployed, but the cost and complexity of the user equipment would increase.

The description above is representative of a receiver start-up situation. Most receivers have atrack algorithm, sometimes called atracker, that combines sets of satellite measurements collected at different times—in effect, taking advantage of the fact that successive receiver positions are usually close to each other. After a set of measurements is processed, the tracker predicts the receiver location corresponding to the next set of satellite measurements. When the new measurements are collected, the receiver uses a weighting scheme to combine the new measurements with the tracker prediction (a toy sketch of such a weighting scheme appears below). In general, a tracker can (a) improve receiver position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and direction. The disadvantage of a tracker is that changes in speed or direction can be computed only with a delay, and that derived direction becomes inaccurate when the distance traveled between two position measurements drops below or near therandom errorof position measurement.

GPS units can use measurements of theDoppler shiftof the signals received to compute velocity accurately.[82]More advanced navigation systems use additional sensors like acompassor aninertial navigation systemto complement GPS.

GPS requires four or more satellites to be visible for accurate navigation.[83]The solution of thenavigation equationsgives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time-of-day, thereby eliminating the need for a more precise and possibly impractical receiver-based clock. Applications for GPS such astime transfer, traffic signal timing, andsynchronization of cell phone base stationsmake use of this cheap and highly accurate timing. Some GPS applications use this time for display or, other than for the basic position calculations, do not use it at all.

Although four satellites are required for normal operation, fewer suffice in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship on the open ocean usually has a known elevation close to 0 m, and the elevation of an aircraft may be known.[a]Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude,dead reckoning,inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.[84][85][86]

The current GPS consists of three major segments: the space segment, a control segment, and a user segment.[55]TheU.S. Space Forcedevelops, maintains, and operates the space and control segments.
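Returning to the track algorithm described above: one minimal weighting scheme of the kind the text describes is an alpha-beta tracker. The sketch below is illustrative only; the one-dimensional setup and the gain constants are assumptions made for the example (real receivers typically run Kalman-style filters over three-dimensional fixes).

    def alpha_beta_track(fixes, dt, alpha=0.85, beta=0.005):
        """Toy 1-D tracker: blend each new position fix with a predicted one."""
        x_est, v_est = fixes[0], 0.0
        positions, velocities = [x_est], [v_est]
        for z in fixes[1:]:
            x_pred = x_est + v_est * dt       # predict the next fix from the current state
            r = z - x_pred                    # innovation: new measurement minus prediction
            x_est = x_pred + alpha * r        # weighted blend of prediction and measurement
            v_est = v_est + (beta / dt) * r   # slow correction of the velocity estimate
            positions.append(x_est)
            velocities.append(v_est)
        return positions, velocities

The two gains embody the trade-off noted above: weighting the prediction heavily rejects noisy measurements, but it delays the response to genuine changes in speed or direction.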
GPS satellitesbroadcast signalsfrom space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.[87]

The space segment (SS) is composed of 24 to 32 satellites, or Space Vehicles (SV), inmedium Earth orbit, and also includes the payload adapters to the boosters required to launch them into orbit. The GPS design originally called for 24 SVs, eight each in three approximately circularorbits,[88]but this was modified to six orbital planes with four satellites each.[89]The six orbit planes have approximately 55°inclination(tilt relative to the Earth'sequator) and are separated by 60°right ascensionof theascending node(angle along the equator from a reference point to the orbit's intersection).[90]Theorbital periodis one-half of asidereal day,i.e., 11 hours and 58 minutes, so that the satellites pass over the same locations[91]or almost the same locations[92]every day. The orbits are arranged so that at least six satellites are always withinline of sightfrom everywhere on the Earth's surface.[93]The result of this objective is that the four satellites are not evenly spaced (90°) apart within each orbit. In general terms, the angular differences between satellites in each orbit are 30°, 105°, 120°, and 105°, which sum to 360°.[94]

Orbiting at an altitude of approximately 20,200 km (12,600 mi), corresponding to an orbital radius of approximately 26,600 km (16,500 mi),[95]each SV makes two complete orbits eachsidereal day, repeating the sameground trackeach day.[96]This was very helpful during development, because even with only four satellites, correct alignment meant that all four were visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.

As of February 2019,[97]there are 31 satellites in the GPSconstellation, 27 of which are in use at a given time with the rest allocated as stand-bys. A 32nd was launched in 2018 but, as of July 2019, remained in evaluation. More decommissioned satellites are in orbit and available as spares. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve not only accuracy but also the reliability and availability of the system, relative to a uniform arrangement, when multiple satellites fail.[98]With the expanded constellation, nine satellites are usually visible at any time from any point on the Earth with a clear horizon, ensuring considerable redundancy over the minimum four satellites needed for a position.

The control segment (CS) is composed of a master control station (MCS), an alternate master control station, four dedicated ground antennas, and six dedicated monitor stations. The MCS can also accessSatellite Control Network(SCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Space Force monitoring stations in Hawaii,Kwajalein Atoll,Ascension Island,Diego Garcia,Colorado Springs, ColoradoandCape Canaveral, Florida, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia and Washington, DC.[99]The tracking information is sent to the MCS atSchriever Space Force Base25 km (16 mi) ESE of Colorado Springs, which is operated by the2nd Space Operations Squadron(2 SOPS) of the U.S. Space Force.
Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located atKwajalein,Ascension Island,Diego Garcia, andCape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a fewnanosecondsof each other, and adjust theephemerisof each satellite's internal orbital model. The updates are created by aKalman filterthat uses inputs from the ground monitoring stations,space weatherinformation, and various other inputs.[100]

When a satellite's orbit is being adjusted, the satellite is markedunhealthy, so receivers do not use it. After the maneuver, engineers track the new orbit from the ground, upload the new ephemeris, and mark the satellite healthy again.

The operation control segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports GPS users and keeps the GPS operational and performing within specification. OCS replaced the 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provide a foundation for a new security architecture that supported U.S. armed forces.

OCS will continue to be the ground control system of record until the new segment, Next Generation GPS Operation Control System[10](OCX), is fully developed and functional. The U.S. Department of Defense has claimed that the new capabilities provided by OCX will be the cornerstone for enhancing GPS's mission capabilities, enabling U.S. Space Force to enhance GPS operational services to U.S. combat forces, civil partners and domestic and international users.[101][102]The GPS OCX program also will reduce cost, schedule and technical risk. It is designed to provide 50%[103]sustainment cost savings through efficient software architecture and Performance-Based Logistics. In addition, GPS OCX is expected to cost millions of dollars less than the cost to upgrade OCS while providing four times the capability. The GPS OCX program represents a critical part of GPS modernization and provides information assurance improvements over the current GPS OCS program.

On September 14, 2011,[104]the U.S. Air Force announced the completion of the GPS OCX Preliminary Design Review and confirmed that the OCX program was ready for the next phase of development. The GPS OCX program then missed major milestones, pushing its launch into 2021, five years past the original deadline. According to the Government Accountability Office in 2019, even the 2021 deadline appeared unlikely to be met.[105]

The project remained delayed in 2023, and was (as of June 2023) 73% over its original estimated budget.[106][107]In late 2023, Frank Calvelli, the assistant secretary of the Air Force for space acquisitions and integration, stated that the project was estimated to go live some time during the summer of 2024.[108]

The user segment (US) is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often acrystal oscillator). They may also include a display for providing location and speed information to the user. GPS receivers may include an input for differential corrections, using theRTCMSC-104 format.
This is typically in the form of anRS-232port at 4,800 bit/s speed. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM.[citation needed]Receivers with internal DGPS receivers can outperform those using external RTCM data.[citation needed]As of 2006, even low-cost units commonly includeWide Area Augmentation System(WAAS) receivers.

Many GPS receivers can relay position data to a PC or other device using theNMEA 0183protocol (a minimal parsing sketch appears below). Although this protocol is officially defined by the National Marine Electronics Association (NMEA),[109]references to this protocol have been compiled from public records, allowing open source tools likegpsdto read the protocol without violating intellectual property laws.[clarification needed]Other proprietary protocols exist as well, such as theSiRFandMTKprotocols. Receivers can interface with other devices using methods including a serial connection,USB, orBluetooth.

While originally a military project, GPS is considered adual-use technology, meaning it has significant civilian applications as well. GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well-synchronized hand-off switching.[87]Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer.

The U.S. government controls the export of some civilian receivers. All GPS receivers capable of functioning above 60,000 ft (18 km) above sea level and 1,000 kn (500 m/s; 2,000 km/h; 1,000 mph), or designed or modified for use with unmanned missiles and aircraft, are classified asmunitions(weapons)—which means they requireState Departmentexport licenses.[140]This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code. Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ: the rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches that regularly reach 30 km (100,000 feet). These limits only apply to units or components exported from the United States; a growing trade in various components exists, including GPS units from other countries expressly sold asITAR-free.

As of 2009, military GPS applications include navigation, target tracking, missile and projectile guidance, search and rescue, and reconnaissance. GPS-type navigation was first used in war in the1991 Persian Gulf War, before GPS was fully developed in 1995, to assistCoalition Forcesin navigating and performing maneuvers in the war. The war also demonstrated the vulnerability of GPS to beingjammedwhen Iraqi forces installed jamming devices, which emitted radio noise, on likely targets, disrupting reception of the weak GPS signal.[147]

GPS's vulnerability to jamming is a threat that continues to grow as jamming equipment and experience grows.[148][149]GPS signals have been reported to have been jammed many times over the years for military purposes.
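As an illustration of theNMEA 0183output mentioned above, the following sketch verifies a sentence checksum and extracts a position from a GGA fix sentence. It is a minimal sketch under stated assumptions: real parsers must also handle empty fields, other sentence types and talker IDs, and the sample sentence is a commonly quoted textbook example, not live data.

    def nmea_checksum_ok(sentence):
        """Check the XOR checksum of an NMEA 0183 sentence such as '$GPGGA,...*47'."""
        body, _, expected = sentence.strip().lstrip('$').partition('*')
        checksum = 0
        for ch in body:
            checksum ^= ord(ch)          # NMEA checksum: XOR of every character between $ and *
        return f"{checksum:02X}" == expected.upper()

    def parse_gga(sentence):
        """Pull latitude, longitude, satellite count and altitude out of a GGA sentence."""
        f = sentence.split(',')
        lat = int(f[2][:2]) + float(f[2][2:]) / 60.0   # ddmm.mmmm -> decimal degrees
        lon = int(f[4][:3]) + float(f[4][3:]) / 60.0   # dddmm.mmmm -> decimal degrees
        if f[3] == 'S':
            lat = -lat
        if f[5] == 'W':
            lon = -lon
        return {'lat': lat, 'lon': lon, 'satellites': int(f[7]), 'altitude_m': float(f[9])}

    sample = '$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47'
    assert nmea_checksum_ok(sample)
    print(parse_gga(sample))   # about 48.117 N, 11.517 E, 8 satellites, 545.4 m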
Russia appears to have several objectives with its jamming activities, such as intimidating neighbors while undermining confidence in their reliance on American systems, promoting its GLONASS alternative, disrupting Western military exercises, and protecting assets from drones.[150]China uses jamming to discourage US surveillance aircraft near the contestedSpratly Islands.[151]North Koreahas mounted several major jamming operations near its border with South Korea and offshore, disrupting flights, shipping and fishing operations.[152]The Iranian Armed Forces disrupted the GPS of civilian airliner FlightPS752when they shot down the aircraft.[153][154]

In theRusso-Ukrainian War, GPS-guided munitions provided to Ukraine by NATO countries experienced significant failure rates as a result of Russian electronic warfare. The rate at which Excalibur artillery shells hit their targets dropped from 70% to 6% as Russia adapted its electronic warfare activities.[155]

While most clocks derive their time fromCoordinated Universal Time(UTC), the atomic clocks on the satellites are set toGPS time. The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain newleap secondsor other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset fromInternational Atomic Time(TAI) (TAI – GPS = 19 seconds). Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.[85]: Section 1.2.2

The GPS navigation message includes the difference between GPS time and UTC. As of January 2017, GPS time is 18 seconds ahead of UTC because of the leap second added to UTC on December 31, 2016.[156]Receivers subtract this offset from GPS time to calculate UTC and specific time zone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits).

GPS time is theoretically accurate to about 14 nanoseconds, due to theclock driftrelative toInternational Atomic Timethat the atomic clocks in GPS transmitters experience.[157]Most receivers lose some accuracy in their interpretation of the signals and are only accurate to about 100 nanoseconds.[158][159]

The GPS implements two major corrections to its time signals for relativistic effects: one for the relative velocity of satellite and receiver, using the special theory of relativity, and one for the difference in gravitational potential between satellite and receiver, using general relativity. The acceleration of the satellite could also be computed independently as a correction, depending on purpose, but normally the effect is already dealt with in the first two corrections.[160][161]

As opposed to the year, month, and day format of theGregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). It happened the second time at 23:59:42 UTC on April 6, 2019. The modular arithmetic involved is illustrated in the sketch below.
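The sketch assumes only what the text states: a GPS epoch of January 6, 1980, a week number broadcast modulo 1,024, and an externally supplied approximate date used to pick the correct 1,024-week era (leap seconds are ignored for simplicity).

    from datetime import date, timedelta

    GPS_EPOCH = date(1980, 1, 6)   # start of GPS week zero

    def full_gps_week(truncated_week, approx_date):
        """Recover the full week number from the 10-bit broadcast value, given
        an approximate date that is correct to within about 512 weeks."""
        approx_week = (approx_date - GPS_EPOCH).days // 7
        era = round((approx_week - truncated_week) / 1024)   # nearest 1,024-week era
        return truncated_week + 1024 * era

    # Week 1023 ended on August 21, 1999, so a broadcast week of 0 around that
    # date maps to full week 1024; 2,048 weeks after the epoch lands in April 2019.
    print(full_gps_week(0, date(1999, 8, 25)))   # -> 1024
    print(GPS_EPOCH + timedelta(weeks=2048))     # -> 2019-04-07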
To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern in the future, the modernized GPS civil navigation (CNAV) message will use a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until 2137 (157 years after GPS week zero).

The navigational signals transmitted by GPS satellites encode a variety of information including satellite positions, the state of the internal clocks, and the health of the network. These signals are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: a public encoding that enables lower-resolution navigation, and an encrypted encoding used by the U.S. military.[162]

Each GPS satellite continuously broadcasts anavigation messageon L1 (C/A and P/Y) and L2 (P/Y) frequencies at a rate of 50 bits per second (seebitrate). Each complete message takes 750 seconds (12½ minutes) to transmit. The message structure has a basic format of a 1500-bit-long frame made up of five subframes, each subframe being 300 bits (6 seconds) long. Subframes 4 and 5 aresubcommutated25 times each, so that a complete data message requires the transmission of 25 full frames. Each subframe consists of ten words, each 30 bits long. Thus, with 300 bits in a subframe times 5 subframes in a frame times 25 frames in a message, each message is 37,500 bits long. At a transmission rate of 50 bit/s, this gives 750 seconds to transmit an entirealmanac message (GPS); this arithmetic is spelled out in the sketch below. Each 30-second frame begins precisely on the minute or half-minute as indicated by the atomic clock on each satellite.[163]

The first subframe of each frame encodes the week number and the time within the week,[164]as well as the data about the health of the satellite. The second and the third subframes contain theephemeris– the precise orbit for the satellite. The fourth and fifth subframes contain thealmanac, which contains coarse orbit and status information for up to 32 satellites in the constellation as well as data related to error correction. Thus, to obtain an accurate satellite location from this transmitted message, the receiver must demodulate the message from each satellite it includes in its solution for 18 to 30 seconds. To collect all transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds, or 12½ minutes.[165]

All satellites broadcast at the same frequencies, encoding signals using uniquecode-division multiple access(CDMA) so receivers can distinguish individual satellites from each other. The system uses two distinct CDMA encoding types: the coarse/acquisition (C/A) code, which is accessible by the general public, and the precise (P(Y)) code, which is encrypted so that only the U.S. military and other NATO nations who have been given access to the encryption code can access it.[166]

The ephemeris is updated every 2 hours and is sufficiently stable for 4 hours, with provisions for updates every 6 hours or longer in non-nominal conditions. The almanac is typically updated every 24 hours. Additionally, data for a few weeks following is uploaded in case of transmission updates that delay data upload.[citation needed]

All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal).
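The frame arithmetic above is easy to verify; the lines below simply recompute the quoted figures from the bit counts and the 50 bit/s data rate given in the text.

    BITS_PER_WORD = 30
    WORDS_PER_SUBFRAME = 10
    SUBFRAMES_PER_FRAME = 5
    FRAMES_PER_MESSAGE = 25        # subframes 4 and 5 are subcommutated 25 ways
    BIT_RATE = 50                  # bits per second

    frame_bits = BITS_PER_WORD * WORDS_PER_SUBFRAME * SUBFRAMES_PER_FRAME
    message_bits = frame_bits * FRAMES_PER_MESSAGE

    print(frame_bits)               # 1500 bits per frame
    print(frame_bits / BIT_RATE)    # 30 seconds per frame
    print(message_bits)             # 37500 bits per complete message
    print(message_bits / BIT_RATE)  # 750 seconds, i.e. 12.5 minutes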
The satellite network uses a CDMA spread-spectrum technique[167]: 607where the low-bitrate message data is encoded with a high-ratepseudo-random(PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 millionchipsper second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The actual internal reference of the satellites is 10.22999999543 MHz to compensate forrelativistic effects[168][169]that make observers on the Earth perceive a different time reference with respect to the transmitters in orbit. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code.[94]The P code can be encrypted as a so-called P(Y) code that is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time-of-day to the user.

The L3 signal at a frequency of 1.38105 GHz is used to transmit data from the satellites to ground stations. This data is used by the United States Nuclear Detonation (NUDET) Detection System (USNDS) to detect, locate, and report nuclear detonations (NUDETs) in the Earth's atmosphere and near space.[170]One usage is the enforcement of nuclear test ban treaties.

The L4 band at 1.379913 GHz is being studied for additional ionospheric correction.[167]: 607

The L5 frequency band at 1.17645 GHz was added in the process ofGPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite that provides this signal was launched in May 2010.[171]On February 5, 2016, the 12th and final Block IIF satellite was launched.[172]The L5 signal consists of two carrier components that are in phase quadrature with each other. Each carrier component is bi-phase shift key (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."[173]

In 2011, a conditional waiver was granted toLightSquaredto operate a terrestrial broadband service near the L1 band. Although LightSquared had applied for a license to operate in the 1525 to 1559 MHz band as early as 2003, and the application was put out for public comment, the FCC asked LightSquared to form a study group with the GPS community to test GPS receivers and identify issues that might arise due to the larger signal power from the LightSquared terrestrial network. The GPS community had not objected to the LightSquared (formerly MSV and SkyTerra) applications until November 2010, when LightSquared applied for a modification to its Ancillary Terrestrial Component (ATC) authorization. This filing (SAT-MOD-20101118-00239) amounted to a request to run several orders of magnitude more power in the same frequency band for terrestrial base stations, essentially repurposing what was supposed to be a "quiet neighborhood" for signals from space as the equivalent of a cellular network. Testing in the first half of 2011 demonstrated that the effects from the lower 10 MHz of spectrum were minimal to GPS devices (less than 1% of the total GPS devices are affected). The upper 10 MHz intended for use by LightSquared may have some effect on GPS devices.
There is some concern that this may seriously degrade the GPS signal for many consumer uses.[174][175]Aviation Weekmagazine reported that the latest testing (June 2011) confirmed "significant jamming" of GPS by LightSquared's system.[176]

Because all of the satellite signals are modulated onto the same L1 carrier frequency, the signals must be separated after demodulation. This is done by assigning each satellite a unique binarysequenceknown as aGold code. The signals are decoded after demodulation using addition of the Gold codes corresponding to the satellites monitored by the receiver[177][178](a toy generator for these codes is sketched below).

If the almanac information has previously been acquired, the receiver picks the satellites to listen for by their PRNs, unique numbers in the range 1 through 32. If the almanac information is not in memory, the receiver enters a search mode until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then acquire the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data. Processing of the navigation message enables the determination of the time of transmission and the satellite position at this time. For more information seeDemodulation and Decoding, Advanced.

The receiver uses messages received from satellites to determine the satellite positions and time sent. The x, y, and z components of satellite position and the time sent (s) are designated as [xi, yi, zi, si], where the subscript i denotes the satellite and has the value 1, 2, ..., n, with n ≥ 4. When the time of message reception indicated by the on-board receiver clock is t̃i, the true reception time is ti = t̃i − b, where b is the receiver's clock bias relative to the much more accurate GPS clocks employed by the satellites. The receiver clock bias is the same for all received satellite signals (assuming the satellite clocks are all perfectly synchronized). The message's transit time is t̃i − b − si, where si is the satellite time of transmission. Assuming the message traveled at the speed of light, c, the distance traveled is (t̃i − b − si)c.

For n satellites, the equations to satisfy are di = (t̃i − b − si)c for i = 1, 2, ..., n, where di is the geometric distance or range between receiver and satellite i, given by di = √((x − xi)² + (y − yi)² + (z − zi)²) (the values without subscripts are the x, y, and z components of receiver position). Definingpseudorangesas pi = (t̃i − si)c, we see they are biased versions of the true range: pi = di + bc for i = 1, 2, ..., n.

Since the equations have four unknowns [x, y, z, b]—the three components of GPS receiver position and the clock bias—signals from at least four satellites are necessary to attempt solving these equations. They can be solved by algebraic or numerical methods. Existence and uniqueness of GPS solutions are discussed by Abell and Chaffee.[80]When n is greater than four, this system isoverdeterminedand afitting methodmust be used.

The amount of error in the results varies with the received satellites' locations in the sky, since certain configurations (when the received satellites are close together in the sky) cause larger errors. Receivers usually calculate a running estimate of the error in the calculated position.
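As promised above, a toy generator for the C/A Gold codes: two 10-bit linear-feedback shift registers are XORed, with a per-satellite pair of phase-selector taps on the second register. The feedback taps below are the standard C/A ones; only the first five PRN selector pairs are tabulated here, and a production implementation would take all 32 pairs from the GPS interface specification.

    def ca_code(prn):
        """Generate the 1023-chip C/A Gold code for one satellite PRN."""
        selector = {1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9), 5: (1, 9)}
        s1, s2 = selector[prn]              # G2 phase-selector taps for this PRN
        g1 = [1] * 10                       # both registers start as all ones
        g2 = [1] * 10
        chips = []
        for _ in range(1023):
            # output chip: G1 output XOR a delayed G2, formed by the two selector taps
            chips.append(g1[9] ^ g2[s1 - 1] ^ g2[s2 - 1])
            new_g1 = g1[2] ^ g1[9]                                  # G1 feedback taps 3, 10
            new_g2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]  # G2 taps 2,3,6,8,9,10
            g1 = [new_g1] + g1[:9]
            g2 = [new_g2] + g2[:9]
        return chips

    print(''.join(str(c) for c in ca_code(1)[:10]))   # 1100100000, octal 1440 (PRN 1)

Because the codes are nearly orthogonal, correlating the received signal against a local copy of one PRN suppresses the other satellites sharing the carrier.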
This running error estimate is computed by multiplying the basic resolution of the receiver by quantities called thegeometric dilution of precision(GDOP) factors, calculated from the relative sky directions of the satellites used.[181]The receiver location is expressed in a specific coordinate system, such as latitude and longitude using theWGS 84geodetic datumor a country-specific system.[182]

The GPS equations can be solved by numerical and analytical methods. Geometrical interpretations can enhance the understanding of these solution methods.

The measured ranges, called pseudoranges, contain clock errors. In a simplified idealization in which the receiver and satellite clocks are synchronized, the pseudoranges equal the true ranges, and these true ranges represent the radii of spheres, each centered on one of the transmitting satellites. The solution for the position of the receiver is then at the intersection of the surfaces of these spheres; seetrilateration(more generally, true-range multilateration). Signals from at minimum three satellites are required, and their three spheres would typically intersect at two points.[183]One of the points is the location of the receiver, and the other moves rapidly in successive measurements and would not usually be on Earth's surface.

In practice, there are many sources of inaccuracy besides clock bias, including random errors as well as the potential for precision loss from subtracting numbers close to each other if the centers of the spheres are relatively close together. This means that the position calculated from three satellites alone is unlikely to be accurate enough. Data from more satellites can help because of the tendency for random errors to cancel out and also by giving a larger spread between the sphere centers. But at the same time, more spheres will not generally intersect at one point. Therefore, a near intersection gets computed, typically via least squares. The more signals available, the better the approximation is likely to be.

If the pseudorange between the receiver and satellite i and the pseudorange between the receiver and satellite j are subtracted, pi − pj, the common receiver clock bias (b) cancels out, resulting in a difference of distances di − dj. The locus of points having a constant difference in distance to two points (here, two satellites) is ahyperbolaon a plane and ahyperboloid of revolution(more specifically, atwo-sheeted hyperboloid) in 3D space (seeMultilateration). Thus, from four pseudorange measurements, the receiver can be placed at the intersection of the surfaces of three hyperboloids, each withfociat a pair of satellites. With additional satellites, the multiple intersections are not necessarily unique, and a best-fitting solution is sought instead.[80][81][184][185][186][187]

The receiver position can also be interpreted as the center of aninscribed sphere(insphere) of radius bc, given by the receiver clock bias b (scaled by the speed of light c). The insphere location is such that it touches other spheres. Thecircumscribing spheresare centered at the GPS satellites, whose radii equal the measured pseudoranges pi. This configuration is distinct from the one described above, in which the spheres' radii were the unbiased or geometric ranges di.[186]: 36–37[188]

The clock in the receiver is usually not of the same quality as the ones in the satellites and will not be accurately synchronized to them. This producespseudorangeswith large differences compared to the true distances to the satellites.
Therefore, in practice, the time difference between the receiver clock and the satellite time is defined as an unknown clock bias b. The equations are then solved simultaneously for the receiver position and the clock bias. The solution space [x, y, z, b] can be seen as a four-dimensionalspacetime, and signals from at minimum four satellites are needed. In that case each of the equations describes ahypercone(or spherical cone),[189]with the cusp located at the satellite, and the base a sphere around the satellite. The receiver is at the intersection of four or more of such hypercones.

When more than four satellites are available, the calculation can use the four best, or more than four simultaneously (up to all visible satellites), depending on the number of receiver channels, processing capability, andgeometric dilution of precision(GDOP). Using more than four involves an over-determined system of equations with no unique solution; such a system can be solved by aleast-squaresor weighted least squares method.[179]

The equations for four satellites, and the least-squares equations for more than four, are non-linear and need special solution methods. A common approach is iteration on a linearized form of the equations, such as theGauss–Newton algorithm(a minimal numerical sketch of this approach appears below). The GPS was initially developed assuming use of a numerical least-squares solution method—i.e., before closed-form solutions were found.

One closed-form solution to the above set of equations was developed by S. Bancroft.[180][190]Its properties are well known;[80][81][191]in particular, proponents claim it is superior in low-GDOPsituations, compared to iterative least squares methods.[190]Bancroft's method is algebraic, as opposed to numerical, and can be used for four or more satellites. When four satellites are used, the key steps are inversion of a 4×4 matrix and solution of a single-variable quadratic equation. Bancroft's method provides one or two solutions for the unknown quantities. When there are two (usually the case), only one is a near-Earth sensible solution.[180]

When a receiver uses more than four satellites for a solution, Bancroft uses thegeneralized inverse(i.e., the pseudoinverse) to find a solution. A case has been made that iterative methods, such as the Gauss–Newton algorithm approach for solving over-determinednon-linear least squaresproblems, generally provide more accurate solutions.[192]

Leick et al. (2015) state that "Bancroft's (1985) solution is a very early, if not the first, closed-form solution."[193]Other closed-form solutions were published afterwards,[194][195]although their adoption in practice is unclear.

GPS error analysis examines error sources in GPS results and the expected size of those errors. GPS makes corrections for receiver clock errors and other effects, but some residual errors remain uncorrected. Error sources include signal arrival time measurements, numerical calculations, atmospheric effects (ionospheric/tropospheric delays),ephemerisand clock data, multipath signals, and natural and artificial interference. The magnitude of residual errors from these sources depends on geometric dilution of precision.
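A minimal numerical sketch of the iterative least-squares approach described above, assuming NumPy: the pseudorange model pi = di + bc is linearized around the current estimate and a correction is solved for repeatedly. The synthetic scenario at the bottom (satellite geometry, receiver location, 1 ms clock bias) is invented for the example; a real solver must also handle atmospheric corrections, measurement weighting, and poor geometry.

    import numpy as np

    C = 299_792_458.0   # speed of light (m/s)

    def solve_position(sat_pos, pseudoranges, iters=10):
        """Gauss-Newton solution of p_i = d_i + b*c for [x, y, z, b*c]."""
        est = np.zeros(4)   # start at the Earth's centre with zero clock bias
        for _ in range(iters):
            rel = sat_pos - est[:3]
            d = np.linalg.norm(rel, axis=1)           # geometric ranges d_i
            residual = pseudoranges - (d + est[3])    # measured minus modelled
            # Jacobian rows: minus the unit line-of-sight vector, then 1 for the clock term
            H = np.hstack([-rel / d[:, None], np.ones((len(d), 1))])
            dx = np.linalg.lstsq(H, residual, rcond=None)[0]
            est += dx
            if np.linalg.norm(dx) < 1e-4:             # metre-level convergence
                break
        return est[:3], est[3] / C                    # position (m), clock bias (s)

    # Synthetic check: satellites at GPS-like radii, a receiver on the surface,
    # and a 1 ms receiver clock bias (about 300 km of pseudorange).
    rng = np.random.default_rng(1)
    sats = rng.normal(size=(6, 3))
    sats = 26_600_000.0 * sats / np.linalg.norm(sats, axis=1, keepdims=True)
    truth = np.array([6_371_000.0, 0.0, 0.0])
    pr = np.linalg.norm(sats - truth, axis=1) + 1e-3 * C
    pos, bias = solve_position(sats, pr)
    print(np.round(pos), round(bias, 9))   # recovers the position and the 1 ms bias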
Artificial errors may result from jamming devices, which threaten ships and aircraft,[196]or from intentional signal degradation through Selective Availability, which degraded civilian accuracy to roughly 100 m (300 ft) but has been switched off since May 1, 2000.[197][198]

GNSS enhancementrefers to techniques used to improve the accuracy of positioning information provided by the Global Positioning System or otherglobal navigation satellite systems.

In the United States, GPS receivers are regulated under theFederal Communications Commission's (FCC)Part 15rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation".[199]With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum".[200]For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue.

The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by LightSquared is the Mobile Satellite Service band.[201]Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band of 1525 to 1559 MHz to theVirginiacompanyLightSquared. On March 1, 2001, the FCC received an application from LightSquared's predecessor,MotientServices, to use their allocated frequencies for an integrated satellite-terrestrial service.[202]In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with LightSquared to prevent transmissions from LightSquared's ground-based stations from emitting transmissions into the neighboring GPS band of 1559 to 1610 MHz.[203]In 2004, the FCC adopted the OOBE agreement in its authorization for LightSquared to deploy a ground-based network ancillary to their satellite system – known as the Ancillary Tower Components (ATCs) – "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service."[204]This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes theU.S. Department of Agriculture, U.S. Space Force, U.S. Army,U.S. Coast Guard,Federal Aviation Administration,National Aeronautics and Space Administration(NASA),U.S. Department of the Interior, andU.S.
Department of Transportation.[205]

In January 2011, the FCC conditionally authorized LightSquared's wholesale customers—such asBest Buy,Sharp, andC Spire—to purchase an integrated satellite-ground-based service from LightSquared and re-sell that integrated service on devices that are equipped to use only the ground-based signal using LightSquared's allocated frequencies of 1525 to 1559 MHz.[206]In December 2010, GPS receiver manufacturers expressed concerns to the FCC that LightSquared's signal would interfere with GPS receiver devices,[174]although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based LightSquared stations or the maximum power at which these stations could operate. The January 2011 order made final authorization contingent upon studies of GPS interference issues carried out by a LightSquared-led working group along with GPS industry and federal agency participation. On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's Conditional Waiver Order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference.

GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services.[207]As regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum.[200]This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.

The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as LightSquared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum".[208]In those 2003 rules, the FCC stated: "As a preliminary matter, terrestrial [Commercial Mobile Radio Service ('CMRS')] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominantly different market segments ... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base...". In 2004, the FCC clarified that the ground-based towers would be ancillary, noting: "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering.
We do not intend, nor will we permit, the terrestrial component to become a stand-alone service."[204]In July 2010, the FCC stated that it expected LightSquared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector".[209]GPS receiver manufacturers have argued that LightSquared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband, based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component.[210]To build public support for efforts to continue the 2004 FCC authorization of LightSquared's ancillary terrestrial component, as opposed to a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturerTrimble NavigationLtd. formed the "Coalition To Save Our GPS".[211]

The FCC and LightSquared have each made public commitments to solve the GPS interference issue before the network is allowed to operate.[212][213]According to Chris Dancy of theAircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it".[214]The problems could also affect the Federal Aviation Administration upgrade to theair traffic controlsystem,United States Defense Departmentguidance, and localemergency servicesincluding911.[214]

On February 14, 2012, the FCC moved to bar LightSquared's planned national broadband network after being informed by theNational Telecommunications and Information Administration(NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time".[215][216]LightSquared is challenging the FCC's action.[needs update]

Following the United States' deployment of GPS, other countries have developed their own satellite navigation systems, includingGLONASS(Russia),Galileo(European Union),BeiDou(China),NavIC(India), andQZSS(Japan).

In the event of adversespace weatheror the deployment of an anti-satellite weapon against GPS, the United States has no terrestrial backup system. The potential cost of such an event to the U.S. economy is estimated at $1 billion per day. TheLORAN-Csystem was turned off in North America in 2010 and in Europe in 2015.eLoranis proposed as an American terrestrial backup system, but as of 2024 has not received approval or funding.[228]China continues to operate LORAN-C transmitters,[229]and Russia has a similar system calledCHAYKA("Seagull").
https://en.wikipedia.org/wiki/GPS
Location-based service(LBS) is a general term denoting softwareserviceswhich usegeographic data and informationto provide services or information to users.[1]LBS can be used in a variety of contexts, such as health, indoorobject search,[2]entertainment,[3]work, personal life, etc.[4]Commonly used examples of location-based services include navigation software,social networking services,location-based advertising, andtracking systems.[5]LBS can also includemobile commercewhen taking the form of coupons or advertising directed at customers based on their current location. LBS also includes personalized weather services and even location-based games.

LBS is critical to many businesses as well as government organizations that need to derive real insight from data tied to the specific locations where activities take place. The spatial patterns that location-related data and services can reveal are among their most powerful and useful aspects: location is a common denominator in all of these activities and can be leveraged to better understand patterns and relationships. Banking, surveillance,online commerce, and many weapon systems are dependent on LBS. Access policiesare controlled bylocationdata or time-of-day constraints, or a combination thereof (a minimal sketch of such a location/time gate appears below).

As such, an LBS is an information service with a number of uses insocial networkingtoday as information, in entertainment or security, which is accessible withmobile devicesthrough themobile networkand which uses information on the geographical position of the mobile device.[6][7][8][9]

This concept of location-based systems is not compliant with the standardized concept ofreal-time locating systems(RTLS) and related local services, as noted in ISO/IEC 19762-5[10]and ISO/IEC 24730-1.[11]While networked computing devices have generally served to inform consumers of days-old data, the computing devices themselves can also be tracked, even in real time. LBS privacy issues arise in that context, and are documented below.

Location-based services (LBSs) are widely used in many computer systems and applications. Modern location-based services are made possible by technological developments such as theWorld Wide Web,satellite navigationsystems, and the widespread use ofmobile phones.[12]Location-based services were developed by integrating data fromsatellite navigation systems,cellular networks, andmobile computingto provide services based on the geographical locations of users.[13]Over its history, location-based software has evolved from simple synchronization-based service models to authenticated and complex tools for implementing virtually any location-based service model or facility. There are currently no agreed-upon criteria for defining the market size of location-based services, but theEuropean GNSS Agencyestimated that 40% of allcomputer applicationsused location-based software as of 2013, and 30% of all Internet searches were for locations.[14]

LBS is the ability to open and close specific data objects based on the use of location or time (or both) as controls and triggers, or as part of complexcryptographickey or hashing systems governing the data they provide access to. Location-based services may be one of the most heavily usedapplication-layerdecision frameworksin computing.
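As a minimal sketch of the location/time access controls described above (the fence centre, radius, and office-hours window are invented example values; the haversine distance test stands in for whatever positioning check a real policy engine would use):

    from datetime import time
    from math import asin, cos, radians, sin, sqrt

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between two latitude/longitude points."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6_371_000 * asin(sqrt(a))

    def access_allowed(lat, lon, now,
                       fence=(52.5200, 13.4050), radius_m=500,
                       window=(time(8, 0), time(18, 0))):
        """Open a data object only inside a circular geofence during a daily window."""
        inside = haversine_m(lat, lon, *fence) <= radius_m
        in_hours = window[0] <= now <= window[1]
        return inside and in_hours

    print(access_allowed(52.5205, 13.4060, time(9, 30)))   # True: in the fence, in hours
    print(access_allowed(52.5205, 13.4060, time(23, 0)))   # False: outside the time window

Either condition failing closes the object, which is the sense in which location and time act as the triggers described above.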
TheGlobal Positioning Systemwas first developed by theUnited States Department of Defensein the 1970s and was made available for worldwide civilian use in the 1980s.[15]Research forerunners of today's location-based services include the infrared Active Badge system[16](1989–1993), theEricsson-EuropolitanGSMLBS trial by Jörgen Johansson (1995), and the master's thesis written by Nokia employee Timo Rantalainen in 1995.[17]

In 1990 International Teletrac Systems (laterPacTelTeletrac), founded in Los Angeles, CA, introduced the world's first dynamic real-timestolen vehicle recoveryservices. As an adjacent effort, it began developing location-based services that could transmit information about location-based goods and services to custom-programmed alphanumericMotorolapagers.

In 1996 the USFederal Communications Commission(FCC) issued rules requiring all US mobile operators to locateemergency callers. This rule was a compromise resulting from US mobile operators seeking the support of the emergency community in order to obtain the same protection from lawsuits relating to emergency calls as fixed-line operators already had.

In 1997 Christopher Kingdon, of Ericsson, handed in the Location Services (LCS) stage 1 description to the joint GSM group of theEuropean Telecommunications Standards Institute(ETSI) and theAmerican National Standards Institute(ANSI). As a result, the LCS sub-working group was created under ANSI T1P1.5. This group went on to select positioning methods and standardize Location Services (LCS), later known as Location Based Services (LBS). Nodes defined include the Gateway Mobile Location Centre (GMLC), the Serving Mobile Location Centre (SMLC) and concepts such as Mobile Originating Location Request (MO-LR), Network Induced Location Request (NI-LR) and Mobile Terminating Location Request (MT-LR).

As a result of these efforts, the first digital location-based service patent was filed in the US in 1999 and, after nine office actions, ultimately issued in March 2002. The patent[18]describes controls which, when applied to today's networking models, provide key value in all systems.

In 2000, after approval from the world's twelve largest telecom operators, Ericsson, Motorola andNokiajointly formed and launched the Location Interoperability Forum Ltd (LIF). This forum first specified theMobile Location Protocol(MLP), an interface between the telecom network and an LBS application running on a server in the Internet domain. Then, driven largely by theVodafonegroup, LIF went on to specify the Location Enabling Server (LES), a "middleware" which simplifies the integration of multiple LBSs with an operator's infrastructure. In 2004 LIF was merged into theOpen Mobile Alliance(OMA), and an LBS work group was formed within the OMA.

In 2002, Marex.com in Miami, Florida designed the world's first marine asset telemetry device for commercial sale. The device, designed by Marex and engineered by its partner firms in telecom and hardware, was capable of transmitting location data and retrieving location-based service data via both cellular and satellite-based communications channels. Utilizing the Orbcomm satellite network, the device had multi-level SOS features for both MAYDAY and marine assistance, vessel system condition and performance monitoring with remote notification, and a dedicated hardware device similar to GPS units.
Based upon the device location, it was capable of providing detailed bearing, distance and communication information to the vessel operator in real time, in addition to the marine assistance and MAYDAY features. This concept and functionality were namedLocation Based Servicesby the principal architect and product manager for Marex, Jason Manowitz, SVP, Product and Strategy. The device was branded theIntegrated Marine Asset Management System(IMAMS), and the proof-of-concept beta device was demonstrated to various US government agencies for vessel identification, tracking, and enforcement operations in addition to the commercial product line.[19]The device was capable of tracking assets including ships, planes, shipping containers, or any other mobile asset with a proper power source and antenna placement. Marex's financial difficulties, however, prevented a product introduction, and the beta device disappeared.

The first consumer LBS-capable mobile Web device was thePalm VII, released in 1999.[20]Two of the in-the-box applications made use of theZIP-code–level positioning information and share the title of first consumer LBS application: the Weather.com app from The Weather Channel, and the[21]TrafficTouch app from Sony-Etak/ Metro Traffic.[22][23]

The first LBS services were launched during 2001 by TeliaSonera in Sweden (FriendFinder, yellow pages, houseposition, emergency call location etc.) and by EMT in Estonia (emergency call location, friend finder, TV game). TeliaSonera and EMT based their services on the Ericsson Mobile Positioning System (MPS). Other early LBSs include friendzone, launched by Swisscom inSwitzerlandin May 2001 using the technology of Valis Ltd. The service included friend finder, LBS dating and LBS games. The same service was later launched byVodafoneGermany, Orange Portugal and Pelephone inIsrael.[21]Early research prototypes included Microsoft's Wi-Fi-based indoor location system RADAR (2000), MIT's Cricket project using ultrasound location (2000) and Intel's Place Lab with wide-area location (2003).[24]

In May 2002, go2 andAT&T Mobilitylaunched the first (US) mobile LBS local search application that used Automatic Location Identification (ALI) technologies mandated by the FCC. go2 users were able to use AT&T's ALI to determine their location and search near that location to obtain a list of requested locations (stores, restaurants, etc.) ranked by proximity to the ALI provided by the AT&T wireless network. The ALI-determined location was also used as a starting point forturn-by-turndirections. The main advantage is that mobile users do not have to manually specify postal codes or other location identifiers to use LBS when they roam into a different location.

There are various companies that sell access to an individual's location history, and this is estimated to be a $12 billion industry composed of collectors, aggregators and marketplaces. As of 2021, a company named Near claimed to have data from 1.6 billion people in 44 different countries,Mobilewallaclaims data on 1.9 billion devices, andX-Modeclaims to have a database of 25 percent of the U.S. adult population. An analysis conducted by the non-profit newsroomThe Markupfound that six out of 47 companies claimed over a billion devices in their databases. As of 2021, there are no rules or laws governing who can buy an individual's data.[25]

There are a number of ways in which the location of an object, such as a mobile phone or device, can be determined.
An emerging method for confirming location is IoT and blockchain-based relative object location verification.[26] With control plane locating, sometimes referred to as positioning, the mobile phone service provider gets the location based on the radio signal delay of the closest cell-phone towers (for phones without satellite navigation features), which can be quite slow as it uses the 'voice control' channel.[9] In the UK, networks do not use trilateration; LBS services use a single base station, with a "radius" of inaccuracy, to determine a phone's location. This technique was the basis of the E-911 mandate and is still used to locate cellphones as a safety measure. Newer phones and PDAs typically have an integrated A-GPS chip. In addition there are emerging techniques such as Real Time Kinematics and WiFi RTT (Round Trip Timing) as part of Precision Time Management services in WiFi and related protocols. Several categories of methods can be used to find the location of the subscriber.[7][27] The simple and standard solution is LBS based on a satellite navigation system such as Galileo or GPS. Sony Ericsson's "NearMe" is one such example; it is used to maintain knowledge of the exact location. Satellite navigation is based on the concept of trilateration, a basic geometric principle that allows finding one location if one knows its distance from other, already known locations (a worked sketch appears at the end of this passage). A low-cost alternative to using location technology to track the player is to not track at all. This has been referred to as "self-reported positioning". It was used in the mixed reality game called Uncle Roy All Around You in 2003 and considered for use in augmented reality games in 2006.[28] Instead of tracking technologies, players were given a map which they could pan around and subsequently mark their location upon.[29][30] With the rise of location-based networking, this is more commonly known as a user "check-in". Near LBS (NLBS) involves local-range technologies such as Bluetooth Low Energy, wireless LAN, infrared or near-field communication technologies, which are used to match devices to nearby services. This application allows a person to access information based on their surroundings, and is especially suitable for use inside closed premises or in restricted or regional areas. Another alternative is an operator- and satellite-independent location service based on access into the deep-level telecoms network (SS7). This solution enables accurate and quick determination of the geographical coordinates of mobile phones by providing operator-independent location data, and also works for handsets that do not have satellite navigation capability. In addition, the IP address can provide the end-user's approximate location. Many other local positioning systems and indoor positioning systems are available, especially for indoor use. GPS and GSM do not work very well indoors, so other techniques are used, including co-pilot beacon for CDMA networks, Bluetooth, UWB, RFID and Wi-Fi.[31] Location-based services may be employed in a wide range of applications,[7] and for the carrier they provide added value by enabling new services. 
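As a concrete illustration of the trilateration principle described above, the following minimal sketch estimates a 2-D position from ranges to three known anchors by linearizing the circle equations and solving them in a least-squares sense. The anchor coordinates and noise-free ranges are illustrative assumptions, not data from any deployed system.

```python
import numpy as np

def trilaterate(anchors, distances):
    """Estimate a 2-D position from distances to known anchor points.

    Subtracting the first circle equation from the others removes the
    quadratic terms, leaving a linear system solved by least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three illustrative anchors (e.g. towers or beacons) and exact ranges
# to a target at (40, 60); real measurements would be noisy.
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
target = np.array([40.0, 60.0])
ranges = [np.linalg.norm(target - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))  # ~ [40. 60.]
```

With noisy measured ranges, more than three anchors and robust or weighted estimators are typically used in practice.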
In the U.S. the FCC requires that all carriers meet certain criteria for supporting location-based services (FCC 94–102). The mandate requires 95% of handsets to resolve within 300 meters for network-based tracking (e.g. triangulation) and 150 meters for handset-based tracking (e.g. GPS). This can be especially useful when dialing an emergency telephone number – such as enhanced 9-1-1 in North America, or 112 in Europe – so that the operator can dispatch emergency services such as emergency medical services, police or firefighters to the correct location. CDMA and iDEN operators have chosen to use GPS location technology for locating emergency callers. This led to rapidly increasing penetration of GPS in iDEN and CDMA handsets in North America and other parts of the world where CDMA is widely deployed. Even though no such rules are yet in place in Japan or in Europe, the number of GPS-enabled GSM/WCDMA handset models is growing fast. According to the independent wireless analyst firm Berg Insight, the attach rate for GPS is growing rapidly in GSM/WCDMA handsets, from less than 8% in 2008 to 15% in 2009.[34] As for economic impact, location-based services are estimated to have a $1.6 trillion impact on the US economy alone.[35] European operators are mainly using Cell ID for locating subscribers. This is also a method used in Europe by companies that use cell-based LBS as part of systems to recover stolen assets. In the US, companies such as Rave Wireless in New York are using GPS and triangulation to enable college students to notify campus police when they are in trouble. Currently there are roughly three different models for location-based apps on mobile devices. All allow one's location to be tracked by others, and each functions in the same way at a high level, but with differing functions and features.[36] Mobile messaging plays an essential role in LBS. Messaging, especially SMS, has been used in combination with various LBS applications, such as location-based mobile advertising. SMS is still the main technology carrying mobile advertising / marketing campaigns to mobile phones. A classic example of LBS applications using SMS is the delivery of mobile coupons or discounts to mobile subscribers who are near advertising restaurants, cafes or movie theatres. The Singaporean mobile operator MobileOne carried out such an initiative in 2007 that involved many local marketers, and it was reported to be a huge success in terms of subscriber acceptance. The Location Privacy Protection Act of 2012 (S.1223)[37] was introduced by Senator Al Franken (D-MN) in order to regulate the transmission and sharing of user location data in the United States. It is based on the individual's one-time consent to participate in these services (opt in). The bill specifies the collecting entities, the collectable data and its usage. The bill does not specify, however, the period of time that the data-collecting entity can hold on to the user data (a limit of 24 hours seems appropriate, since most of the services use the data for immediate searches and communications), and the bill does not include location data stored locally on the device (the user should be able to delete the contents of the location data document periodically, just as they would delete a log document). The bill, which was approved by the Senate Judiciary Committee, would also require mobile services to disclose the names of the advertising networks or other third parties with which they share consumers' locations.[38] With the passing of the CAN-SPAM Act in 2003, it became illegal in the United States to send any message to the end user without the end user specifically opting in. 
This put an additional challenge on LBS applications as far as "carrier-centric" services were concerned. As a result, there has been a focus on user-centric location-based services and applications which give the user control of the experience, typically by opting in first via a website or mobile interface (such as SMS, mobile Web, and Java/BREW applications). The European Union also provides a legal framework for data protection that may be applied to location-based services, and more particularly several European directives, such as: (1) Personal data: Directive 95/46/EC; (2) Personal data in electronic communications: Directive 2002/58/EC; (3) Data retention: Directive 2006/24/EC. However, the applicability of legal provisions to varying forms of LBS and of processing location data is unclear.[39] One implication of this technology is that data about a subscriber's location and historical movements is owned and controlled by the network operators, including mobile carriers and mobile content providers.[40] Mobile content providers and app developers are a particular concern. Indeed, a 2013 MIT study[41][42] by de Montjoye et al. showed that 4 spatio-temporal points, approximate places and times, are enough to uniquely identify 95% of 1.5M people in a mobility database. The study further shows that these constraints hold even when the resolution of the dataset is low. Therefore, even coarse or blurred datasets provide little anonymity. A critical article by Dobson and Fisher[43] discusses the possibilities for misuse of location information. Beside the legal framework there exist several technical approaches to protect privacy using privacy-enhancing technologies (PETs). Such PETs range from simplistic on/off switches[44] to sophisticated PETs using anonymization techniques (e.g. providing k-anonymity)[45] or cryptographic protocols.[46] Only a few LBS offer such PETs; for example, Google Latitude offered an on/off switch and allowed users to pin their position to a freely definable location. Additionally, it is an open question how users perceive and trust different PETs; the only study that addresses user perception of state-of-the-art PETs is [47]. Another set of techniques included in the PETs are the location obfuscation techniques, which slightly alter the location of the users in order to hide their real location while still being able to represent their position and receive services from their LBS provider. Recent research has shown that crowdsourcing is also an effective approach to locating lost objects while still upholding the privacy of users. This is done by ensuring a limited level of interactions between users.[48]
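To make the location-obfuscation idea above concrete, here is a minimal sketch of two simple variants: spatial cloaking (snapping to a coarse grid) and bounded random jitter. The grid size and noise bound are illustrative assumptions rather than parameters of any particular PET.

```python
import random

def cloak(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Spatial cloaking: report only the centre of the grid cell that
    contains the true position (0.01 deg is roughly 1 km of latitude)."""
    return (round(lat / cell_deg) * cell_deg,
            round(lon / cell_deg) * cell_deg)

def jitter(lat: float, lon: float, max_deg: float = 0.005) -> tuple:
    """Additive noise: perturb the reported position by a random offset
    bounded by max_deg in each coordinate."""
    return (lat + random.uniform(-max_deg, max_deg),
            lon + random.uniform(-max_deg, max_deg))

print(cloak(48.13805, 11.57505))   # -> roughly (48.14, 11.58)
print(jitter(48.13805, 11.57505))  # -> a nearby random position
```

Either way, the LBS provider receives a usable but deliberately imprecise position, trading service quality against privacy.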
https://en.wikipedia.org/wiki/Location-based_service
Galaxy SmartTag is a key finder and object finder produced by Samsung Electronics. The device utilizes Bluetooth LE to allow the user to locate whatever object it is attached to via the SmartThings mobile app.[1] The SmartTag and SmartTag+ were announced at Samsung's Galaxy Unpacked event on January 14, 2021; the SmartTag was included with every Galaxy S21 pre-order and released on January 29, 2021.[2] The SmartTag+ was announced again at Samsung Newsroom on April 8, 2021, and released on April 14.[3] On October 11, 2023, Samsung released the SmartTag2.[4] The Galaxy SmartTag is a tracking device which can be attached to various objects that are easily lost, with a small strap (sold separately) or by other means, such as a keychain. Objects include keys, luggage and purses, among others. The device can then be located with the SmartThings mobile app, using Bluetooth LE. Another variant, the Galaxy SmartTag+, uses ultra-wideband technology in order to locate the device and was released on April 16, 2021.[5] Only a limited number of more recently produced phone models manufactured by Samsung support this feature. While the device is within Bluetooth range (120 meters), it can play a ringtone using its inbuilt piezoelectric speaker to alert the user of its exact location audibly, at a volume between 85 and 96 dB (unobstructed). If the device is outside of Bluetooth LE range, it can still be located using Samsung's SmartThings Find network, which uses the internet connection and GPS location of other Samsung Galaxy phones in the area to anonymously pinpoint the location of the SmartTag for the owner. The device also has a programmable button that can be used to control SmartThings-compatible smart-home products.[6] The Samsung Galaxy SmartTag2 was released on October 11, 2023. This is the second generation in the SmartTag lineup and has been improved overall. The new design is thinner, no longer bulges in the middle, and works better with keyrings, and a louder speaker improves audible clarity. It also features refreshed Bluetooth LE antennas, but still runs off the same SmartThings Find network. Longer battery life allows it to be used for longer, and a programmable NFC tag allows the owner to set a custom lost message with contact details and instructions. The SmartThings Find network allows Galaxy devices that have opted in, do not have airplane mode or data saver mode on, and are in Bluetooth range to anonymously send the location of the tracker to the owner. It all happens in the background and anonymously, so it updates seamlessly.
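The crowd-finding mechanism described above can be sketched roughly as follows. This is a hypothetical illustration of the general idea behind such networks, not Samsung's actual protocol or API; every name here (SightingReport, SERVER_DB, on_ble_advertisement) is invented for the example.

```python
import time
from dataclasses import dataclass

@dataclass
class SightingReport:
    tag_id: str      # identifier heard in the tag's BLE advertisement
    lat: float       # the reporting phone's own GPS fix, not the tag's
    lon: float
    timestamp: float

# Stand-in for the network's server-side store of latest sightings.
SERVER_DB: dict = {}

def on_ble_advertisement(tag_id: str, phone_lat: float, phone_lon: float):
    """Runs on any opted-in bystander phone that hears the tag."""
    SERVER_DB[tag_id] = SightingReport(tag_id, phone_lat, phone_lon,
                                       time.time())

def locate_my_tag(tag_id: str):
    """Runs in the owner's app: fetch the latest reported sighting."""
    return SERVER_DB.get(tag_id)

on_ble_advertisement("tag-1234", 37.5665, 126.9780)
print(locate_my_tag("tag-1234"))
```

In a real deployment the reports are anonymized (and typically encrypted) so that bystanders do not learn whose tag they relayed.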
https://en.wikipedia.org/wiki/Samsung_Galaxy_SmartTag
Tile, Inc.(stylized astile) is an Americanconsumer electronicscompany which producestracking devicesthat users can attach to their belongings such as keys and backpacks. A companionmobile appforAndroidandiOSallows users to track the devices usingBluetooth 4.0in order to locate lost items or to view their last detected location.[1]The first devices were delivered in 2013. In September 2015, Tile launched a newer line of hardware that includes functionality to assist users in locating smartphones, as well as other feature upgrades.[2][3]In August 2017, two new versions of the Tile were launched, the Tile Sport and Tile Style.[4]As of 2019[update], Tile's hardware offerings consist of the Pro, Mate, Slim, and Sticker.[5] Since September 2018, formerGoProexecutive C. J. Prober has been theCEOof Tile after he replaced co-founder Mike Farley.[6]In November 2021,Life360agreed to acquire Tile in a $205 million acquisition, and is expected to integrate the two services.[7] Tile manufactures hardware devices, "Tiles", that can be attached to items such as keychains. By attaching the device, a user can later use the Tile app to help locate the item if it is lost.[8]The Tile application usesBluetooth Low Energy4.0 radio technology to locate Tiles within a 100 foot (30 meters) range, depending on the model.[9]Each Tile comes with a built-in speaker, and the user is able to trigger the device to play a sound to aid in the location of items at close range. The second generation of Tile devices produce sound at a volume of 90 decibels,[10]which is three times as loud as the previous generation of products.[11]The second generation also added a "Find My Phone" feature, which can be used to produce a sound on the user's paired smartphone when the user presses a button on the Tile device.[10] The Tile app can locate Tiles beyond the 100 foot (30 m) Bluetooth range by using "crowd GPS". If a Tile device is reported as lost and comes within range of any smartphone running the Tile app, the nearby user's app will send the item's owner an anonymous update of the lost item's location.[9][12][13][14]Users can also share their Tiles with others, which allows both participants to locate shared Tiles.[15] Tile's first generation products have built-in batteries with a battery life of about one year. Owners of these devices were automatically notified when the batteries were nearing depletion and were eligible to receive a discount on a replacement product.[16]Users could then return Tiles with depleted batteries in order for them to be recycled.[17][18]In October 2018, the Tile Mate and Tile Pro were redesigned to have user-replaceable batteries.[19]These models have lower water-resistance ratings than models that require factory battery replacement.[20] Tile's developers used Selfstarter, anopen sourcewebsite platform, tocrowdfundthe project through pre-orders.[21] As of July 7, 2013, Tile had raised overUS$2.6millionby selling preordered Tiles directly to 50,000 backers through their website.[22] In 2014, Tile raised additionalSeries A fundingof US$13 million led byGGV Capitaland a further US$3 million fromKhosla Venturesin 2015.[23][24] In May 2020, Tile sought assistance from theEuropean Unionin a dispute it had withAppleregarding the provision of its services on Apple devices. It claimed that its app was not activated on Apple devices while the Find My service provided by Apple is activated automatically. 
Apple denied the allegation.[25]In September 2020, Tile joined theCoalition for App Fairnesswhich aims to reach better conditions for the inclusion of apps into app stores.[26] In 2024 a computerhackeracquired the credentials of a suspected former Tile employee and gained access to the company's internal tool that processes location data for law enforcement, as well as customer data such as names, addresses, emails, telephone numbers, and order information. Other functions that were compromised include changing the email address linked to a particular device and creating administrative users. Tile said only its customer support platform, not the service platform, was breached and it has disabled credentials to prevent further unauthorized access.[27]
https://en.wikipedia.org/wiki/Tile_(company)
TrackR was a commercial key finder that assisted in the tracking of lost belongings and devices.[1] TrackR was produced by the company Phone Halo[2] and was inspired by the founders losing their keys on a beach during a surfing trip.[3] The founders of Phone Halo began working on TrackR in 2009. In 2010, they founded the company and launched the product.[4] In winter 2018, TrackR rebranded itself as Adero, as part of changing its focus to other uses for its tracking technology, taking TrackR beyond the Bluetooth fobs that had been the core of its service.[5] TrackR shut down its services and removed its apps in August 2021.[6] The device contains a lithium battery that needs to be changed about once a year by the user. It communicates its current location via Bluetooth 4.0 to an Android 4.4+ or iOS 8.0+ mobile device on which the TrackR app is installed and running. A feature referred to as "Crowd Locate" lets each device report its location through all other TrackR users' devices in range, including those that are neither owned nor registered by the user; the app must be installed and running on a nearby Bluetooth-enabled device for any device's location to be relayed. As of August 2017, over 5 million TrackR devices had been sold.[3] As of August 2021, the official website stated that the manufacturer had discontinued app support for both Apple and Android devices. For TrackR Bravo, the producer published device specifications as of August 2017.[7]
https://en.wikipedia.org/wiki/TrackR
Long-range optical wireless communicationorfree-space optical communication(FSO) is anoptical communicationtechnology that uses light propagating in free space towirelesslytransmit data fortelecommunicationsorcomputer networkingover long distances. "Free space" means air, outer space, vacuum, or something similar. This contrasts with using solids such asoptical fiber cable. The technology is useful where the physical connections are impractical due to high costs or other considerations. Optical communications, in various forms, have been used for thousands of years. Theancient Greeksused a coded alphabetic system of signalling with torches developed by Cleoxenus, Democleitus andPolybius.[1]In the modern era,semaphoresand wireless solartelegraphscalledheliographswere developed, using coded signals to communicate with their recipients. In 1880,Alexander Graham Belland his assistantCharles Sumner Taintercreated thephotophone, at Bell's newly establishedVolta LaboratoryinWashington, DC. Bell considered it his most important invention. The device allowed for thetransmissionofsoundon a beam oflight. On June 3, 1880, Bell conducted the world's first wirelesstelephonetransmission between two buildings, some 213 meters (699 feet) apart.[2][3] Its first practical use came in military communication systems many decades later, first for optical telegraphy. German colonial troops usedheliographtelegraphy transmitters during theHerero Warsstarting in 1904, inGerman South-West Africa(today'sNamibia) as did British, French, US or Ottoman signals. During thetrench warfareofWorld War Iwhen wire communications were often cut, German signals used three types of optical Morse transmitters calledBlinkgerät, the intermediate type for distances of up to 4 km (2.5 mi) at daylight and of up to 8 km (5.0 mi) at night, using red filters for undetected communications. Optical telephone communications were tested at the end of the war, but not introduced at troop level. In addition, special blinkgeräts were used for communication with airplanes, balloons, and tanks, with varying success.[citation needed] A major technological step was to replace the Morse code by modulating optical waves in speech transmission.Carl Zeiss, Jenadeveloped theLichtsprechgerät80/80(literal translation: optical speaking device) that the German army used in their World War II anti-aircraft defense units, or in bunkers at theAtlantic Wall.[4] The invention oflasersin the 1960s revolutionized free-space optics.[citation needed]Military organizations were particularly interested and boosted their development. In 1973, while prototyping the firstlaser printersatPARC,Gary Starkweatherand others made aduplex30 Mbit/sCANoptical linkusing astronomical telescopes andHeNe lasersto send data between offices; they chose the method due partly to less strict regulations (at the time) on free-space optical communication by theFCC.[5][non-primary source needed]However, laser-based free-space optics lost market momentum when the installation ofoptical fibernetworks for civilian uses was at its peak.[citation needed] Many simple and inexpensive consumerremote controlsuse low-speed communication usinginfrared(IR) light. This is known asconsumer IRtechnologies. Free-space point-to-point optical links can be implemented using infrared laser light, although low-data-rate communication over short distances is possible usingLEDs.Infrared Data Association(IrDA) technology is a very simple form of free-space optical communications. 
On the communications side, FSO technology is considered part of the optical wireless communications applications. Free-space optics can be used for communications between spacecraft.[6] The reliability of FSO units has always been a problem for commercial telecommunications. Consistently, studies find too many dropped packets and signal errors over small ranges (400 to 500 meters (1,300 to 1,600 ft)). This comes from both independent studies, such as one in the Czech Republic,[7] and internal studies, such as one conducted by MRV FSO staff.[8] Military studies consistently produce longer estimates for reliability, projecting that the maximum range for terrestrial links is of the order of 2 to 3 km (1.2 to 1.9 mi).[9] All studies agree that the stability and quality of the link is highly dependent on atmospheric factors such as rain, fog, dust and heat. Relays may be employed to extend the range of FSO communications.[10][11] TMEX USA ran two eight-mile links between Laredo, Texas and Nuevo Laredo, Mexico from 1998[12] to 2002. The links operated at 155 Mbit/s and reliably carried phone calls and internet service.[13][dubious–discuss][citation needed] The main reason terrestrial communications have been limited to non-commercial telecommunications functions is fog. Fog often prevents FSO laser links over 500 meters (1,600 ft) from achieving a year-round availability sufficient for commercial services. Several entities are continually attempting to overcome these key disadvantages to FSO communications and field a system with a better quality of service. DARPA has sponsored over US$130 million in research toward this effort, with the ORCA and ORCLE programs.[14][15][16] Other non-government groups are fielding tests to evaluate different technologies that some claim have the ability to address key FSO adoption challenges. As of October 2014[update], none had fielded a working system that addresses the most common atmospheric events. FSO research from 1998 to 2006 in the private sector totaled $407.1 million, divided primarily among four start-up companies. All four failed to deliver products that would meet telecommunications quality and distance standards.[17] One private company published a paper on November 20, 2014, claiming it had achieved commercial reliability (99.999% availability) in extreme fog. There is no indication this product is currently commercially available.[25] The massive advantages of laser communication in space have multiple space agencies racing to develop a stable space communication platform, with many significant demonstrations and achievements. The first gigabit laser-based communication[clarification needed] was achieved by the European Space Agency's European Data Relay System (EDRS) on November 28, 2014. The system is operational and is used on a daily basis. In December 2023, the Australian National University (ANU) demonstrated its Quantum Optical Ground Station at its Mount Stromlo Observatory. QOGS uses adaptive optics and lasers as part of a telescope to create a bi-directional communications system capable of supporting the NASA Artemis program to the Moon.[26] A two-way distance record for communication was set by the Mercury laser altimeter instrument aboard the MESSENGER spacecraft. It was able to communicate across a distance of 24 million km (15 million mi), as the craft neared Earth on a fly-by in May 2005. 
The previous record had been set with a one-way detection of laser light from Earth by the Galileo probe, of 6 million km (3.7 million mi) in 1992. In January 2013, NASA used lasers to beam an image of the Mona Lisa to the Lunar Reconnaissance Orbiter roughly 390,000 km (240,000 mi) away. To compensate for atmospheric interference,an error correction code algorithm similar to that used in CDswas implemented.[27] In the early morning hours of October 18, 2013, NASA's Lunar Laser Communication Demonstration (LLCD) transmitted data from lunar orbit to Earth at a rate of 622 megabits per second (Mbit/s).[28]LLCD was flown aboard theLunar Atmosphere and Dust Environment Explorer(LADEE) spacecraft, whose primary science mission was to investigate the tenuous and exotic atmosphere that exists around the Moon. Between April and July 2014 NASA'sOPALSinstrument successfully uploaded 175 megabytes in 3.5 seconds and downloaded 200–300 MB in 20 s.[29]Their system was also able to re-acquire tracking after the signal was lost due to cloud cover. On December 7, 2021 NASA launched theLaser Communications Relay Demonstration(LCRD), which aims to relay data between spacecraft ingeosynchronous orbitand ground stations. LCRD is NASA's first two-way, end-to-end optical relay. LCRD uses twoground stations, Optical Ground Station (OGS)-1 and -2, atTable Mountain Observatoryin California, andHaleakalā,Hawaii.[30]One of LCRD's first operational users is theIntegrated LCRD Low-Earth Orbit User Modem and Amplifier Terminal(ILLUMA-T), on the International Space Station. The terminal will receive high-resolution science data from experiments and instruments on board the space station and then transfer this data to LCRD, which will then transmit it to a ground station. After the data arrives on Earth, it will be delivered to mission operation centers and mission scientists. The ILLUMA-T payload was sent to the ISS in late 2023 onSpaceX CRS-29, and achievedfirst lighton December 5, 2023.[31][32] On April 28, 2023, NASA and its partners achieved 200 gigabit per second (Gbit/s) throughput on a space-to-ground optical link between a satellite in orbit and Earth. This was achieved by theTeraByte InfraRed Delivery(TBIRD) system, mounted on NASA'sPathfinder Technology Demonstrator 3(PTD-3) satellite.[33] Varioussatellite constellationsthat are intended to provide global broadband coverage, such asSpaceXStarlink, employlaser communicationfor inter-satellite links. This effectively creates a space-basedoptical mesh networkbetween the satellites. In 2001, Twibright Labs releasedRONJA Metropolis, an open-source DIY 10 Mbit/s full-duplex LED FSO system that can span 1.4 km (0.87 mi).[34][35] In 2004, avisible light communicationconsortium was formed inJapan.[36]This was based on work from researchers that used a white LED-based space lighting system for indoorlocal area network(LAN) communications. These systems present advantages over traditionalUHFRF-based systems from improved isolation between systems, the size and cost of receivers/transmitters, RF licensing laws and by combining space lighting and communication into the same system.[37]In January 2009, a task force for visible light communication was formed by theInstitute of Electrical and Electronics Engineersworking group for wirelesspersonal area networkstandards known asIEEE 802.15.7.[38]A trial was announced in 2010, inSt. 
Cloud, Minnesota.[39] Amateur radio operators have achieved significantly greater distances using incoherent sources of light from high-intensity LEDs. One reported 278 km (173 mi) in 2007.[40] However, physical limitations of the equipment used limited bandwidths to about 4 kHz. The high sensitivities required of the detector to cover such distances made the internal capacitance of the photodiode used a dominant factor in the high-impedance amplifier which followed it, thus naturally forming a low-pass filter with a cut-off frequency in the 4 kHz range (a worked example of this cut-off appears at the end of this section). Lasers can reach very high data rates which are comparable to fiber communications. Projected data rates and future data rate claims vary. A low-cost white LED (GaN-phosphor) which could be used for space lighting can typically be modulated up to 20 MHz.[41] Data rates of over 100 Mbit/s can be achieved using efficient modulation schemes, and Siemens claimed to have achieved over 500 Mbit/s in 2010.[42] Research published in 2009 used a similar system for traffic control of automated vehicles with LED traffic lights.[43] In September 2013, pureLiFi, the Edinburgh start-up working on Li-Fi, also demonstrated high-speed point-to-point connectivity using any off-the-shelf LED light bulb. In previous work, high-bandwidth specialist LEDs had been used to achieve the high data rates. The new system, the Li-1st, maximizes the available optical bandwidth for any LED device, thereby reducing the cost and improving the performance of deploying indoor FSO systems.[44] Typically, the best scenarios for using this technology are those where physical connections are impractical. The light beam can be very narrow, which makes FSO hard to intercept, improving security. Encryption can secure the data traversing the link. FSO provides vastly improved electromagnetic interference (EMI) behavior compared to using microwaves. For terrestrial applications, several atmospheric factors limit performance. These factors cause an attenuated receiver signal and lead to a higher bit error ratio (BER). To overcome these issues, vendors found solutions such as multi-beam or multi-path architectures, which use more than one sender and more than one receiver. Some state-of-the-art devices also have a larger fade margin (extra power, reserved for rain, smog, fog). To keep an eye-safe environment, good FSO systems have a limited laser power density and support laser classes 1 or 1M. Atmospheric and fog attenuation, which are exponential in nature, limit the practical range of FSO devices to several kilometers. However, free-space optics based on the 1550 nm wavelength have considerably lower optical loss than free-space optics using the 830 nm wavelength in dense fog conditions. FSO systems using the 1550 nm wavelength are capable of transmitting several times more power than systems with 850 nm and are safe to the human eye (1M class). Additionally, some free-space optics, such as EC SYSTEM,[47] ensure higher connection reliability in bad weather conditions by constantly monitoring link quality to regulate laser diode transmission power with built-in automatic gain control.[47]
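As a worked example of the photodiode low-pass effect mentioned in the amateur-radio paragraph above: a first-order RC stage rolls off at f_c = 1/(2πRC). The component values below are assumptions chosen to land near the quoted ~4 kHz figure, not measurements of the actual equipment.

```python
import math

def cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """First-order RC low-pass cut-off frequency: 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Assumed values: a large-area photodiode of ~40 pF feeding a 1 Mohm
# high-impedance amplifier input.
print(f"{cutoff_hz(1e6, 40e-12):.0f} Hz")  # ~3979 Hz, i.e. about 4 kHz
```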
https://en.wikipedia.org/wiki/Free-space_optical_communication
Aheliograph(fromAncient Greekἥλιος(hḗlios)'sun'andγράφειν(gráphein)'to write') is a solar telegraph[1]system that signals by flashes of sunlight (generally usingMorse codefrom the 1840s) reflected by amirror. The flashes are produced by momentarily pivoting the mirror, or by interrupting the beam with a shutter.[2]The heliograph was a simple but effective instrument for instantaneousoptical communicationover long distances during the late 19th and early 20th centuries.[2]Its main uses were military,surveyingandforest protectionwork. Heliographs were standard issue in the British and Royal Australian armies until the 1960s, and were used by the Pakistani army as late as 1975.[3] There were many heliograph types. Most heliographs were variants of theBritish ArmyMance Mark V version (Fig.1). It used a flat[4]round mirror with a small unsilvered spot in the centre. The sender aligned the heliograph to the target by looking at the reflected target in the mirror and moving their head until the target was hidden by the unsilvered spot. Keeping their head still, they then adjusted the aiming rod so its cross wires bisected the target.[5]They then turned up the sighting vane, which covered the cross wires with a diagram of a cross, and aligned the mirror with the tangent and elevation screws, so the small shadow that was the reflection of the unsilvered spot hole was on the cross target.[5]This indicated that the sunbeam was pointing at the target. The flashes were produced by a keying mechanism that tilted the mirror up a few degrees at the push of a lever at the back of the instrument. If the Sun was in front of the sender, its rays were reflected directly from this mirror to the receiving station. If the Sun was behind the sender, the sighting rod was replaced by a second mirror, to capture the sunlight from the main mirror and reflect it to the receiving station.[6][7]TheU.S. Army's Signal Corpsheliograph used a flat square mirror that did not tilt.[8]This type produced flashes by ashuttermounted on a second tripod (Fig 4).[6] The heliograph had certain advantages. It allowed long-distance communication without a fixed infrastructure, though it could also be linked to make a fixed network extending for hundreds of miles, as in the fort-to-fort network used for theGeronimomilitary campaign. It was very portable, did not require any power source, and was relatively secure since it was invisible to those not near the axis of operation, and the beam was very narrow, spreading only 50 ft (15 m) per 1 mi (1.6 km) of range. However, anyone in the beam with the correct knowledge could intercept signals without being detected.[3][9]In theSecond Boer War(1899–1902) in South Africa, where both sides used heliographs, tubes were sometimes used to decrease the dispersion of the beam.[3]In some other circumstances, though, a narrow beam made it difficult to stay aligned with a moving target, as when communicating from shore to a moving ship, so the British issued a dispersing lens to broaden the heliograph beam from its natural diameter of 0.5 degrees to 15 degrees.[10] The range of a heliograph depends on the opacity of the air and the effective collecting area of the mirrors. Heliograph mirrors ranged from 1.5 to 12 in (38 to 305 mm) or more. Stations at higher altitudes benefit from thinner, clearer air, and are required in any event for great ranges, to clear thecurvature of the Earth. 
A good approximation for ranges of 20 to 50 mi (32 to 80 km) is that the flash of a circular mirror is visible to the naked eye at a distance of 10 mi (16 km) for each inch of mirror diameter,[11] and farther when viewed through a telescope (a small calculator based on this rule follows below). The world record distance was established by a detachment of U.S. Army signal sergeants by the inter-operation of stations in North America on Mount Ellen (Utah) and Mount Uncompahgre (Colorado), 183 mi (295 km) apart, on 17 September 1894, with Army Signal Corps heliographs carrying mirrors only 8 inches (20 cm) on a side.[12] The German professor Carl Friedrich Gauss (1777–1855), of the University of Göttingen, developed and used a predecessor of the heliograph (the heliotrope) in 1821.[2][13] His device directed a controlled beam of sunlight to a distant station to be used as a marker for geodetic survey work, and was suggested as a means of telegraphic communications.[14] This is the first reliably documented heliographic device,[15] despite much speculation about possible ancient incidents of sun-flash signalling, and the documented existence of other forms of ancient optical telegraphy. For example, one author in 1919 chose to "hazard the theory"[16] that the Italian mainland signals from the capital of Rome, which the ancient Roman emperor Tiberius (42 B.C.-A.D. 37, reigned A.D. 14 to 37) watched for from his imperial retreat on the island of Capri,[17] were mirror flashes, but admitted "there are no references in ancient writings to the use of signaling by mirrors", and that the documented means of ancient long-range visual telecommunications was by beacon fires and beacon smoke, not mirrors. Similarly, the story that a shield was used as a heliograph at the famous ancient Battle of Marathon between the Greeks and Persians in 490 B.C. is also a modern myth,[18] originating in the 1800s. The ancient historian Herodotus never mentioned any flash.[19] What Herodotus did write was that someone was accused of having arranged to "hold up a shield as a signal".[20] Suspicion grew in the later 1900s that the flash theory was implausible.[21] The conclusion after testing the theory was "Nobody flashed a shield at the Battle of Marathon".[22] In a letter dated 3 June 1778, John Norris, High Sheriff of Buckinghamshire, England, notes: "Did this day heliograph intelligence from Dr [Benjamin] Franklin in Paris to Wycombe".[23] However, there is little evidence that "heliograph" here is other than a misspelling of "holograph". The term "heliograph" for solar telegraphy did not enter the English language until the 1870s – even the word "telegraphy" was not coined until the 1790s. Henry Christopher Mance (1840–1926), of the British Government's Persian Gulf Telegraph Department, developed the first widely accepted heliograph about 1869,[2][24][25] while stationed at Karachi (now in modern Pakistan), in the then Bombay Presidency of British India. Mance was familiar with heliotropes through their earlier use in the mapping project of the Great Trigonometrical Survey of India (conducted 1802–1871).[12] The Mance heliograph was operated easily by one man, and since it weighed about 7 lb (3.2 kg), the operator could readily carry the device and its supporting tripod. 
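The rule of thumb quoted at the start of this passage, together with the beam-divergence figure given earlier (about 50 ft of spread per mile of range), can be encoded as a tiny calculator. This is a minimal sketch that simply restates the two figures from the text.

```python
def naked_eye_range_mi(mirror_diameter_in: float) -> float:
    """~10 miles of naked-eye visibility per inch of mirror diameter,
    valid for clear air and ranges of roughly 20-50 miles."""
    return 10.0 * mirror_diameter_in

def beam_width_ft(range_mi: float) -> float:
    """Beam spread of ~50 ft per mile of range."""
    return 50.0 * range_mi

# An 8-inch mirror: ~80 mi to the naked eye. The 1894 record of 183 mi
# therefore relied on telescopes at the receiving stations.
print(naked_eye_range_mi(8.0))  # 80.0
print(beam_width_ft(30.0))      # 1500.0 ft of beam width at 30 miles
```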
The British Army tested the heliograph in India at a range of 35 mi (56 km) with favorable results.[26] During the Jowaki Afridi expedition sent by the British-Indian government in 1877, the heliograph was first tested in war.[27][28] The simple and effective instrument that Mance invented was to be an important part of military communications for more than 60 years. The usefulness of heliographs was limited to daytimes with strong sunlight, but they were the most powerful type of visual signalling device known. In pre-radio times heliography was often the only means of communication that could span ranges of as much as 100 mi (160 km) with a lightweight portable instrument.[12] In the United States military, by mid-1878, Colonel Nelson A. Miles had established a line of heliographs connecting the far-flung military outposts of Fort Keogh and Fort Custer, in the northern Montana Territory, a distance of 140 mi (230 km).[29][30][31] In 1886, now General Nelson A. Miles (1839–1925) of the United States Army set up a network of 27 heliograph stations in the Arizona and New Mexico territories of the old Southwest during the extended campaign and hunt for the Apache chief and guerrilla warfare leader Geronimo (1829–1909).[32] In 1890, Major W. J. Volkmar of the U.S. Army demonstrated in the Arizona and New Mexico territories the possibility of communicating over a heliograph network aggregating 2,000 mi (3,200 km) in length.[33] The network of communication begun by General Miles in 1886, and continued by Lieutenant W. A. Glassford, was perfected in 1889 at ranges of 85, 88, 95 and 125 mi (137, 142, 153 and 201 km) over a rugged and broken country, which was the stronghold of the Apache, Comanche and other hostile tribes.[12] By 1887, heliographs in use included not only the British Mance and Begbie heliographs, but also the American Grugan, Garner and Pursell heliographs. The Grugan and Pursell heliographs used shutters, and the others used movable mirrors operated by a finger key. The Mance, Grugan and Pursell heliographs used two tripods, and the others one. The signals could be either momentary flashes or momentary obscurations.[34] In 1888, the U.S. Army Signal Corps reviewed all of these devices, as well as the Finley Helio-Telegraph,[34] and finding none completely suitable, developed its own instrument, the U.S. Army Signal Corps heliograph, a two-tripod, shutter-based machine of 13 7⁄8 lb (6.3 kg) total weight, and ordered 100, for a total cost of $4,205.[35] By 1893, the number of heliographs manufactured for the American Army Signal Corps was 133.[36] The heyday of the heliograph was probably the Second Boer War of 1899–1902 in South Africa, where it was much used by both the British and the Boers.[2][3] The terrain and climate, as well as the nature of the campaign, made heliography a logical choice. For night communications, the British used some large signal lamps, brought inland on railroad cars and equipped with leaf-type shutters for keying a beam of light into dots and dashes. During the early stages of the war, British Army garrisons were besieged at Kimberley, Ladysmith, and Mafeking. 
With land wire telegraph lines cut, the only contact with the outside world was via light-beam communication: helio by day, and signal lamps at night.[12] In 1909, the use of heliography for forestry protection was introduced by the United States Forestry Service in the western states. By 1920, such use was widespread in the US and beginning in the neighboring Dominion of Canada to the north, and the heliograph was regarded as "next to the telephone, the most useful communication device that is at present available for forest-protection services".[6] D. P. Godwin of the U.S. Forestry Service invented a very portable (4.5 lb [2.0 kg]) heliograph of the single-tripod, shutter-plus-mirror type for forestry use.[6] Immediately prior to the outbreak of World War I (1914–1918), the mounted cavalry regiments of the Russian Imperial Army were still being trained in heliograph communications to augment the efficiency of their scouting and reporting roles.[37] Following the two Russian Revolutions of 1917, the revolutionary Bolshevik/Communist units of the Red Army made use of a series of heliograph stations to disseminate intelligence efficiently during the subsequent Russian Civil War of 1918–1922. This continued even a decade later, to report on counter-revolutionary basmachi rebel movements in Central Asia's Turkestan region in 1926.[38] During World War II (1939–1945), Union of South Africa and Royal Australian military forces used the heliograph while fighting Nazi German and Fascist Italian forces along the southern coast of the Mediterranean Sea in Libya and western Egypt, alongside the defending British military in the desert North African campaign in 1940, 1941 and 1942.[2] The heliograph remained standard equipment for military signallers in the Royal Australian and British armies until the 1940s, where it was considered a "low probability of intercept" type of communication. The Canadian Army was the last major military force to have the heliograph as an issue item. By the time the mirror instruments were retired, they were seldom used for signalling.[12] However, as recently as the 1980s, heliographs were used by insurgent Afghan mujahideen forces during the Soviet war in Afghanistan.[2] Signal mirrors are still included in survival kits for emergency signaling to search and rescue aircraft.[2] Most heliographs of the 19th and 20th centuries were completely manual.[6] The steps of aligning the heliograph on the target, co-aligning the reflected sunbeam with the heliograph, maintaining the sunbeam alignment as the sun moved, transcribing the message into flashes, modulating the sunbeam into those flashes, detecting the flashes at the receiving end, and transcribing the flashes into the message were all done manually[6] (a sketch of such a flash schedule appears at the end of this section). One notable exception: many French heliographs used clockwork heliostats to automatically steer out the sun's motion. 
By 1884, all active units of the "Mangin apparatus" (a dual-mode French Army military field optical telegraph that could use either lantern light or sunlight) were equipped with clockwork heliostats.[39] The Mangin apparatus with heliostat was still in service in 1917.[40][41][42] Proposals to automate both the modulation of the sunbeam (by clockwork) and the detection (by electrical selenium photodetectors, or photographic means) date back to at least 1882.[43] In 1961, the United States Air Force was working on a space heliograph to signal between satellites.[44] In May 2012, "Solar Beacon" robotic mirrors designed at the University of California at Berkeley were mounted on the twin towers of the Golden Gate Bridge at the entrance to San Francisco Bay, and a web site was set up[45] where the public could schedule times for the mirrors to signal with sun-flashes, entering the time and their latitude, longitude and altitude.[46] The solar beacons were later moved to Sather Tower on the U.C. Berkeley campus.[47][48] By June 2012, the public could specify a "custom show" of up to 32 "on" or "off" periods of 4 seconds each, permitting the transmission of a few characters of Morse code.[49] The designer described the Solar Beacon as a "heliostat", not a "heliograph".[46] The first digitally controlled heliograph was designed and built in 2015.[50][51] It was a semi-finalist in the Broadcom MASTERS competition.[52]
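As a rough illustration of the manual signalling steps described above – transcribing a message into Morse and modulating the beam into timed flashes – here is a minimal sketch. The timing unit and the deliberately truncated Morse table are assumptions for the example; it uses the standard Morse ratios (dot = 1 unit, dash = 3, with gaps of 1, 3 and 7 units within letters, between letters and between words).

```python
# Truncated Morse table, enough for the demo message.
MORSE = {"E": ".", "H": "....", "I": "..", "L": ".-..",
         "O": "---", "S": "..."}

def flash_schedule(message: str, unit_s: float = 0.3):
    """Yield (flashing, seconds) pairs for a shutter or keying mirror."""
    for word in message.upper().split():
        for letter in word:
            for i, symbol in enumerate(MORSE[letter]):
                if i:
                    yield (False, unit_s)              # gap inside a letter
                yield (True, unit_s if symbol == "." else 3 * unit_s)
            yield (False, 3 * unit_s)                  # gap between letters
        yield (False, 7 * unit_s)                      # gap between words

for flashing, seconds in flash_schedule("SOS"):
    print("FLASH" if flashing else "dark ", seconds)
```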
https://en.wikipedia.org/wiki/Heliograph
An indoor positioning system (IPS) is a network of devices used to locate people or objects where GPS and other satellite technologies lack precision or fail entirely, such as inside multistory buildings, airports, alleys, parking garages, and underground locations.[1] A large variety of techniques and devices are used to provide indoor positioning, ranging from reconfigured devices already deployed, such as smartphones, WiFi and Bluetooth antennas, digital cameras, and clocks, to purpose-built installations with relays and beacons strategically placed throughout a defined space. Lights, radio waves, magnetic fields, acoustic signals, and behavioral analytics are all used in IPS networks.[2][3] IPS can achieve position accuracy of 2 cm,[4] which is on par with RTK-enabled GNSS receivers that can achieve 2 cm accuracy outdoors.[5] IPSs use different technologies, including distance measurement to nearby anchor nodes (nodes with known fixed positions, e.g. WiFi/LiFi access points, Bluetooth beacons or Ultra-Wideband beacons), magnetic positioning, and dead reckoning.[6] They either actively locate mobile devices and tags or provide ambient location or environmental context for devices to get sensed.[7][8][9] The localized nature of an IPS has resulted in design fragmentation, with systems making use of various optical,[10] radio,[11][12][13][14][15][16][17] or even acoustic[18][19] technologies. IPS has broad applications in commercial, military, retail, and inventory tracking industries. There are several commercial systems on the market, but no standards for an IPS system. Instead, each installation is tailored to spatial dimensions, building materials, accuracy needs, and budget constraints. For smoothing to compensate for stochastic (unpredictable) errors, there must be a sound method for reducing the error budget significantly. The system might include information from other systems to cope with physical ambiguity and to enable error compensation. Detecting the device's orientation (often referred to as the compass direction in order to disambiguate it from smartphone vertical orientation) can be achieved either by detecting landmarks inside images taken in real time, or by using trilateration with beacons.[20] There also exist technologies for detecting magnetometric information inside buildings or locations with steel structures or in iron ore mines.[21] Due to the signal attenuation caused by construction materials, the satellite-based Global Positioning System (GPS) loses significant power indoors, affecting the coverage required for receivers, which need at least four satellites. In addition, the multiple reflections at surfaces cause multi-path propagation, leading to uncontrollable errors. These same effects degrade all known solutions for indoor locating which use electromagnetic waves from indoor transmitters to indoor receivers. A bundle of physical and mathematical methods is applied to compensate for these problems. A promising direction for radio-frequency positioning error correction is opened by the use of alternative sources of navigational information, such as an inertial measurement unit (IMU), monocular camera simultaneous localization and mapping (SLAM) and WiFi SLAM. 
Integration of data from various navigation systems with different physical principles can increase the accuracy and robustness of the overall solution.[22] The U.S. Global Positioning System (GPS) and other similar global navigation satellite systems (GNSS) are generally not suitable for establishing indoor locations, since microwaves will be attenuated and scattered by roofs, walls and other objects. However, in order to make positioning signals ubiquitous, integration between GPS and indoor positioning can be made.[23][24][25][26][27][28][29][30] Currently, GNSS receivers are becoming more and more sensitive due to increasing microchip processing power. High-sensitivity GNSS receivers are able to receive satellite signals in most indoor environments, and attempts to determine the 3D position indoors have been successful.[31] Besides increasing the sensitivity of the receivers, the technique of A-GPS is used, where the almanac and other information are transferred through a mobile phone. However, even though proper coverage of the four satellites required to locate a receiver is not achieved with all current designs (2008–11) for indoor operations, GPS emulation has been deployed successfully in the Stockholm metro.[32] GPS coverage extension solutions have been able to provide zone-based positioning indoors, accessible with standard GPS chipsets like the ones used in smartphones.[32] While most current IPSs are able to detect the location of an object, they are so coarse that they cannot be used to detect the orientation or direction of an object.[33] One of the methods used to achieve sufficient operational suitability is "tracking", whereby a sequence of determined locations forms a trajectory from the first to the most recent location. Statistical methods then serve to smooth the locations determined in the track into a form resembling the physical capabilities of the object to move. This smoothing must be applied when a target moves, and also for a resident target, to compensate for erratic measurements. Otherwise, the single resident location or even the followed trajectory would consist of an erratic sequence of jumps. In most applications the population of targets is larger than just one. Hence the IPS must provide a specific identification for each observed target and must be capable of segregating and separating the targets individually within the group. An IPS must be able to identify the entities being tracked, despite the "non-interesting" neighbors. Depending on the design, either a sensor network must know from which tag it has received information, or a locating device must be able to identify the targets directly. Any wireless technology can be used for locating. Many different systems take advantage of existing wireless infrastructure for indoor positioning. There are three primary system topology options for hardware and software configuration: network-based, terminal-based, and terminal-assisted. Positioning accuracy can be increased at the expense of wireless infrastructure equipment and installations. A Wi-Fi positioning system (WPS) is used where GPS is inadequate. 
The localization technique used for positioning with wireless access points is based on measuring the intensity of the received signal (received signal strength, RSS) and the method of "fingerprinting".[34][35][36][37] To increase the accuracy of fingerprinting methods, statistical post-processing techniques (like Gaussian process theory) can be applied to transform a discrete set of "fingerprints" into a continuous distribution of RSSI for each access point over the entire location.[38][39][40] Typical parameters useful for geolocating the Wi-Fi hotspot or wireless access point include the SSID and the MAC address of the access point. The accuracy depends on the number of positions that have been entered into the database. Possible signal fluctuations may increase errors and inaccuracies in the path of the user.[41][42] Originally, Bluetooth was concerned with proximity, not exact location.[43] Bluetooth was not intended to offer a pinned location like GPS; rather, it is known as a geo-fence or micro-fence solution, which makes it an indoor proximity solution, not an indoor positioning solution. Micromapping and indoor mapping[44] have been linked to Bluetooth[45] and to the Bluetooth LE based iBeacon promoted by Apple Inc. Large-scale indoor positioning systems based on iBeacons have been implemented and applied in practice.[46][47] Bluetooth speaker position and home networks can be used for broad reference. In 2021 Apple released their AirTags, which combine Bluetooth and UWB technology to track Apple devices through the Find My network, causing a surge of popularity for tracking technology. A simple concept of location indexing and presence reporting for tagged objects uses known sensor identification only.[16] This is usually the case with passive radio-frequency identification (RFID) / NFC systems, which do not report the signal strengths and various distances of single tags or of a bulk of tags, and do not renew any previously known location coordinates of the sensor or the current location of any tags. Operability of such approaches requires some narrow passage to prevent tags from passing by out of range. Instead of long-range measurement, a dense network of low-range receivers may be arranged, e.g. in a grid pattern for economy, throughout the space being observed. Due to the low range, a tagged entity will be identified by only a few close, networked receivers. An identified tag must be within range of the identifying reader, allowing a rough approximation of the tag location. Advanced systems combine visual coverage from a camera grid with the wireless coverage for the rough location. Most systems use a continuous physical measurement (such as angle and distance, or distance only) along with the identification data in one combined signal. Reach by these sensors mostly covers an entire floor, an aisle or just a single room. Short-reach solutions are applied with multiple sensors and overlapping reach. Angle of arrival (AoA) is the angle from which a signal arrives at a receiver. AoA is usually determined by measuring the time difference of arrival (TDOA) between multiple antennas in a sensor array. In other receivers, it is determined by an array of highly directional sensors – the angle can be determined by which sensor received the signal. AoA is usually used with triangulation and a known baseline to find the location relative to two anchor transmitters, as in the sketch below. 
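A minimal sketch of that two-anchor AoA triangulation: each anchor measures a bearing to the transmitter, and the position is the intersection of the two bearing rays. Coordinates and angles below are illustrative.

```python
import math

def triangulate(a1, theta1, a2, theta2):
    """Intersect two bearing rays measured at known anchor positions.

    theta1, theta2 are angles of arrival in radians from the +x axis.
    Returns the estimated source position, or None for (near-)parallel
    bearings, where the rays do not intersect usefully.
    """
    dx, dy = a2[0] - a1[0], a2[1] - a1[1]
    denom = math.sin(theta1 - theta2)
    if abs(denom) < 1e-9:
        return None
    t1 = (dy * math.cos(theta2) - dx * math.sin(theta2)) / denom
    return (a1[0] + t1 * math.cos(theta1), a1[1] + t1 * math.sin(theta1))

# Two anchors 10 m apart; a tag at (4, 3) is seen under these bearings.
print(triangulate((0, 0), math.atan2(3, 4), (10, 0), math.atan2(3, -6)))
# -> approximately (4.0, 3.0)
```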
Time of arrival (ToA, also time of flight) is the amount of time a signal takes to propagate from transmitter to receiver. Because the signal propagation rate is constant and known (ignoring differences in media), the travel time of a signal can be used to directly calculate distance. Multiple measurements can be combined with trilateration and multilateration to find a location. This is the technique used by GPS and ultra-wideband systems. Systems which use ToA generally require a complicated synchronization mechanism to maintain a reliable source of time for sensors (though this can be avoided in carefully designed systems by using repeaters to establish coupling[17]). The accuracy of ToA-based methods often suffers from massive multipath conditions in indoor localization, caused by the reflection and diffraction of the RF signal from objects (e.g., interior walls, doors or furniture) in the environment. However, it is possible to reduce the effect of multipath by applying temporal or spatial sparsity based techniques.[48][49] Joint estimation of angles and times of arrival is another method of estimating the location of the user. Indeed, instead of requiring multiple access points and techniques such as triangulation and trilateration, a single access point is able to locate a user with combined angles and times of arrival.[50] Furthermore, techniques that leverage both space and time dimensions can increase the degrees of freedom of the whole system and create more virtual resources to resolve more sources, via subspace approaches.[51] Received signal strength indication (RSSI) is a measurement of the power level received by a sensor. Because radio waves propagate according to the inverse-square law, distance can be approximated (typically to within 1.5 meters in ideal conditions and 2 to 4 meters in standard conditions[52]) based on the relationship between transmitted and received signal strength (the transmission strength is a constant based on the equipment being used), as long as no other errors contribute to faulty results; a worked sketch of this model appears below. The inside of a building is not free space, so accuracy is significantly impacted by reflection and absorption from walls. Non-stationary objects such as doors, furniture, and people can pose an even greater problem, as they can affect the signal strength in dynamic, unpredictable ways. Many systems use enhanced Wi-Fi infrastructure to provide location information.[12][14][15] None of these systems works properly with arbitrary infrastructure as is. Unfortunately, Wi-Fi signal strength measurements are extremely noisy, so there is ongoing research focused on making more accurate systems. Non-radio technologies can be used for positioning without using the existing wireless infrastructure. This can provide increased accuracy at the expense of costly equipment and installations. Magnetic positioning can offer pedestrians with smartphones an indoor accuracy of 1–2 meters with 90% confidence level, without using the additional wireless infrastructure for positioning. Magnetic positioning is based on the iron inside buildings that creates local variations in the Earth's magnetic field. Un-optimized compass chips inside smartphones can sense and record these magnetic variations to map indoor locations.[55] 
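The worked sketch promised above uses the log-distance path-loss model, of which the free-space inverse-square law is the special case n = 2. The reference power at one meter and the path-loss exponent are device- and site-specific; the values here are assumptions for illustration.

```python
def rssi_to_distance(rssi_dbm: float, ref_dbm: float = -40.0,
                     n: float = 2.0) -> float:
    """Log-distance path-loss model: d = 10 ** ((ref - rssi) / (10 n)).

    ref_dbm is the RSSI measured at the 1 m reference distance; indoors
    the exponent n is typically fitted somewhere between ~1.6 and 4.
    """
    return 10 ** ((ref_dbm - rssi_dbm) / (10 * n))

for rssi in (-40, -52, -60, -70):
    print(f"{rssi} dBm -> ~{rssi_to_distance(rssi):.1f} m")
# -40 -> 1.0 m, -52 -> ~4.0 m, -60 -> 10.0 m, -70 -> ~31.6 m
```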
The MEMS inertial sensors suffer from internal noise, which results in a cubically growing position error over time. To reduce this error growth, a Kalman-filtering-based approach is often used.[57][58][59][60] To make the system capable of building a map itself, the SLAM algorithm framework[61] is used.[62][63][64]

Inertial measurements generally cover only the differentials of motion, so the location is determined by integration, which requires integration constants to produce results.[65][66] The actual position estimate can be found as the maximum of a 2-D probability distribution that is recomputed at each step, taking into account the noise model of all the sensors involved and the constraints posed by walls and furniture.[67] Based on the motions and users' walking behaviors, an IPS can also estimate users' locations with machine learning algorithms.[68]

A visual positioning system can determine the location of a camera-enabled mobile device by decoding location coordinates from visual markers. In such a system, markers are placed at specific locations throughout a venue, each marker encoding that location's coordinates: latitude, longitude, level, and height off the floor. Measuring the visual angle from the device to the marker enables the device to estimate its own location coordinates in reference to the marker.[69][70] As visual markers are usually not symmetric, the orientation of the user can also be determined.[71]

A collection of successive snapshots from a mobile device's camera can build a database of images suitable for estimating location in a venue. Once the database is built, a mobile device moving through the venue can take snapshots that are matched against the venue's database, yielding location coordinates. These coordinates can be used in conjunction with other location techniques for higher accuracy. This can be seen as a special case of sensor fusion, in which the camera plays the role of yet another sensor.

Once sensor data has been collected, an IPS tries to determine the location from which the received transmission was most likely collected. The data from a single sensor is generally ambiguous and must be resolved by a series of statistical procedures that combine several sensor input streams.

One way to determine position is to match the data from the unknown location against a large set of known locations, using an algorithm such as k-nearest neighbor (a minimal sketch follows at the end of this section). This technique requires a comprehensive on-site survey and will be inaccurate after any significant change in the environment (due to moving persons or moved objects). Alternatively, the location can be calculated mathematically by approximating signal propagation and finding angles and/or distances, with inverse trigonometry then used to determine the location. Advanced systems combine more accurate physical models with statistical procedures.

The major consumer benefit of indoor positioning is the expansion of location-aware mobile computing indoors. As mobile devices become ubiquitous, contextual awareness for applications has become a priority for developers. Most applications currently rely on GPS, however, and function poorly indoors. Applications benefiting from indoor location range from indoor navigation and wayfinding to asset tracking and proximity-based services.
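Here is the minimal sketch of k-nearest-neighbor fingerprint matching referenced above (the survey positions and RSSI values are invented for the example; real deployments use many more access points and survey points):

```python
import math

# Offline phase: surveyed fingerprints, one RSSI vector (dBm, one
# value per access point) at each known (x, y) position.
FINGERPRINTS = [
    ((0.0, 0.0), [-40, -70, -60]),
    ((5.0, 0.0), [-55, -52, -68]),
    ((0.0, 5.0), [-62, -75, -45]),
    ((5.0, 5.0), [-70, -58, -50]),
]

def knn_locate(rssi, k=3):
    """Estimate (x, y) as the centroid of the k fingerprints nearest
    to the measured vector in RSSI space (Euclidean distance)."""
    ranked = sorted(FINGERPRINTS, key=lambda fp: math.dist(fp[1], rssi))[:k]
    xs = [p[0] for p, _ in ranked]
    ys = [p[1] for p, _ in ranked]
    return sum(xs) / k, sum(ys) / k

# Online phase: a measurement taken at an unknown position.
print(knn_locate([-50, -60, -55]))
```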
https://en.wikipedia.org/wiki/Indoor_positioning_system
Infrared (IR; sometimes called infrared light) is electromagnetic radiation (EMR) with wavelengths longer than that of visible light but shorter than microwaves. The infrared spectral band begins with the waves that are just longer than those of red light (the longest waves in the visible spectrum), so IR is invisible to the human eye. IR is generally (according to ISO and CIE) understood to include wavelengths from around 780 nm (380 THz) to 1 mm (300 GHz).[1][2] IR is commonly divided between longer-wavelength thermal IR, emitted from terrestrial sources, and shorter-wavelength IR or near-IR, part of the solar spectrum.[3] Longer IR wavelengths (30–100 μm) are sometimes included as part of the terahertz radiation band.[4] Almost all black-body radiation from objects near room temperature is in the IR band. As a form of EMR, IR carries energy and momentum, exerts radiation pressure, and has properties corresponding to both those of a wave and of a particle, the photon.[5]

It was long known that fires emit invisible heat; in 1681 the pioneering experimenter Edme Mariotte showed that glass, though transparent to sunlight, obstructed radiant heat.[6][7] In 1800 the astronomer Sir William Herschel discovered that infrared radiation is a type of invisible radiation in the spectrum lower in energy than red light, by means of its effect on a thermometer.[8] Slightly more than half of the energy from the Sun was eventually found, through Herschel's studies, to arrive on Earth in the form of infrared. The balance between absorbed and emitted infrared radiation has an important effect on Earth's climate.

Infrared radiation is emitted or absorbed by molecules when they change their rotational-vibrational movements. It excites vibrational modes in a molecule through a change in the dipole moment, making it a useful frequency range for the study of these energy states for molecules of the proper symmetry. Infrared spectroscopy examines the absorption and transmission of photons in the infrared range.[9]

Infrared radiation is used in industrial, scientific, military, commercial, and medical applications. Night-vision devices using active near-infrared illumination allow people or animals to be observed without the observer being detected. Infrared astronomy uses sensor-equipped telescopes to penetrate dusty regions of space such as molecular clouds, to detect objects such as planets, and to view highly red-shifted objects from the early days of the universe.[10] Infrared thermal-imaging cameras are used to detect heat loss in insulated systems, to observe changing blood flow in the skin, to assist firefighting, and to detect the overheating of electrical components.[11] Military and civilian applications include target acquisition, surveillance, night vision, homing, and tracking. Humans at normal body temperature radiate chiefly at wavelengths around 10 μm. Non-military uses include thermal-efficiency analysis, environmental monitoring, industrial facility inspections, detection of grow-ops, remote temperature sensing, short-range wireless communication, spectroscopy, and weather forecasting.

There is no universally accepted definition of the range of infrared radiation. Typically, it is taken to extend from the nominal red edge of the visible spectrum at 780 nm to 1 mm. This range of wavelengths corresponds to a frequency range of approximately 430 THz down to 300 GHz. Beyond infrared is the microwave portion of the electromagnetic spectrum. Increasingly, terahertz radiation is counted as part of the microwave band, not infrared, moving the band edge of infrared to 0.1 mm (3 THz).
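The claim that black-body radiation from near-room-temperature objects falls in the IR band can be checked with Wien's displacement law, λ_max = b/T with b ≈ 2.898 × 10⁻³ m·K; a quick sketch (standard physics, not text from the article):

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temperature_k):
    """Peak black-body emission wavelength in micrometers."""
    return WIEN_B / temperature_k * 1e6

print(peak_wavelength_um(300))   # ~9.7 um: room temperature, mid-IR
print(peak_wavelength_um(5780))  # ~0.50 um: the Sun, visible light
```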
Sunlight, at an effective temperature of 5,780 K (5,510 °C, 9,940 °F), is composed of near-thermal-spectrum radiation that is slightly more than half infrared. At zenith, sunlight provides an irradiance of just over 1 kW per square meter at sea level. Of this energy, 527 W is infrared radiation, 445 W is visible light, and 32 W is ultraviolet radiation.[13] Nearly all the infrared radiation in sunlight is near infrared, shorter than 4 μm. On the surface of Earth, at far lower temperatures than the surface of the Sun, some thermal radiation consists of infrared in the mid-infrared region, much longer in wavelength than that in sunlight. Black-body, or thermal, radiation is continuous: it radiates at all wavelengths. Of the natural thermal radiation processes, only lightning and natural fires are hot enough to produce much visible energy, and fires produce far more infrared than visible-light energy.[14]

In general, objects emit infrared radiation across a spectrum of wavelengths, but sometimes only a limited region of the spectrum is of interest because sensors usually collect radiation only within a specific bandwidth. Thermal infrared radiation also has a maximum emission wavelength, which is inversely proportional to the absolute temperature of the object, in accordance with Wien's displacement law.

The infrared band is often subdivided into smaller sections, although how the IR spectrum is divided varies between the different areas in which IR is employed. Infrared radiation is generally considered to begin with wavelengths longer than those visible to the human eye. There is no hard wavelength limit to what is visible, as the eye's sensitivity decreases rapidly but smoothly for wavelengths exceeding about 700 nm. Therefore, wavelengths just longer than that can be seen if they are sufficiently bright, though they may still be classified as infrared according to the usual definitions. Light from a near-IR laser may thus appear dim red and can present a hazard, since it may actually carry a large amount of energy. Even IR at wavelengths up to 1,050 nm from pulsed lasers can be seen by humans under certain conditions.[15][16][17]

A commonly used subdivision scheme distinguishes near-infrared (NIR), short-wavelength infrared (SWIR), mid-wavelength infrared (MWIR), long-wavelength infrared (LWIR), and far infrared (FIR).[18][19][20] NIR and SWIR together are sometimes called "reflected infrared", whereas MWIR and LWIR are sometimes referred to as "thermal infrared". The International Commission on Illumination (CIE) recommends dividing infrared radiation into three bands, designated IR-A, IR-B, and IR-C.[23][24] ISO 20473 specifies a similar scheme of near-, mid-, and far-infrared.[25]

Astronomers typically divide the infrared spectrum into near-, mid-, and far-infrared.[26] These divisions are not precise and can vary depending on the publication. The three regions are used for the observation of different temperature ranges,[27] and hence different environments in space. The most common photometric system used in astronomy allocates capital letters to different spectral regions according to the filters used; I, J, H, and K cover the near-infrared wavelengths, while L, M, N, and Q refer to the mid-infrared region. These letters are commonly understood in reference to atmospheric windows and appear, for instance, in the titles of many papers.

A third scheme divides the band based on the response of various detectors.[28] Near-infrared is the region closest in wavelength to the radiation detectable by the human eye; mid- and far-infrared are progressively further from the visible spectrum. Other definitions follow different physical mechanisms (emission peaks vs.
bands, or water absorption), and the newest follow technical reasons (the common silicon detectors are sensitive to about 1,050 nm, while InGaAs sensitivity starts around 950 nm and ends between 1,700 and 2,600 nm, depending on the specific configuration). No international standards for these specifications are currently available.

The onset of infrared is defined (according to different standards) at various values, typically between 700 nm and 800 nm, but the boundary between visible and infrared light is not precisely defined. The human eye is markedly less sensitive to light above 700 nm in wavelength, so longer wavelengths make insignificant contributions to scenes illuminated by common light sources. Particularly intense near-IR light (e.g., from lasers, LEDs, or bright daylight with the visible light filtered out) can be detected up to approximately 780 nm and will be perceived as red light. Intense light sources providing wavelengths as long as 1,050 nm can be seen as a dull red glow, causing some difficulty in near-IR illumination of scenes in the dark (usually this practical problem is solved by indirect illumination). Leaves are particularly bright in the near IR, and if all visible-light leaks around an IR filter are blocked, and the eye is given a moment to adjust to the extremely dim image coming through a visually opaque IR-passing photographic filter, it is possible to see the Wood effect of IR-glowing foliage.[29]

In optical communications, the part of the infrared spectrum that is used is divided into seven bands based on the availability of light sources, transmitting/absorbing materials (fibers), and detectors.[30] The C-band is the dominant band for long-distance telecommunications networks. The S and L bands are based on less well established technology and are not as widely deployed.

Infrared radiation is popularly known as "heat radiation",[31] but light and electromagnetic waves of any frequency will heat surfaces that absorb them. Infrared light from the Sun accounts for 49%[32] of the heating of Earth, with the rest being caused by visible light that is absorbed and then re-radiated at longer wavelengths. Visible-light or ultraviolet-emitting lasers can char paper, and incandescently hot objects emit visible radiation. Objects at room temperature will emit radiation concentrated mostly in the 8 to 25 μm band, but this is not distinct from the emission of visible light by incandescent objects and of ultraviolet by even hotter objects (see black body and Wien's displacement law).[33]

Heat is energy in transit that flows due to a temperature difference. Unlike heat transmitted by thermal conduction or thermal convection, thermal radiation can propagate through a vacuum. Thermal radiation is characterized by a particular spectrum of many wavelengths associated with emission from an object, due to the vibration of its molecules at a given temperature. Thermal radiation can be emitted from objects at any wavelength, and at very high temperatures such radiation is associated with spectra far above the infrared, extending into the visible, ultraviolet, and even X-ray regions (e.g. the solar corona). Thus, the popular association of infrared radiation with thermal radiation is only a coincidence based on the typical (comparatively low) temperatures often found near the surface of planet Earth.

The concept of emissivity is important in understanding the infrared emissions of objects. Emissivity is a property of a surface that describes how its thermal emissions deviate from those of an ideal black body.
To further explain, two objects at the same physical temperature may not show the same infrared image if they have differing emissivity. For example, for any pre-set emissivity value, objects with higher emissivity will appear hotter, and those with lower emissivity will appear cooler (assuming, as is often the case, that the surrounding environment is cooler than the objects being viewed). When an object has less than perfect emissivity, it acquires properties of reflectivity and/or transparency, and so the temperature of the surrounding environment is partially reflected by and/or transmitted through the object. If the object were in a hotter environment, then a lower-emissivity object at the same temperature would likely appear hotter than a more emissive one. For that reason, incorrect selection of emissivity and failure to account for environmental temperatures will give inaccurate results when using infrared cameras and pyrometers.

Infrared is used in night-vision equipment when there is insufficient visible light to see.[34] Night-vision devices operate through a process involving the conversion of ambient-light photons into electrons, which are then amplified by a chemical and electrical process and converted back into visible light.[34] Infrared light sources can be used to augment the available ambient light for conversion by night-vision devices, increasing in-the-dark visibility without actually using a visible light source.[34][1] The use of infrared light and night-vision devices should not be confused with thermal imaging, which creates images based on differences in surface temperature by detecting infrared radiation (heat) that emanates from objects and their surrounding environment.[35][8]

Infrared radiation can be used to remotely determine the temperature of objects (if the emissivity is known). This is termed thermography or, in the case of very hot objects in the NIR or visible, pyrometry. Thermography (thermal imaging) is mainly used in military and industrial applications, but the technology is reaching the public market in the form of infrared cameras on cars, thanks to greatly reduced production costs.

Thermographic cameras detect radiation in the infrared range of the electromagnetic spectrum (roughly 9,000–14,000 nm or 9–14 μm) and produce images of that radiation. Since infrared radiation is emitted by all objects based on their temperatures, according to the black-body radiation law, thermography makes it possible to "see" one's environment with or without visible illumination. The amount of radiation emitted by an object increases with temperature; therefore, thermography allows one to see variations in temperature (hence the name).

A hyperspectral image is a "picture" containing a continuous spectrum through a wide spectral range at each pixel. Hyperspectral imaging is gaining importance in the field of applied spectroscopy, particularly in the NIR, SWIR, MWIR, and LWIR spectral regions. Typical applications include biological, mineralogical, defence, and industrial measurements. Thermal infrared hyperspectral imaging can be similarly performed using a thermographic camera, with the fundamental difference that each pixel contains a full LWIR spectrum. Consequently, chemical identification of the object can be performed without the need for an external light source such as the Sun or the Moon.
Such cameras are typically applied for geological measurements, outdoor surveillance, and UAV applications.[37]

In infrared photography, infrared filters are used to capture the near-infrared spectrum. Digital cameras often use infrared blockers. Cheaper digital cameras and camera phones have less effective filters and can record intense near-infrared, which appears as a bright purple-white color. This is especially pronounced when taking pictures of subjects near IR-bright areas (such as near a lamp), where the resulting infrared interference can wash out the image. There is also a technique called 'T-ray' imaging, which is imaging using far-infrared or terahertz radiation. A lack of bright sources can make terahertz photography more challenging than most other infrared imaging techniques. Recently T-ray imaging has been of considerable interest due to a number of new developments such as terahertz time-domain spectroscopy.

Infrared tracking, also known as infrared homing, refers to a passive missile guidance system that tracks a target by the electromagnetic radiation it emits in the infrared part of the spectrum. Missiles that use infrared seeking are often referred to as "heat-seekers", since infrared (IR) is just below the visible spectrum in frequency and is radiated strongly by hot bodies. Many objects, such as people, vehicle engines, and aircraft, generate and retain heat and, as such, are especially visible at infrared wavelengths compared with objects in the background.[38]

Infrared radiation can be used as a deliberate heating source. For example, it is used in infrared saunas to heat the occupants. It may also be used in other heating applications, such as removing ice from the wings of aircraft (de-icing).[39] Infrared heating is also becoming more popular in industrial manufacturing processes, e.g. curing of coatings, forming of plastics, annealing, plastic welding, and print drying. In these applications, infrared heaters replace convection ovens and contact heating.

A variety of technologies, and proposed technologies, take advantage of infrared emissions to cool buildings or other systems. The LWIR (8–15 μm) region is especially useful, since some radiation at these wavelengths can escape into space through the atmosphere's infrared window. This is how passive daytime radiative cooling (PDRC) surfaces are able to achieve sub-ambient cooling temperatures under direct solar intensity, enhancing terrestrial heat flow to outer space with zero energy consumption or pollution.[40][41] PDRC surfaces maximize shortwave solar reflectance, to lessen heat gain, while maintaining strong longwave infrared (LWIR) thermal-radiation heat transfer.[42][43] Imagined on a worldwide scale, this cooling method has been proposed as a way to slow and even reverse global warming, with some estimates proposing a global surface-area coverage of 1–2% to balance global heat fluxes.[44][45]

IR data transmission is also employed in short-range communication among computer peripherals and personal digital assistants. These devices usually conform to standards published by IrDA, the Infrared Data Association. Remote controls and IrDA devices use infrared light-emitting diodes (LEDs) to emit infrared radiation that may be concentrated by a lens into a beam that the user aims at the detector. The beam is modulated, i.e. switched on and off, according to a code which the receiver interprets. Usually very near-IR (below 800 nm) is used, for practical reasons.
This wavelength is efficiently detected by inexpensive silicon photodiodes, which the receiver uses to convert the detected radiation to an electric current. That electrical signal is passed through a high-pass filter, which retains the rapid pulsations due to the IR transmitter but filters out slowly changing infrared radiation from ambient light. Infrared communications are useful for indoor use in areas of high population density: IR does not penetrate walls and so does not interfere with other devices in adjoining rooms. Infrared is the most common way for remote controls to command appliances; infrared remote-control protocols such as RC-5 and SIRC are used for this communication.

Free-space optical communication using infrared lasers can be a relatively inexpensive way to install a communications link in an urban area operating at up to 4 gigabit/s, compared with the cost of burying fiber-optic cable; a drawback is the risk of radiation damage to the eye: "Since the eye cannot detect IR, blinking or closing the eyes to help prevent or reduce damage may not happen."[46]

Infrared lasers are used to provide the light for optical-fiber communications systems. Wavelengths around 1,330 nm (least dispersion) or 1,550 nm (best transmission) are the best choices for standard silica fibers.

IR data transmission of audio versions of printed signs is being researched as an aid for visually impaired people through the Remote infrared audible signage project. Transmitting IR data from one device to another is sometimes referred to as beaming. IR is also sometimes used for assistive audio as an alternative to an audio induction loop.

Infrared vibrational spectroscopy (see also near-infrared spectroscopy) is a technique that can be used to identify molecules by analysis of their constituent bonds. Each chemical bond in a molecule vibrates at a frequency characteristic of that bond. A group of atoms in a molecule (e.g., CH2) may have multiple modes of oscillation caused by the stretching and bending motions of the group as a whole. If an oscillation leads to a change in the dipole of the molecule, then the molecule will absorb a photon of the same frequency. The vibrational frequencies of most molecules correspond to the frequencies of infrared light. Typically, the technique is used to study organic compounds using light from the mid-infrared, 4,000–400 cm−1. A spectrum of all the frequencies of absorption in a sample is recorded. This can be used to gain information about the sample composition in terms of the chemical groups present, and also about its purity (for example, a wet sample will show a broad O-H absorption around 3,200 cm−1). The unit used for expressing radiation in this application, cm−1, is the spectroscopic wavenumber: the frequency divided by the speed of light in vacuum.

In the semiconductor industry, infrared light can be used to characterize materials such as thin films and periodic trench structures. By measuring the reflectance of light from the surface of a semiconductor wafer, the index of refraction (n) and the extinction coefficient (k) can be determined via the Forouhi–Bloomer dispersion equations. The reflectance of infrared light can also be used to determine the critical dimension, depth, and sidewall angle of high-aspect-ratio trench structures.

Weather satellites equipped with scanning radiometers produce thermal or infrared images, which can then enable a trained analyst to determine cloud heights and types, to calculate land and surface-water temperatures, and to locate ocean surface features.
The scanning is typically in the range 10.3–12.5 μm (IR4 and IR5 channels). Clouds with high and cold tops, such as cyclones or cumulonimbus clouds, are often displayed as red or black; lower, warmer clouds such as stratus or stratocumulus are displayed as blue or grey, with intermediate clouds shaded accordingly. Hot land surfaces are shown as dark grey or black. One disadvantage of infrared imagery is that low clouds such as stratus or fog can have a temperature similar to the surrounding land or sea surface and do not show up. However, using the difference in brightness between the IR4 channel (10.3–11.5 μm) and the near-infrared channel (1.58–1.64 μm), low clouds can be distinguished, producing a fog satellite picture. The main advantage of infrared is that images can be produced at night, allowing a continuous sequence of weather to be studied.

These infrared pictures can depict ocean eddies or vortices and map currents such as the Gulf Stream, which are valuable to the shipping industry. Fishermen and farmers are interested in knowing land and water temperatures to protect their crops against frost or to increase their catch from the sea. Even El Niño phenomena can be spotted. Using color-digitized techniques, the grey-shaded thermal images can be converted to color for easier identification of the desired information. The main water-vapour channel, at 6.40 to 7.08 μm, can be imaged by some weather satellites and shows the amount of moisture in the atmosphere.

In the field of climatology, atmospheric infrared radiation is monitored to detect trends in the energy exchange between the Earth and the atmosphere. These trends provide information on long-term changes in Earth's climate. It is one of the primary parameters studied in research into global warming, together with solar radiation. A pyrgeometer is utilized in this field of research to perform continuous outdoor measurements. It is a broadband infrared radiometer with sensitivity to infrared radiation between approximately 4.5 μm and 50 μm.

Astronomers observe objects in the infrared portion of the electromagnetic spectrum using optical components, including mirrors, lenses, and solid-state digital detectors. For this reason infrared astronomy is classified as part of optical astronomy. To form an image, the components of an infrared telescope need to be carefully shielded from heat sources, and the detectors are chilled using liquid helium. The sensitivity of Earth-based infrared telescopes is significantly limited by water vapor in the atmosphere, which absorbs a portion of the infrared radiation arriving from space outside of selected atmospheric windows. This limitation can be partially alleviated by placing the telescope observatory at a high altitude, or by carrying the telescope aloft with a balloon or an aircraft. Space telescopes do not suffer from this handicap, and so outer space is considered the ideal location for infrared astronomy.

The infrared portion of the spectrum has several useful benefits for astronomers. Cold, dark molecular clouds of gas and dust in our galaxy glow with radiated heat as they are irradiated by embedded stars. Infrared can also be used to detect protostars before they begin to emit visible light. Stars emit a smaller portion of their energy in the infrared spectrum, so nearby cool objects such as planets can be more readily detected. (In the visible-light spectrum, the glare from the star will drown out the reflected light from a planet.)
Infrared light is also useful for observing the cores of active galaxies, which are often cloaked in gas and dust. Distant galaxies with a high redshift will have the peak portion of their spectrum shifted toward longer wavelengths, so they are more readily observed in the infrared.[10]

Infrared cleaning is a technique used by some motion-picture film scanners, film scanners, and flatbed scanners to reduce or remove the effect of dust and scratches on the finished scan. It works by collecting an additional infrared channel from the scan at the same position and resolution as the three visible color channels (red, green, and blue). The infrared channel, in combination with the other channels, is used to detect the location of scratches and dust. Once located, those defects can be corrected by scaling or replaced by inpainting.[47]

Infrared reflectography[48] can be applied to paintings to reveal underlying layers in a non-destructive manner, in particular the artist's underdrawing or outline drawn as a guide. Art conservators use the technique to examine how the visible layers of paint differ from the underdrawing or the layers in between (such alterations are called pentimenti when made by the original artist). This is very useful information in deciding whether a painting is the prime version by the original artist or a copy, and whether it has been altered by over-enthusiastic restoration work. In general, the more pentimenti, the more likely a painting is to be the prime version. It also gives useful insights into working practices.[49] Reflectography often reveals the artist's use of carbon black, which shows up well in reflectograms, as long as it has not also been used in the ground underlying the whole painting. Infrared reflectography can be carried out with modified commercial digital cameras in the NIR spectral region or with dedicated instruments in the SWIR spectral region.[50] The recent extension of reflectography into the MWIR spectral region[51][52] has proved capable of detecting subtle differences in surface materials. Finally, NIR reflectography can be performed with good results using smartphone cameras.[53]

Recent progress in the design of infrared-sensitive cameras makes it possible to discover and depict not only underpaintings and pentimenti, but entire paintings that were later overpainted by the artist.[54] Notable examples are Picasso's Woman Ironing and Blue Room, where in both cases a portrait of a man has been made visible under the painting as it is known today. Similar uses of infrared are made by conservators and scientists on various types of objects, especially very old written documents, such as the Dead Sea Scrolls, the Roman works in the Villa of the Papyri, and the Silk Road texts found in the Dunhuang Caves.[55] Carbon black used in ink can show up extremely well.

The pit viper has a pair of infrared sensory pits on its head. There is uncertainty regarding the exact thermal sensitivity of this biological infrared detection system.[56][57] Other organisms that have thermoreceptive organs are pythons (family Pythonidae), some boas (family Boidae), the common vampire bat (Desmodus rotundus), a variety of jewel beetles (Melanophila acuminata),[58] darkly pigmented butterflies (Pachliopta aristolochiae and Troides rhadamantus plateni), and possibly blood-sucking bugs (Triatoma infestans).[59] By detecting the heat that their prey emits, crotaline and boid snakes identify and capture prey using their IR-sensitive pit organs.
Comparably, IR-sensitive pits on the common vampire bat (Desmodus rotundus) aid in the identification of blood-rich regions on its warm-blooded victims. The jewel beetle Melanophila acuminata locates forest fires via infrared pit organs and deposits its eggs on recently burnt trees. Thermoreceptors on the wings and antennae of darkly pigmented butterflies, such as Pachliopta aristolochiae and Troides rhadamantus plateni, shield them from heat damage as they bask in the sun. Additionally, it is hypothesised that thermoreceptors let blood-sucking bugs (Triatoma infestans) locate their warm-blooded victims by sensing their body heat.[59] Some fungi, like Venturia inaequalis, require near-infrared light for ejection.[60]

Although near-infrared vision (780–1,000 nm) has long been deemed impossible due to noise in visual pigments,[61] sensation of near-infrared light has been reported in the common carp and in three cichlid species.[61][62][63][64][65] Fish use NIR to capture prey[61] and for phototactic swimming orientation.[65] NIR sensation in fish may be relevant under poor lighting conditions during twilight[61] and in turbid surface waters.[65]

Near-infrared light therapy, or photobiomodulation, is used for the treatment of chemotherapy-induced oral ulceration as well as for wound healing. There is some work relating to anti-herpes-virus treatment.[66] Research projects include work on central-nervous-system healing effects via cytochrome c oxidase upregulation and other possible mechanisms.[67]

Strong infrared radiation in certain industrial high-heat settings may be hazardous to the eyes, resulting in damage or blindness. Since the radiation is invisible, special IR-proof goggles must be worn in such places.[68]

The discovery of infrared radiation is ascribed to the astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the Sun and detected the infrared, beyond the red part of the spectrum, through an increase in the temperature recorded on a thermometer. He was surprised at the result and called these rays "calorific rays".[69][70] The term "infrared" did not appear until the late 19th century.[71] The Latin prefix infra- means "below": infrared is light below red on the spectrum.[72] An earlier experiment, in 1790, by Marc-Auguste Pictet had demonstrated the reflection and focusing of radiant heat via mirrors in the absence of visible light.[73] A number of other important dates mark the subsequent development of infrared science and technology.[28]
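Returning to the spectroscopic wavenumber mentioned earlier: since cm−1 is frequency divided by the speed of light, converting to wavelength and frequency is direct arithmetic, as the short sketch below shows (the function name is invented for the example):

```python
C = 2.998e8  # speed of light, m/s

def wavenumber_cm1(nu_cm1):
    """Convert a spectroscopic wavenumber (cm^-1) to a
    (wavelength in um, frequency in THz) pair."""
    per_m = nu_cm1 * 100            # cm^-1 -> m^-1
    wavelength_um = 1e6 / per_m
    frequency_thz = per_m * C / 1e12
    return wavelength_um, frequency_thz

# The broad O-H absorption near 3,200 cm^-1 mentioned above:
print(wavenumber_cm1(3200))  # ~(3.1 um, ~96 THz)
```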
https://en.wikipedia.org/wiki/Infrared_communication
The Infrared Data Association (IrDA) is an industry-driven interest group that was founded in 1994[1] by around 50 companies. IrDA provides specifications for a complete set of protocols for wireless infrared communications, and the name "IrDA" also refers to that set of protocols. The main reason for using the IrDA protocols had been wireless data transfer over the "last one meter" using point-and-shoot principles. Thus, it has been implemented in portable devices such as mobile telephones, laptops, cameras, printers, and medical devices. The main characteristics of this kind of wireless optical communication are physically secure data transfer, line-of-sight (LOS) operation, and a very low bit error rate (BER), which makes it very efficient.

The mandatory IrPHY (Infrared Physical Layer Specification) is the physical layer of the IrDA specifications. It comprises optical link definitions, modulation, coding, cyclic redundancy check (CRC), and the framer. Different data rates use different modulation/coding schemes. The frame size depends mostly on the data rate and varies between 64 B and 64 kB. Additionally, bigger blocks of data can be transferred by sending multiple frames consecutively, adjusted with a parameter called "window size" (1–127); in this way, data blocks of up to 8 MB can be sent at once. Combined with a bit error rate of generally <10−9, this communication can be very efficient compared with other wireless solutions.

IrDA transceivers communicate with infrared pulses (samples) in a cone that extends at least 15 degrees half-angle off center. The IrDA physical specifications require lower and upper limits of irradiance such that a signal is visible up to one meter away but a receiver is not overwhelmed with brightness when a device comes close. In practice, there are some devices on the market that do not reach one meter, while other devices may reach several meters; there are also devices that do not tolerate extreme closeness. The typical sweet spot for IrDA communications is from 5 to 60 cm (2.0 to 23.6 in) away from a transceiver, in the center of the cone.

IrDA data communications operate in half-duplex mode because, while transmitting, a device's receiver is blinded by the light of its own transmitter, so full-duplex communication is not feasible. The two communicating devices simulate full-duplex communication by quickly turning the link around. The primary device controls the timing of the link, but both sides are bound to certain hard constraints and are encouraged to turn the link around as fast as possible.

The mandatory IrLAP (Infrared Link Access Protocol) is the second layer of the IrDA specifications. It lies on top of the IrPHY layer and below the IrLMP layer, and represents the data link layer of the OSI model. Its most important tasks are access control and the discovery of potential communication partners. On the IrLAP layer the communicating devices are divided into a "primary device" and one or more "secondary devices". The primary device controls the secondary devices; a secondary device may transmit only when the primary device requests it to do so.

The mandatory IrLMP (Infrared Link Management Protocol) is the third layer of the IrDA specifications. It can be broken down into two parts. First, the LM-MUX (Link Management Multiplexer), which lies on top of the IrLAP layer.
Its most important achievements are providing multiple logical channels and allowing the change of primary/secondary roles. Second, the LM-IAS (Link Management Information Access Service), which provides a list where service providers can register their services, so that other devices can access these services by querying the LM-IAS.

The optional Tiny TP (Tiny Transport Protocol) lies on top of the IrLMP layer. It provides the transfer of large messages via segmentation and reassembly, and flow control by granting credits to each logical channel. The optional IrCOMM (Infrared Communications Protocol) lets the infrared device act like either a serial or a parallel port; it lies on top of the IrLMP layer. The optional OBEX (Object Exchange) provides the exchange of arbitrary data objects (e.g., vCard, vCalendar, or even applications) between infrared devices. It lies on top of the Tiny TP protocol, so Tiny TP is mandatory for OBEX to work. The optional IrLAN (Infrared Local Area Network) provides the possibility of connecting an infrared device to a local area network, in one of three possible modes: access point, peer-to-peer, or hosted. As IrLAN lies on top of the Tiny TP protocol, the Tiny TP protocol must be implemented for IrLAN to work.

IrSimple achieves at least four to ten times faster data transmission speeds by improving the efficiency of the infrared IrDA protocol: a typical 500 kB picture can be transferred from a cell phone within one second.[2] One of the primary targets of IrSimpleShot (IrSS) was to allow the millions of IrDA-enabled camera phones to wirelessly transfer pictures to printers, printer kiosks, and flat-panel TVs.

Infrared Financial Messaging (IrFM) is a wireless payment standard developed by the Infrared Data Association. It was considered a logical fit because of the excellent privacy of IrDA, which does not pass through walls.

Many modern (2021) implementations are used for semi-automated reading of power meters. This high-volume application is keeping IrDA transceivers in production. Lacking specialized electronics, many power-meter implementations use a bit-banged SIR phy, running at 9600 baud and using a minimum-width pulse (i.e. 3/16 of a 115.2 kbaud bit period) to save energy; a sketch of this encoding follows at the end of this section. To drive the LED, a computer-controlled pin is turned on and off at the right times. Cross-talk from the LED to the receiving PIN diode is extreme, so the protocol is half-duplex. To receive, an external interrupt is triggered by the start bit, and the following bits are then polled half a bit-time after their nominal edges. A timer interrupt is often used to free the CPU between pulses. The power meters' higher protocol levels abandon IrDA standards, typically using DLMS/COSEM instead. With IrDA transceivers (a package combining an IR LED and a PIN diode), even this crude IrDA SIR is extremely resistant to external optical noise from incandescent lamps, sunlight, etc.

IrDA was popular on PDAs, laptops, and some desktops[3] from the late 1990s[4] through the early 2000s.[5][6] However, it has been displaced by other wireless technologies such as Bluetooth[7] and Wi-Fi, favored because they do not need a direct line of sight and can therefore support hardware like mice and keyboards. It is still used in some environments where interference makes radio-based wireless technologies unusable. An attempt was made to revive IrDA around 2005[8] with the IrSimple protocols, providing sub-1-second transfers of pictures between cell phones, printers, and display devices. IrDA hardware was still less expensive and did not share the security problems encountered with wireless technologies such as Bluetooth. For example, some Pentax DSLRs (K-x, K-r) incorporated IrSimple for image transfer and gaming.[9]
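Returning to the bit-banged SIR transmitter: here is a rough sketch of the encoding described above (timings follow the 3/16 pulse convention quoted in the text; the function name and structure are invented for the example). It turns one byte into a list of (LED on/off, duration) pairs; in SIR, a logic 0 is a single short IR pulse at the start of its bit cell and a logic 1 is darkness:

```python
BIT_US = 1e6 / 9600                    # bit period at 9600 baud, ~104.2 us
PULSE_US = (3 / 16) * (1e6 / 115200)   # minimum-width pulse, ~1.6 us

def sir_encode(byte):
    """Encode one byte as a UART frame (1 start bit, 8 data bits LSB
    first, 1 stop bit) in IrDA SIR pulse timings."""
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    timeline = []  # (led_on, duration_us) pairs
    for bit in bits:
        if bit == 0:
            timeline.append((True, PULSE_US))
            timeline.append((False, BIT_US - PULSE_US))
        else:
            timeline.append((False, BIT_US))
    return timeline

print(sir_encode(0x55)[:4])
```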
https://en.wikipedia.org/wiki/Infrared_Data_Association
Wi-Fi positioning system (WPS, WiPS or WFPS) is a geolocation system that uses the characteristics of nearby Wi-Fi access points to discover where a device is located.[1]

It is used where satellite navigation such as GPS is inadequate due to various causes, including multipath and signal blockage indoors, or where acquiring a satellite fix would take too long.[2] Such systems include assisted GPS, urban positioning services based on hotspot databases, and indoor positioning systems.[3] Wi-Fi positioning takes advantage of the rapid growth of wireless access points in urban areas in the early 21st century.[4]

The most common technique for positioning using wireless access points is based on a rough proxy for the strength of the received signal (received signal strength indicator, or RSSI) and the method of "fingerprinting".[5][6][7] Typically a wireless access point is identified by its SSID and MAC address, and these data are compared with a database of supposed locations of access points so identified. The accuracy depends on the accuracy of the database (e.g. if an access point has moved, its entry is inaccurate), and the precision depends on the number of discovered nearby access points with (accurate) entries in the database and on the precision of those entries. The access-point location database is filled by correlating mobile-device location data (determined by other systems, such as Galileo or GPS) with Wi-Fi access-point MAC addresses.[8] Signal fluctuations can increase errors and inaccuracies in the path of the user; to minimize such fluctuations, certain techniques can be applied to filter the noise. In cases of low precision, some techniques have been proposed to merge the Wi-Fi traces with other data sources, such as geographical information and time constraints (i.e., time geography).[9]

Accurate indoor localization is becoming more important for Wi-Fi-based devices due to the increased use of augmented reality, social networking, health-care monitoring, personal tracking, inventory control, and other indoor location-aware applications.[10][11] In wireless security, it is an important method used to locate and map rogue access points.[12][13]

The popularity and low price of Wi-Fi network interface cards is an attractive incentive to use Wi-Fi as the basis for a localization system, and significant research has been done in this area in the past 15 years.[5][7][14]

The problem of Wi-Fi-based indoor localization of a device is that of determining the position of client devices with respect to access points. The many techniques for accomplishing this may be classified according to the four criteria they use: received signal strength indication (RSSI), fingerprinting, angle of arrival (AoA), and time of flight (ToF).[14][15]

In most cases the first step in determining a device's position is to determine the distance between the target client device and a few access points. With the known distances between the target device and the access points, trilateration algorithms may be used to determine the relative position of the target device,[11] using the known positions of the access points as a reference.
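As a minimal sketch of this distance-then-trilaterate pipeline (the log-distance path-loss parameters and access-point coordinates below are invented for the example, not values from the literature):

```python
import numpy as np

def rssi_to_distance(rssi_dbm, power_at_1m=-40.0, path_loss_exp=3.0):
    """Log-distance path-loss model: d = 10 ** ((P0 - RSSI) / (10 n))."""
    return 10 ** ((power_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Linearized least-squares trilateration: subtracting the first
    anchor's circle equation from the others yields A p = b."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, d0 = anchors[0], d[0]
    A = 2 * (anchors[1:] - x0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

aps = [(0, 0), (10, 0), (0, 10)]   # known AP positions (m)
rssi = [-55, -65, -60]             # measured RSSI (dBm)
print(trilaterate(aps, [rssi_to_distance(r) for r in rssi]))
```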
Alternatively, the angles of the signals arriving at a target client device can be employed to determine the device's location based on triangulation algorithms.[14] A combination of these techniques may be used to improve the precision of a system.[14]

RSSI localization techniques are based on measuring the rough relative signal strength at a client device from several different access points and combining this information with a propagation model to determine the distance between the client device and the access points. Trilateration (sometimes called multilateration) techniques can then be used to calculate the estimated client-device position relative to the expected positions of the access points.[11][14] Though this is one of the cheapest and easiest methods to implement, its disadvantage is that it does not provide very good precision (a median of 2–4 m), because the RSSI measurements tend to fluctuate according to changes in the environment or multipath fading.[5] Cisco uses RSSI to locate devices through its access points; the access points collect the location data and update the location in the Cisco cloud, called Cisco DNA Spaces.[16]

Monte Carlo sampling is a statistical technique used in indoor Wi-Fi mapping to estimate the location of wireless nodes. The process involves creating wireless signal-strength maps using a two-step parametric and measurement-driven ray-tracing approach, which accounts for the absorption and reflection characteristics of the various obstacles in the indoor environment.[17] The location estimates are then computed using Bayesian filtering on sample sets derived by Monte Carlo sampling. This method has been found to provide good location estimates of users, with sub-room precision, using received signal strength indication (RSSI) readings from a single access point.[18]

Traditional fingerprinting is also RSSI-based, but it simply relies on recording the signal strength from several access points in range and storing this information in a database, along with the known coordinates of the client device, in an offline phase. This information can be deterministic[5] or probabilistic.[7] During the online tracking phase, the current RSSI vector at an unknown location is compared with those stored in the fingerprint database, and the closest match is returned as the estimated user location. Such systems may provide a median accuracy of 0.6 m and a tail accuracy of 1.3 m.[14][19] The main disadvantage is that any changes to the environment, such as adding or removing furniture or buildings, may change the "fingerprint" that corresponds to each location, requiring an update to the fingerprint database. However, integration with other sensors, such as cameras, can be used to deal with a changing environment.[20]

With the advent of MIMO Wi-Fi interfaces, which use multiple antennas, it is possible to estimate the AoA of the multipath signals received at the antenna arrays in the access points and apply triangulation to calculate the location of client devices. SpotFi,[14] ArrayTrack,[10] and LTEye[21] are proposed solutions that employ this kind of technique. The computation of the AoA is typically done with the MUSIC algorithm.
Assume an antenna array of $M$ antennas equally spaced by a distance $d$, and a signal arriving at the array over $L$ propagation paths. The signal must travel an additional distance of $d\sin\theta$ to reach the second antenna of the array.[14]

Considering that the $k$-th propagation path arrives at angle $\theta_k$ with respect to the normal of the antenna array of the access point, $\gamma_k$ is the attenuation experienced at any antenna of the array. The attenuation is the same at every antenna, except for a phase shift that changes from antenna to antenna because of the extra distance traveled by the signal: the signal arrives with an additional phase of $-2\pi d \sin(\theta_k)(f/c)(2-1)$ at the second antenna and $-2\pi d \sin(\theta_k)(f/c)(m-1)$ at the $m$-th antenna.[14]

Therefore, the following complex exponential can be used as a simplified representation of the per-antenna phase shift as a function of the AoA of the propagation path:[14]

$$\phi(\theta_k) = \exp\!\left(-j\,2\pi\, d \sin(\theta_k)\, f/c\right)$$

The signal received due to the $k$-th propagation path can then be expressed as the vector $\vec{a}(\theta_k)\gamma_k$, where $\vec{a}(\theta_k)$ is the steering vector, given by:[14]

$$\vec{a}(\theta_k) = [1,\ \phi(\theta_k),\ \dots,\ \phi(\theta_k)^{M-1}]^{T}$$

There is one steering vector per propagation path, and the steering matrix $\mathbf{A}$ (of dimensions $M \times L$) is then defined as:[14]

$$\mathbf{A} = [\vec{a}(\theta_1), \dots, \vec{a}(\theta_L)]$$

so that the received signal vector $\vec{x}$ is:[14]

$$\vec{x} = \mathbf{A}\vec{\Gamma}$$

where $\vec{\Gamma} = [\gamma_1 \dots \gamma_L]$ is the vector of complex attenuations along the $L$ paths.[14] OFDM transmits data over multiple sub-carriers, so the measured received signals $\vec{x}$ corresponding to each sub-carrier form the matrix $\mathbf{X}$, expressed as:[14]

$$\mathbf{X} = [\vec{x}_1 \dots \vec{x}_L] = \mathbf{A}[\vec{\Gamma}_1 \dots \vec{\Gamma}_L] = \mathbf{A}\mathbf{F}$$

The matrix $\mathbf{X}$ is given by the channel state information (CSI) matrix, which can be extracted from modern wireless cards with special tools such as the Linux 802.11n CSI Tool.[22]

This is where the MUSIC algorithm is applied: first the eigenvectors of $\mathbf{X}\mathbf{X}^{H}$ are computed (where $\mathbf{X}^{H}$ is the conjugate transpose of $\mathbf{X}$), and the eigenvectors corresponding to the eigenvalue zero (the noise subspace) are used to recover the steering vectors and hence the matrix $\mathbf{A}$.[14] The AoAs can then be deduced from this matrix and used to estimate the position of the client device through triangulation.
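To make the MUSIC step concrete, here is a minimal numpy sketch (not SpotFi or any published implementation; the carrier frequency, half-wavelength spacing, antenna count, and synthetic data are all assumptions for the example). It forms the sample covariance, takes its noise subspace, and scans a grid of candidate angles for pseudospectrum peaks:

```python
import numpy as np

C = 3e8          # speed of light, m/s
F = 5.2e9        # carrier frequency (assumed), Hz
D = C / F / 2    # half-wavelength antenna spacing (assumed), m

def steering(theta, m):
    """Steering vector a(theta) for an m-element uniform linear array."""
    return np.exp(-2j * np.pi * D * np.sin(theta) * F / C * np.arange(m))

def music_spectrum(X, n_paths, grid=np.linspace(-np.pi / 2, np.pi / 2, 361)):
    """MUSIC pseudospectrum over candidate AoAs; X is M x N snapshots
    (e.g. CSI across sub-carriers). Peaks mark the estimated AoAs."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]      # sample covariance
    eigval, eigvec = np.linalg.eigh(R)   # eigenvalues in ascending order
    En = eigvec[:, :M - n_paths]         # noise subspace
    spec = np.array([
        1.0 / np.abs(steering(t, M).conj() @ En @ En.conj().T @ steering(t, M))
        for t in grid
    ])
    return grid, spec

# Synthetic test: two paths at -20 and +30 degrees, 8 antennas.
rng = np.random.default_rng(0)
A = np.column_stack([steering(t, 8) for t in np.radians([-20, 30])])
S = rng.standard_normal((2, 200)) + 1j * rng.standard_normal((2, 200))
N = rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200))
grid, spec = music_spectrum(A @ S + 0.1 * N, n_paths=2)

# Pick the two largest local maxima of the pseudospectrum.
peaks = [i for i in range(1, len(spec) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
top = sorted(peaks, key=lambda i: spec[i])[-2:]
print(sorted(np.degrees(grid[top])))  # approximately [-20, 30]
```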
Though this technique is usually more accurate than others, it may require special hardware to deploy, such as an array of six to eight antennas[10] or rotating antennas.[21] SpotFi[14] proposes the use of a superresolution algorithm that takes advantage of the number of measurements taken by each of the antennas of Wi-Fi cards with only three antennas, and also incorporates ToF-based localization to improve its accuracy.

The time-of-flight (ToF) localization approach uses timestamps provided by the wireless interfaces to calculate the ToF of signals, and then uses this information to estimate the distance and relative position of one client device with respect to access points. The granularity of such time measurements is on the order of nanoseconds, and systems using this technique have reported localization errors on the order of 2 m.[14] Typical applications for this technology are tagging and locating assets in buildings, for which room-level accuracy (~3 m) is usually enough.[24] The time measurements taken at the wireless interfaces rely on the fact that RF waves travel close to the speed of light, which remains nearly constant in most indoor propagation media; consequently, the signal propagation speed (and thus the ToF) is not affected by the environment as much as RSSI measurements are.[23] Unlike traditional ToF-based echo techniques, such as those used in RADAR systems, Wi-Fi echo techniques use regular data and acknowledgement communication frames to measure the ToF.[23]

As in the RSSI approach, the ToF is used only to estimate the distance between the client device and the access points; a trilateration technique can then be used to calculate the estimated position of the device relative to the access points.[24] The greatest challenges in the ToF approach are clock-synchronization issues, noise, sampling artifacts, and multipath channel effects.[24] Some techniques use mathematical approaches to remove the need for clock synchronization.[15] More recently, the Wi-Fi Round Trip Time standard has provided fine ToF ranging capabilities to Wi-Fi.

As of 2019, French law requires drones weighing more than 800 grams to broadcast their GPS coordinates, speed, heading, aircraft type, and serial numbers via Wi-Fi.[25] The data is encoded in Wi-Fi beacon frames via an 802.11 vendor element (with the SGSDN's OUI of 6A:5C:35[26]) containing TLV-encoded subfields specified in the legislation.[27] Although the data format is intended for use in moving aerial vehicles, it can easily be adapted to static Wi-Fi access points (i.e. by setting the horizontal-speed field to zero). The GPS coordinates from these broadcasts are specified to a precision of 10−5 decimal degrees and can be read by Wi-Fi client devices, then used to infer the position of the client device in conjunction with confirmation from other techniques and data sources.

Citing the specific privacy concerns arising from WPS, Google suggested a unified approach for excluding a particular access point from taking part in determining location with WPS, which requires every access-point owner to deliberately opt out each access point to be excluded.[28] Appending "_nomap" to a wireless access point's SSID excludes it from Google's WPS database.[29] Mozilla Location Service honors _nomap as a method of opting out of its location service.[30] A number of public Wi-Fi location databases are maintained by active projects.
https://en.wikipedia.org/wiki/Wi-Fi_positioning_system
RONJA (Reasonable Optical Near Joint Access) is a free-space optical communication system developed in the Czech Republic by Karel Kulhavý of Twibright Labs and released in 2001. It transmits data wirelessly using beams of light. Ronja can be used to create a 10 Mbit/s full-duplex Ethernet point-to-point link; an estimated 1,000 to 2,000 links have been built worldwide.[4]

The basic configuration has a range of 1.4 km (0.87 mi). The device consists of a receiver and transmitter pipe (optical head) mounted on a sturdy adjustable holder. Two coaxial cables connect the rooftop installation with a protocol translator installed in the house near a computer or switch. By doubling or tripling the transmitter pipe, the range can be extended to 1.9 km (1.2 mi).

Building instructions, blueprints, and schematics are published under the GNU Free Documentation License, with development using only free-software tools. The author calls this level of freedom "User Controlled Technology".[5] Ronja is a project of Twibright Labs.

The building instructions are written with an inexperienced builder in mind; basic operations like drilling, soldering, etc. are explained.[6] Several techniques – drilling templates,[7] detailed checks after soldering,[8][9][10][11] and testing procedures[12][13][14] – are employed to minimize errors at critical places and to speed up the work. Printed circuit boards can be downloaded ready for manufacture, with instructions for the fab house.[15][16] People with no previous experience in building electronics have reported on the mailing list that the device ran on the first try. 154 installations worldwide have been registered in a gallery with technical data and pictures.[2]

With the brightest variant of the Lumileds HPWT-BD00-F4000 LED and 130 mm diameter inexpensive magnifying-glass lenses, the range is 1.4 km (0.87 mi).[5][17] The dimmer but more affordable E4000 variant of the HPWT-BD00 yields 1.3 km (0.81 mi).[18] The speed is always 10 Mbit/s full duplex, regardless of the distance.

By definition, clear visibility between the transmitter and receiver is essential: if the beam is obscured in any way, the link stops working. Typically, problems may occur during conditions of snow or dense fog.[20][21] One device weighs 15.5 kg (34 lb)[1] and requires 70 hours of building time.[22] Taking advantage of full duplex requires the ability to set full duplex manually on the network card or switch,[23] since Ronja does not support autonegotiation.[1] It must be plugged directly into a PC or switch using the integral 1 metre (3 ft 3 in) Ethernet cable.[1]

A complete RONJA system is made up of two transceivers: two optical transmitters and two optical receivers, assembled individually or as a combination. The complete system layout is shown in the block diagram.

The usual approach in FSO (free-space optics) preamplifiers is to employ a transimpedance amplifier: a very sensitive broadband high-speed device featuring a feedback loop. The layout is therefore plagued with stability problems, and special compensation of the PIN diode capacitance must be performed, which rules out selection from a wide range of cheap PIN photodiodes with varying capacitances.
Ronja, however, uses a feedbackless design[8] in which the PIN diode has a high working electrical resistance (100 kilohms),[8] which together with the total input capacitance (roughly 8 pF: 5 pF PIN and 3 pF[24] input MOSFET cascode) makes the device operate with a passband on the 6 dB/oct slope of the low pass formed by the PIN working resistance and the total input capacitance.[25][26] The signal is then immediately amplified to remove the danger of contamination by signal noise, and compensation of the 6 dB/oct slope is performed by a derivator element on the programming pins[27] of an NE592 video amplifier.[28][26] A surprisingly flat characteristic is obtained. If the PIN diode is instead equipped with a 3 kΩ working resistor to operate in flat-band mode, the range is reduced to about 30% due to thermal noise from the 3 kΩ resistor.

The HSDL-4220 infrared LED is not inherently suitable for 10 Mbit/s operation. It has a bandwidth of 9 MHz,[29] whereas 10 Mbit/s Manchester-modulated systems need a bandwidth of around 16 MHz. Operation in a usual circuit with current drive would lead to substantial signal corruption and range reduction. Therefore, Twibright Labs developed a special driving technique consisting of driving the LED directly with a 15-fold 74AC04 gate output in parallel, with RF voltage applied current-unlimited directly to the LED through large capacitors.[30] As the voltage needed to keep the nominal LED average current (100 mA) varies with temperature and component tolerances, an AC-bypassed current-sense resistor is put in series with the LED. A feedback loop measures the voltage on this resistor and keeps it at a preset level by varying the supply voltage of the 74AC04 gates. The nominally digital[31] 74AC04 thus operates as a structured power CMOS switch completely in analog mode. This way the LED junction is flooded with and cleared of carriers as quickly as possible, basically by short-circuit discharge. This pushes the speed of the LED to its maximum, making the output optical signal fast enough that the range/power ratio is the same as with the faster red HPWT-BD00-F4000 LED. The side effects of this brutal driving technique are: 1) the LED overshoots at the beginning of longer (5 MHz/1 MHz) impulses to about 2x brightness, which was measured to have no adverse effect on range; 2) a blocking ceramic capacitor bank backing up the 74AC04 switching array is crucial for correct operation, because charging and discharging of the LED is done by short circuit, and underdimensioning this bank causes the leading and trailing edges of the optical output to grow longer.

Ronja Twister is an electronic interface for a free-space optical datalink based on counter and shift register chips. It is a part of the Ronja design, effectively an optical Ethernet transceiver without the optical drive part.[32] The original design has been superseded by Twister2, but the logic circuit remained the same.[33]

Soderberg, studying Ronja sociologically, writes: "Arguably, the first project that vindicated the methods and licensing schemes of free software development, applied those practices to open hardware development, and pulled off a state-of-the-art technology without any backing from universities or firms, was the Ronja project."[34] The whole toolchain is built strictly upon free tools[35] and the source files are provided, free, under the GPL.[36] This allows anyone to enter development, start manufacture, or invest in the technology without entry costs.
Such costs can typically include software licence costs, time invested in resolving compatibility issues between proprietary applications, or the costs of intellectual property licence negotiations. The decision to conceive the project this way was inspired by the observed organizational efficiency of free software. At Christmas 2001, Ronja became the world's first 10 Mbit/s free-space optics device with free sources.[37]
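As a back-of-the-envelope illustration of the receiver front end described earlier, the sketch below models the PIN working resistance and total input capacitance as a simple first-order RC low-pass (a simplifying assumption) to show why the 6 dB/oct compensation is needed up to the ~16 MHz Manchester bandwidth:

# Back-of-the-envelope model of the front-end roll-off: the PIN working
# resistance and total input capacitance treated as a first-order RC
# low-pass (a simplifying assumption).
import math

R = 100e3    # PIN working resistance, ohms
C = 8e-12    # total input capacitance (PIN + MOSFET cascode), farads

f_corner = 1 / (2 * math.pi * R * C)
print(f"corner frequency: {f_corner / 1e3:.0f} kHz")     # ~199 kHz

# Above the corner the response falls at 6 dB/octave, so at the ~16 MHz
# needed for 10 Mbit/s Manchester data the uncompensated loss would be:
f_signal = 16e6
loss_db = 20 * math.log10(f_signal / f_corner)
print(f"roll-off at 16 MHz: {loss_db:.0f} dB")           # ~38 dB to compensate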
https://en.wikipedia.org/wiki/RONJA
A spatial light modulator (SLM) is a device that can control the intensity, phase, or polarization of light in a spatially varying manner. A simple example is an overhead projector transparency. Usually when the term SLM is used, it means that the transparency can be controlled by a computer. SLMs are primarily marketed for image projection, display devices,[1] and maskless lithography. SLMs are also used in optical computing and holographic optical tweezers.

Usually, an SLM modulates the intensity of the light beam. However, it is also possible to produce devices that modulate the phase of the beam, or both the intensity and the phase simultaneously. It is likewise possible to produce devices that modulate the polarization of the beam, or the polarization, phase, and intensity simultaneously.[2] SLMs are used extensively in holographic data storage setups to encode information into a laser beam, similarly to the way a transparency does for an overhead projector. They can also be used as part of a holographic display technology. In the 1980s, large SLMs were placed on overhead projectors to project computer monitor contents onto the screen. Since then, more modern projectors have been developed in which the SLM is built inside the projector. These are commonly used in meetings for presentations. Liquid crystal SLMs can help solve problems related to laser microparticle manipulation; in this case, spiral beam parameters can be changed dynamically.[3]

As its name implies, the image on an electrically addressed spatial light modulator (EASLM) is created and changed electronically, as in most electronic displays. EASLMs usually receive input via a conventional interface such as VGA or DVI. They are available at resolutions up to QXGA (2048 × 1536). Unlike ordinary displays, they are usually much smaller (having an active area of about 2 cm²) as they are not normally meant to be viewed directly. An example of an EASLM is the digital micromirror device (DMD) at the heart of DLP displays, or LCoS displays using ferroelectric liquid crystals (FLCoS) or nematic liquid crystals (electrically controlled birefringence effect). Spatial light modulators can be either reflective or transmissive depending on their design and purpose.[4]

DMDs, short for digital micromirror devices, are spatial light modulators that work with binary amplitude-only modulation.[5][6] Each pixel on the SLM can only be in one of two states: "on" or "off". The main purpose of the SLM is to control and adjust the amplitude of the light. Phase modulation can nevertheless be achieved using a DMD by applying Lee holography techniques or the superpixel method.[7][6]

The image on an optically addressed spatial light modulator (OASLM), also known as a light valve, is created and changed by shining light encoded with an image on its front or back surface. A photosensor allows the OASLM to sense the brightness of each pixel and replicate the image using liquid crystals. As long as the OASLM is powered, the image is retained even after the light is extinguished. An electrical signal is used to clear the whole OASLM at once. OASLMs are often used as the second stage of a very-high-resolution display, such as one for a computer-generated holographic display. In a process called active tiling, images displayed on an EASLM are sequentially transferred to different parts of an OASLM, before the whole image on the OASLM is presented to the viewer.
As EASLMs can run as fast as 2500 frames per second, it is possible to tile around 100 copies of the image from the EASLM onto an OASLM while still displaying full-motion video on the OASLM. This potentially gives images with resolutions above 100 megapixels.

Multiphoton intrapulse interference phase scan (MIIPS) is a technique based on a computer-controlled phase scan of a linear-array spatial light modulator. Through the phase scan of an ultrashort pulse, MIIPS can not only characterize but also manipulate the pulse to obtain the needed pulse shape at the target spot (such as a transform-limited pulse for optimized peak power, or other specific pulse shapes). This technique features full calibration and control of the ultrashort pulse, with no moving parts and a simple optical setup. Linear-array SLMs that use nematic liquid crystal elements are available that can modulate amplitude, phase, or both simultaneously.[8][9]
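The active-tiling numbers quoted above can be checked with a little arithmetic; the sketch below assumes a QXGA-resolution EASLM and a 25 fps video rate purely for illustration:

# Rough arithmetic behind the active-tiling figures quoted above.
easlm_fps = 2500                 # frames per second the EASLM can deliver
video_fps = 25                   # assumed full-motion video rate (illustrative)
tiles = easlm_fps // video_fps
print(tiles)                     # 100 tiles per displayed video frame

# With a QXGA-resolution EASLM (2048 x 1536), the tiled OASLM image holds:
total_pixels = 2048 * 1536 * tiles
print(f"{total_pixels / 1e6:.0f} megapixels")   # ~315 MP, i.e. well above 100 MP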
https://en.wikipedia.org/wiki/Spatial_light_modulator
Super Wi-Fi refers to IEEE 802.11g/n/ac/ax Wi-Fi implementations over the unlicensed 2.4 and 5 GHz Wi-Fi bands, with performance enhancements for antenna control, multiple-path beam selection, advanced best-path control, and applied intelligence for load balancing, giving standard Wi-Fi-enabled devices bi-directional connectivity over distances of up to 1,700 meters. Hong Kong–based Altai Technologies[1] developed and patented Super Wi-Fi technology and manufactures a product line of base stations and access points that has been deployed extensively around the world since 2007. Due to its extended range and advanced interference mitigation, Super Wi-Fi is primarily used for expansive outdoor and heavy industrial use cases.[2] Krysp Wireless, LLC[3] is Altai Technologies' master distributor for North America, focused on the sale and distribution of Super Wi-Fi products for large enterprises, WISPs, and municipal deployments. Altai's Super Wi-Fi technology should not be confused with the FCC's use of the term, which relates to plans proposed in 2012 for using TV white space spectrum to support delivery of long-range internet access.

Super Wi-Fi is a term originally coined by the United States Federal Communications Commission (FCC) to describe a wireless networking proposal which the FCC plans to use for the creation of longer-distance wireless Internet access.[4][5] The use of the trademark "Wi-Fi" in the name has been criticized because it is neither based on Wi-Fi technology nor endorsed by the Wi-Fi Alliance.[4] A trade show has also been called the "Super WiFi Summit" (without hyphen).[6] Various standards such as IEEE 802.22 and IEEE 802.11af have been proposed for this concept. The term "White-Fi"[7] has also been used to indicate the use of white space for IEEE 802.11af.[8][9]

Altai Technologies' Super Wi-Fi leverages dynamic use of the unlicensed 2.4 and 5 GHz bands to seamlessly migrate nomadic device connections from one band to the other depending on their distance from the Super Wi-Fi base station/access point. This dynamic use of both unlicensed bands, combined with patented throughput optimization and interference mitigation, is what supports Super Wi-Fi's extended range. Conversely, the FCC's Super Wi-Fi proposal is a network backhaul solution that uses the lower-frequency white spaces between television channel frequencies.[10] These lower frequencies allow the signal to travel further and penetrate walls better than the higher frequencies previously used.[10] The FCC's plan was to allow those white space frequencies to be used for free, as happens with shorter-range Wi-Fi and Bluetooth.[10] However, due to concerns about interference to broadcast, Super Wi-Fi devices cannot access the TV spectrum at will. The FCC has mandated the use of a TV white space database (also referred to as a geolocation database), which must be consulted by Super Wi-Fi devices before they gain access to the VHF-UHF spectrum. The white space database evaluates the potential for interference to broadcast and either grants or denies access of Super Wi-Fi devices to the VHF-UHF spectrum.
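The database-gated channel access just described can be pictured with a small schematic sketch; the database contents, coordinate grid, and function names here are hypothetical placeholders, not any real TV white space database API:

# Schematic sketch only: gating channel use on a geolocation-database
# lookup, as the FCC rules require. The database contents, grid, and
# function names are hypothetical placeholders, not a real API.
WHITE_SPACE_DB = {
    # (lat, lon) rounded to a 0.01-degree grid -> channels free of broadcast use
    (29.76, -95.37): {21, 27, 34, 41},   # hypothetical entry for east Houston
}

def allowed_channels(lat, lon):
    return WHITE_SPACE_DB.get((round(lat, 2), round(lon, 2)), set())

def may_transmit(lat, lon, channel):
    # Transmission is permitted only if the database grants this channel here.
    return channel in allowed_channels(lat, lon)

print(may_transmit(29.76, -95.37, 27))   # True: channel granted
print(may_transmit(29.76, -95.37, 30))   # False: potential interference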
Research continues to evaluate the coverage and performance potential of Super Wi-Fi networks.[11][12] Altai Technologies' Super Wi-Fi deployment use cases around the world include container ports, heavy industrial complexes, campus environments, mining operations, agriculture, and airports, among others.[13] Proof-of-concept deployments for the FCC's Super Wi-Fi initiative leveraging TV white space include one by Rice University, which, in partnership with the nonprofit organization Technology For All, installed the first residential deployment of Super Wi-Fi in east Houston in April 2011. The network uses white spaces for backhaul and provides access to clients using 2.4 GHz Wi-Fi.[14] A month later, a public Super Wi-Fi network was deployed in Calgary, Alberta, when Calgary-based company WestNet Wireless launched the network for free and paid subscribers.[15] The United States' first public Super Wi-Fi network was deployed in Wilmington, North Carolina, on January 26, 2012, when Florida-based company Spectrum Bridge launched a network for public use with access at Hugh MacRae Park.[16] West Virginia University launched the first campus Super Wi-Fi network on July 9, 2013.[17]

Currently, Microsoft is using TV white spaces to provide Super Wi-Fi connectivity in select regions across Africa, Asia, North America, and South America,[18] after running successful trials in 2012 in countries such as Belgium, Kenya, Switzerland, Singapore, the United Kingdom, the United States, and Uruguay. As of 2021, Microsoft runs the service under Project Mawingu in Microsoft 4Afrika to provide low-cost internet access within rural communities on the African continent.[19] The countries served include Kenya, Namibia, Tanzania, South Africa, Ghana, and Botswana.
https://en.wikipedia.org/wiki/Super_Wi-Fi
In the computer industry, vaporware (or vapourware) is a product, typically computer hardware or software, that is announced to the general public but is late, never actually manufactured, or officially canceled. Use of the word has broadened to include products such as automobiles. Vaporware is often announced months or years before its purported release, with few details about its development being released. Developers have been accused of intentionally promoting vaporware to keep customers from switching to competing products that offer more features.[1] Network World magazine called vaporware an "epidemic" in 1989 and blamed the press for not investigating whether developers' claims were true. Seven major companies issued a report in 1990 saying that they felt vaporware had hurt the industry's credibility. The United States accused several companies of announcing vaporware early enough to violate antitrust laws, but few have been found guilty.

"Vaporware" was coined by a Microsoft engineer in 1982 to describe the company's Xenix operating system, and appeared in print at least as early as the May 1983 issue of Sinclair User magazine (spelled "vapourware" in UK English).[2] It became popular among writers in the industry as a way to describe products they felt took too long to be released. InfoWorld magazine editor Stewart Alsop helped popularize it by lampooning Bill Gates with a Golden Vaporware award for the late release of his company's first version of Windows in 1985.

"Vaporware", sometimes synonymous with "vaportalk" in the 1980s,[3] has no single definition. It is generally used to describe a hardware or software product that has been announced, but that the developer is unlikely to release any time soon, if ever.[4][5] The first reported use of the word was in 1982 by an engineer at the computer software company Microsoft.[6] Ann Winblad, president of Open Systems Accounting Software, wanted to know if Microsoft planned to stop developing its Xenix operating system, as some of Open Systems' products depended on it. She asked two Microsoft software engineers, John Ulett and Mark Ursino, who confirmed that development of Xenix had stopped. "One of them told me, 'Basically, it's vaporware'," she later said. Winblad compared the word to the idea of "selling smoke", implying Microsoft was selling a product it would soon not support.[3]

Winblad described the word to influential computer expert Esther Dyson,[3] who published it for the first time in her monthly newsletter RELease 1.0. In an article titled "Vaporware" in the November 1983 issue of RELease 1.0, Dyson defined the word as "good ideas incompletely implemented". She described three software products shown at COMDEX in Las Vegas that year with bombastic advertisements, and stated that demonstrations of the "purported revolutions, breakthroughs and new generations" at the exhibition did not meet those claims.[4][7]

The practice existed before Winblad's account. In a January 1982 review of the new IBM Personal Computer, BYTE favorably noted that IBM "refused to acknowledge the existence of any product that is not ready to be put on dealers' shelves tomorrow. Although this is frustrating at times, it is a refreshing change from some companies' practice of announcing a product even before its design is finished".[8] When discussing Coleco's delay in releasing the Adam, Creative Computing in March 1984 stated that the company "did not invent the common practice of debuting products before they actually exist.
In microcomputers, to do so otherwise would be to break with a veritable tradition".[9] Recalling that a Lanier Business Products word processor became available immediately after its announcement, Creative Computing wrote that year, "If we were to re-enact that scene today, I wouldn't get my machine for at least six months, maybe a year".[10]

After Dyson's article, the word "vaporware" became popular among writers in the personal computer software industry as a way to describe products they believed took too long to be released after their first announcement.[6] InfoWorld magazine editor Stewart Alsop helped popularize its use by presenting Bill Gates, then CEO of Microsoft, with a Golden Vaporware award when Microsoft released Windows in 1985, 18 months late. Alsop presented it to Gates at a celebration for the release while the song "The Impossible Dream" played in the background.[11][12]

"Vaporware" took on another meaning when it was used to describe a product that did not exist at all. A new company named Ovation Technologies announced its office suite Ovation in 1983.[13] The company invested in an advertising campaign that promoted Ovation as a "great innovation" and showed a demonstration of the program at computer trade shows.[6][14] The demonstration was well received by writers in the press, was featured in a cover story for an industry magazine, and reportedly created anticipation among potential customers.[14] Executives later revealed that Ovation never existed; the company created the fake demonstration in an unsuccessful attempt to raise money to finish its product,[13] and Ovation "is widely considered the mother of all vaporware," according to Laurie Flynn of The New York Times.[6]

Use of the term spread beyond the computer industry. Newsweek magazine's Allan Sloan described the manipulation of stocks by Yahoo! and Amazon.com as "financial vaporware" in 1997.[15] Popular Science magazine uses a scale ranging from "vaporware" to "bet on it" to describe release dates of new consumer electronics.[16] Car manufacturer General Motors' plans to develop and sell an electric car were called vaporware by an advocacy group in 2008,[17] and Car and Driver magazine retroactively described the Vector W8 supercar as vaporware in 2017.[18]

"The term is like a scarlet letter hung around the neck of software developers. [...] Like any overused and abused word, vaporware has lost its meaning."

A product missing its announced release date, and the labeling of it as vaporware by the press, can be caused by its development taking longer than planned. Most software products are not released on time, according to researchers in 2001 who studied the causes and effects of vaporware;[12] "I hate to say yes, but yes", a Microsoft product manager stated in 1984, adding that "the problem isn't just at Microsoft". The phenomenon is so common that Lotus' release of 1-2-3 on time in January 1983, three months after announcing it, amazed many.[3]

Software development is a complex process, and developers are often uncertain how long it will take to complete any given project.[12][19] Fixing errors in software, for example, can make up a significant portion of its development time, and developers are motivated not to release software with errors because doing so could damage their reputation with customers. Last-minute design changes are also common.[12] Large organizations seem to have more late projects than smaller ones, and may benefit from hiring individual programmers on contract to write software rather than using in-house development teams.
Adding people to a late software project does not help; according to Brooks' Law, doing so increases the delay.[3] Not all delays in software are the developers' fault. In 1986, the American National Standards Institute adopted SQL as the standard database manipulation language. Software company Ashton-Tate was ready to release dBase IV, but pushed the release date back to add support for SQL, believing the product would not be competitive without it.[14] As the word became more commonly used by writers in the mid-1980s, InfoWorld magazine editor James Fawcette wrote that its negative connotations were unfair to developers in these types of circumstances.[20]

Vaporware also includes announced products that are never released because of financial problems, or because the industry changes during their development.[14] When 3D Realms first announced Duke Nukem Forever in 1997, the video game was early in its development.[21] The company's previous game, Duke Nukem 3D, released in 1996, was a critical and financial success, and customer anticipation for its sequel was high. As personal computer hardware speeds improved at a rapid pace in the late 1990s, an "arms race" developed between companies in the video game industry, according to Wired News. 3D Realms repeatedly moved the release date back over the next 12 years to add new, more advanced features. By the time 3D Realms went out of business in 2009 with the game still unreleased, Duke Nukem Forever had become synonymous with the word "vaporware" among industry writers.[22][23] The game was revived and released in 2011; however, after a 13-year period of fan anticipation and design changes in the industry, it received a mostly negative reception from critics and fans.

A company notorious for vaporware can improve its reputation. In the 1980s, video game maker Westwood Studios was known for shipping products late. By 1993, however, it had so improved that Computer Gaming World reported "many publishers would assure [us] that a project was going to be completed on time because Westwood was doing it".[24]

Announcing products early, months or years before their release date[25] (also called "preannouncing"[26]), has been an effective way for some developers to make their products successful. It can be seen as a legitimate part of their marketing strategy, but is generally not popular with the industry press.[27] The first company to release a product in a given market often gains an advantage: it can set the standard for similar future products, attract a large number of customers, and establish its brand before competitors' products are released.[14] Public relations firm Coakley-Heagerty used an early announcement in 1984 to build interest among potential customers. Its client was Nolan Bushnell, formerly of Atari Inc., who wanted to promote his new Sente Technologies, but whose contract with Atari prohibited doing so until a later date. The firm created an advertising campaign—including brochures and a shopping-mall appearance—around a large ambiguous box covered in brown paper to increase curiosity until Sente could be announced.[3]

Early announcements send signals not only to customers and the media, but also to providers of support products, regulatory agencies, financial analysts, investors, and other parties.[27] For example, an early announcement can relay information to vendors, letting them know to prepare marketing and shelf space.
It can signal third-party developers to begin work on their own products, and it can be used to persuade a company's investors that it is actively developing new, profitable ideas.[26] Microsoft described this in 1995, during United States v. Microsoft, as "not in fact vaporware, but pre-disclosure" if not done with "a desire to mislead".[6] When IBM announced its Professional Workstation computer in 1986, it noted the lack of third-party programs written for it at the time, signaling those developers to start preparing. Microsoft usually announces information about its operating systems early because third-party developers depend on that information to develop their own products.[26] Alsop proposed in 1995 that instead of early public announcements, companies should privately notify important customers under nondisclosure agreements.[6]

A developer can strategically announce a product that is in the early stages of development, or one whose development has not yet begun, to gain a competitive advantage over other developers.[28] In addition to the "vaporware" label, the press has also called this "ambush marketing" and "fear, uncertainty and doubt" (FUD).[26] If the announcing developer is a large company, this may be done to influence smaller companies to stop development of similar products: the smaller company might decide its product will not be able to compete, and that it is not worth the development costs.[28] It can also be done in response to a competitor's already-released product. The goal is to make potential customers believe a second, better product will be released soon, so that they reconsider buying from the competitor and wait.[29] In 1994, as customer anticipation increased for Microsoft's new version of Windows (codenamed "Chicago"), Apple announced a set of upgrades to its own System 7 operating system that were not due to be released until nearly two years later. The Wall Street Journal wrote that Apple did this to "blunt Chicago's momentum".[30]

A premature announcement can cause others to respond with their own. When VisiCorp announced Visi On in November 1982, it promised to ship the product by spring 1983. The news forced Quarterdeck Office Systems to announce in April 1983 that its DESQ would ship in November 1983. Microsoft responded by announcing Windows 1.0 in fall 1983, and Ovation Technologies followed by announcing Ovation in November. InfoWorld noted in May 1984 that of the four products only Visi On had shipped, albeit more than a year late and with only two supported applications.[3]

"my own estimate is that at the time of announcement, 10% of software products don't actually exist [...] Vendors that are unwilling to [prove it exists] shouldn't announce their packages to the press"

Industry publications widely accused companies of using early announcements intentionally to gain a competitive advantage over others. In his 1989 Network World article, Joe Mohen wrote that the practice had become a "vaporware epidemic", and blamed the press for not investigating claims by developers.
"If the pharmaceutical industry were this careless, I could announce a cure for cancer today – to a believing press."[31]In 1985 Stewart Alsop began publishing his influential monthlyVaporlist, a list of companies he felt announced their products too early, hoping to dissuade them from the practice;[6]among the entries in January 1988 were aVerbatim Corp.optical drivethat was 30 months late,WordPerfectfor Macintosh (12 months), IBMOS/2 1.1(nine months), and Lotus 1-2-3 for OS/2 and Macintosh (nine and three months late, respectively).[32]WiredMagazine began publishing a similar list in 1997. Seven major software developers—including Ashton-Tate,Hewlett-Packard, andSybase—formed a council in 1990, and issued a report condemning the "vacuous product announcement dubbed vaporware and other misrepresentations of product availability" because they felt it had hurt the industry's credibility.[33] In the United States, announcing a product that does not exist to gain a competitive advantage is illegal via Section 2 of theSherman Antitrust Actof 1890, but few hardware or software developers have been found guilty of it. The section requires proof that the announcement is both provably false, and has actual or likely market impact.[34]False or misleading announcements designed to influence stock prices are illegal under United Statessecurities fraudlaws.[35]The complex and changing nature of the computer industry, marketing techniques, and lack of precedent for applying these laws to the industry can mean developers are not aware their actions are illegal. TheU.S. Securities and Exchange Commissionissued a statement in 1984 with the goal of reminding companies that securities fraud also applies to "statements that can reasonably be expected to reach investors and the trading markets".[36] Several companies have been accused in court of using knowingly false announcements to gain market advantage. In 1969, the United States Justice Department accused IBM of doing this in the caseUnited States v. IBM. After IBM's competitor,Control Data Corporation(CDC), released a computer, IBM announced theSystem/360 Model 91. The announcement resulted in a significant reduction in sales of CDC's product. The Justice Department accused IBM of doing this intentionally because the System/360 Model 91 was not released until two years later.[37][38]IBM avoided preannouncing products during the antitrust case, but after the case ended it resumed the practice. The company likely announced itsPCjrin November 1983—four months before general availability in March 1984—to hurt sales of rival home computers during theimportant Christmas sales season.[39][40]In 1985The New York Timeswrote[41] Because of its position in the industry, an announcement of a future I.B.M. product, or even a rumor of one, is enough to slow competitors' sales. Some critics say that I.B.M. is trying to lock out competitors when it issues statements outlining the general trend of future products. I.B.M. insists the practice is necessary to help customer planning. The practice was not called "vaporware" at the time, but publications have since used the word to refer specifically to it. Similar cases have been filed againstKodak,AT&T, andXerox.[42] US District JudgeStanley Sporkinwas a vocal opponent of the practice during his review of the settlement resulting fromUnited States v. Microsoft Corp.in 1994. 
"Vaporware is a practice that is deceitful on its face and everybody in the business community knows it," said Sporkin.[43]One of the accusations made during the trial was that Microsoft has illegally used early announcements. The review began when three anonymous companies protested the settlement, claiming the government did not thoroughly investigate Microsoft's use of the practice. Specifically, they claimedMicrosoftannounced its Quick Basic 3 program to slow sales of its competitorBorland's recently released Turbo Basic program.[42][6]The review was dismissed for lack of explicit proof.[42]
https://en.wikipedia.org/wiki/Vaporware
WirelessHART, within telecommunications and computing, is a wireless sensor networking technology based on the Highway Addressable Remote Transducer Protocol (HART). Developed as a multi-vendor, interoperable wireless standard, WirelessHART was defined for the requirements of process field device networks. The protocol utilizes a time-synchronized, self-organizing, and self-healing mesh architecture, and supports operation in the 2.4 GHz ISM band using IEEE 802.15.4 standard radios. The underlying wireless technology is based on the work of Dust Networks' TSMP technology.[1]

The standard was initiated in early 2004 and developed by 37 HART Communication Foundation (HCF) companies that, among others, included ABB, Emerson, Endress+Hauser, Pepperl+Fuchs, Siemens, Freescale Semiconductor, Software Technologies Group (which developed the initial WirelessHART WiTECK stack), and AirSprite Technologies, which went on to form WiTECK, an open non-profit membership organization whose mission is to provide a reliable, cost-effective, high-quality portfolio of core enabling system software for industrial wireless sensing applications under a company- and platform-neutral umbrella. WirelessHART was approved by a vote of the 210-member general HCF membership, ratified by the HCF Board of Directors, and introduced to the market in September 2007.[2] On September 27, 2007, the Fieldbus Foundation, Profibus Nutzerorganisation, and HCF announced a wireless cooperation team to develop a specification for a common interface to a wireless gateway, further protecting users' investments in technology and work practices for leveraging these industry-pervasive networks. Following its completed work on the WirelessHART standard in September 2007, the HCF offered the International Society of Automation (ISA) an unrestricted, royalty-free copyright license, allowing the ISA100 committee access to the WirelessHART standard.[3]

Backward compatibility with the HART "user layer" allows transparent adaptation of HART-compatible control systems and configuration tools to integrate new wireless networks and their devices, as well as continued use of proven configuration and system-integration work practices. It is estimated that 25 million HART field devices are installed worldwide, and approximately 3 million new wired HART devices ship each year. In September 2008, Emerson became the first process automation supplier to begin production shipments of its WirelessHART-enabled products.[4] During the summer of 2009, NAMUR, an international user association in the chemical and pharmaceutical processing industries, conducted a field test of WirelessHART to verify alignment with the NAMUR requirements for wireless automation in process applications.[5] WirelessHART was approved by the International Electrotechnical Commission (IEC) in January 2009, with a revision released in April 2010. The latest edition, version 2, was released in 2016 as IEC/PAS 62591:2016.[6]
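The time-synchronized mesh access mentioned above can be sketched as slotted, channel-hopping scheduling in the style of TSMP; the 10 ms slot length, 100-slot superframe, and hop sequence below are illustrative assumptions rather than the normative WirelessHART schedule:

# Illustrative TSMP/TDMA-style slot scheduling of the kind a
# time-synchronized mesh uses. The slot, superframe, and hop sequence
# here are simplifying assumptions, not the normative schedule.
SLOT_MS = 10
CHANNELS = list(range(11, 26))   # IEEE 802.15.4 2.4 GHz channels 11-25

def slot_state(asn, superframe_len, link_slot, channel_offset):
    """asn: absolute slot number since network start.
    Returns (is_active, channel) for a link owning one slot per superframe."""
    is_active = (asn % superframe_len) == link_slot
    # Channel hopping: the hop index advances every slot.
    channel = CHANNELS[(asn + channel_offset) % len(CHANNELS)]
    return is_active, channel

# A link assigned slot 3 with channel offset 4, over three superframes:
for asn in (3, 103, 203):
    print(asn, slot_state(asn, superframe_len=100, link_slot=3, channel_offset=4))
# -> active every 100 slots (every 1.0 s), on a different channel each time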
https://en.wikipedia.org/wiki/WirelessHART
The EnOcean technology is an energy harvesting wireless technology used primarily in building automation systems, but also in other application fields such as industry, transportation, and logistics. The energy harvesting wireless modules are manufactured and marketed by the company EnOcean, headquartered in Oberhaching near Munich. The modules combine micro energy converters with ultra-low-power electronics and wireless communications, enabling batteryless wireless sensors, switches, and controls. In March 2012, the EnOcean wireless standard was ratified as the international standard ISO/IEC 14543-3-10,[1] which is optimized for wireless solutions with ultra-low power consumption and energy harvesting. The standard covers OSI (Open Systems Interconnection) layers 1-3: the physical, data link, and networking layers. EnOcean offers its technology and licenses for the patented features within the EnOcean Alliance framework.

EnOcean technology is based on the energetically efficient exploitation of slight mechanical motion and other potentials from the environment, such as indoor light and temperature differences, using the principles of energy harvesting. In order to transform such energy fluctuations into usable electrical energy, electromagnetic, solar cell, and thermoelectric energy converters are used. EnOcean-based products (such as sensors and light switches) perform without batteries and are engineered to operate maintenance-free. The radio signals from these sensors and switches can be transmitted wirelessly over a distance of up to 300 meters in the open and up to 30 meters inside buildings. Early designs from the company used piezo generators, but these were later replaced with electromagnetic energy sources to reduce the operating force (3.5 newtons) and increase the service life to 100 operations a day for more than 25 years.

EnOcean wireless data packets are relatively small, each packet being only 14 bytes long, and are transmitted at 125 kbit/s. RF energy is only transmitted for the 1's of the binary data, reducing the amount of power required. Three packets are sent at pseudo-random intervals, reducing the possibility of RF packet collisions. Modules optimized for switching applications transmit additional data packets on the release of push-button switches, enabling other features such as light dimming to be implemented.[2] The transmission frequencies used for the devices are 902 MHz, 928.35 MHz, 868.3 MHz, and 315 MHz. On May 30, 2017, EnOcean unveiled a series of light switches utilizing Bluetooth Low Energy radio (2.4 GHz).[3]

One example of the technology is a battery-free wireless light switch. Its advantages are that it saves time and material by eliminating the need to install wires between the switch and the controlled device, e.g., a light fixture. It also reduces noise on switched circuits, as the switching is performed locally at the load. Other lighting applications include occupancy sensors, light sensors, and key card switches. Furthermore, heating, ventilation, and air conditioning (HVAC) applications such as temperature sensors, humidity sensors, CO2 sensors, and metering sensors already use EnOcean's energy harvesting wireless technology.

EnOcean GmbH is a venture-funded spin-off company of Siemens AG founded in 2001. It is a German company headquartered in Oberhaching, near Munich, and employs 60 people in Germany and the USA.
It is a technology supplier of energy harvesting wireless modules (transmitters, receivers, transceivers, energy converters) to companies (e.g. Siemens Building Technologies, Distech Controls, Zumtobel, Omnio, Osram, Eltako, Wieland Electric, Pressac, Peha, Thermokon, Wago, Herga) which develop and manufacture products used in building automation (light, shading, HVAC), industrial automation, and other application fields such as the automotive industry (replacement of the conventional battery in tyre pressure sensors). The company won the Bavarian Innovation Prize 2002[4] for its technology, the award "Technology Pioneer 2006"[5] from the World Economic Forum, and the "Top-10 Product for 2007" award from Building Green,[6] and was among the global cleantech 100 in 2011.[7]

In November 2007, MK Electric, the manufacturer of consumer electrical fitments in the UK, adopted EnOcean technology for a range of wireless switches. However, pricing was far beyond the cost of providing traditional switches, so very little traction in new-build applications was gained, and the range is now entirely discontinued.

In April 2012, EnOcean wireless technology was ratified as the international wireless standard ISO/IEC 14543-3-10 (Information technology - Home Electronic Systems (HES) - Part 3-10: Wireless Short-Packet (WSP) protocol optimized for energy harvesting - Architecture and lower layer protocols).[8][9]

A group of companies including EnOcean, Texas Instruments, Omnio, Sylvania, Masco, and MK Electric formed the EnOcean Alliance in April 2008 as a non-profit, mutual-benefit organization. The EnOcean Alliance aims to internationalize this technology and is dedicated to creating interoperability between the products of OEM partners, in order to bring about a broad range of interoperable wireless monitoring and control products for use in and around residential, commercial, and industrial buildings. For this purpose the EnOcean Alliance has drawn up application-level protocols referred to as EEPs (EnOcean Equipment Profiles). Together with the three lower layers of the international wireless standard ISO/IEC 14543-3-10, these lay the foundation for a fully interoperable, open wireless technology. More than 250 companies currently belong to the EnOcean Alliance. The headquarters of the organization is in San Ramon, California. Market research company WTRS estimated that EnOcean module shipments might reach $1.4B in 2013.[10]

EnOcean is supported by Fhem[11] and ago control.[12] Fhem and ago control are GPL-licensed software suites for home automation. They are used to automate common tasks in the household, such as switching lamps, shutters, and heating, and to log events like temperature, humidity, and power consumption. Both run as servers that are controlled via a web front-end, telnet, the command line, or TCP/IP directly.
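Returning to the radio characteristics described earlier, a quick computation shows why such short telegrams suit harvested energy budgets (packet framing overheads are ignored here for simplicity):

# Quick check on the telegram timing quoted above: radio on-air time for
# a 14-byte EnOcean packet at 125 kbit/s (framing overheads ignored).
PACKET_BYTES = 14
BIT_RATE = 125_000                     # bits per second

on_air = PACKET_BYTES * 8 / BIT_RATE   # seconds
print(f"{on_air * 1e3:.3f} ms per telegram")        # 0.896 ms

# Even with the three redundant transmissions sent per event, the radio
# is on for well under 3 ms in total:
print(f"{3 * on_air * 1e3:.3f} ms for all three")   # 2.688 ms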
https://en.wikipedia.org/wiki/EnOcean
Z-Wave is a wireless communications protocol used primarily for residential and commercial building automation. It is a mesh network using low-energy radio waves to communicate from device to device,[2] allowing for wireless control of smart home devices, such as smart lights, security systems, thermostats, sensors, smart door locks, and garage door openers.[3][4] The Z-Wave brand and technology are owned by Silicon Labs. Over 300 companies involved in this technology are gathered within the Z-Wave Alliance.

Like other protocols and systems aimed at the residential, commercial, MDU, and building markets, a Z-Wave system can be controlled from a smart phone, tablet, or computer, and locally through a smart speaker, wireless keyfob, or wall-mounted panel, with a Z-Wave gateway or central control device serving as the hub or controller.[3][5] Z-Wave provides the application-layer interoperability between home control systems of different manufacturers that are a part of its alliance. There is a growing number of interoperable Z-Wave products; over 1,700 in 2017,[6] over 2,600 by 2019,[7] and over 4,000 by 2022.[8][9]

The Z-Wave protocol was developed by Zensys, a Danish company based in Copenhagen, in 1999.[10][11][12] That year, Zensys introduced a consumer light-control system, which evolved into Z-Wave as a proprietary system-on-a-chip (SoC) home automation protocol on an unlicensed frequency band in the 900 MHz range.[13] Its 100 series chipset was released in 2003, and its 200 series was released in May 2005,[3] with the ZW0201 chip offering high performance at a low cost.[14] Its 500 series chip, also known as Z-Wave Plus, was released in March 2013, with four times the memory, improved wireless range, improved battery life, an enhanced S2 security framework, and the SmartStart setup feature.[15] Its 700 series chip was released in 2019, with the ability to communicate up to 100 meters directly point-to-point, or 800 meters across an entire Z-Wave network, an extended battery life of up to 10 years, and S2 and SmartStart technology.[8][1] In July 2019, the Z-Wave Plus v2 certification was announced, designed for devices built on the 700 platform.[8] The Z-Wave Long Range (LR) specification was announced in September 2020, with up to four times the wireless range of standard Z-Wave.[8] Z-Wave's 800 series chip was released in late 2021, with improved security and battery life over the 700 series.[16]

The technology began to catch on in North America around 2005, when five companies, including Danfoss, Ingersoll-Rand, and Leviton Manufacturing, adopted Z-Wave.[12] They formed the Z-Wave Alliance, whose objective is to promote the use of Z-Wave technology, with all certified products by companies in the Alliance interoperable.[11][12] In 2005, Bessemer Venture Partners led a $16 million third seed round for Zensys.[12] In May 2006, Intel Capital announced that it was investing in Zensys, a few days after Intel joined the Z-Wave Alliance.[14] In 2008, Zensys received investments from Panasonic, Cisco Systems, Palamon Capital Partners, and Sunstone Capital.[12] Z-Wave was acquired by Sigma Designs in December 2008.[12][17] Following the acquisition, Z-Wave's U.S.
headquarters in Fremont, California, were merged with Sigma's headquarters in Milpitas, California.[12][18] As part of the changes, the trademark interests in Z-Wave were retained in the United States by Sigma Designs and acquired by a subsidiary of Aeotec Group in Europe.[19][20] On January 23, 2018, Sigma announced it planned to sell the Z-Wave technology and business assets to Silicon Labs for $240 million,[21] and the sale was completed on April 18, 2018.[22]

In 2005, there were six products on the market that used Z-Wave technology. By 2012, as smart home technology was becoming increasingly popular, there were approximately 600 products using Z-Wave technology available in the U.S.[11] As of June 2022, there are over 4,000 Z-Wave certified interoperable products.[7][9]

Z-Wave's interoperability at the application layer ensures that devices can share information and allows all Z-Wave hardware and software to work together. Its wireless mesh networking technology enables any node to talk to adjacent nodes directly or indirectly, controlling any additional nodes. Nodes that are within range communicate directly with one another; if they aren't within range, they can link with another node that is within range of both to access and exchange information.[4] In September 2016, certain parts of the Z-Wave technology were made publicly available, when then-owner Sigma Designs released a public version of Z-Wave's interoperability layer, with the software added to Z-Wave's open-source library.[23] The Z-Wave MAC/PHY is globally standardized by the International Telecommunication Union in its G.9959 recommendation.[24] The open-source availability allows software developers to integrate Z-Wave into devices with fewer restrictions. Z-Wave's S2 security, Z/IP for transporting Z-Wave signals over IP networks, and Z-Wave middleware have all been open source since 2016.[23] In 2020, the Z-Wave Alliance ratified the Z-Wave specification, opening the application layer to open-source development. The Alliance Technical Working Group manages Z-Wave specification development and maintains a library of standard implementations for Z-Wave-compliant products.[25]

Established in 2005 and re-incorporated as a non-profit in 2020, the Z-Wave Alliance is a member-driven standards development organization dedicated to market development, technical Z-Wave specification and device certification, and education on Z-Wave technology. The Z-Wave Alliance is a consortium of over 300 companies in the residential and commercial connected technology market. It certifies devices to standards that guarantee interoperability with full backwards compatibility among all generations of Z-Wave devices. These standards include specifications for reliability, range, power consumption, and device interoperability.[5][11][26][27]

In October 2013, a new protocol and interoperability certification program called Z-Wave Plus was announced, based upon new features and higher interoperability standards bundled together and required for the 500 series system on a chip (SoC), and including some features that had been available since 2012 for the 300/400 series SoCs.[28] In February 2014, the first product was certified under Z-Wave Plus.[29] In 2016, the Alliance launched a Z-Wave Certified Installer Training program to give installers, integrators, and dealers the tools to deploy Z-Wave networks and devices in their residential and commercial jobs.
That year, the Alliance announced the Z-Wave Certified Installer Toolkit (Z-CIT), a diagnostics and troubleshooting device that can be used during network and device setup and can also function as a remote diagnostics tool.[30] Z-Wave Long Range (LR) was announced in September 2020, a new specification with an increased range over regular Z-Wave signals. The LR specification is managed and certified under the Z-Wave Plus v2 certification.[8] On March 15, 2022, the Z-Wave Alliance announced that Ecolink, a security and home automation brand, was the first to complete Z-Wave LR certification, with the Ecolink 700 Series Garage Door Controller.[31] The Z-Wave Alliance maintains the Z-Wave certification program, which has two components: technical certification and market certification.[32]

In December 2019, Z-Wave announced the Z-Wave Source Code Project, in which it would release the source code of its platform for members to contribute to the advancement of the standard, under the supervision of the newly established OS Work Group. The project is available to Alliance members on GitHub.[33][34] Also in December 2019, the Z-Wave Alliance announced that the Z-Wave specification would become a ratified, multi-source wireless standard. It includes the ITU-T G.9959 PHY/MAC radio specification, the application layer, the network layer, and the host-device communication protocol. Instead of being a single-source specification, it would become a multi-source wireless smart home standard developed by collective working group members of the Z-Wave Alliance,[35] with the Z-Wave Alliance becoming a standards development organization (SDO) while continuing to manage the certification program.[36] In August 2020, the Z-Wave Alliance officially became incorporated as an independent non-profit standards development organization, with seven founding members under its new SDO structure: Alarm.com, Assa Abloy, Leedarson, Ring, Silicon Labs, StratIS, and Qolsys. Under the SDO, there are new membership levels, workgroups, and committees, including technical working groups specific to features, and certification, security, and marketing groups.[37] In 2025, Z-Wave released the 2024B specification for improved functionality and regulatory compliance, following the release of 2024A the previous year. It also introduced the new Accelerator membership level for startups and young IoT companies.[38]

Z-Wave is designed to provide reliable, low-latency transmission of small data packets at data rates up to 100 kbit/s,[39] and is suitable for control and sensor applications,[40] unlike Wi-Fi and other IEEE 802.11-based wireless LAN systems that are designed primarily for high data rates. Communication distance between two nodes is 200 meters line of sight outdoors and 50 meters line of sight indoors,[41] and with the ability of a message to hop up to four times between nodes, this gives enough coverage for most residential houses. Modulation is frequency-shift keying (FSK) with Manchester encoding,[40] and other supported modulation schemes include GFSK and DSSS-OQPSK.[42] Z-Wave uses the Part 15 unlicensed industrial, scientific, and medical (ISM) band,[43] operating on varying frequencies globally.
For instance, in Europe it operates in the 868-869 MHz band, while in North America the band varies from 908-916 MHz when Z-Wave is operating as a mesh network and 912-920 MHz when Z-Wave is operating with a star topology in Z-Wave LR mode.[44][4] Z-Wave's mesh network band competes with some cordless telephones and other consumer electronics devices, but avoids interference with Wi-Fi, Bluetooth, and other systems that operate on the crowded 2.4 GHz band.[5] The lower layers, MAC and PHY, are described by ITU-T G.9959 and are fully backwards compatible. In 2012, the International Telecommunication Union (ITU) included the Z-Wave PHY and MAC layers as an option in its G.9959 standard for wireless devices under 1 GHz. Data rates include 9600 bit/s and 40 kbit/s, with output power at 1 mW or 0 dBm.[4] Z-Wave has been released for use in various frequency bands in different parts of the world.[45][44]

Traditional hub-and-spoke networks include one central hub or access point to which all devices are connected, such as a wireless device connecting to a router. Z-Wave devices create a mesh network, where devices can communicate with each other in addition to the central hub. Advantages of a mesh network include greater range and compatibility and a stronger network.[46] Z-Wave LR devices instead operate on a star network topology that features the hub at a central point and establishes a direct connection to each device, rather than sending signals from node to node until the intended destination is reached, as in a mesh network. The key difference between a star network and a mesh network is the direct hub-to-device connection. Both Z-Wave LR and traditional Z-Wave nodes can coexist within the same network.[42]

The simplest network is a single controllable device and a primary controller. Devices can communicate with one another by using intermediate nodes to actively route around and circumvent household obstacles or radio dead spots that might occur in the multipath environment of a house.[40] A message from node A to node C can be successfully delivered even if the two nodes are not within range, providing that a third node B can communicate with nodes A and C. If the preferred route is unavailable, the message originator will attempt other routes until a path is found to the C node. Therefore, a Z-Wave network can span much farther than the radio range of a single unit; however, with several of these hops a slight delay may be introduced between the control command and the desired result.[47] Additional devices can be added at any time, as can secondary controllers, including traditional hand-held controllers, key-fob controllers, wall-switch controllers, and PC applications designed for management and control of a Z-Wave network. A Z-Wave network can consist of up to 232 devices, or up to 4,000 nodes on a single smart-home network with Z-Wave LR. Both allow the option of bridging networks if more devices are required.[4]

A device must be "included" in the Z-Wave network before it can be controlled via Z-Wave. This process (also known as "pairing" and "adding") is usually achieved by pressing a sequence of buttons on the controller and on the device being added to the network. This sequence only needs to be performed once, after which the device is always recognized by the controller. Devices can be removed from the Z-Wave network by a similar process. The controller learns the signal strength between the devices during the inclusion process and will utilize this information when calculating routes, as in the sketch below.
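A minimal sketch of this hop-limited routing idea (the topology, link table, and breadth-first search used here are illustrative; the real protocol's routing tables and explore frames are more involved):

# Illustrative hop-limited route search over a Z-Wave-style mesh.
# Links are assumed symmetric; a route is valid if it needs at most
# MAX_HOPS transmissions from source to destination.
from collections import deque

MAX_HOPS = 4                      # Z-Wave messages may hop up to four times

def find_route(links, src, dst):
    """links: dict node -> set of directly reachable nodes."""
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        if len(path) > MAX_HOPS:  # path already uses the maximum hops
            continue
        for nxt in links[path[-1]]:
            if nxt not in path:   # avoid loops
                queue.append(path + [nxt])
    return None

links = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(find_route(links, "A", "C"))   # ['A', 'B', 'C']: B relays for A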
In the event that devices have been moved and the previously stored signal strength is wrong, the controller may issue a new route resolution through one or more explore frames.

Each Z-Wave network is identified by a Network ID, and each device is further identified by a Node ID. The Network ID (also called Home ID) is the common identification of all nodes belonging to one logical Z-Wave network. The Network ID has a length of 4 bytes (32 bits) and is assigned to each device by the primary controller when the device is "included" into the network. Nodes with different Network IDs cannot communicate with each other. The Node ID is the address of a single node in the network; it has a length of 1 byte (8 bits) and must be unique within its network.[48]

The Z-Wave chip is optimized for battery-powered devices, and most of the time remains in a power-saving mode to consume less energy, waking up only to perform its function.[13] With Z-Wave mesh networks, each device bounces wireless signals around the house, which keeps power consumption low and allows devices to work for years without battery replacement.[23] For Z-Wave units to be able to route unsolicited messages, they cannot be in sleep mode; therefore, battery-operated devices are not designed as repeater units. Mobile devices, such as remote controls, are also excluded, since Z-Wave assumes that all repeater-capable devices in the network remain in their original detected position.

Z-Wave is based on a proprietary design, supported by Sigma Designs as its primary chip vendor until the Z-Wave business unit was acquired by Silicon Labs in 2018.[22][4][49] In December 2019, Silicon Labs announced that it would release the Z-Wave specification as an open wireless standard for development, to be certified by the Z-Wave Alliance.[36]

An early vulnerability was uncovered in AES-encrypted Z-Wave door locks: it could be remotely exploited to unlock doors without knowledge of the encryption keys, and because the keys were changed in the process, subsequent network messages, such as "door is open", would be ignored by the established controller of the network. The vulnerability was not due to a flaw in the Z-Wave protocol specification but was an implementation error by the door-lock manufacturer.[50] On November 17, 2016, the Z-Wave Alliance announced stronger security standards for devices receiving Z-Wave certification as of April 2, 2017. Known as Security 2 (or S2), it provides advanced security for smart home devices, gateways, and hubs.[27][51] It shores up encryption standards for transmissions between nodes, and mandates new pairing procedures for each device, with a unique PIN or QR code on each device. The new layer of authentication is intended to prevent hackers from taking control of unsecured or poorly secured devices.[52][53] According to the Z-Wave Alliance, the new security standard is the most advanced security available on the market for smart home devices and controllers, gateways, and hubs.[54] The 800 series chip, released in late 2021, continues to support standard S2 security capabilities, as well as Silicon Labs Secure Vault technology, enabling wireless devices with PSA Certification Level 3 security.
In 2022, researchers published several vulnerabilities in the Z-Wave chipsets up to the 700 series,[55] based on an open-source protocol-specific fuzzer.[56] As a result, depending on the chipset and device, an attacker within Z-Wave radio range can deny service, cause devices to crash, deplete batteries, intercept, observe, and replay traffic, and control vulnerable devices. The related CVEs (CVE-2020-9057, CVE-2020-9058, CVE-2020-9059, CVE-2020-9060, CVE-2020-9061, CVE-2020-10137) were published by CERT.[57] Z-Wave devices with 100, 200, and 300 series chipsets cannot be updated to fix the vulnerabilities; for devices with the 500 and 700 chipset series, the vulnerabilities can be mitigated through firmware updates.[58]

The chip for Z-Wave nodes is the ZW0500, built around an Intel MCS-51 microcontroller with an internal system clock of 32 MHz. The RF part of the chip contains a GFSK transceiver for a software-selectable frequency. With a power supply of 2.2-3.6 volts, it consumes 23 mA in transmit mode.[40] Its features include AES-128 encryption, a 100 kbit/s wireless channel, concurrent listening on multiple channels, and USB VCP support.[59]

At the Consumer Electronics Show on January 8, 2018, Sigma Designs introduced its Z-Wave 700 platform.[60] The 700 series chip was released in 2019.[8] It enables a new class of smart home devices that can be used outdoors, with a range of up to 300 feet, and that can operate on a coin-cell battery for up to a decade. Though the 700 series uses a 32-bit ARM Cortex SoC, it remains backward compatible with all other Z-Wave devices.[60] It includes the enhanced S2 security framework as well as the SmartStart setup feature.[8] In July 2019, the Z-Wave Alliance announced the Z-Wave Plus v2 certification, designed for devices built on the 700 platform, for stronger interoperability and security and an easier installation process.[8]

Z-Wave Long Range (LR) was announced in September 2020, a new specification with an improved range over regular Z-Wave signals.[8] The specification supports a maximum output power of 30 dBm, which can be used to extend transmission range by up to several miles. In testing, Z-Wave LR achieved a transmission distance of 1 mile (1.6 km) direct line of sight using +14 dBm output power.[61] Z-Wave LR adds an extra 100 kbit/s DSSS-OQPSK modulation to the Z-Wave protocol. The modulation is treated as a fourth channel, allowing gateways to add LR nodes to the existing Z-Wave channel scanning. Z-Wave LR also increases scalability to up to 4,000 nodes on a single smart-home network, a 20x increase compared to Z-Wave.[61] Z-Wave LR operates on low power, so that sensors can last for 10 years on a single coin cell. It is backwards compatible and interoperable with other Z-Wave devices.[8]

In December 2021, Silicon Labs announced the availability of the Z-Wave 800 system-on-chips and modules for the Z-Wave smart home and automation ecosystem, described as secure, ultra-low-powered, and wireless, for Internet of Things devices, with improved battery life compared to the 700 series.[16]

For smart home wireless networking, numerous technologies work together. Z-Wave operates in the sub-1 GHz (lower bandwidth) range rather than at 2.4 GHz (higher bandwidth) to capitalize on the application-level benefits of low power, long range, and reduced RF interference, whereas Wi-Fi and Bluetooth operate on the 2.4 GHz band, which carries a lot of traffic among devices that consume considerable power. Other network standards include Bluetooth LE and Thread.
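The sub-1 GHz range advantage mentioned above can be quantified with the free-space path loss (Friis) formula, a simplified model that ignores walls and interference:

# Free-space path loss (Friis) comparison: a simplified model (no walls,
# no interference) suggesting why sub-1 GHz operation buys range.
import math

def fspl_db(distance_m, freq_hz):
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    c = 299_792_458.0
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

d = 100.0                                    # metres
for f in (912e6, 2.4e9):                     # Z-Wave LR band vs 2.4 GHz
    print(f"{f / 1e6:.0f} MHz: {fspl_db(d, f):.1f} dB")
# 912 MHz: ~71.6 dB; 2400 MHz: ~80.1 dB -- about 8 dB less loss at any
# fixed distance, before counting the sub-GHz advantage through obstacles.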
Z-Wave has better interoperability than Zigbee, but Zigbee has a faster data transmission rate. Thread and Zigbee operate on the busy Wi-Fi standard frequency of 2.4 GHz, while Z-Wave operates below 1 GHz, which has reduced noise and congestion and a greater coverage area. All three are mesh networks.[62][63]

The Z-Wave MAC/PHY is globally standardized by the International Telecommunication Union as the ITU G.9959 radio, and the Z-Wave Interoperability, Security (S2), Middleware and Z-Wave over IP specifications were all released into the public domain in 2016; Z-Wave has become a fully ratified open-source protocol for development.[63]

OpenZWave is a C++ library, together with wrappers and supporting projects for other languages, that allows anyone to create applications to control devices on a Z-Wave network without requiring in-depth knowledge of the Z-Wave protocol. The software is aimed at application developers who wish to incorporate Z-Wave functionality into their applications.[64] As of November 17, 2022, OpenZWave is no longer being actively maintained.[65]

Matter, brought forth by the Connectivity Standards Alliance and founded on December 19, 2019, aims to unify device communication so that connected devices will work together, across both wireless technologies and smart home ecosystems. Z-Wave networks have IP at the gateway level, enabling cloud connectivity to Matter. They can also work together at the local network level.[66]
https://en.wikipedia.org/wiki/Z-Wave
CIPURSE is an open security standard for transit fare collection systems. It makes use of smart card technologies and additional security measures. The CIPURSE open security standard[1] was established by the Open Standard for Public Transportation Alliance[2] to address the needs of local and regional transit authorities for automatic fare collection systems based on smart card technologies and advanced security measures. Products developed in conformance with the CIPURSE standard,[3] and the open standard itself, are intended to reduce operating costs and increase flexibility for transport system operators.

In the past, public transport systems were often implemented using standalone, proprietary fare collection systems. In such cases, each fare collection system employed unique fare media (such as its own style of ticket printed on card) and data management systems. Because fare collection systems did not interoperate with each other, payment schemes and tokens varied widely between local and regional systems, and new systems were often costly to develop and maintain.

Transport systems are migrating to microcontroller-based fare collection systems. These are converging with similar applications and technologies, such as branded credit/debit payment cards, micropayments, multi-application cards, and Near Field Communication (NFC) mobile phones and devices. These schemes will enable passengers to use transit tokens seamlessly across multiple transit systems. These new applications demand higher levels of security than most of the existing schemes they will replace. The OSPT Alliance defined the CIPURSE standard to provide an open platform for securing both new and legacy transit fare collection[4] applications.

Systems using the CIPURSE open security standard address public transport services, collection of transport fares, and transactions related to micropayments. The transition to an open standard platform creates opportunities to adopt open standards for important parts of the fare collection system, including data management, the media interface and security. An open standard for developing secure transit fare collection solutions could make systems more cost-effective, secure, flexible, scalable and extensible.

In December 2010, the OSPT Alliance introduced the first draft of the CIPURSE standard. It employs existing, proven open standards, including the ISO/IEC 7816 smart card standard, the 128-bit Advanced Encryption Standard, and the ISO/IEC 14443 protocol layer. Designed for low-cost silicon implementations,[citation needed] the CIPURSE security concept uses an authentication scheme that is resistant to most of today's electronic attacks. Its security mechanisms include a unique cryptographic protocol for fast and efficient implementations, with robust, inherent protection against differential power analysis (DPA) and differential fault analysis attacks. Because the protocol is inherently resistant to these kinds of attacks and does not require dedicated hardware countermeasures, it should be both more secure and less costly. It is intended to guard against counterfeiting, cloning, eavesdropping, man-in-the-middle attacks and other security threats.
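The CIPURSE cryptographic protocol itself is defined by the OSPT Alliance and is not reproduced here, but the following sketch shows the general shape of an AES-128 challenge-response authentication of the kind such schemes build on: the reader sends a random challenge and the card proves knowledge of a shared key by encrypting it. Key handling, mutual authentication, and side-channel protections are all omitted.

```java
// Generic illustration of an AES-128 challenge-response exchange, the broad
// mechanism behind authentication schemes like the one CIPURSE builds on.
// This is NOT the CIPURSE protocol itself, whose details are defined by the
// OSPT Alliance; the key handling and flow here are deliberately simplified.
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class ChallengeResponseDemo {
    static byte[] aesEncrypt(byte[] key, byte[] block) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/NoPadding"); // single 16-byte block only
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        return c.doFinal(block);
    }

    public static void main(String[] args) throws Exception {
        byte[] sharedKey = new byte[16];         // AES-128 key known to card and reader
        new SecureRandom().nextBytes(sharedKey);

        byte[] challenge = new byte[16];         // reader sends a fresh random challenge
        new SecureRandom().nextBytes(challenge);

        byte[] cardResponse = aesEncrypt(sharedKey, challenge); // card proves key knowledge
        byte[] expected     = aesEncrypt(sharedKey, challenge); // reader checks the answer

        System.out.println("Card authenticated: " + Arrays.equals(cardResponse, expected));
    }
}
```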
The CIPURSE standard also allows OSPT Alliance technology providers to add functionality outside the common core (which is defined in the standard) to differentiate their products, so long as they do not jeopardize interoperability of the core functions.[5]

Introduced in late 2012, Version 2.0 of the CIPURSE Specification is the latest version. Designed as a layered, modular architecture with application-specific profiles, the open and secure CIPURSE V2 standard comprises a single, consistent set of specifications for all security, personalization, administration and life-cycle management functions needed to create a broad range of interoperable transit applications – from inexpensive single-ride or daily paper tickets, to rechargeable fixed-count or weekly plastic tickets, to longer-term smart card- or smartphone-based commuter tickets that can also support loyalty and other applications.

Three application-specific profiles – subsets of the CIPURSE V2 standard tailored for different use cases – have been defined, with which vendors are required to comply when creating products targeting these applications. Products based on different profiles can be added to fare collection systems at any time and can be used in parallel, giving transit operators the greatest flexibility in offering riders a range of transit fare options. Because they are derived from the same set of specifications, all the profiles are interoperable, reflect the same design criteria and have the same appearance, enabling developers to create products according to a family concept. With its modular “onion-layered” design, the CIPURSE standard can be enhanced in the future with additional functionality, and new profiles can be created to address changes in technology and business.

The CIPURSE V2 specification enables technology suppliers to develop and deliver innovative, more secure and interoperable transit fare collection solutions for cards, stickers, fobs, mobile phones and other consumer devices, as well as infrastructure components. In early 2013, the OSPT introduced the CIPURSE V2 Mobile Guidelines, a comprehensive set of requirements and use cases for developing and deploying CIPURSE-secured transit fare mobile apps for Near Field Communication (NFC)-enabled smartphones, tablets and other smart devices. Providing what developers need to implement and use the CIPURSE V2 open security standard on an NFC mobile device, the guidelines enable transit operators to enhance their systems to support mobile ticketing with these new form factors.

Founded by smart card manufacturers Giesecke & Devrient GmbH (G&D) and Oberthur Technologies and chip suppliers Infineon Technologies AG and INSIDE Secure S.A. (formerly INSIDE Contactless) in January 2010, the OSPT Alliance[6] collectively defined the CIPURSE standard. The Alliance partners test their products for conformance with CIPURSE to demonstrate interoperability,[7] and have engaged an independent test authority to test compliance with the standard, interoperability, and performance.[8]

The OSPT Alliance[9] is a nonprofit industry organization open to technology vendors, transit operators, government agencies, systems integrators, mobile device manufacturers, trusted service operators, consultants, industry associations and others wishing to participate in the organization's education, marketing and technology development activities.
As of February 2019, the alliance listed a number of full members.[10] The alliance is open to companies on the component supply and system integration side, as well as transport agencies and other standards bodies, who can contribute their experience and knowledge to the development of the CIPURSE open standard.
https://en.wikipedia.org/wiki/CIPURSE
Device-to-Device (D2D) communication in cellular networks is defined as direct communication between two mobile users without traversing the Base Station (BS) or core network. D2D communication is generally non-transparent to the cellular network, and it can occur on the cellular frequencies (i.e., inband) or unlicensed spectrum (i.e., outband).

In a traditional cellular network, all communications must go through the BS, even if the communicating parties are in range for proximity-based D2D communication. Communication through the BS suits conventional low-data-rate mobile services, such as voice calls and text messaging, in which users are seldom close enough for direct communication. However, mobile users in today's cellular networks use high-data-rate services (e.g., video sharing, gaming, proximity-aware social networking) in which they could potentially be in range for direct communication (i.e., D2D). Hence, D2D communication in such scenarios can greatly increase the spectral efficiency of the network. The advantages of D2D communication go beyond spectral efficiency; it can potentially improve throughput, energy efficiency, delay, and fairness.[1][2]

Existing data delivery protocols in D2D communications mainly assume that mobile nodes willingly participate in data delivery, share their resources with each other, and follow the rules of the underlying networking protocols. Nevertheless, rational nodes in real-world scenarios have strategic interactions and may act selfishly for various reasons (such as resource limitations, a lack of interest in the data, or social preferences).[3]
https://en.wikipedia.org/wiki/Device-to-device
The EZ-Link card is a rechargeable contactless smart card and electronic money system that is primarily used as a payment method for public transport such as bus and rail lines in Singapore. A standard EZ-Link card is a credit-card-sized stored-value contactless smart card that comes in a variety of colours, as well as limited edition designs. It is sold by SimplyGo Pte Ltd, a merged entity of TransitLink and EZ-Link since 2024 and a subsidiary of the Land Transport Authority (LTA), and can be used on travel modes across Singapore, including the Mass Rapid Transit (MRT), the Light Rail Transit (LRT), public buses operated by SBS Transit, SMRT Buses, Tower Transit Singapore and Go-Ahead Singapore, as well as the Sentosa Express.

Established in 2001, the first generation of the card was based on the Sony FeliCa smart card technology and was promoted as the means for speedier boarding times on the city-state's bus and rail services. It had a monopoly on public transportation fare payments in Singapore until September 2009, when the NETS FlashPay card, which had a monopoly over Electronic Road Pricing (ERP) toll payments, entered the market for transportation payments (and vice versa). EZ-Link cards are distributed and managed by EZ-Link Pte. Ltd., also a subsidiary of Singapore's Land Transport Authority.

In September 2009, CEPAS EZ-Link cards replaced the original EZ-Link card, expanding the card's usage to taxis, ERP gantries (with the dual-mode in-vehicle unit), car parks (which have been upgraded to accept CEPAS-compliant cards), convenience stores, supermarkets and fast food restaurants. Compared to NETS FlashPay, however, EZ-Link has less acceptance at retail shops. EZ-Link can also be used as a payment card at vending machines throughout the country. The account-based CEPAS EZ-Link card was launched in January 2021.[1] In March 2023, the Land Transport Authority announced plans to merge its subsidiaries TransitLink and EZ-Link into a single entity, SimplyGo.[2]

The Land Transport Authority introduced its pilot testing of the card to 100,000 volunteers on 26 February 2000. Initially for commuters who made at least five trips on the MRT/LRT per week, the card was branded as the "Super Rider". As an incentive, volunteers were given a 10% rebate off their regular fare during the one-month period.[3] Two further tests were made, with the scheme extending to frequent bus users on selected routes on an invitation basis.[4]

The card is commonly used in Singapore as a smartcard for paying transportation fees in the city-state's Mass Rapid Transit (MRT), Light Rail Transit (LRT) and public bus services. The EZ-Link function is also used in concession cards for students in nationally recognised educational institutes, full-time national service personnel serving in the Singapore Armed Forces, Singapore Civil Defence Force and Singapore Police Force, and senior citizens who are over 60 years old.[5] The system is similar to the Pasmo and ICOCA cards, and the card's use has since been expanded to retail, private transport, government services, community services, educational institutes and vending machines.
On 17 October 2007, local telco StarHub and EZ-Link Pte Ltd announced the start of a six-month trial of phones carrying a Subscriber Identity Module (SIM) EZ-Link card.[6]

Since 2009, Singapore motorists can use EZ-Link cards in their new-generation In-Vehicle Units to pay Electronic Road Pricing (ERP) and Electronic Parking System (EPS) charges.[7][8] In August 2016, EZ-Link introduced a post-paid ERP payment service called EZ-Pay.[9]

In March 2016, EZ-Link concluded a trial with the Land Transport Authority and the Infocomm Development Authority of Singapore on the use of compatible mobile phones with Near-Field Communication (NFC) technology to make public transport payments.[10]

In February 2018, EZ-Link and NTUC Social Enterprises launched a partnership to promote cashless payments. This allowed EZ-Link card holders with a linked NTUC Plus card to earn LinkPoints with EZ-Link purchases, spare change could be used to top up EZ-Link cards when customers made cash payments at Cheers convenience stores, and EZ-Link acceptance was extended to NTUC FairPrice supermarkets and NTUC Unity pharmacies.[11] However, EZ-Link payments at FairPrice and Unity stores ceased on 3 May 2023 until further notice.[12] On 12 June 2024, EZ-Link acceptance was re-enabled at FairPrice, initially with a slow rollout across a small number of outlets.[13]

In April 2018, the card also gained acceptance on NETS terminals in hawker centres across Singapore.[14] In September 2018, the EZ-Link card became part of a unified cashless payment system rolled out at 500 hawker stalls across Singapore.[15]

In April 2019, EZ-Link announced it was working with Touch 'n Go to create a dual-currency cross-border card for public transport.[16] The card was launched on 17 August 2020.[17]

In 2007, the Land Transport Authority (LTA) and the Singapore Tourism Board launched the Singapore Tourist Pass, produced by EZ-Link, to offer tourists unlimited rides on Singapore's public transport system.[18][19] In 2015, EZ-Link introduced 'EZ-Charms', trinkets with full EZ-Link functionality, such as the Hello Kitty EZ-Charms,[20] which received an overwhelming response.[21] In 2017, EZ-Link launched EZ-Link Wearables, wearable devices with full EZ-Link functionality, such as fitness trackers.[22]

A trial to test the system was held from 29 August to 28 October 2008. The trial, which involved some 5,000 commuters, generated 1.7 million transactions and confirmed that the system was ready for revenue service. Developed in-house by the LTA, SeP is built on the Singapore Standard for Contactless ePurse Application (CEPAS), which allows any smart card that complies with the standard to be used with the system and in a wide variety of payment applications. With SeP, commuters were able to use cards issued by any card issuer for transit purposes as long as the card complied with the CEPAS standard and included the transit application. Commuters could eventually use CEPAS-compliant cards for Electronic Road Pricing (ERP) payments in vehicles fitted with the new-generation In-vehicle Unit (IU), at Electronic Parking System (EPS) carparks, and with other electronic payment systems that supported the CEPAS standard. Most commuters replaced their cards during the free one-for-one direct card replacement exercise in 2009; others replaced their cards only once the old cards had run out of value, after which the old cards acquired collectors' value.
The new EZ-Link cards also have a higher stored-value limit of S$500.00, up from the previous S$100.00 limit, but most passengers keep to the S$100 limit in case the card is lost.[23]

The EZ-Link App is a mobile application developed by EZ-Link that is available on the Google Play Store and App Store. It was first released as an Android-exclusive app in 2013 under the name 'My EZ-Link Mobile App'.[24]

On 9 March 2020, EZ-Link launched the EZ-Link Wallet, an e-wallet for mobile phones. Unlike the EZ-Link card, which is based on NFC, the EZ-Link Wallet is based on QR codes, bypassing the need for payment terminals and relying instead on smartphones and a printed QR code. It is compliant with the SGQR code system. An email address and local mobile number are required to register for an EZ-Link account. Users have to top up the e-wallet with a debit/credit card, and make payments by scanning the QR code at a retail shop and entering the payment amount. Payment can be authorised with either a 6-digit PIN or the phone's fingerprint scanner. Up to 6 debit/credit cards can be saved in the EZ-Link app.[29] Users can earn EZ-Link Rewards points for each digital wallet transaction, which can be used to redeem vouchers. The EZ-Link Wallet can also be used overseas at Alipay Connect-enabled merchants in Japan. A number of payment networks are supported by the EZ-Link Wallet.

SimplyGo was launched in March 2019 for Mastercard users as a separate account-based ticketing system allowing commuters to pay their public transport fares using bank cards.[31] SimplyGo expanded to Visa on 6 June[32] and Nets on 16 November.[33] When the system launched, Senior Parliamentary Secretary for Transport Baey Yam Keng said that SimplyGo was not intended to replace other payment methods such as EZ-Link.[34] In September 2020, a pilot program to expand the use of SimplyGo with EZ-Link adult cards was launched.[35] This was followed on 28 January 2021 by the rollout of account-based EZ-Link cards for adults. Commuters could also update their existing EZ-Link cards to the new system.[36][37]

Concession cards were included in SimplyGo on 19 October 2022, with the option to upgrade student concession cards only available in 2023.[38] In March 2023, the Land Transport Authority (LTA) announced that it would merge the TransitLink SimplyGo and EZ-Link mobile apps into a single "SimplyGo" app.[2][39] On 15 June, EZ-Link Pte Ltd's (EZ-Link) and Transit Link Pte Ltd's (TransitLink) transit and travel card-related services were consolidated under the "SimplyGo" branding.[40] On 9 January 2024, the LTA announced that EZ-Link cards that had not yet been upgraded to SimplyGo, as well as Nets FlashPay cards, would be deprecated on 1 June 2024.[41][42] By then, a majority of commuters were already using SimplyGo, and the existing card-based system was near the end of its operational lifespan. As it would also be costly to run both ticketing systems, the LTA decided to proceed with SimplyGo.[43]

Many commuters expressed dissatisfaction with the change,[44] particularly the inability to ascertain the fares charged at the transaction points on buses and the MRT after their cards were upgraded to SimplyGo.[43] When the issue was raised in 2023, the LTA explained that, as most of the SimplyGo features involve back-end processing, riders could not view their stored-value card balance and deductions at MRT fare gates and bus readers.
The fare transactions could only be viewed on the SimplyGo app.[45] The LTA said that while it would be possible to implement the feature for SimplyGo users, it would take "a few more seconds" for the information from the backend to be displayed at the fare gates, and hence would slow down commuters who were entering or exiting.[46]

During the week after the LTA's announcement, several commuters attempted to upgrade their EZ-Link cards to the SimplyGo platform. The high transaction volume caused the SimplyGo system to become less stable and responsive, resulting in longer processing times and failed upgrades that led to commuters' cards being invalidated.[47] On 19 January 2024, the SimplyGo upgrade feature on ticketing machines at MRT stations was restricted to "TUK with Supervision".[48]

On 22 January, transport minister Chee Hong Tat announced that the LTA had reversed its decision and would extend the use of the card-based system. Those who had converted their cards to the new SimplyGo system during the January period could revert to the old system, if they preferred, at no additional cost.[49] Chee also acknowledged that the issues encountered during the transition could have been avoided "with better preparation". An additional S$40 million (US$28.99 million) would be invested to maintain both systems.[50]

The EZ-Link card operates on a radio frequency (RF) interface of 13.56 MHz at 212 kbit/s, with the potential for communication speeds in excess of 847 kbit/s. It employs the Manchester bit coding scheme for noise tolerance against distance fluctuation between the card and the contactless reader, and implements the Triple DES algorithm for security. An adult EZ-Link card costs S$12, inclusive of a S$5 non-refundable card cost and S$7 of card value.[51][52]

There was a problem with commuters attempting to evade paying the full fare under the prior magnetic fare card system. Under the EZ-Link system, when users tap their card on the entry card reader, the system deducts the maximum fare payable from their bus stop to the end of the bus route. If they tap their card on the exit reader when they disembark, the system returns an amount based on the remaining bus stages to the end of the route. If they fail to tap the card on the exit reader when they disembark, the maximum fare deducted at entry stands.[53]

EZ-Link card holders can top up their cards at a range of locations. A Refund Service Charge of $1 per month is charged for EZ-Link cards that have been expired for 2 years or more, until the value is refunded or fully depleted. This applies to the remaining card balance, not the initial deposit or cost of the card, which is non-refundable. Refunds may be requested at ticketing offices. In addition, commuters may replace expiring EZ-Link cards before 31 December 2024 at a subsidised cost of $3.[54]

On 10 January 2024, the LTA announced that EZ-Link adult cards which had not yet been upgraded to SimplyGo would no longer be accepted for public transport fare payment from 1 June 2024, due to the phasing out of the legacy card-based ticketing system.
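Returning to the bus fare-deduction rule described a few paragraphs above (deduct the maximum fare on entry, refund the unused stages on exit), here is a small sketch of the mechanics. The per-stage fare and stage numbers are made-up illustrative values, not actual Singapore fares.

```java
// Sketch of the bus fare-deduction rule described above: the full fare to the
// end of the route is deducted on entry, and tapping out refunds the unused
// portion. Stage fares and route lengths here are made-up illustrative values.
public class EzLinkFareDemo {
    static final double FARE_PER_STAGE = 0.10; // assumed fare per bus stage, S$

    // Deducted on tap-in: maximum fare from boarding stop to end of route.
    static double tapIn(int boardingStage, int lastStageOfRoute) {
        return (lastStageOfRoute - boardingStage) * FARE_PER_STAGE;
    }

    // Refunded on tap-out: fare for the stages the passenger did not travel.
    static double tapOutRefund(int alightingStage, int lastStageOfRoute) {
        return (lastStageOfRoute - alightingStage) * FARE_PER_STAGE;
    }

    public static void main(String[] args) {
        int boarding = 5, alighting = 12, endOfRoute = 30;
        double deducted = tapIn(boarding, endOfRoute);
        double refund = tapOutRefund(alighting, endOfRoute);
        System.out.printf("Deducted on entry: S$%.2f%n", deducted);          // S$2.50
        System.out.printf("Refunded on exit:  S$%.2f%n", refund);            // S$1.80
        System.out.printf("Net fare paid:     S$%.2f%n", deducted - refund); // S$0.70
        // A passenger who never taps out forfeits the refund and pays S$2.50.
    }
}
```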
Commuters with EZ-Link adult cards may upgrade to the SimplyGo system at any ticketing machine and retain their current cards.[55][56] The decision was reversed by the authorities on 22 January 2024 following significant backlash, and existing EZ-Link cards can continue to be used after 1 June 2024.[50]

Card-based offline debit: EZ-Link cards, Concession cards and EZ-Link Motoring cards.
✓ They can be used for retail and public transport payments, without remote management functionality.
✓ Commuters can see their fare cost and card balance at the gantry.
✓ The card-based offline debit EZ-Link cards and EZ-Link Motoring cards are compatible with dual-mode in-vehicle units for ERP and carpark payments.

The card-based offline debit EZ-Link cards were temporarily suspended from March 2022 to June 2024, to encourage adoption of the SimplyGo account-based system.[57][58] EZ-Link Motoring cards (with a non-account-based card profile and similar functionality) are still sold at 7-Eleven/Cheers convenience stores, selected Caltex petrol stations, Vicom centres and STA Inspection centres. EZ-Link Motoring cards cannot be converted for use on the SimplyGo system.[59] (* a service fee is chargeable)

Account-based online debit: SimplyGo EZ-Link cards and SimplyGo Concession cards.
As the card information is stored on a central server, the card balance can be topped up without the physical card being present. Concession cards are only available for children under 7 years old, students, full-time National Servicemen, senior citizens aged 60 years and above, persons with disabilities, and Workfare Income Supplement recipients.
✓ They are compatible with the SimplyGo system for remote management of public transport cards.
✗ Fare cost and card balance are not displayed at the gantry; commuters have to create an account and sign in to the SimplyGo website or app to view their travel history and related fares.
✗ These account-based online debit cards are not compatible with ERP and carpark payments.
• A locally issued Visa/Mastercard card is required to make top-ups.
https://en.wikipedia.org/wiki/EZ-link
FeliCa is a contactless RFID smart card system from Sony in Japan, primarily used in electronic money cards. The name stands for Felicity Card. First utilized in the Octopus card system in Hong Kong,[1] the technology is also used in cards in countries and territories such as Singapore, Japan, Indonesia, Macau, the Philippines and the United States.

FeliCa's encryption key is dynamically generated each time mutual authentication is performed, preventing fraud such as impersonation. FeliCa is externally powered, i.e. it does not need a battery to operate. The card uses power supplied by the special FeliCa card reader when the card comes into range; when the data transfer is complete, the reader stops the supply of power.

FeliCa was proposed for ISO/IEC 14443 Type C but was rejected.[citation needed] However, ISO/IEC 18092 (Near Field Communication) uses some similar modulation methods. FeliCa uses Manchester coding at 212 kbit/s in the 13.56 MHz range. A proximity of 10 centimeters or less is required for communication. FeliCa complies with JIS X6319-4: Specification of implementation for integrated circuit(s) cards – Part 4: High speed proximity cards. The standard is regulated by JICSAP (Japan IC Card System Application Council). The UK IT security evaluation and certification scheme provides more detail as to the internal architecture of the FeliCa card (RC-S860). The FeliCa IC card (hardware) and its operating system have obtained ISO 15408 Evaluation Assurance Level 4 (EAL4), a standard which indicates the security level of information technology and consumer products. FeliCa is also included as a condition of the NFC Forum Specification Compliance.[2]

A new version of the FeliCa IC chip, announced in June 2011, had enhanced security adopting Advanced Encryption Standard (AES) encryption.[3] Sony claimed the next-generation chip would have higher performance, better reliability and lower power consumption.[4] The newest generation of the technology was announced by Sony in 2020, introducing higher[clarification needed] levels of encryption and additional security options[example needed] to meet market needs.[5]

FeliCa supports simultaneous access to up to 8 blocks (1 block is 16 octets). If an IC card is moved outside of the power-supplied area during a session, the FeliCa card automatically discards incomplete data to restore the previous state.

Mobile FeliCa is a modification of FeliCa for use in mobile phones by FeliCa Networks,[1] a subsidiary company of both NTT DoCoMo and Sony. DoCoMo has developed a wallet phone concept based on Mobile FeliCa and has developed a wide network of partnerships and business models. au and SoftBank (formerly Vodafone Japan) have also licensed Mobile FeliCa from FeliCa Networks. The Osaifu-Keitai (おサイフケータイ) system (literal translation: "wallet-phone") was developed by NTT DoCoMo and introduced in July 2004, and later licensed to Vodafone and au, which introduced the product in their own mobile phone ranges under the same name. Using Osaifu-Keitai, multiple FeliCa systems (such as Suica and Edy) can be accessed from a single mobile phone. On January 28, 2006, au introduced Mobile Suica, which is used primarily on the railway networks owned by JR East. On September 7, 2016, Apple announced that Apple Pay now features FeliCa technology.
Users who purchase the iPhone 7 or Apple Watch Series 2 in Japan can add Suica cards to their Apple Pay wallets and tap their devices just like regular Suica cards.[6][7] Users can either transfer the balance from a physical Suica card to the Apple Pay wallet, or create a virtual Suica card in the wallet from the JR East application.[8] On September 12, 2017, Apple announced new iPhone 8, iPhone X, and Apple Watch Series 3 models featuring "Global FeliCa", i.e. NFC-F and licensed FeliCa middleware incorporated in all devices sold worldwide, not just those sold in Japan.[9]

On October 9, 2018, Google announced that its latest Pixel device, the Pixel 3, would support FeliCa in models purchased in Japan. This feature enables support for WAON, Suica, and various other FeliCa-based services through Google Pay and the Osaifu-Keitai system. Successor models, including the 3a and 4, have the same Mobile FeliCa support in Japan-sold models.

Sony has built a FeliCa reader/writer known as "FeliCa Port" into their VAIO PC line. Using the device, FeliCa cards can be used over the Internet for shopping and for charging FeliCa cards. An external USB FeliCa PC reader/writer, called PaSoRi, has been released as well. It is USB-powered and allows one to perform online transactions and top up EZ-Link cards in Singapore with credit cards or debit cards anywhere, as long as there is direct access to the Internet. The Sony PaSoRi reader is not compatible with the new EZ-Link cards.[10]

As FeliCa is the de facto smart card ticketing system standard in Japan, many of these cards have integrated services, which vary by region; a particular region/operator may accept multiple cards.
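Both FeliCa and the EZ-Link card described earlier rely on Manchester coding for noise tolerance. The sketch below encodes a bit string into Manchester half-bit symbols; the particular 0/1 mapping chosen (the IEEE 802.3 convention) is an assumption for illustration, as FeliCa's own line coding is fixed by JIS X6319-4.

```java
// Minimal Manchester encoder, illustrating the line coding FeliCa uses on the
// 13.56 MHz interface. The mapping chosen here (0 -> high,low; 1 -> low,high,
// i.e. the IEEE 802.3 convention) is an assumption: FeliCa's own convention is
// fixed by JIS X6319-4, which this sketch does not attempt to reproduce.
public class ManchesterDemo {
    // Each data bit becomes two half-bit symbols with a guaranteed mid-bit
    // transition, which is what gives Manchester coding its noise tolerance
    // as the card-reader distance fluctuates.
    static int[] encode(int[] bits) {
        int[] symbols = new int[bits.length * 2];
        for (int i = 0; i < bits.length; i++) {
            symbols[2 * i]     = bits[i] == 0 ? 1 : 0; // first half-bit
            symbols[2 * i + 1] = bits[i] == 0 ? 0 : 1; // second half-bit
        }
        return symbols;
    }

    public static void main(String[] args) {
        int[] data = {1, 0, 1, 1, 0};
        for (int s : encode(data)) System.out.print(s);
        System.out.println(); // prints 0110010110: every bit has a transition
    }
}
```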
https://en.wikipedia.org/wiki/FeliCa
Java Card is a software technology that allows Java-based applications (applets) to be run securely on smart cards and, more generally, on similar secure small-memory-footprint devices[1] called "secure elements" (SE). Today, a secure element is not limited to the smart card and other removable cryptographic token form factors: embedded SEs soldered onto a device board and new security designs embedded into general-purpose chips are also widely used. Java Card addresses this hardware fragmentation and these specificities while retaining the code portability brought by Java.

Java Card is the tiniest of the Java platforms targeted at embedded devices. Java Card gives the user the ability to program the devices and make them application-specific. It is widely used in different markets: wireless telecommunications within SIM cards and embedded SIMs, payment within banking cards[2] and NFC mobile payment, and for identity cards, healthcare cards, and passports. Several IoT products, like gateways, also use Java Card-based products to secure communications with a cloud service, for instance.

The first Java Card was introduced in 1996 by Schlumberger's card division, which later merged with Gemplus to form Gemalto. Java Card products are based on the specifications by Sun Microsystems (later a subsidiary of Oracle Corporation). Many Java Card products also rely on the GlobalPlatform specifications for the secure management of applications on the card (download, installation, personalization, deletion).

The main design goals of the Java Card technology are portability, security and backward compatibility.[3] Java Card aims at defining a standard smart card computing environment allowing the same Java Card applet to run on different smart cards, much like a Java applet runs on different computers. As in Java, this is accomplished using the combination of a virtual machine (the Java Card Virtual Machine) and a well-defined runtime library, which largely abstracts the applet from differences between smart cards. Portability remains limited by issues of memory size, performance, and runtime support (e.g. for communication protocols or cryptographic algorithms). Moreover, vendors often expose proprietary APIs specific to their ecosystem, further limiting portability for applets that rely on such calls. To address these limitations, Vasilios Mavroudis and Petr Svenda introduced JCMathLib, an open-source cryptographic wrapper library for Java Card, enabling low-level cryptographic computations not supported by the standard API.[4][5][6]

Java Card technology was originally developed for the purpose of securing sensitive information stored on smart cards. Security is determined by various aspects of this technology. At the language level, Java Card is a precise subset of Java: all language constructs of Java Card exist in Java and behave identically. This goes to the point that, as part of a standard build cycle, a Java Card program is compiled into a Java class file by a Java compiler; the class file is post-processed by tools specific to the Java Card platform. However, many Java language features are not supported by Java Card (in particular the types char, double, float and long; the transient qualifier; enums; arrays of more than one dimension; finalization; object cloning; threads). Further, some common features of Java are not provided at runtime by many actual smart cards (in particular the type int, which is the default type of a Java expression, and garbage collection of objects).
Java Card bytecode run by the Java Card Virtual Machine is a functional subset of Java 2 bytecode run by a standard Java Virtual Machine, but with a different encoding to optimize for size. A Java Card applet thus typically uses less bytecode than the hypothetical Java applet obtained by compiling the same Java source code. This conserves memory, a necessity in resource-constrained devices like smart cards. As a design tradeoff, there is no support for some Java language features (as mentioned above), and there are size limitations. Techniques exist for overcoming the size limitations, such as dividing the application's code into packages below the 64 KiB limit.

The standard Java Card class library and runtime support differ considerably from those in Java, and the common subset is minimal. For example, the Java Security Manager class is not supported in Java Card, where security policies are implemented by the Java Card Virtual Machine; and transients (non-persistent, fast RAM variables that can be class members) are supported via a Java Card class library, while they have native language support in Java. The Java Card runtime and virtual machine also support features that are specific to the Java Card platform.

Coding techniques used in a practical Java Card program differ significantly from those used in a Java program. Still, the fact that Java Card uses a precise subset of the Java language speeds up the learning curve and enables using a Java environment to develop and debug a Java Card program (caveat: even if debugging occurs with Java bytecode, make sure that the class file fits the limitations of the Java Card language by converting it to Java Card bytecode, and test on a real Java Card smart card early on to get an idea of the performance). Further, one can run and debug both the Java Card code for the application to be embedded in a smart card and a Java application that will be in the host using the smart card, all working jointly in the same environment.

Oracle has released several Java Card platform specifications and provides SDK tools for application development. Usually smart card vendors implement just a subset of the algorithms specified in the Java Card platform, and the only way to discover what subset of the specification is implemented is to test the card.[7] Version 3.0 of the Java Card specification (draft released in March 2008) is separated into two editions: the Classic Edition and the Connected Edition.[10] Java Card 3.1 was released in January 2019.
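To illustrate the structure the platform expects, here is a minimal Java Card applet sketch: a static install() factory plus a process() method that dispatches incoming APDUs. The instruction byte and response bytes are arbitrary illustrative choices; a real applet would also be assigned an AID and converted with the Java Card tools before loading onto a card.

```java
// Minimal Java Card applet sketch showing the structure the platform expects:
// a static install() factory and a process() method that dispatches incoming
// APDU commands. The instruction byte 0x40 and the response bytes are
// illustrative choices, not part of any standard.
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.Util;

public class HelloApplet extends Applet {
    private static final byte INS_HELLO = (byte) 0x40; // illustrative INS code
    private static final byte[] GREETING = {(byte) 0x48, (byte) 0x69}; // ASCII "Hi"

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new HelloApplet().register();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) return; // nothing special to do on SELECT
        byte[] buf = apdu.getBuffer();
        switch (buf[ISO7816.OFFSET_INS]) {
            case INS_HELLO:
                Util.arrayCopyNonAtomic(GREETING, (short) 0, buf, (short) 0,
                        (short) GREETING.length);
                apdu.setOutgoingAndSend((short) 0, (short) GREETING.length);
                break;
            default:
                ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
        }
    }
}
```

Note the all-short arithmetic and the absence of int, strings, and garbage-collected temporaries, reflecting the language restrictions described above.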
https://en.wikipedia.org/wiki/Java_Card
Object hyperlinking is extending the Internet to objects and locations in the real world. Object hyperlinking aims to extend the Internet to the physical world by attaching tags with URLs to tangible objects or locations. These object tags can then be read by a wireless mobile device, and information about objects and locations retrieved and displayed. However, object hyperlinking may also be sensible in contexts other than the Internet (e.g. with data objects in database administration, or with text content management).

Linking an object or a location to the Internet is a more involved process than linking two web pages. An object hyperlinking system requires seven components, from the tag on the object and a device able to read it, through the wireless network and mobile software, to the servers that store and deliver the linked information. There are a number of different competing tagging systems.

The object hyperlinking systems described above will make it possible to link comprehensive and editable information to any object or location. How this capability can best be used remains to be seen. What has emerged so far is a mixture of social and commercial applications.
https://en.wikipedia.org/wiki/Object_hyperlinking
Poken is a cloud-based event management platform utilized by trade shows and exhibitions, corporate and association events, and sports and youth events.[1] The modular platform includes features and services such as registration and badging,[2] match-making, meeting scheduling, mobile apps, NFC interactive USB devices, lead generation devices,[3] gamification, access control, and metrics reporting.[4]

The company, Poken S.A., was founded in December 2007 in Lausanne, Switzerland, by Stéphane Doutriaux. The founder developed the initial concept of an interactive USB device for sharing personal information and social networks by touch[5] while doing his MBA at IMD, a business school in Lausanne. The project was inspired by an application running on the IMD campus developed in 2004 by Gabriel Klein, one of the first employees of Poken.[6] The technology was developed in collaboration with the Berner Fachhochschule, a university of applied sciences in Biel, Switzerland. The project started in July 2008 and ended in December of the same year. The first release of the interactive USB device, called 'Sparks', was in January 2009. Since the initial development of the Poken 'Spark', the company has expanded its lines of products and services.[7]

Poken has a network of partners and resellers in over 12 countries, with headquarters in London, Lausanne, Dubai, and New York,[8] and has delivered events in over 80 international locations. In 2017, Poken was acquired by GES.[9]

Poken is a modular, end-to-end platform that consists of both software and hardware products. The Poken interactive USB device utilizes Near Field Communication (NFC) technology to exchange online social networking data between two devices. The primary information exchanged via the Poken is a 'social business card', a digital replacement for a physical business card. By touching two devices together, a unique ID is exchanged that links to contact information on the Poken website. Contact information acquired using the Poken can be uploaded to the Poken website using a built-in USB connector. The user's Poken contact card can contain any information they want to share, for example URLs, mobile numbers, email addresses, and locations. It can also be configured with links to the user's profiles on over 40 social networking sites. When used with Poken Touch points, the Poken interactive USB device can collect digital content via touch, such as brochures and magazines, interact with installations such as media walls and digital surveys, and be used for access control and meeting check-in.[10]
https://en.wikipedia.org/wiki/Poken
A keychain (/ˈkitʃeɪn/) (also keyring) is a small ring or chain of metal to which several keys or fobs can be attached. The terms keyring and keychain are often used interchangeably to mean both the individual ring and a combined unit of a ring and fob. The length of a keychain or fob may also allow an item to be used more easily than if it were connected directly to a keyring. Some keychains allow one or both ends to rotate, keeping the keychain from becoming twisted while the item is being used.

Keychains are one of the most common souvenir and advertising items. In the 1950s and 1960s, with the improvement of plastic manufacturing techniques, promotional items including keychains became unique. Businesses could place their names and logos on three-dimensional promotional keychains for less than the cost of standard metal keychains. Keychains are small and inexpensive enough to become promotional items for larger national companies that might give them out by the millions. For example, with the launch of a new movie or television show, those companies might partner with food companies to provide a character keychain in each box of cereal. These same qualities also make them cheap and easy to produce for consumers, and they have become popular souvenir and novelty items. Destination souvenir keychains will often bear the name of the destination or be shaped like something people relate to the destination, such as a sandal for a beach or skis for a mountain. The ease of production has created a wide range of options for consumers and businesses alike.

A keychain can also be a connecting link between a keyring and the belt, bag, or other garment. Keychains with an actual chain or string are usually used by personnel whose job demands frequent use of keys, such as a security guard, prison officer, janitor, or retail store manager. The chain is often retractable, and may therefore be a nylon rope instead of an actual metal chain. The chain ensures that the keys remain attached to the individual using them, makes accidental loss less likely, and saves on wear and tear on the pockets of the user. Many keychains also offer other functions that the owner wants easily accessible. These can include army knives, bottle openers, nail clippers, pill cases, or pepper spray, among many others. An electronic key finder, which beeps when summoned for quick finding when misplaced, is also a useful item found on many keychains.

A keyring or "split ring" is a circle cotter that holds keys and other small items, sometimes connected to keychains. Other types of keyrings are made of leather, wood or rubber. These are the central component of a keychain. Keyrings were invented in the 19th century by Samuel Harrison.[1] The most common form of keyring is a single piece of metal in a 'double loop'. Either end of the loop can be pried open to allow a key to be inserted and slid along the spiral until it becomes wholly engaged onto the ring. Novelty carabiners are also commonly used as keyrings for ease of access and exchange. Often the keyring is adorned with a fob for self-identification or decoration. Other forms of rings may use a single loop of metal or plastic with a mechanism to open and securely close the loop.
A key fob is a generally decorative and at times useful item many people often carry with their keys, on a ring or a chain, for ease of tactile identification, to provide a better grip, or to make a personal statement. Key fob can also specifically refer to modern electronic car keys, or smart keys, which serve as both a key and a remote.

The word fob may be linked to the Low German dialect word Fuppe, meaning "pocket"; however, the real origin of the word is uncertain. Fob pockets (meaning 'sneak-proof', from the German word foppen) were pockets meant to deter thieves. A short "fob chain" was used to attach to items, like a pocket watch, placed in these pockets.[2]

Fobs vary considerably in size, style and functionality. Most commonly they are simple discs of smooth metal or plastic, typically with a message or symbol such as a logo (as with conference trinkets) or a sign of an important group affiliation. A fob may be symbolic or strictly aesthetic, but it can also be a small tool. Many fobs are small flashlights, compasses, calculators, penknives, discount cards, bottle openers, security tokens, and USB flash drives. As electronic technology continues to become smaller and cheaper, miniature key-fob versions of (previously) larger devices are becoming common, such as digital photo frames, remote control units for garage door openers, barcode scanners and simple video games (e.g. Tamagotchi), or other gadgets such as breathalyzers. Some retail establishments, such as gasoline stations, keep their bathrooms locked and customers must ask for the key from the attendant. In such cases the key often has a very large fob so that customers will not automatically pocket and walk off with the key after completing their ablutions. Key fobs offering added functionality connected to online services may require an additional subscription payment to access it.[3]

Access control key fobs are electronic key fobs that are used for controlling access to buildings or vehicles.[4] They are used for activating such things as remote keyless entry systems on motor vehicles.[5][6] Early electric key fobs operated using infrared and required a clear line of sight to function. These could be copied using a programmable remote control. More recent models use challenge–response authentication over radio frequency, so they are harder to copy and do not need line of sight to operate. Programming these remotes sometimes requires the automotive dealership to connect a diagnostic tool, but many of them can be self-programmed by following a sequence of steps in the vehicle, usually requiring at least one working key.

Key fobs are used in apartment buildings and condominium buildings for controlling access to common areas (for example, lobby doors, storage areas, fitness room, pool). These usually contain a passive RFID tag. The fob operates in much the same manner as a proximity card, communicating (via a reader pad) with a central server for the building, which can be programmed to allow access only to those areas the tenant or owner is permitted to access, or only within certain time frames.

Remote workers may also use a security token – an electronic device often referred to as a fob – that provides one part of a three-way match to log in over an unsecured computer network connection to a secure network. (A well-known example is the RSA SecurID token.) This kind of key fob may have a keypad on which the user must enter a PIN to retrieve an access code, or it could be a display-only device.
RFID key fobs can be easily cloned with tools like the Proxmark3, and several companies in America offer this service.

The cost of keychains in the United States varies widely depending on their purpose. Advertising keychains begin at only a few cents to a few dollars each when purchased in large quantities as giveaways.[citation needed] Souvenir keychains or novelty keychains representing bands, movies, games, etc., are also considered inexpensive, ranging from US$1 up to US$15. Electronic keychains including games and small organizers start at a few dollars and can be up to US$50. Other keychain electronics, including cameras, digital photo frames, and USB drives, cost US$10 to US$100.

The most popular focused keychain collections are advertising, souvenir, monument, popular character and nostalgia-related items.[citation needed] Keychains are typically not made specifically for collecting on a large scale, and do not hold their value as well as other collectibles. A standard keychain purchased new for ten dollars may be worth less than a dollar once it has been owned, regardless of condition. Collectors display and store their keychains in several different ways. Some collections are small enough that the collector can place all of their keychains on their standard key ring. Larger collections can be stored and displayed on dowels, cork boards, tool racks, or large link chains, in display cases, hung on walls, or displayed on Christmas trees. Some collections are large enough that entire rooms are dedicated to them.[citation needed]

According to Guinness World Records, the largest collection of keychains consists of 62,257 items, achieved by Angel Alvarez Cornejo in Sevilla, Spain, as verified on 25 June 2016. His collection began at the age of 7. Due to the tremendous size of his collection, he now stores his keychains in his garage and a rented warehouse.[7] The previous record holder was Brent Dixon of Georgia, United States, with a collection of 41,418 non-duplicated keychains.[8]

By analogy to the physical object, the terms keychain and keyring are often used for software that stores cryptographic keys. The term keychain was first introduced in a series of IBM developerWorks articles.[citation needed] The term is used in GNU Privacy Guard to store known keys on a keyring. Mac OS X uses a password storage system called Keychain. A "keyring" is also the name of a password manager application working under the GNOME desktop manager (used, for example, in the Ubuntu operating system). In cryptography, a keyring is a database of multiple keys or passwords. There are also portable password manager programs, such as KeePass and KeePassX.
https://en.wikipedia.org/wiki/Smart_keychain
TecTiles are a near field communication (NFC) application, developed by Samsung, for use with mobile smartphone devices.[1] Each TecTile is a low-cost,[2] self-adhesive sticker with an embedded NFC Tag.[3] They are programmed before use, which can be done simply by the user with a downloadable Android app.[3] When an NFC-capable phone is placed or 'tapped' on a Tag, the programmed action is undertaken. This could cause a website to be displayed, the phone to be switched to silent mode, or many other possible actions.

NFC Tags are an application of RFID technology. Unlike most RFID, which makes an effort to give a long reading range, NFC deliberately limits this range to only a few inches, or to almost touching the phone to the Tag. This is done deliberately, so that Tags have no effect on a phone unless there is a clear user action to 'trigger' the Tag. Although phones are usually touched to Tags, this does not require any 'docking' or galvanic contact with the Tag, so they are still considered a non-contact technology. Although NFC Tags can be used with many smartphones, TecTiles gained much prominence in late 2012 with the launch of the Galaxy S III.[4][5]

Some applications are intended for customising the behaviour of a user's own phone according to a location, e.g. a quiet mode when placed on a bedside table; others are intended for public use, e.g. publicising web content about a location.[note 1] This programming is carried out entirely on the Tag. Subject to security settings, any compatible phone would have the same response when tapped on the Tag. When the Tag's response is a Facebook 'Like' or similar, this is carried out under the phone user's credentials (such as a Facebook identity), rather than the Tag's identity. Samsung groups Tags' functions under four headings:[6] Settings, Phone, Web and Social.

Tags may also be pre-programmed and distributed to users. Such a Tag could be set to take the user to a manufacturer's service support page and sent out stuck to washing machines or other domestic white goods. Factory-prepared Tags can also be printed with logos, or moulded into forms other than stickers, such as key fobs or wristbands. The re-programmability of a Tag is claimed at over 100,000 programming cycles.[5] A Tag placed on a doorway or noticeboard may be re-programmed in situ and could thus have a long life (e.g. many conferences, meetings or events). Tags may be locked after programming,[3] to avoid unauthorized reprogramming. Locked Tags may be unlocked only by the same phone that locked them. The duration of a locked Tag's relevance will be the main constraint on Tag lifetime if unlocking is not possible. The lifespan of a Tag is also likely to be limited by physical factors such as the glue's adhesion, or the difficulty of peeling the Tag from the glue.

The TecTile app is not installed by default.[6] If a Tag is read before it is installed, the user is directed to the app download site. Using a Samsung TecTile NFC tag requires a device with the MIFARE Classic chipset.[7] This chipset is based on NXP's NFC controller, which is outside the NFC Forum's standard; using a TecTile thus requires the NXP chipset. The NXP chipset is found in many Android phones. Recently, Android phone manufacturers have chosen to drop TecTile support, notably in Samsung's latest flagship phone, the Galaxy S4,[8] and Google's Nexus 4.[9] TecTiles also do not work with BlackBerry and Windows NFC phones.
The new version of TecTile, called TecTile 2, has improved compatibility,[10] but currently the Samsung Galaxy S4 is the only device that comes with native support for TecTile 2.[11] NFC Tags that do comply with the NFC Forum Type 1 or Type 2 compatibility protocols[12] are much more widely compatible than the MIFARE-dependent Samsung TecTile,[13] and are also widely available. Popular standards-compliant NFC Tags are the NTAG213 (137 bytes of usable memory) and the Topaz 512 (480 bytes of usable memory).[14]

The need for the installed app is one of the drawbacks of TecTile and of NFC Tags in general. The basic NFC Tag standards support Tags carrying URLs, where the scheme or protocol (e.g. the http:// prefix) may be http (for web addresses), tel (for telephony), or an anonymous data scheme. Although support for the http and tel schemes may be assumed in a basic handset, support for the others will not be available unless an app has been installed and registered to handle them. In general, NFC Tags (in the non-TecTile sense) are only useful for web addresses and telephony.

To provide features beyond this, Samsung offers the TecTile app. This could have used any scheme on a tag, or even invented a whole new scheme; when installed, such an app would register itself to handle these new schemes. However, the app is not part of the default install for a handset, even a Samsung one. To allow users to install the app automatically on first encountering a TecTile, all of TecTile's sophisticated and phone-specific features are still provided through the http scheme. The basic URL is the one for initially downloading the app; the details of the TecTile operation are encoded as URL parameters within the query string appended to it. When reading a Tag, one of two things happens: on a phone with the app installed, the app intercepts the URL and performs the encoded action; on a phone without it, the URL simply opens the app download page.

This convoluted behaviour was chosen to make the app effectively self-installing for naive users. Why the app was not supplied as default is unknown. The downsides of this design choice, though, are that the URLs required to activate TecTile functions are relatively long, meaning that non-TecTile NFC Tags with limited memory size (137 bytes) cannot generally be used for functions other than web addresses. Additionally, the lack of a non-proprietary approach to these more capable functions limits the development of NFC Tags as a general technique across all such handsets, rather than just Samsung TecTiles.
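As a sketch of how little room such URLs have on a small tag, the following builds a standard NFC Forum NDEF URI record (well-known type "U", with a one-byte URI-prefix code). The record layout is standard; the URL and its query parameter are hypothetical, since Samsung's actual TecTile parameter format is proprietary.

```java
// Sketch of an NFC Forum NDEF URI record of the kind a TecTile-style sticker
// carries. The record layout (well-known type "U" with a URI-prefix byte) is
// standard; the URL and query parameters below are hypothetical, since
// Samsung's actual TecTile parameter format is proprietary.
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class NdefUriDemo {
    // Builds a single short NDEF record holding one URI.
    static byte[] uriRecord(byte prefixCode, String uriRemainder) {
        byte[] rest = uriRemainder.getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0xD1);            // MB=1, ME=1, SR=1, TNF=0x01 (well-known)
        out.write(0x01);            // type length: 1 byte
        out.write(rest.length + 1); // payload length: prefix byte + remainder
        out.write('U');             // record type: URI
        out.write(prefixCode);      // 0x04 = "https://" prefix, saving 8 bytes
        out.write(rest, 0, rest.length);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // Hypothetical TecTile-like URL: an app download address plus the
        // encoded action as a query parameter.
        byte[] record = uriRecord((byte) 0x04, "example.com/tectile?action=wifi_off");
        System.out.println(record.length + " bytes"); // a long query string quickly
        // eats into a 137-byte tag like the NTAG213 mentioned above.
    }
}
```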
https://en.wikipedia.org/wiki/TecTiles
TransferJet is a close-proximity wireless transfer technology initially proposed by Sony and demonstrated publicly in early 2008.[1] By touching (or bringing very close together) two electronic devices, TransferJet allows high-speed exchange of data. The concept of TransferJet is a touch-activated interface for applications requiring high-speed data transfer between two devices in a peer-to-peer mode, without the need for external physical connectors.[2]

TransferJet's maximum physical-layer transmission rate is 560 Mbit/s. After allowing for error correction and other protocol overhead, the effective maximum throughput is 375 Mbit/s. TransferJet will adjust the data rate downward according to the wireless environment, thereby maintaining a robust link even when the surrounding wireless conditions fluctuate.

TransferJet has the capability of identifying the unique MAC addresses of individual devices, enabling users to choose which devices can establish a connection. By allowing only devices inside the household, for example, one can prevent data theft by strangers while riding a crowded train. If, on the other hand, one wishes to connect the device with any other device at a party, this can be done by simply disabling the filtering function.

TransferJet uses the same frequency spectrum as UWB, but occupies only a section of this band, available as a common worldwide channel. Since the RF power is kept under -70 dBm/MHz, it can operate in the same manner as UWB devices equipped with DAA functionality. In addition, this low power level also ensures that there will be no interference with other wireless systems, including other TransferJet systems, operating nearby. Because the RF power and spatial reach are reduced to a few centimeters (about an inch or less), a TransferJet connection in its most basic mode does not require any initial setup procedure by the user for either device: the action of spontaneously touching one device with another automatically triggers the data transfer. More complex usage scenarios require various means to select the specific data to send, as well as the location to store (or the method to process) the received data.

TransferJet utilizes a newly developed TransferJet Coupler based on the principle of the electric induction field, as opposed to the radiation field used by conventional antennas. The functional elements of a generic TransferJet Coupler consist of a coupling electrode or plate, a resonant stub and ground. Compared to conventional radiating antennas, the TransferJet Coupler achieves higher transmission gain and more efficient coupling in the near field while providing sharp attenuation at longer distances. Because the Coupler generates longitudinal electric fields, there is no polarization and the devices can be aligned at any angle.

TransferJet specifications:[3]
• RF power: corresponds to low-intensity radio wave regulation in Japan and Taiwan, and to local regulations in other countries and regions.
• Transmission rate: the system can adjust the transmission rate depending on the wireless environment.
• Modulation: π/2-shift BPSK.

Although sometimes confused with Near Field Communication, TransferJet depends on an entirely different technology and is also generally targeted at different usage scenarios, focusing on high-speed data transfer.
Because the two technologies differ, these two systems will not interfere with each other and can even co-exist in the same location, as already implemented in certain products.[4] Other recent products combine TransferJet with wireless power to allow both data transfer and wireless charging simultaneously in the same location.[5] TransferJet, NFC and wireless power are the three major near-field (contact-less) technologies that are expected to eliminate the physical connections and cables currently required to interface devices with each other. In comparison with NFC, TransferJet targets symmetrical, high-speed uses such as audio/video streaming, whereas NFC focuses on applications such as electronic payment and ID tagging. The TransferJet Consortium[6] was established in July 2008 to advance and promote the TransferJet format by developing the technical specifications and compliance-testing procedures, as well as creating a market for TransferJet-compliant, interoperable products. In September 2011, the consortium was registered as an independent non-profit industry association. As of June 2015, the Consortium is led by five Promoter companies: JRC, NTT, Olympus, Sony (consortium administrator), and Toshiba. The Consortium currently also has around thirty Adopter companies.[7] The TransferJet regular typeface and TransferJet logos are trademarks managed and licensed by the TransferJet Consortium. Commercial products have been introduced since January 2010; the initial product categories include digital cameras,[8] laptop PCs,[9] USB cradle accessories,[10] USB dongle accessories[11] and office/business equipment.[12] Compliance-testing equipment is provided by Agilent Technologies and certification services are offered by Allion Test Labs. The first commercially available TransferJet development platform for embedded systems was launched by Icoteq Ltd in February 2015.[13] Smartphones with integrated TransferJet functionality were launched in June 2015 from Fujitsu[14] and Bkav.[15] Other product vendors include Buffalo and E-Globaledge.[16] TransferJet X[17] is a second-generation TransferJet specification capable of data transfer speeds of 13.1 Gbit/s and above, or about 20 times the speed of current TransferJet. This specification uses the 60 GHz band and requires only 2 ms or less to establish a connection prior to the actual data transfer, thereby enabling the exchange of large content files even in the short amount of time it takes, for example, for a person to walk through a wicket gate. The TransferJet Consortium is currently defining the details of the TransferJet X ecosystem, based on the IEEE 802.15.3e standard[18] completed and published in June 2017. The HRCP Research and Development Partnership,[19] established in 2016, is developing an SoC solution for implementing TransferJet X in a variety of products and services to be released starting around 2020.
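The wicket-gate scenario above invites a similar back-of-the-envelope calculation. The 13.1 Gbit/s rate and 2 ms setup time come from the text; the 200 ms contact window is an assumed figure for how long a walking passenger stays within range.

```python
RATE_GBIT = 13.1        # TransferJet X transfer rate, from the text
SETUP_S = 0.002         # connection setup time, from the text
CONTACT_WINDOW_S = 0.2  # assumed in-range time while walking through a gate

usable_s = CONTACT_WINDOW_S - SETUP_S
print(f"Data moved in one pass: ~{RATE_GBIT * usable_s / 8:.2f} GB")  # ~0.32 GB
```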
https://en.wikipedia.org/wiki/TransferJet
Ultra-wideband (UWB, ultra wideband, ultra-wide band and ultraband) is a radio technology that can use a very low energy level for short-range, high-bandwidth communications over a large portion of the radio spectrum.[1] UWB has traditional applications in non-cooperative radar imaging. Most recent applications target sensor data collection, precise locating,[2] and tracking.[3][4] UWB support started to appear in high-end smartphones in 2019. Ultra-wideband is a technology for transmitting information across a wide bandwidth (>500 MHz). This allows for the transmission of a large amount of signal energy without interfering with conventional narrowband and carrier-wave transmission in the same frequency band. Regulatory limits in many countries allow for this efficient use of radio bandwidth, and enable high-data-rate personal area network (PAN) wireless connectivity, longer-range low-data-rate applications, and the transparent co-existence of radar and imaging systems with existing communications systems. Ultra-wideband was formerly known as pulse radio, but the FCC and the International Telecommunication Union Radiocommunication Sector (ITU-R) currently define UWB as an antenna transmission for which the emitted signal bandwidth exceeds the lesser of 500 MHz or 20% of the arithmetic center frequency.[5] Thus, pulse-based systems—where each transmitted pulse occupies the UWB bandwidth (or an aggregate of at least 500 MHz of a narrow-band carrier; for example, orthogonal frequency-division multiplexing (OFDM))—can access the UWB spectrum under the rules. A significant difference between conventional radio transmissions and UWB is that conventional systems transmit information by varying the power level, frequency, or phase (or a combination of these) of a sinusoidal wave. UWB transmissions transmit information by generating radio energy at specific time intervals and occupying a large bandwidth, thus enabling pulse-position or time modulation. The information can also be modulated on UWB signals (pulses) by encoding the polarity of the pulse, its amplitude and/or by using orthogonal pulses. UWB pulses can be sent sporadically at relatively low pulse rates to support time or position modulation, but can also be sent at rates up to the inverse of the UWB pulse bandwidth. Pulse-UWB systems have been demonstrated at channel pulse rates in excess of 1.3 billion pulses per second using a continuous stream of UWB pulses (Continuous Pulse UWB or C-UWB), while supporting forward error-correction encoded data rates in excess of 675 Mbit/s.[6] A UWB radio system can be used to determine the "time of flight" of the transmission at various frequencies. This helps overcome multipath propagation, since some of the frequencies have a line-of-sight trajectory, while other, indirect paths have longer delays. With a cooperative symmetric two-way metering technique, distances can be measured to high resolution and accuracy.[7] Ultra-wideband (UWB) technology is utilised for real-time locating due to its precision and reliability. It plays a role in various industries such as logistics, healthcare, manufacturing, and transportation. UWB's centimeter-level accuracy is valuable in applications where traditional methods may be unsuitable, such as indoor environments, where GPS precision may be hindered. Its low power consumption ensures minimal interference and allows for coexistence with existing infrastructure.
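The FCC/ITU-R definition quoted above reduces to a one-line test: a signal is UWB if its bandwidth exceeds the lesser of 500 MHz or 20% of the arithmetic center frequency. A minimal sketch:

```python
def is_uwb(f_low_mhz: float, f_high_mhz: float) -> bool:
    """FCC/ITU-R test: bandwidth > min(500 MHz, 20% of arithmetic center freq)."""
    bandwidth = f_high_mhz - f_low_mhz
    center = (f_low_mhz + f_high_mhz) / 2
    return bandwidth > min(500.0, 0.20 * center)

print(is_uwb(3100, 10600))   # True: the full 7.5 GHz FCC UWB band
print(is_uwb(2400, 2483.5))  # False: the 2.4 GHz ISM block is far too narrow
```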
UWB performs well in challenging environments thanks to its immunity to multipath interference, providing consistent and accurate positioning. In logistics, UWB increases inventory-tracking efficiency, reducing losses and optimizing operations. Healthcare makes use of UWB in asset tracking, patient-flow optimization, and in improving care coordination. In manufacturing, UWB is used to streamline inventory management and enhance production efficiency through accurate tracking of materials and tools. UWB supports route planning, fleet management, and vehicle security in transportation systems.[8] UWB uses multiple techniques for location detection, including measurements of time of flight, time difference of arrival, and angle of arrival.[9] Apple launched the first three phones with ultra-wideband capabilities in September 2019, namely, the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max.[10][11][12] Apple also launched Series 6 of the Apple Watch in September 2020, which features UWB,[13] and its AirTags featuring this technology were revealed at a press event on April 20, 2021.[14][4] The Samsung Galaxy Note 20 Ultra, Galaxy S21+, and Galaxy S21 Ultra also began supporting UWB,[15] along with the Samsung Galaxy SmartTag+.[16] The Xiaomi MIX 4, released in August 2021, supports UWB and offers the capability of connecting to select AIoT devices.[17] The FiRa Consortium was founded in August 2019 to develop interoperable UWB ecosystems including mobile phones. Samsung, Xiaomi, and Oppo are currently members of the FiRa Consortium.[18] In November 2020, the Android Open Source Project received the first patches related to an upcoming UWB API; "feature-complete" UWB support (exclusively for the sole use case of ranging between supported devices) was released in version 13 of Android.[19] Ultra-wideband gained widespread attention for its implementation in synthetic aperture radar (SAR) technology. Due to its high-resolution capabilities at lower frequencies, UWB SAR was heavily researched for its object-penetration ability.[23][24][25] Starting in the early 1990s, the U.S. Army Research Laboratory (ARL) developed various stationary and mobile ground-, foliage-, and wall-penetrating radar platforms that served to detect and identify buried IEDs and hidden adversaries at a safe distance. Examples include the railSAR, the boomSAR, the SIRE radar, and the SAFIRE radar.[26][27] ARL has also investigated whether UWB radar technology can incorporate Doppler processing to estimate the velocity of a moving target when the platform is stationary.[28] While a 2013 report highlighted the issue with the use of UWB waveforms due to target range migration during the integration interval, more recent studies have suggested that UWB waveforms can demonstrate better performance than conventional Doppler processing as long as a correct matched filter is used.[29] Ultra-wideband pulse Doppler radars have also been used to monitor vital signs of the human body, such as heart rate and respiration signals, as well as for human gait analysis and fall detection. They serve as a potential alternative to continuous-wave radar systems, since they involve less power consumption and offer a high-resolution range profile.
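Returning to the ranging principle mentioned earlier, the symmetric two-way technique can be sketched as follows: the initiator measures a round-trip time, subtracts the responder's known reply delay, and converts the remaining one-way time of flight into distance. The timestamps in the example are invented for illustration.

```python
C = 299_792_458.0  # speed of light in m/s

def twr_distance_m(t_round_s: float, t_reply_s: float) -> float:
    """Two-way ranging: half the round trip minus the reply delay, times c."""
    time_of_flight = (t_round_s - t_reply_s) / 2
    return time_of_flight * C

# 1.0001 ms measured round trip against a 1.0 ms known responder delay:
print(f"{twr_distance_m(1.0001e-3, 1.0e-3):.1f} m")  # ~15.0 m
```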
However, the low signal-to-noise ratio of such UWB radar has made it vulnerable to errors.[30][31] Ultra-wideband is also used in "see-through-the-wall" precision radar-imaging technology,[32][33][34] precision locating and tracking (using distance measurements between radios), and precision time-of-arrival-based localization approaches.[35] UWB radar has been proposed as the active sensor component in an Automatic Target Recognition application, designed to detect humans or objects that have fallen onto subway tracks.[36] Ultra-wideband characteristics are well-suited to short-range applications, such as PC peripherals, wireless monitors, camcorders, wireless printing, and file transfers to portable media players.[37] UWB was proposed for use in personal area networks, and appeared in the IEEE 802.15.3a draft PAN standard. However, after several years of deadlock, the IEEE 802.15.3a task group[38] was dissolved[39] in 2006. The work was completed by the WiMedia Alliance and the USB Implementer Forum. Slow progress in UWB standards development, the cost of initial implementation, and performance significantly lower than initially expected are several reasons for the limited use of UWB in consumer products (which caused several UWB vendors to cease operations in 2008 and 2009).[40] UWB's precise positioning and ranging capabilities enable collision avoidance and centimeter-level localization accuracy, surpassing traditional GPS systems. Moreover, its high data rate and low latency facilitate seamless vehicle-to-vehicle communication, promoting real-time information exchange and coordinated actions. UWB also enables effective vehicle-to-infrastructure communication, integrating with infrastructure elements for optimized behavior based on precise timing and synchronized data. Additionally, UWB's versatility supports innovative applications such as high-resolution radar imaging for advanced driver-assistance systems, secure keyless entry via biometrics or device pairing, and occupant-monitoring systems, potentially enhancing convenience, security, and passenger safety.[41] In the U.S., ultra-wideband refers to radio technology with a bandwidth exceeding the lesser of 500 MHz or 20% of the arithmetic center frequency, according to the U.S. Federal Communications Commission (FCC). A February 14, 2002 FCC Report and Order[58] authorized the unlicensed use of UWB in the frequency range from 3.1 to 10.6 GHz. The FCC power spectral density (PSD) emission limit for UWB transmitters is −41.3 dBm/MHz. This limit also applies to unintentional emitters in the UWB band (the "Part 15" limit). However, the emission limit for UWB emitters may be significantly lower (as low as −75 dBm/MHz) in other segments of the spectrum. Deliberations in the International Telecommunication Union Radiocommunication Sector (ITU-R) resulted in a Report and Recommendation on UWB in November 2005. The UK regulator Ofcom announced a similar decision[59] on 9 August 2007. There has been concern over interference between narrowband and UWB signals that share the same spectrum. Earlier, the only radio technology that used pulses was spark-gap transmitters, which international treaties banned because they interfered with medium-wave receivers. However, UWB uses much lower levels of power. The subject was extensively covered in the proceedings that led to the adoption of the FCC rules in the US, and in the meetings of the ITU-R leading to its Report and Recommendations on UWB technology.
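To put the −41.3 dBm/MHz limit quoted above in perspective, integrating that power spectral density across the entire 3.1–10.6 GHz band (7,500 MHz) yields a total radiated power of well under one milliwatt:

```python
import math

def total_dbm(psd_dbm_per_mhz: float, bandwidth_mhz: float) -> float:
    """Integrate a flat PSD over a bandwidth: add 10*log10(BW in MHz)."""
    return psd_dbm_per_mhz + 10 * math.log10(bandwidth_mhz)

total = total_dbm(-41.3, 7500)
print(f"{total:.2f} dBm = {10 ** (total / 10):.3f} mW")  # -2.55 dBm = 0.556 mW
```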
Commonly used electrical appliances emit impulsive noise (for example, hair dryers), and proponents successfully argued that the noise floor would not be raised excessively by wider deployment of low-power wideband transmitters.[60] In February 2002, the Federal Communications Commission (FCC) released an amendment (Part 15) that specifies the rules of UWB transmission and reception. According to this release, any signal with a fractional bandwidth greater than 20% or a bandwidth greater than 500 MHz is considered a UWB signal. The FCC ruling also defines access to 7.5 GHz of unlicensed spectrum between 3.1 and 10.6 GHz that is made available for communication and measurement systems.[61] Narrowband signals that exist in the UWB range, such as IEEE 802.11a transmissions, may exhibit high PSD levels compared to UWB signals as seen by a UWB receiver. As a result, one would expect a degradation of UWB bit-error-rate performance.[62]
https://en.wikipedia.org/wiki/Ultra-wideband
Narrowband Internet of Things (NB-IoT) is a low-power wide-area network (LPWAN) radio technology standard developed by 3GPP for cellular network devices and services.[1][2] The specification was frozen in 3GPP Release 13 (LTE Advanced Pro), in June 2016.[3] Other 3GPP IoT technologies include eMTC (enhanced Machine-Type Communication) and EC-GSM-IoT.[4] NB-IoT focuses specifically on indoor coverage, long battery life, and high connection density. NB-IoT uses a subset of the LTE standard, but limits the bandwidth to a single narrow band of 200 kHz. It uses OFDM modulation for downlink communication and SC-FDMA for uplink communication.[5][6][7][8][9] IoT applications which require more frequent communications will be better served by LTE-M, which has no duty-cycle limitations when operating on licensed spectrum. In March 2019, the Global Mobile Suppliers Association (GSA) announced that over 100 operators had either NB-IoT or LTE-M networks.[10] This number had risen to 142 deployed/launched networks by September 2019.[11] Published peak data rates include 16.9 kbit/s on the single-tone NB-IoT uplink, compared with 2 Mbit/s (EGPRS2B) for EC-GSM-IoT.[14] The 3GPP-compliant LPWA device ecosystem continues to grow. In April 2019, GSA identified 210 devices supporting either Cat-NB1/NB2 or Cat-M1 – more than double the number in its GAMBoD database at the end of March 2018.[16] This figure had risen a further 50% by September 2019, with a total of 303 devices identified as supporting either Cat-M1, Cat-NB1 (NB-IoT) or Cat-NB2. Of these, 230 devices support Cat-NB1 (including known variants) and 198 devices support Cat-M1 (including known variants). The split of devices (as of September 2019) was 60.4% modules, 25.4% asset trackers, and 5.6% routers, with data loggers, femtocells, smart-home devices, smart watches, USB modems, and vehicle on-board units (OBUs) making up the balance.[17] In 2018, the first NB-IoT data loggers and other certified devices started to appear. For example, ThingsLog released its first CE-certified single-channel NB-IoT data logger on Tindie in late 2018. To integrate NB-IoT into a maker board for IoT development, SODAQ, a Dutch IoT hardware and software engineering company, crowdfunded an NB-IoT shield on Kickstarter.[18] The company then went on to partner with module manufacturer u-blox to create maker boards with NB-IoT and LTE-M integrated.[19] Since 2021, there has also been a cheap all-in-one NB-IoT solution available to the general public, developed by the Chinese manufacturer Ai-Thinker.[20] At the beginning of 2023, the Belgian company DPTechnics released the Walter IoT board, which combines an ESP32-S3 with a Sequans Monarch 2 NB-IoT/LTE-M platform. The board is focused on long-term availability and includes a GNSS receiver.
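The single-tone uplink peak rate noted above gives a feel for NB-IoT's design point: even small sensor reports keep the radio on only briefly, which is what makes multi-year battery life plausible. The payload sizes below are illustrative and ignore retransmissions and protocol overhead.

```python
UPLINK_KBIT_S = 16.9  # NB-IoT single-tone uplink peak rate, from the text

def airtime_ms(payload_bytes: int) -> float:
    """Transmit time in milliseconds: bits divided by kbit/s gives ms."""
    return payload_bytes * 8 / UPLINK_KBIT_S

for size in (20, 100, 512):
    print(f"{size:4d} B payload -> {airtime_ms(size):7.1f} ms on air")
```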
https://en.wikipedia.org/wiki/NB-IoT
Zigbee is an IEEE 802.15.4-based specification for a suite of high-level communication protocols used to create personal area networks with small, low-power digital radios, such as for home automation, medical device data collection, and other low-power, low-bandwidth needs, designed for small-scale projects which need a wireless connection. Hence, Zigbee is a low-power, low-data-rate, close-proximity (i.e., personal area) wireless ad hoc network. The technology defined by the Zigbee specification is intended to be simpler and less expensive than other wireless personal area networks (WPANs), such as Bluetooth, or more general wireless networking such as Wi-Fi (or Li-Fi). Applications include wireless light switches, home energy monitors, traffic management systems, and other consumer and industrial equipment that requires short-range, low-rate wireless data transfer. Its low power consumption limits transmission distances to 10–100 meters (33–328 ft) line-of-sight, depending on power output and environmental characteristics.[1] Zigbee devices can transmit data over long distances by passing data through a mesh network of intermediate devices to reach more distant ones. Zigbee is typically used in low-data-rate applications that require long battery life and secure networking. (Zigbee networks are secured by 128-bit symmetric encryption keys.) Zigbee has a defined rate of up to 250 kbit/s, best suited for intermittent data transmissions from a sensor or input device. Zigbee was conceived in 1998, standardized in 2003, and revised in 2006. The name refers to the waggle dance of honey bees after their return to the beehive.[2] Zigbee is a low-power wireless mesh network standard targeted at battery-powered devices in wireless control and monitoring applications. Zigbee delivers low-latency communication. Zigbee chips are typically integrated with radios and with microcontrollers. Zigbee operates in the industrial, scientific and medical (ISM) radio bands, with the 2.4 GHz band being primarily used for lighting and home automation devices in most jurisdictions worldwide. While devices for commercial utility metering and medical device data collection often use sub-GHz frequencies (902–928 MHz in North America, Australia, and Israel; 868–870 MHz in Europe; 779–787 MHz in China), these regions and countries still use the 2.4 GHz band for most globally sold Zigbee devices meant for home use. Data rates vary from around 20 kbit/s for the sub-GHz bands to around 250 kbit/s per channel in the 2.4 GHz band. Zigbee builds on the physical layer and media access control defined in IEEE standard 802.15.4 for low-rate wireless personal area networks (WPANs). The specification includes four additional key components: the network layer, the application layer, Zigbee Device Objects (ZDOs), and manufacturer-defined application objects. ZDOs are responsible for tasks including keeping track of device roles, managing requests to join a network, and discovering and securing devices. The Zigbee network layer natively supports both star and tree networks, and generic mesh networking. Every network must have one coordinator device. Within star networks, the coordinator must be the central node. Both trees and meshes allow the use of Zigbee routers to extend communication at the network level. Another defining feature of Zigbee is its facilities for carrying out secure communications, protecting the establishment and transport of cryptographic keys, ciphering frames, and controlling devices. It builds on the basic security framework defined in IEEE 802.15.4.
Zigbee-style self-organizing ad hoc digital radio networks were conceived in the 1990s. The IEEE 802.15.4-2003 Zigbee specification was ratified on December 14, 2004.[3] The Connectivity Standards Alliance (formerly Zigbee Alliance) announced availability of Specification 1.0 on June 13, 2005, known as the ZigBee 2004 Specification. In September 2006, the Zigbee 2006 Specification was announced, obsoleting the 2004 stack.[4] The 2006 specification replaces the message and key–value pair structure used in the 2004 stack with a cluster library. The library is a set of standardised commands, attributes, and global artifacts organised under groups known as clusters, with names such as Smart Energy, Home Automation, and Zigbee Light Link.[5] In January 2017, the Connectivity Standards Alliance renamed the library to Dotdot and announced it as a new protocol to be represented by an emoticon (||:). It also announced that the library would additionally run over other network types using Internet Protocol[6] and would interconnect with other standards such as Thread.[7] Since its unveiling, Dotdot has functioned as the default application layer for almost all Zigbee devices.[8] Zigbee Pro, also known as Zigbee 2007, was finalized in 2007.[9] A Zigbee Pro device may join and operate on a legacy Zigbee network and vice versa. Due to differences in routing options, a Zigbee Pro device must become a non-routing Zigbee End Device (ZED) on a legacy Zigbee network, and a legacy Zigbee device must become a ZED on a Zigbee Pro network.[10] It operates using the 2.4 GHz ISM band and adds a sub-GHz band.[11] Zigbee protocols are intended for embedded applications requiring low power consumption and tolerating low data rates. The resulting network will use very little power—individual devices must have a battery life of at least two years to pass certification.[12][13] Typical application areas include home automation, smart energy, and medical device data collection. Zigbee is not intended for situations with high mobility among nodes. Hence, it is not suitable for tactical ad hoc radio networks in the battlefield, where high data rates and high mobility are present and needed.[18] The first Zigbee application profile, Home Automation, was announced on November 2, 2007. Additional application profiles have since been published. The Zigbee Smart Energy 2.0 specifications define an Internet Protocol-based communication protocol to monitor, control, inform, and automate the delivery and use of energy and water. It is an enhancement of the Zigbee Smart Energy version 1 specifications.[19] It adds services for plug-in electric vehicle charging, installation, configuration and firmware download, prepay services, user information and messaging, load control, demand response, and common information and application profile interfaces for wired and wireless networks. It is being developed by a group of industry partners. Zigbee Smart Energy relies on Zigbee IP, a network layer that routes standard IPv6 traffic over IEEE 802.15.4 using 6LoWPAN header compression.[20][21] In 2009, the Radio Frequency for Consumer Electronics Consortium (RF4CE) and the Connectivity Standards Alliance (formerly Zigbee Alliance) agreed to jointly deliver a standard for radio frequency remote controls. Zigbee RF4CE is designed for a broad range of consumer electronics products, such as TVs and set-top boxes.
It promised many advantages over existing remote-control solutions, including richer communication and increased reliability, enhanced features and flexibility, interoperability, and no line-of-sight barrier.[22] The Zigbee RF4CE specification uses a subset of Zigbee functionality, allowing it to run on smaller memory configurations in lower-cost devices, such as remote controls for consumer electronics. The radio design used by Zigbee has few analog stages and uses digital circuits wherever possible. Products that integrate the radio and microcontroller into a single module are available.[23] The Zigbee qualification process involves a full validation of the requirements of the physical layer. All radios derived from the same validated semiconductor mask set would enjoy the same RF characteristics. Zigbee radios have very tight constraints on power and bandwidth. An uncertified physical layer that malfunctions can increase the power consumption of other devices on a Zigbee network. Thus, radios are tested with guidance given by Clause 6 of the 802.15.4-2006 standard.[24] This standard specifies operation in the unlicensed 2.4 to 2.4835 GHz[25] (worldwide), 902 to 928 MHz (Americas and Australia) and 868 to 868.6 MHz (Europe) ISM bands. Sixteen channels are allocated in the 2.4 GHz band, spaced 5 MHz apart, though each uses only 2 MHz of bandwidth. The radios use direct-sequence spread spectrum coding, which is managed by the digital stream into the modulator. Binary phase-shift keying (BPSK) is used in the 868 and 915 MHz bands, and offset quadrature phase-shift keying (OQPSK), which transmits two bits per symbol, is used in the 2.4 GHz band. The raw, over-the-air data rate is 250 kbit/s per channel in the 2.4 GHz band, 40 kbit/s per channel in the 915 MHz band, and 20 kbit/s in the 868 MHz band. The actual data throughput will be less than the maximum specified bit rate because of the packet overhead and processing delays. For indoor applications at 2.4 GHz, the transmission distance is 10–20 m, depending on the construction materials, the number of walls to be penetrated, and the output power permitted in that geographical location.[26] The output power of the radios is generally 0–20 dBm (1–100 mW). There are three classes of Zigbee devices: the Zigbee coordinator, the Zigbee router, and the Zigbee end device. The current Zigbee protocols support beacon-enabled and non-beacon-enabled networks. In non-beacon-enabled networks, an unslotted CSMA/CA channel access mechanism is used. In this type of network, Zigbee routers typically have their receivers continuously active, requiring additional power.[29] However, this allows for heterogeneous networks in which some devices receive continuously while others transmit only when necessary. The typical example of a heterogeneous network is a wireless light switch: the Zigbee node at the lamp may receive constantly, since it is reliably powered by the mains supply to the lamp, while a battery-powered light switch would remain asleep until the switch is thrown. In this case, the switch wakes up, sends a command to the lamp, receives an acknowledgment, and returns to sleep. In such a network the lamp node will be at least a Zigbee router, if not the Zigbee coordinator; the switch node is typically a Zigbee end device. In beacon-enabled networks, Zigbee routers transmit periodic beacons to confirm their presence to other network nodes. Nodes may sleep between beacons, thus extending their battery life.
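For the 2.4 GHz band described above, the sixteen channels are numbered 11 to 26 in IEEE 802.15.4 and sit 5 MHz apart starting at 2405 MHz, so any channel's center frequency follows from a one-line formula:

```python
def channel_center_mhz(channel: int) -> int:
    """Center frequency of an IEEE 802.15.4 channel in the 2.4 GHz band."""
    if not 11 <= channel <= 26:
        raise ValueError("2.4 GHz band channels are numbered 11-26")
    return 2405 + 5 * (channel - 11)

print([channel_center_mhz(k) for k in (11, 15, 26)])  # [2405, 2425, 2480]
```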
Beacon intervals depend on the data rate; they may range from 15.36 milliseconds to 251.65824 seconds at 250 kbit/s, from 24 milliseconds to 393.216 seconds at 40 kbit/s, and from 48 milliseconds to 786.432 seconds at 20 kbit/s. Long beacon intervals require precise timing, which can be expensive to implement in low-cost products. In general, the Zigbee protocols minimize the time the radio is on, so as to reduce power use. In beaconing networks, nodes only need to be active while a beacon is being transmitted. In non-beacon-enabled networks, power consumption is decidedly asymmetrical: some devices are always active while others spend most of their time sleeping. Except for the Smart Energy Profile 2.0, Zigbee devices are required to conform to the IEEE 802.15.4-2003 Low-Rate Wireless Personal Area Network (LR-WPAN) standard. The standard specifies the lower protocol layers—the physical layer (PHY) and the media access control portion of the data link layer. The basic channel access mode is carrier-sense multiple access with collision avoidance (CSMA/CA). That is, the nodes communicate in a way somewhat analogous to how humans converse: a node briefly checks to see that other nodes are not talking before it starts. CSMA/CA is not used in three notable cases: beacons are sent on a fixed timing schedule, message acknowledgments do not use it, and devices in beacon-enabled networks may use guaranteed time slots, which by definition do not contend for the channel. The main functions of the network layer are to ensure the correct use of the MAC sublayer and to provide a suitable interface for use by the next upper layer, namely the application layer. The network layer deals with network functions such as connecting, disconnecting, and setting up networks. It can establish a network, allocate addresses, and add and remove devices. This layer makes use of star, mesh, and tree topologies. The data entity of the transport layer creates and manages protocol data units at the direction of the application layer and performs routing according to the current topology. The control entity handles the configuration of new devices and establishes new networks. It can determine whether a neighboring device belongs to the network, and it discovers new neighbors and routers. The routing protocol used by the network layer is AODV.[30] To find a destination device, AODV broadcasts a route request to all of its neighbors. The neighbors then broadcast the request to their neighbors and onward until the destination is reached. Once the destination is reached, a route reply is sent via unicast transmission following the lowest-cost path back to the source. Once the source receives the reply, it updates its routing table with the destination address of the next hop in the path and the associated path cost. The application layer is the highest-level layer defined by the specification and is the effective interface of the Zigbee system to its end users. It comprises the majority of components added by the Zigbee specification: both the ZDO (Zigbee device object) and its management procedures, together with application objects defined by the manufacturer, are considered part of this layer. This layer binds tables, sends messages between bound devices, manages group addresses, reassembles packets, and transports data. It is responsible for providing service to Zigbee device profiles. The ZDO (Zigbee device object), a protocol in the Zigbee protocol stack, is responsible for overall device management, security keys, and policies. It is responsible for defining the role of a device as either coordinator or end device, as mentioned above, but also for the discovery of new devices on the network and the identification of their offered services.
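The beacon-interval ranges quoted at the start of this passage follow from the IEEE 802.15.4 superframe structure: a base superframe duration is doubled by a beacon order of 0 to 14. At 250 kbit/s the base superframe lasts 15.36 ms, which reproduces the 15.36 ms to 251.65824 s range:

```python
BASE_SUPERFRAME_S = 0.01536  # base superframe duration at 250 kbit/s

def beacon_interval_s(beacon_order: int) -> float:
    """Beacon interval = base superframe duration * 2**BO, for BO in 0..14."""
    if not 0 <= beacon_order <= 14:
        raise ValueError("beacon order must be 0-14")
    return BASE_SUPERFRAME_S * 2 ** beacon_order

print(beacon_interval_s(0))   # 0.01536 s
print(beacon_interval_s(14))  # 251.65824 s
```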
The ZDO may then go on to establish secure links with external devices and reply to binding requests accordingly. The application support sublayer (APS) is the other main standard component of the stack, and as such it offers a well-defined interface and control services. It works as a bridge between the network layer and the other elements of the application layer: it keeps up-to-date binding tables in the form of a database, which can be used to find appropriate devices depending on the services that are needed and those the different devices offer. As the union between both specified layers, it also routes messages across the layers of the protocol stack. An application may consist of communicating objects which cooperate to carry out the desired tasks. Tasks will typically be largely local to each device, such as the control of each household appliance. The focus of Zigbee is to distribute work among many different devices which reside within individual Zigbee nodes, which in turn form a network. The objects that form the network communicate using the facilities provided by APS, supervised by ZDO interfaces. Within a single device, up to 240 application objects can exist, numbered in the range 1–240. 0 is reserved for the ZDO data interface and 255 for broadcast; the 241–254 range is not currently in use but may be in the future. Two services are available for application objects to use (in Zigbee 1.0): the key–value pair service (KVP) and the generic message service. Addressing is also part of the application layer. A network node consists of an IEEE 802.15.4-conformant radio transceiver and one or more device descriptions (collections of attributes that can be polled or set, or can be monitored through events). The transceiver is the basis for addressing, and devices within a node are specified by an endpoint identifier in the range 1 to 240. For applications to communicate, the devices that support them must use a common application protocol (types of messages, formats, and so on); these sets of conventions are grouped in profiles. Furthermore, binding is decided upon by matching input and output cluster identifiers, unique within the context of a given profile and associated with an incoming or outgoing data flow in a device. Binding tables contain source and destination pairs. Depending on the available information, device discovery may follow different methods. When the network address is known, the IEEE address can be requested using unicast communication. When it is not, petitions are broadcast. End devices will simply respond with the requested address, while a network coordinator or a router will also send the addresses of all the devices associated with it. This extended discovery protocol permits external devices to find out about devices in a network and the services that they offer, which endpoints can report when queried by the discovering device (which has previously obtained their addresses). Matching services can also be used. The use of cluster identifiers enforces the binding of complementary entities using the binding tables, which are maintained by Zigbee coordinators, as the table must always be available within a network and coordinators are most likely to have a permanent power supply. Backups, managed by higher-level layers, may be needed by some applications. Binding requires an established communication link; after it exists, whether to add a new node to the network is decided according to the application and security policies.
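The binding tables described above can be modeled as a small data structure: each entry pairs a source (address, endpoint, cluster) with one or more destination endpoints, so the stack can route, say, a switch's on/off cluster output to the right lamp. The field names here are illustrative, not the specification's exact structures.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Endpoint:
    node_addr: int  # 16-bit network address
    endpoint: int   # endpoint identifier, 1-240 as described above

@dataclass
class BindingTable:
    entries: dict = field(default_factory=dict)

    def bind(self, src: Endpoint, cluster_id: int, dst: Endpoint) -> None:
        """Record that src's output for cluster_id should reach dst."""
        self.entries.setdefault((src, cluster_id), set()).add(dst)

    def destinations(self, src: Endpoint, cluster_id: int) -> set:
        return self.entries.get((src, cluster_id), set())

table = BindingTable()
switch, lamp = Endpoint(0x1234, 1), Endpoint(0x5678, 3)
table.bind(switch, cluster_id=0x0006, dst=lamp)  # 0x0006: the on/off cluster
print(table.destinations(switch, 0x0006))
```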
Communication can happen right after the association. Direct addressing uses both the radio address and the endpoint identifier, whereas indirect addressing uses every relevant field (address, endpoint, cluster, and attribute) and requires that they be sent to the network coordinator, which maintains associations and translates requests for communication. Indirect addressing is particularly useful to keep some devices very simple and minimize their need for storage. Besides these two methods, broadcast to all endpoints in a device is available, and group addressing is used to communicate with groups of endpoints belonging to a specified set of devices. As one of its defining features, Zigbee provides facilities for carrying out secure communications, protecting the establishment and transport of cryptographic keys and encrypting data. It builds on the basic security framework defined in IEEE 802.15.4. The basic mechanism to ensure confidentiality is the adequate protection of all keying material. Keys are the cornerstone of the security architecture; as such, their protection is of paramount importance, and keys are never supposed to be transported through an insecure channel. A momentary exception to this rule occurs during the initial phase of the addition to the network of a previously unconfigured device. Trust must be assumed in the initial installation of the keys, as well as in the processing of security information. The Zigbee network model must take particular care with security considerations, as ad hoc networks may be physically accessible to external devices. Also, the state of the working environment cannot be predicted. Within the protocol stack, different network layers are not cryptographically separated, so access policies are needed and a conventional, correct design is assumed. The open trust model within a device allows for key sharing, which notably decreases potential cost. Nevertheless, the layer which creates a frame is responsible for its security. As malicious devices may exist, every network layer payload must be ciphered, so unauthorized traffic can be immediately cut off. The exception, again, is the transmission of the network key, which confers a unified security layer on the network, to a new connecting device. The Zigbee security architecture is based on CCM*, which adds encryption-only and integrity-only features to CCM mode.[31] Zigbee uses 128-bit keys to implement its security mechanisms. A key can be associated either with a network, being usable by both Zigbee layers and the MAC sublayer, or with a link, acquired through pre-installation, agreement, or transport. Establishment of link keys is based on a master key which controls link-key correspondence. Ultimately, at least the initial master key must be obtained through a secure medium (transport or pre-installation), as the security of the whole network depends on it. Link and master keys are only visible to the application layer. Different services use different one-way variations of the link key to avoid leaks and security risks. Key distribution is one of the most important security functions of the network. A secure network will designate one special device, the trust center, which other devices trust for the distribution of security keys. Ideally, devices will have the trust center address and initial master key preloaded; if a momentary vulnerability is allowed, it will be sent as described above.
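A minimal sketch of the 128-bit AES-CCM encryption underlying Zigbee's CCM* mode, using the pyca/cryptography package: Zigbee builds its 13-byte nonce from the sender's address, a frame counter, and a security-control byte, so the random nonce and sample frame bytes below are stand-ins for that construction.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

network_key = AESCCM.generate_key(bit_length=128)  # 128-bit key, as in Zigbee
aesccm = AESCCM(network_key, tag_length=4)         # short MIC, as CCM* permits

nonce = os.urandom(13)  # stand-in for address || frame counter || control byte
header = b"\x48\x02"    # illustrative frame header: authenticated, sent in clear
payload = b"toggle lamp endpoint 3"

ciphertext = aesccm.encrypt(nonce, payload, header)
assert aesccm.decrypt(nonce, ciphertext, header) == payload
```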
Typical applications without special security needs will use a network key provided by the trust center (through the initially insecure channel) to communicate. Thus, the trust center maintains the network key and provides point-to-point security. Devices will only accept communications originating from a key supplied by the trust center, except for the initial master key. The security architecture is distributed among the network layers. According to the German computer e-magazine Heise Online, Zigbee Home Automation 1.2 uses fallback keys for encryption negotiation which are known and cannot be changed. This makes the encryption highly vulnerable.[32][33] The Zigbee 3.0 standard features improved security and mitigates the aforementioned weakness by giving device manufacturers the option of using a custom installation key that is shipped together with the device, thereby preventing the network traffic from ever using the fallback key at all. This ensures that all network traffic is securely encrypted even while pairing the device. In addition, all Zigbee devices need to randomize their network key, no matter which pairing method they use, thereby improving security for older devices. The Zigbee coordinator within the Zigbee network can be set to deny access to devices that do not employ this key randomization, further increasing security. In addition, the Zigbee 3.0 protocol features countermeasures against removing already-paired devices from the network with the intention of listening to the key exchange when re-pairing. Network simulators, like ns-2, OMNeT++, OPNET, and NetSim, can be used to simulate IEEE 802.15.4 Zigbee networks. These simulators come with open-source C or C++ libraries for users to modify. This way users can determine the validity of new algorithms before hardware implementation.
https://en.wikipedia.org/wiki/ZigBee
HarmonyOS (HMOS) (Chinese: 鸿蒙; pinyin: Hóngméng; trans. "Vast Mist") is a distributed operating system developed by Huawei for smartphones, tablets, smart TVs, smart watches, personal computers and other smart devices. It has a microkernel design with a single framework: the operating system selects suitable kernels from the abstraction layer in the case of devices that use diverse resources.[5][6][7] HarmonyOS was officially launched by Huawei, and first used in Honor smart TVs, in August 2019.[8][9] It was later used in Huawei wireless routers and IoT devices in 2020, followed by smartphones, tablets and smartwatches from June 2021.[10] From 2019 to 2024, versions 1 to 4 of the operating system were based on code from the Android Open Source Project (AOSP) and the Linux kernel; many Android apps could be sideloaded on HarmonyOS.[11] The next iteration of HarmonyOS became known as HarmonyOS NEXT. HarmonyOS NEXT was announced on August 4, 2023, and officially launched on October 22, 2024.[12] It replaced the OpenHarmony multi-kernel system with Huawei's own HarmonyOS microkernel at its core and removed all Android code. Since version 5, HarmonyOS only supports apps in its native "App" format.[13][14] In May 2025, the first notebook with the HarmonyOS operating system was launched by Huawei, featuring "HarmonyOS PC", i.e. HarmonyOS NEXT 5 for the personal computer form factor.[15] HarmonyOS is designed with a layered architecture, which consists of four layers; the kernel layer at the bottom provides the upper three layers, i.e. the system service layer, the framework layer and the application layer, with basic kernel capabilities such as process and thread management, memory management, the file system, network management, and peripheral management.[16] The kernel layer incorporates a subsystem that accommodates the HarmonyOS kernel, based on a microkernel, as a Rich Execution Environment (REE), catering to diverse smart devices. Depending on the device type, different kernels can be selected; for instance, lightweight systems (as in the OpenHarmony base, with a single kernel) are chosen for low-power devices like watches and IoT devices to execute lightweight HarmonyOS apps, whereas large-memory devices like mobile phones, tablets, and PCs use the standard system.
The dual-app framework was replaced with a single-app framework in HarmonyOS NEXT, which supports only native HarmonyOS apps in the APP format.[17] The system includes a communication base called DSoftBus for integrating physically separate devices into a virtual "Super Device", allowing one device to control others and sharing data among devices with distributed communication capabilities.[18][19][20] To address security concerns arising from varying devices, the system provides a hardware-based Trusted Execution Environment (TEE) microkernel to prevent leakage of sensitive personal data when it is stored or processed.[21] It supports several forms of apps, including native apps that can be installed from AppGallery, installation-free Quick apps, and lightweight Meta Services accessible by users on various devices.[22][23][24][25] When it launched the operating system, Huawei stated that HarmonyOS planned to become a microkernel-based, distributed OS that was completely different from Android and iOS in terms of its target market towards the Internet of things.[26] A Huawei spokesperson subsequently stated that HarmonyOS supported multiple kernels and used a Linux kernel if a device had a large amount of RAM, and that the company had taken advantage of a large number of third-party open-source resources, including the Linux kernel with POSIX APIs on the OpenHarmony base, as a foundation to accelerate the development of its unified system stack as a future-proof, microkernel-based, distributed OS running on multiple devices.[27][28][29] At its launch as an operating system for smartphones in 2021, HarmonyOS was, however, reported by Ars Technica to be a "rebranded version of Android and EMUI" with nearly "identical code bases".[30] Following the release of the HarmonyOS 2.0 beta, Ars Technica and XDA Developers suggested that the smartphone version of the OS had been forked from Android 10. Ars Technica alleged that it resembled the existing EMUI software used on Huawei devices, but with all references to "Android" replaced by "HarmonyOS". It was also noted that the DevEco Studio software, based on JetBrains' open-source IntelliJ IDEA IDE, "shared components and tool chains" with Android Studio. When testing the new MatePad Pro in June 2021, Android Authority and The Verge similarly observed similarities in "behavior", including that it was possible to install apps from Android APK files on the HarmonyOS-based tablet and to run the Android 10 easter-egg APK app, reaffirming the earlier reports.[27][29] Reports surrounding an in-house operating system being developed by Huawei date back as far as 2012 in R&D stages, with the HarmonyOS NEXT system stack going back as early as 2015.[31][32] These reports intensified during the Sino-American trade war, after the United States Department of Commerce added Huawei to its Entity List in May 2019 under an indictment that it knowingly exported goods, technology and services of U.S. origin to Iran in violation of sanctions. This prohibited U.S.-based companies from doing business with Huawei without first obtaining a license from the government.[33][34][35][36][37] Huawei executive Yu Chengdong described an in-house platform as a "plan B" in case the company was prevented from using Android on future smartphone products due to the sanctions.[38][39][40] Prior to its unveiling, it was originally speculated to be a mobile operating system that could replace Android on future Huawei devices.
In June 2019, a Huawei executive told Reuters that the OS was under testing in China and could be ready "in months", but by July 2019, some Huawei executives described the OS as an embedded operating system designed for IoT hardware, discarding the previous statements that it would be a mobile operating system.[41] Some media outlets reported that this OS, referred to as "Hongmeng", could be released in China in either August or September 2019, with a worldwide release in the second quarter of 2020.[42][43] On 24 May 2019, Huawei registered "Hongmeng" as a trademark in China.[44] The name "Hongmeng" (Chinese: 鸿蒙; lit. 'Vast Mist') comes from Chinese mythology, where it symbolizes primordial chaos, or the world before creation.[45] The same day, Huawei registered trademarks surrounding "Ark OS" and variants with the European Union Intellectual Property Office.[46] In July 2019, it was reported that Huawei had also registered trademarks surrounding the word "Harmony" for desktop and mobile operating system software, indicating either a different name or a component of the OS.[47] Early versions of HarmonyOS, starting from version 1.0, employed a "kernel abstraction layer" (KAL) subsystem to support a multi-kernel architecture.[48] This allowed developers to choose different operating system kernels based on the resources available on each device. For low-powered devices such as wearables and Huawei's GT smartwatches, HarmonyOS utilized the LiteOS kernel instead of Linux. It also integrated the LiteOS SDK for TV applications and ensured compatibility with Android apps through the Ark Compiler and a dual-framework approach.[49] HarmonyOS 1.0's original L0-L2 source code branch was contributed to the OpenAtom Foundation to accelerate system development.[50] HarmonyOS 2.0 introduced a modified version of OpenHarmony's L3-L5 source code, expanding its compatibility across smartphones and tablets. Underneath the kernel abstraction layer (KAL) subsystem, HarmonyOS used the Linux kernel and the AOSP codebase. This setup enabled Android APK files and App Bundles (AAB) to run natively, similar to older Huawei EMUI-based devices, without needing root access.[51][52] Additionally, HarmonyOS supported native apps packaged for Huawei Mobile Services through the Ark Compiler, leveraging the OpenHarmony framework within its dual-framework structure at the system service layer. This configuration allowed the operating system to run apps developed with restricted HarmonyOS APIs.[53] This arrangement lasted until the release of HarmonyOS 5.0.0, known as HarmonyOS NEXT, which uses the HarmonyOS microkernel within a single framework, replacing the dual-framework approach and the AOSP codebase on Huawei's HarmonyOS devices.[14][54] On 9 August 2019, three months after the Entity List ban, Huawei publicly unveiled HarmonyOS, which Huawei said it had been working on since 2012, at its inaugural developers' conference in Dongguan. Huawei described HarmonyOS as a free, microkernel-based distributed operating system for various types of hardware. The company focused primarily on IoT devices, including smart TVs, wearable devices, and in-car entertainment systems, and did not explicitly position HarmonyOS as a mobile OS.[55][56][57] HarmonyOS 2.0 launched at the Huawei Developer Conference on 10 September 2020. Huawei announced it intended to ship the operating system on its smartphones in 2021.[58] The first developer beta of HarmonyOS 2.0 was launched on 16 December 2020.
Huawei also released the DevEco Studio IDE, which is based on IntelliJ IDEA, and a cloud emulator for developers in early access.[59][60] Huawei officially released HarmonyOS 2.0 and launched new devices shipping with the OS in June 2021, and gradually started rolling out system upgrades to users of Huawei's older phones.[61][62][29] On July 27, 2022, Huawei launched HarmonyOS 3, providing an improved experience across multiple devices such as smartphones, tablets, printers, cars and TVs. It also launched Petal Chuxing, a ride-hailing app running on the new version of the operating system.[63][64][65][66] On 29 June 2023, Huawei launched the first developer beta of HarmonyOS 4.[67] On 4 August 2023, Huawei officially announced and released HarmonyOS 4 as a public beta.[68] On 9 August, it rolled the operating system out to 34 different existing Huawei smartphone and tablet devices, albeit as a public beta build.[69] Alongside HarmonyOS 4, Huawei also announced the launch of HarmonyOS NEXT, a "pure" HarmonyOS version without Android libraries and therefore, after the software convergence, incompatible with Android apps.[70] On 18 January 2024, Huawei announced the commercialisation of HarmonyOS NEXT, with a stable "Galaxy" version rollout to begin in Q4 2024 based on OpenHarmony 5.0 (API 12). This followed a Q2 developer beta based on OpenHarmony 4.1 (API 11) and the public developer release of HarmonyOS NEXT Developer Preview 1, which had been in the hands of closed cooperative developer partners since its debut in August 2023. The upcoming HarmonyOS 5 replaced the multi-kernel, dual-framework system with a unified system stack and a unified app ecosystem for commercial Huawei consumer devices.[71][72] On March 11, 2024, Huawei announced early recruitment for a new test-experience version of the HarmonyOS 4 firmware update, which includes performance improvements and a cleaner, better user experience. HarmonyOS version 4.0.0.200 (C00E200R2P7) of the firmware was gradually rolled out on March 12, 2024.[73][74] On April 11, 2024, it was reported that Huawei had opened registration and rolled out a public beta of HarmonyOS 4.2 for 24 devices.
On the same day, the company announced that the incoming HarmonyOS 5.0 "Galaxy Edition", under the HarmonyOS NEXT system, would first be released as an open beta program for developers and users at its annual Huawei Developer Conference in June 2024, before a Q4 commercial consumer release with the upcoming Mate 70 flagship, among other ecosystem devices.[75][76] On April 18, 2024, the Huawei Pura 70 flagship series lineup received the HarmonyOS 4.2.0.137 update, after release.[77] On April 17, 2024, Huawei's chairman Eric Xu revealed to Chinese and international press at Huawei's Analyst Summit 2024 (HAS 2024) plans to push the native HarmonyOS NEXT system, as the next generation of HarmonyOS, in global markets as the company's focus; this was reported in various international outlets on April 22, 2024.[78][79] On May 17, 2024, during the HarmonyOS Developer Day (HDD) event, Huawei announced that the HarmonyOS upgrade with the new HarmonyOS NEXT base would enter commercial use by September, with over 800 million devices and 4,000 apps in use, against a target of 5,000 apps at launch.[80][81] On June 21, 2024, during the Huawei Developer Conference (HDC) keynote, Huawei announced the HarmonyOS NEXT Developer Beta for registered developers and 3,000 pioneer users on a limited set of models, such as the Huawei Mate 60 series, Huawei Mate X5 series and the Huawei MatePad Pro 13.2 tablet. The consumer beta version was expected to be released in August 2024, with the stable build to be made available in Q4 2024.[82] During the conference, Huawei formally announced the in-house Cangjie programming language for the new native system, alongside releasing the Developer Preview Beta recruitment program.[83] On October 22, 2024, at the Huawei HarmonyOS NEXT event, the "pure-blood" HarmonyOS NEXT 5 brand was officially revealed as transitioning to HarmonyOS 5, incorporated as version HarmonyOS 5.0.0, for public beta with expansions through 2025, ahead of flagship devices shipping with stable builds from the factory in November.[84] The HarmonyOS interface is overhauled with the native HarmonyOS Design system, following a "harmonious aesthetics" philosophy[85] credited to Wang Zhiyan, Chief UX Designer at Huawei Consumer BG, for the native launcher system; it emphasizes 'vivid' system colours and a reflective 'spatial' visual of light, blur and glow, with glassmorphism and neumorphism soft UI that sits between skeuomorphism and flat design. In addition to standard folders that require tapping on them to display their contents, folders can be enlarged to always show their contents, without text labels, directly on the home screen.[86] Apps can support "snippets", which expose a portion of the app's functionality (such as a media player's controls, or a weather forecast) via an iOS-style pop-up window shown by swiping left after holding the app icon in the context menu, and which can be pinned to the home screen as a widget. Apps and services can provide cards; as of HarmonyOS 3.0, cards can also be displayed as widgets with different sizes and shapes to adapt to the home screen layout, and can also be stacked.[87][88] The user interface font of HarmonyOS on the HarmonyOS NEXT base is HarmonyOS Sans. It is designed to be easy to read, unique, and universal. The system font was used throughout the operating system alongside the previous Android-based EMUI 12 and up, including in third-party HarmonyOS and former Android apps.[89] Unlike Meta Services, which are installation-free, traditional apps need installation.
They are available to users through Huawei AppGallery, which serves as the application store for HarmonyOS with HarmonyOS-native apps.[90][91] HarmonyOS-native apps have access to capabilities such as distributed communications and cards.[92][93] Similar to applets, Quick apps were single-page apps written using JavaScript and CSS, with a code volume about one fifth that of a traditional app.[94][95] They are developed based on the industry standards formulated by the Quick App Alliance, comprising mainstream mobile phone manufacturers in China.[96][97] Quick apps are available to users through the AppGallery, Quick App Center, Huawei Assistant, etc., on supported devices. They are installation-free, updated automatically, and their shortcuts can be added by users to the home screen for ease of access.[96][98] Managed and distributed by Huawei Ability Gallery, Meta Services (formerly Atomic Services) are lightweight and consist of one or more HarmonyOS Ability Packages (HAPs) that implement specific convenient services, providing users with dynamic content and functionality.[99] They are accessible via the Service Center on devices, and are presented as cards that can be added to a favorites list or pinned to the home screen. Meta Services are installation-free, since the accompanying code is downloaded in the background.[100][99][101] They can also be synchronized across multiple devices, such as updating the driver's location on a watch in real time after the user hails a taxi on a mobile phone.[102] Note: Meta Services (a component of HarmonyOS) should not be confused with products and services from Meta Platforms (the parent company of Facebook). The Service Collaboration Kit (SCK) provides users with cross-device interaction, allowing them to use the camera, scanning, and gallery functions of other devices. For example, tablets or 2-in-1 laptops can utilize these features from a connected smartphone. To utilize these features, both devices running HarmonyOS NEXT must be logged into the same Huawei account and have WLAN and Bluetooth enabled.[103] Harmony Intelligence allows users to deploy AI-based applications on HarmonyOS, using the PanGu 5.0 LLM and its embedded variants, alongside new Celia capabilities, the HiAI Foundation Kit, the MindSpore Lite Kit, the Neural Network Runtime Kit, and Computer Vision. These features improve performance, reduce power consumption, and enable efficient AI processing on devices with Kirin chips.[104][105][106][107][108] HarmonyOS supports cross-platform interactions between supported devices via the "Super Device" interface; devices are paired via a "radar" screen by dragging icons to the centre of the screen.[109][110][111][112] Examples of Super Device features include allowing users to play back media saved on a smartphone through a paired PC, smart TV or speakers; share PC screen recordings back to a smartphone; run multiple phone apps in a PC window; share files between a paired smartphone and PC; and share application states between the paired devices.[113][114][115] Incorporated into HarmonyOS 4, NearLink (previously known as SparkLink) is a set of standards that combines the strengths of traditional wireless technologies like Bluetooth and Wi-Fi, while emphasizing improved performance in areas like response time, energy efficiency, signal range, and security. It consists of two access modes: SparkLink Low Energy (SLE) and SparkLink Basic (SLB).
SLE is designed for low-power, low-latency, high-reliability applications, with a data transmission rate reportedly up to 6 times that of Bluetooth; SLB is tailored for high-speed, high-capacity, high-precision applications, with a data transmission rate reportedly around 2 times that of Wi-Fi.[116][117][118][119]

The HarmonyOS platform was not designed for a single device from the beginning, but was developed as a distributed operating system for various devices with memory sizes ranging from 128 KB to over 4 GB. The hardware requirements are therefore flexible, and the operating system may need as little as 128 KB of memory on small smart terminal devices.[120][121]

Huawei stated that HarmonyOS would initially be used on devices targeting the Chinese market. The company's former subsidiary brand, Honor, unveiled the Honor Vision line of smart TVs as the first consumer electronics devices to run HarmonyOS in August 2019.[122][57] The HarmonyOS 2.0 beta launched on 16 December 2020 and supported the P30 series, P40 series, Mate 30 series, Mate 40 series, P50 series, and the MatePad Pro.[123] Stable HarmonyOS 2.0 was released for smartphones and tablets as updates for the P40 and Mate X2 in June 2021. New Huawei Watch, MatePad Pro and PixLab X1 desktop printer models shipping with HarmonyOS were also unveiled at the time.[62][29][124] In October 2021, HarmonyOS 2.0 had over 150 million users.[125][126]

The primary IDE for developing HarmonyOS apps, DevEco Studio, was released by Huawei on September 9, 2020, based on IntelliJ IDEA and Huawei's SmartAssist.[127] The IDE includes DevEco Device Tool,[128] an integrated development tool for customizing HarmonyOS components, coding, compiling and visual debugging, similar to third-party IDEs such as Visual Studio Code for Windows, Linux and macOS.[129] Applications for HarmonyOS are mostly built using components of ArkUI, a declarative user interface framework. ArkUI elements are adaptable to various devices and include new interface rules that are updated automatically along with HarmonyOS updates.[130]

HarmonyOS uses App Pack files suffixed with .app, also known as APP files, for distribution of software via AppGallery. Each App Pack has one or more HarmonyOS Ability Packages (HAPs) containing code for their abilities, resources, libraries, and a JSON file with configuration information.[131] HarmonyOS, as a universal single IoT platform, allows developers to write apps once and run them everywhere across devices such as phones, tablets, personal computers, TVs, cars, smartwatches, single-board computers under OpenHarmony, and screen-less IoT devices such as smart speakers.[132] As of October 2024, reportedly over 6.75 million registered developers were participating in developing HarmonyOS apps.[133]

On May 18, 2021, at a summit in Shanghai, Huawei revealed a plan to upgrade its HarmonyOS Connect brand with a standard badge to help industrial partners produce, sell and operate products with third-party OEMs as part of the HarmonyOS system, its framework and the Huawei Smart Life (formerly Huawei AI Life) app. Allowing fast and low-cost connections, smart devices of different brands powered by HarmonyOS, such as speakers, fridges and cookers, can be connected and merged into a super device with a single touch of a smartphone, without the need to install apps.
HiLink protocols also provide mesh and wireless router connectivity, alongside other platform-agnostic smart devices that connect to HarmonyOS devices.[134] HarmonyOS Connect sets the platform apart from traditional mobile and computing platforms, and from the company's previous ecosystem attempts with its Android-based EMUI and LiteOS connectivity.[135]

On April 27, 2021, Huawei launched a smart cockpit solution powered by HarmonyOS for electric and autonomous cars, built on its Kirin line of system-on-chip (SoC) solutions. Huawei opened up APIs to help automobile OEMs, suppliers and ecosystem partners develop features that meet user requirements. Huawei designed a modular SoC for cars that is pluggable and easy to upgrade, to maintain the peak performance of the cockpit. With its scalable distributed OS, users would be able to upgrade the chipset much as one can upgrade an assembled desktop computer.[136]

On December 21, 2021, Huawei launched a new smart console brand, HarmonySpace, a specialized HarmonyOS vehicle operating system. Based on Huawei's 1+8 ecology, apps on smartphones and tablets can be connected to the car seamlessly with HarmonySpace, which also provides smartphone projection capability.[137][138] On December 23, 2021, Huawei announced a new smart-select car product, the AITO M5, a medium-size SUV with the HarmonyOS ecosystem, featuring continuous AI learning optimization and over-the-air upgrades.[139] On July 4, 2022, Huawei officially launched the AITO smart-select car product, to be shipped to customers in August 2022. During the launch, the company received 10,000 pre-orders in 2 hours for its M7 model.[140]

Huawei MagLink, built on the interconnected cockpit solution, brings mobile phone applications fully into the car, eliminating the need for drivers to use mobile phone navigation or to install mobile phone holders. Through the seamless HarmonyOS system, the solution enables more built-in, accessible entertainment and information services; the integration of software and hardware technologies installed in the car achieves "mobile whole-house intelligence."[141]

On 14 September 2021, Huawei announced the launch of MineHarmony OS, a customized operating system for industrial use, based on Huawei's in-house HarmonyOS via OpenHarmony. MineHarmony is compatible with about 400 types of underground coal mining equipment, providing the equipment with a single interface to transmit and collect data for analysis. Wang Chenglu, President of Huawei's consumer business AI and smart full-scenario business department, indicated that the launch of MineHarmony OS signified that the HarmonyOS ecology had taken a step further from B2C to B2B.[142][143][144]

On December 23, 2021, Yu Chengdong, CEO of Huawei Consumer Business Group, claimed that HarmonyOS had reached 300 million smartphones and other smart devices, including 200 million devices in the ecosystem and 100 million third-party consumer products from industry partners.[145] Market research conducted in China by Strategy Analytics showed that HarmonyOS was the third largest smartphone platform after Apple iOS and Google Android, reaching a record high of 4% market share in China during the first quarter of 2022, up from zero just a year earlier. This increase in market share took place after the operating system was launched for smartphone devices in June 2021.
The research claimed that in the first quarter of 2022 the platform outgrew its rivals Android and Apple iOS from a low install base of about 150 million smart devices overall, particularly due to strong support in China and the HarmonyOS software upgrades that Huawei made available for its older handset models and its former sub-brands such as Honor.[146][147] On August 8, 2022, after the soft launch of HarmonyOS 3, Sina Finance, part of Sina Corporation, and Huawei Central reported that the number of Huawei HarmonyOS Connect devices had exceeded 470 million units. By summer 2022, 14 OpenHarmony distributions had been launched.[148][149]

In the third quarter of 2023, HarmonyOS captured a 3% share of the global smartphone market and 13% within China, despite Huawei's limitation to LTE at the time.[150] At the launch of HarmonyOS 4 in August 2023, it was noted that the operating system had been integrated into over 700 million devices. By January 18, 2024, during Huawei's HarmonyOS Ecology Conference in China, this number had risen to over 800 million devices, as reported by Huawei.[151][152]

In the first quarter of 2024, HarmonyOS reached a 4% market share globally and captured 17% of the Chinese market, surpassing iOS to become the second largest mobile platform domestically, as reported by Counterpoint Research on May 25, 2024.[153][154] During the HDC 2024 keynote on June 21, 2024, it was announced that HarmonyOS had reached 900 million active devices.[155] On October 22, 2024, Huawei announced at its HarmonyOS NEXT 5 event that the HarmonyOS platform had 1 billion active users.[156]

In terms of architecture, HarmonyOS has a close relationship with OpenEuler, the community edition of EulerOS, as the two have implemented the sharing of kernel technology, as revealed by Deng Taihua, President of Huawei's Computing Product Line.[157] The sharing is reportedly to be strengthened in the future in the areas of the distributed software bus, system security, the app framework, the device driver framework and a new programming language.[158]

OpenHarmony is an open-source version of HarmonyOS donated by Huawei to the OpenAtom Foundation, built around a LiteOS kernel descended from the original LiteOS operating system.
It supports devices running a mini system, such as printers, speakers, smartwatches and any other smart device with as little as 128 KB of memory, or running a standard system with more than 128 MB of memory.[159] The open-source operating system contains the basic capabilities of HarmonyOS and does not depend on the Android Open Source Project (AOSP) code.[160]

On August 4, 2023, at the Huawei Developers Conference 2023 (HDC), Huawei officially announced HarmonyOS NEXT, the next iteration of HarmonyOS, supporting only native APP apps via the Ark Compiler with Huawei Mobile Services (HMS), and ending support for Android APK apps.[161] Built on a custom version of OpenHarmony, the proprietary HarmonyOS NEXT system has the HarmonyOS microkernel at its core with a single framework, departing from the common Linux kernel, and is aimed at replacing the current multi-kernel HarmonyOS.[14] Among the first batch of over 200 developers, McDonald's and KFC in China became two of the first multinational food companies to adopt HarmonyOS NEXT.[162][163]

In May 2019, Huawei applied for registration of the trademark "Hongmeng" through the Chinese patent office CNIPA, but the application was rejected pursuant to Article 30 of the PRC Trade Mark Law, on the grounds that the trademark was similar to "CRM Hongmeng" in graphic design and to "Hongmeng" as a Chinese word.[164] Less than a week before Huawei launched HarmonyOS 2.0 and new devices, the Beijing Intellectual Property Court announced a first-instance judgement in May 2021 upholding the CNIPA decision, as the trademark was not sufficiently distinctive in terms of its designated services.[165][166] However, it was reported that the trademark had officially been transferred from Huizhou Qibei Technology to Huawei by the end of May 2021.[167] On October 22, 2024, it was reported that Huawei had applied for registration of more than 400 HarmonyOS-related trademarks in China.[168]
https://en.wikipedia.org/wiki/HarmonyOS
The Huawei Mate 60 (stylized as HUAWEI Mate60) is a series of high-end 2023 smartphones produced by the Chinese Huawei corporation as part of its Huawei Mate series.[3] It has a Kirin 9000s SoC chipset designed by HiSilicon and produced by the SMIC foundry.[4] The device supports satellite network communications and 5G.[5] The Huawei Mate 60 is the first Huawei smartphone to feature a 7 nm SoC designed and manufactured in mainland China, despite the imposition of US sanctions on the company.[6][7]

The HiSilicon Kirin 9000S CPU is a SoC believed to consist of four high-performance cores (one at up to 2.62 GHz and three at up to 2.15 GHz) based on HiSilicon's custom TaiShan microarchitecture, and four energy-efficient cores (up to 1.53 GHz) based on ARM's Cortex-A510.[8] The smartphone also uses the Maleoon 910 graphics processing unit, operating at up to 750 MHz.[8]

According to third-party testing, after a SIM card is inserted the phone's network indicator does not show a 5G connection, and Huawei does not mention 5G support in the specifications; actual network speed tests, however, show 5G-level performance.[9] Reports also suggest that it has the ability to support 5G.[5][10][11] Huawei focuses more on promoting its capabilities as a satellite communication terminal.[citation needed] The Mate 60 series supports satellite call functions through the Tiantong system,[12][13][14][15] and short message sending and receiving through the Beidou system.[16][17]

The Mate 60 also supports NearLink, a short-range wireless communication technology that combines features of Bluetooth and Wi-Fi with enhanced performance, and can be used in the future for the Internet of Things and the Internet of Vehicles.[18][19] At the end of 2023, the Huawei Mate 60 Pro+ had the best smartphone camera in the world according to DxOMark.[20][21]

The Mate 60 was launched with the HarmonyOS 4 operating system. In the second quarter of 2025, an update from HarmonyOS 4 to HarmonyOS NEXT 5.0.1, which does not support Android apps, was to become available.[22]

The launch of the Huawei Mate 60 garnered significant attention, and was widely touted as a victory against US government sanctions intended to stop Chinese companies from producing or obtaining advanced chips.[23][24] Huawei's breakthrough raised concerns within the US government that technological restrictions alone were unable to prevent Huawei from obtaining advanced chips:[25] the U.S. Department of Commerce launched an investigation into the situation at the end of 2023.[26]

On 5 March 2024, a report by Counterpoint Research claimed that although overall Chinese smartphone sales were 7% lower in the first six weeks of 2024 compared with the same period in 2023, Apple's recently launched flagship iPhone 15 was selling exceptionally poorly, with Apple's overall smartphone unit sales falling 24% in the relevant period, because buyers were turning towards devices made by Huawei.[27] According to Counterpoint Research, Huawei saw unit sales rise by 64% in the period.[28]

Huawei had told customers that stores in Shenzhen would only have a limited number of phones to sell, which resulted in long lines outside every store.[29] On August 30, 2023, Huawei Mall launched the Mate 60 pre-order page.[30] On September 3 of that year, the Mate 60 Pro went fully on sale. At 18:08, online platforms such as Huawei Mall, Taobao, Tmall, and JD.com sold out of all available colors in just one minute after opening sales to the public.
There were also lines of people waiting to buy at Huawei stores across China.[31]On September 8, Huawei Mall launched the Mate 60 Pro+ pre-order page.[32]
https://en.wikipedia.org/wiki/Huawei_Mate_60
Internet Connection Sharing (ICS) is a Windows service that enables one Internet-connected computer to share its Internet connection with other computers on a local area network (LAN). The computer that shares its Internet connection serves as a gateway device, meaning that all traffic between other computers and the Internet goes through this computer. ICS provides Dynamic Host Configuration Protocol (DHCP) and network address translation (NAT) services for the LAN computers. ICS was a feature of Windows 98 SE and all versions of Windows released for personal computers thereafter.

ICS routes TCP/IP packets from a small LAN to the Internet. ICS provides NAT services, mapping individual IP addresses of local computers to unused port numbers on the sharing computer. Because of the nature of the NAT, IP addresses of the local computers are not visible on the Internet. All packets leaving or entering the LAN are sent from or to the IP address of the external adapter on the ICS host computer.

Typically, ICS is used when there are several network interface cards installed on the host computer. In this case, ICS makes the Internet connection on one network interface accessible to one other interface that is explicitly designated as the private network. ICS can also share dial-up (including PSTN, ISDN and ADSL connections), PPPoE and VPN connections.

Starting with Windows XP, ICS is integrated with UPnP, allowing remote discovery and control of the ICS host. It also has a Quality of Service Packet Scheduler component.[1] When an ICS client is on a relatively fast network and the ICS host is connected to the Internet through a slow link, Windows may incorrectly calculate the optimal TCP receive window size based on the speed of the link between the client and the ICS host, potentially affecting traffic from the sender adversely. The ICS QoS component sets the TCP receive window size to what it would be if the receiver were directly connected to the slow link. ICS also includes a local DNS resolver in Windows XP to provide name resolution for all network clients on the home network, including non-Windows-based network devices.

When connected to a Windows domain, the computer can have a Group Policy that restricts the use of ICS, but when at home, ICS can be enabled. The service is not customizable in terms of which addresses are used for the internal subnet, and contains no provisions for bandwidth limiting or other features.

ICS was initially designed to connect only to Windows computers; computers running other operating systems required different steps to use ICS.[2] On Windows XP, the server by default gets the IP address 192.168.0.1. (This default can be changed within the interface settings of the network adapter or in the Windows Registry.) It provides NAT services to the entire 192.168.0.x subnet, even if the address on the client was set manually rather than by the DHCP server. Since Windows 7, the 192.168.137.x subnet has been used by default.

Alternatives to ICS include hardware home routers and wireless access points with integrated Internet access hardware, such as broadband over power lines, WiMAX or DSL modems.
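The port-mapping behaviour described above can be illustrated with a short sketch. The following Python fragment is a minimal, illustrative model of the kind of NAT table an ICS host maintains; the class and method names are hypothetical, not part of any Windows API.

    # Minimal illustrative model of ICS-style NAT port mapping (hypothetical
    # names, not a Windows API): each (LAN address, port) pair is rewritten to
    # the host's single public address and a fresh, unused port.
    import itertools

    class NatTable:
        def __init__(self, public_ip, first_port=49152):
            self.public_ip = public_ip
            self._ports = itertools.count(first_port)  # pool of unused ports
            self._out = {}   # (lan_ip, lan_port) -> public_port
            self._back = {}  # public_port -> (lan_ip, lan_port)

        def outbound(self, lan_ip, lan_port):
            """Rewrite a LAN source address to the shared public address."""
            key = (lan_ip, lan_port)
            if key not in self._out:
                port = next(self._ports)
                self._out[key] = port
                self._back[port] = key
            return self.public_ip, self._out[key]

        def inbound(self, public_port):
            """Route a reply arriving on a public port back to the LAN host."""
            return self._back[public_port]

    nat = NatTable("203.0.113.7")
    print(nat.outbound("192.168.0.12", 51000))  # ('203.0.113.7', 49152)
    print(nat.inbound(49152))                   # ('192.168.0.12', 51000)

Because every LAN flow is translated to the host's single external address, only the ICS host is visible from the Internet, which is exactly why the local 192.168.0.x addresses never appear in outbound packets.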
https://en.wikipedia.org/wiki/Internet_Connection_Sharing
A mobile Internet device (MID) is a multimedia-capable mobile device providing wireless Internet access.[1][2][3] They are designed to provide entertainment, information and location-based services for personal or business use. They allow two-way communication and real-time sharing. They have been described as filling a niche between smartphones and tablet computers.[4] As all the features of MIDs became available on smartphones and tablets, the term is now mostly used to refer to both low-end and high-end tablets.[5]

Mobile Internet tablets from manufacturers such as Archos and Lenovo shared a very similar form factor. The class has included multiple operating systems: Windows CE, Windows 7 and Android. The Android tablets used an ARM Cortex CPU and a touchscreen.

Intel announced a prototype MID at the Intel Developer Forum in Spring 2007 in Beijing. A MID development kit by Sophia Systems using Intel Centrino Atom was announced in April 2008.[6] Intel MID platforms are based on an Intel processor and chipset which consume less power than most of the x86 derivatives. A few platforms have been announced, as listed below.

Intel's first generation MID platform (codenamed McCaslin) contains a 90 nm Intel A100/A110 processor (codenamed Stealey) which runs at 600–800 MHz. On 2 March 2008, Intel introduced the Intel Atom processor brand[7] for a new family of low-power processor platforms. The components have thin, small designs and work together to "enable the best mobile computing and Internet experience" on mobile and low-power devices.

Intel's second generation MID platform (codenamed Menlow) contains a 45 nm Intel Atom processor (codenamed Silverthorne) which can run at up to 2.0 GHz and a System Controller Hub (codenamed Poulsbo) which includes Intel HD Audio (codenamed Azalia). This platform was initially branded as Centrino Atom, but the practice was discontinued in Q3 2008.

Intel's third generation MID/smartphone platform (codenamed Moorestown) contains a 45 nm Intel Atom processor (codenamed Lincroft) and a separate 65 nm Platform Controller Hub (codenamed Langwell). Since the memory controller and graphics controller are now integrated into the processor, the northbridge has been removed and the processor communicates directly with the southbridge via the DMI bus interface.

Intel's fourth generation MID/smartphone platform (codenamed Medfield) contains their first complete Intel Atom SoC (codenamed Penwell), produced on 32 nm. Intel's Clover Trail+ MID/smartphone platform, based on its Clover Trail tablet platform, contains a 32 nm Intel Atom SoC (codenamed Cloverview). Intel's fifth generation MID/smartphone platform (codenamed Merrifield) contains a 22 nm Intel Atom SoC (codenamed Tangier). Intel's sixth generation MID/smartphone platform (codenamed Moorefield) contains a 22 nm Intel Atom SoC (codenamed Anniedale). Intel's seventh generation MID/smartphone platform (codenamed Morganfield) contains a 14 nm Intel Atom SoC (codenamed Broxton).

Intel announced a collaboration with Ubuntu to create a distribution for mobile internet devices, known as Ubuntu Mobile. Ubuntu's website said the new distribution "will provide a rich Internet experience for users of Intel's 2008 Mobile Internet Device (MID) platform."[11] Ubuntu Mobile ended active development in 2009.
https://en.wikipedia.org/wiki/Mobile_Internet_device
A modulator-demodulator, commonly referred to as a modem, is a computer hardware device that converts data from a digital format into a format suitable for an analog transmission medium such as telephone or radio. A modem transmits data by modulating one or more carrier wave signals to encode digital information, while the receiver demodulates the signal to recreate the original digital information. The goal is to produce a signal that can be transmitted easily and decoded reliably. Modems can be used with almost any means of transmitting analog signals, from LEDs to radio.

Early modems were devices that used audible sounds suitable for transmission over traditional telephone systems and leased lines. These generally operated at 110 or 300 bits per second (bit/s), and the connection between devices was normally manual, using an attached telephone handset. By the 1970s, higher speeds of 1,200 and 2,400 bit/s for asynchronous dial connections, 4,800 bit/s for synchronous leased-line connections and 35 kbit/s for synchronous conditioned leased lines were available. By the 1980s, less expensive 1,200 and 2,400 bit/s dial-up modems were being released, and modems working on radio and other systems were available. As device sophistication grew rapidly in the late 1990s, telephone-based modems quickly exhausted the available bandwidth, reaching 56 kbit/s.

The rise of public use of the internet during the late 1990s led to demands for much higher performance, leading to the move away from audio-based systems to entirely new encodings on cable television lines and short-range signals in subcarriers on telephone lines. The move to cellular telephones, especially in the late 1990s, and the emergence of smartphones in the 2000s led to the development of ever-faster radio-based systems. Today, modems are ubiquitous and largely invisible, included in almost every mobile computing device in one form or another, and generally capable of speeds on the order of tens or hundreds of megabits per second.

Modems are frequently classified by the maximum amount of data they can send in a given unit of time, usually expressed in bits per second (symbol bit/s, sometimes abbreviated "bps") or rarely in bytes per second (symbol B/s). Modern broadband modem speeds are typically expressed in megabits per second (Mbit/s). Historically, modems were often classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU-T V.21 standard used audio frequency-shift keying with two possible frequencies, corresponding to two distinct symbols (one bit per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU-T V.22 standard, which could transmit and receive four distinct symbols (two bits per symbol), transmitted 1,200 bits per second by sending 600 symbols per second (600 baud) using phase-shift keying.

Many modems are variable-rate, permitting them to be used over a medium with less than ideal characteristics, such as a telephone line that is of poor quality or is too long. This capability is often adaptive, so that a modem can discover the maximum practical transmission rate during the connect phase, or during operation.

Modems grew out of the need to connect teleprinters over ordinary phone lines instead of the more expensive leased lines which had previously been used for current loop–based teleprinters and automated telegraphs.
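The relationship between symbol rate and bit rate described above is simply bit rate = baud × bits per symbol, where a symbol alphabet of M distinct symbols carries log2(M) bits each. A minimal Python check using the V.21 and V.22 figures from the text:

    import math

    def bit_rate(baud, symbols):
        """Bit rate from the symbol rate and the size of the symbol alphabet."""
        bits_per_symbol = math.log2(symbols)
        return baud * bits_per_symbol

    print(bit_rate(300, 2))  # V.21: 300 baud, 2 symbols -> 300.0 bit/s
    print(bit_rate(600, 4))  # V.22: 600 baud, 4 symbols -> 1200.0 bit/s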
The earliest devices which satisfy the definition of a modem may have been the multiplexers used by news wire services in the 1920s.[1] In 1941, the Allies developed a voice encryption system called SIGSALY which used a vocoder to digitize speech, then encrypted the speech with a one-time pad and encoded the digital data as tones using frequency-shift keying. This was also a digital modulation technique, making this an early modem.[2]

Commercial modems largely did not become available until the late 1950s, when the rapid development of computer technology created demand for a method of connecting computers together over long distances. This resulted in the Bell Company, and then other businesses, producing an increasing number of computer modems for use over both switched and leased telephone lines. Later developments would produce modems that operated over cable television lines, power lines, and various radio technologies, as well as modems that achieved much higher speeds over telephone lines.

A dial-up modem transmits computer data over an ordinary switched telephone line that has not been designed for data use. It was once a widely known technology, mass-marketed globally for dial-up internet access. In the 1990s, tens of millions of people in the United States alone used dial-up modems for internet access.[3] Dial-up service has since been largely superseded by broadband internet,[4] such as DSL.

Mass production of telephone line modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the United States and Canada. Shortly afterwards, in 1959, the technology in the SAGE modems was made available commercially as the Bell 101, which provided 110 bit/s speeds. Bell called this and several other early modems "datasets". Some early modems were based on touch-tone frequencies, such as Bell 400-style touch-tone modems.[5]

The Bell 103A standard was introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines. Frequency-shift keying was used, with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz.[6] The 103 modem would eventually become a de facto standard once third-party (non-AT&T) modems reached the market, and throughout the 1970s, independently made modems compatible with the Bell 103 de facto standard were commonplace.[7] Example models included the Novation CAT and the Anderson-Jacobson. A lower-cost option was the Pennywhistle modem, designed to be built using readily available parts.[8]

Teletype machines were granted access to remote networks such as the Teletypewriter Exchange using the Bell 103 modem.[9] AT&T also produced reduced-cost units, the originate-only 113D and the answer-only 113B/C modems. The 201A Data-Phone was a synchronous modem using two-bit-per-symbol phase-shift keying (PSK) encoding, achieving 2,000 bit/s half-duplex over normal phone lines.[10] In this system the two tones for any one side of the connection are sent at similar frequencies as in the 300 bit/s systems, but slightly out of phase.

In early 1973, Vadic introduced the VA3400, which performed full-duplex at 1,200 bit/s over a normal phone line.[11] In November 1976, AT&T introduced the 212A modem, similar in design but using the lower frequency set for transmission. It was not compatible with the VA3400,[12] but it would operate with the 103A modem at 300 bit/s.
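The frequency-shift keying used by the Bell 103A lends itself to a short illustration. The sketch below generates the originate-side tone sequence for a bit string in Python with NumPy; it is a simplified model (ideal tones, no filtering or timing recovery), not a faithful Bell 103 implementation.

    import numpy as np

    RATE = 8000               # audio sample rate (Hz)
    BAUD = 300                # Bell 103: 300 symbols/s, 1 bit per symbol
    MARK, SPACE = 1270, 1070  # originate-side frequencies (Hz): 1=mark, 0=space

    def fsk_modulate(bits):
        """Return ideal FSK audio samples for a bit string such as '1011'."""
        samples_per_bit = RATE // BAUD
        t = np.arange(samples_per_bit) / RATE
        tones = [np.sin(2 * np.pi * (MARK if b == "1" else SPACE) * t)
                 for b in bits]
        return np.concatenate(tones)

    audio = fsk_modulate("10110010")
    print(len(audio), "samples =", len(audio) / RATE, "seconds")

The answering modem uses the 2,025/2,225 Hz pair instead, which is what allows both directions to share the line at full duplex: each side simply filters for the other side's frequency band.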
In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200 bit/s mode, AT&T's 212A mode, and 103A operation.[13]

A significant advance in modems was the Hayes Smartmodem, introduced in 1981. The Smartmodem was an otherwise standard 103A 300 bit/s direct-connect modem, but it introduced a command language which allowed the computer to make control requests, such as commands to dial or answer calls, over the same RS-232 interface used for the data connection.[14] The command set used by this device became a de facto standard, the Hayes command set, which was integrated into devices from many other manufacturers. Automatic dialing was not a new capability – it had been available via separate Automatic Calling Units, and via modems using the X.21 interface[15] – but the Smartmodem made it available in a single device that could be used with even the most minimal implementations of the ubiquitous RS-232 interface, making this capability accessible from virtually any system or language.[16]

The introduction of the Smartmodem made communications much simpler and more easily accessed. This provided a growing market for other vendors, who licensed the Hayes patents and competed on price or by adding features.[17] This eventually led to legal action over use of the patented Hayes command language.[18]

Dial modems generally remained at 300 and 1,200 bit/s (eventually becoming standards such as V.21 and V.22) into the mid-1980s. Commodore's 1982 VICModem for the VIC-20 was the first modem to be sold for under $100, and the first modem to sell a million units.[19] In 1984, V.22bis was created, a 2,400 bit/s system similar in concept to the 1,200 bit/s Bell 212. This bit rate increase was achieved by defining four or sixteen distinct symbols, which allowed the encoding of two or four bits per symbol instead of only one. By the late 1980s, many modems could support improved standards like this, and 2,400 bit/s operation was becoming common.

Increasing modem speed greatly improved the responsiveness of online systems and made file transfer practical. This led to rapid growth of online services with large file libraries, which in turn gave more reason to own a modem. The rapid uptake of modems led to a similarly rapid increase in BBS use. The introduction of microcomputer systems with internal expansion slots made small internal modems practical. This led to a series of popular modems for the S-100 bus and Apple II computers that could directly dial out, answer incoming calls, and hang up entirely from software, the basic requirements of a bulletin board system (BBS). The seminal CBBS, for instance, was created on an S-100 machine with a Hayes internal modem, and a number of similar systems followed.

Echo cancellation became a feature of modems in this period, which allowed both modems to ignore their own reflected signals. This way both modems can simultaneously transmit and receive over the full spectrum of the phone line, improving the available bandwidth.[20] Additional improvements were introduced by quadrature amplitude modulation (QAM) encoding, which increased the number of bits per symbol to four through a combination of phase shift and amplitude. Transmitting at 1,200 baud produced the 4,800 bit/s V.27ter standard, and at 2,400 baud the 9,600 bit/s V.32. The carrier frequency was 1,650 Hz in both systems. The introduction of these higher-speed systems also led to the development of the digital fax machine during the 1980s.
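The Hayes command language mentioned above survives today as the "AT" command set. A minimal sketch of how a host might drive a Hayes-compatible modem over a serial port, using Python's pyserial library, is shown below; the serial device path and phone number are placeholders, and the response handling is deliberately simplistic (real modems also echo commands and report result codes asynchronously).

    # Sketch of driving a Hayes-compatible modem with pyserial.
    # Device path and phone number are placeholders; error handling omitted.
    import serial  # pip install pyserial

    with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as port:
        def command(cmd):
            port.write(cmd.encode("ascii") + b"\r")  # Hayes commands end in CR
            return port.readline().decode("ascii", "replace").strip()

        print(command("AT"))           # attention check; expect "OK"
        print(command("ATDT5551234"))  # dial 555-1234 using tone dialing
        # ... data exchange happens here; "+++" followed by "ATH" hangs up ...

The key design point from the text is that control and data share the same RS-232 link: the modem interprets lines beginning with "AT" as commands while in command mode, and passes everything else through once a connection is established.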
While early fax technology also used modulated signals on a phone line, digital fax used the now-standard digital encoding used by computer modems. This eventually allowed computers to send and receive fax images.

In the early 1990s, V.32 modems operating at 9,600 bit/s were introduced, but were expensive and were only starting to enter the market when V.32bis was standardized, operating at 14,400 bit/s. Rockwell International's chip division developed a new driver chipset incorporating the V.32bis standard and priced it aggressively. Supra, Inc. arranged a short-term exclusivity arrangement with Rockwell, and developed the SupraFAXModem 14400 based on it. Introduced in January 1992 at $399 (or less), it was half the price of the slower V.32 modems already on the market. This led to a price war, and by the end of the year V.32 was dead, never having really been established, and V.32bis modems were widely available for $250.

V.32bis was so successful that the older high-speed standards had little advantage. USRobotics (USR) fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method they referred to as V.32ter, but neither non-standard modem sold well. Consumer interest in these proprietary improvements waned during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting, several companies decided to release hardware early, introducing modems they referred to as V.Fast. In order to guarantee compatibility with V.34 modems once the standard was ratified (1994), manufacturers used more flexible components, generally a DSP and microcontroller, as opposed to purpose-designed ASIC modem chips. This allowed later firmware updates to conform with the standard once ratified.

The ITU standard V.34 represents the culmination of these joint efforts. It employed the most powerful coding techniques available at the time, including channel encoding and shape encoding. From the mere four bits per symbol (9.6 kbit/s), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increased baud rates from 2,400 to 3,429, to create 14.4, 28.8, and 33.6 kbit/s modems. This rate is near the theoretical Shannon limit of a phone line.[21]

While 56 kbit/s speeds had been available for leased-line modems for some time, they did not become available for dial-up modems until the late 1990s. In the late 1990s, technologies to achieve speeds above 33.6 kbit/s began to be introduced. Several approaches were used, but all of them began as solutions to a single fundamental problem with phone lines. By the time technology companies began to investigate speeds above 33.6 kbit/s, telephone companies had switched almost entirely to all-digital networks. As soon as a phone line reached a local central office, a line card converted the analog signal from the subscriber to a digital one, and conversely. While digitally encoded telephone lines notionally provide the same bandwidth as the analog systems they replaced, the digitization itself placed constraints on the types of waveforms that could be reliably encoded. The first problem was that the process of analog-to-digital conversion is intrinsically lossy; but second, and more importantly, the digital signals used by the telcos were not "linear": they did not encode all frequencies the same way, instead utilizing a nonlinear encoding (μ-law and a-law) meant to favor the nonlinear response of the human ear to voice signals. This made it very difficult to find a 56 kbit/s encoding that could survive the digitizing process.
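The nonlinear μ-law encoding referred to above compresses a sample's amplitude logarithmically before quantization. A small illustration of the standard continuous μ-law compression formula (μ = 255 in North American and Japanese telephony) in Python:

    import math

    MU = 255  # μ-law parameter used in North American/Japanese telephony

    def mu_law_compress(x):
        """Continuous μ-law companding of a sample x in [-1, 1]."""
        return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

    # Quiet samples get proportionally more of the output range than loud
    # ones, which suits voice but distorts the precise waveform shapes a
    # modem depends on.
    for x in (0.01, 0.1, 0.5, 1.0):
        print(f"{x:5.2f} -> {mu_law_compress(x):.3f}")

Running this shows, for example, that an input of 0.01 maps to about 0.23 of full scale: small amplitudes are expanded and large ones compressed, which is exactly the nonlinearity that made a straightforward 56 kbit/s analog encoding impractical.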
Modem manufacturers discovered that, while analog-to-digital conversion could not preserve higher speeds, digital-to-analog conversion could. Because it was possible for an ISP to obtain a direct digital connection to a telco, a digital modem – one that connects directly to a digital telephone network interface, such as T1 or PRI – could send a signal that utilized every bit of bandwidth available in the system. While that signal still had to be converted back to analog at the subscriber end, that conversion would not distort the signal in the way that the opposite direction did.

The first 56k (56 kbit/s) dial-up option was a proprietary design from USRobotics, which they called "X2" because 56k was twice the speed (×2) of 28k modems. At that time, USRobotics held a 40% share of the retail modem market, while Rockwell International held an 80% share of the modem chipset market. Concerned with being shut out, Rockwell began work on a rival 56k technology and joined with Lucent and Motorola to develop what they called "K56Flex" or just "Flex". Both technologies reached the market around February 1997; although problems with K56Flex modems were noted in product reviews through July, within six months the two technologies worked equally well, with variations dependent largely on local connection characteristics.[22]

The retail price of these early 56k modems was about US$200, compared to $100 for standard 33k modems. Compatible equipment was also required at the Internet service provider's (ISP's) end, with costs varying depending on whether their current equipment could be upgraded. About half of all ISPs offered 56k support by October 1997. Consumer sales were relatively low, which USRobotics and Rockwell attributed to conflicting standards.[23]

In February 1998, the International Telecommunication Union (ITU) announced the draft of a new 56 kbit/s standard, V.90, with strong industry support. Incompatible with either existing standard, it was an amalgam of both, but was designed so that both types of modem could support it via a firmware upgrade. The V.90 standard was approved in September 1998 and widely adopted by ISPs and consumers.[23][24]

The ITU-T V.92 standard was approved by the ITU in November 2000[25] and utilized digital PCM technology to increase the upload speed to a maximum of 48 kbit/s. The high upload speed was a tradeoff: use of the 48 kbit/s upstream rate could reduce the downstream to as low as 40 kbit/s due to echo effects on the line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a plain 33.6 kbit/s analog connection, in order to maintain a high digital downstream of 50 kbit/s or higher.[26]

V.92 also added two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods of time while they answer a call. The second is the ability to quickly connect to one's ISP, achieved by remembering the analog and digital characteristics of the telephone line and using this saved information when reconnecting.

These values are maximum values, and actual values may be slower under certain conditions (for example, noisy phone lines).[27] For a complete list see the companion article, list of device bandwidths. A baud is one symbol per second; each symbol may encode one or more data bits.
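The Shannon limit mentioned earlier can be estimated with the channel capacity formula C = B·log2(1 + S/N). The figures below (3,100 Hz of usable bandwidth and roughly 35 dB of signal-to-noise ratio) are assumed, typical textbook values for a voice-grade analog line, not numbers taken from the text:

    import math

    def shannon_capacity(bandwidth_hz, snr_db):
        """Channel capacity in bit/s: C = B * log2(1 + S/N)."""
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    # Assumed textbook values for an analog voice-grade phone line.
    c = shannon_capacity(3100, 35)
    print(f"{c / 1000:.1f} kbit/s")  # ~36 kbit/s, close to V.34's 33.6 kbit/s

Under these assumptions the capacity works out to roughly 36 kbit/s, which is why V.34's 33.6 kbit/s was described as near the limit, and why the 56k systems had to sidestep the analog-to-digital conversion rather than squeeze more out of it.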
Many dial-up modems implement standards for data compression to achieve higher effective throughput for the same bitrate. V.44[34] is an example used in conjunction with V.92 to achieve speeds greater than 56k over ordinary phone lines.[35]

As telephone-based 56k modems began losing popularity, some Internet service providers such as Netzero/Juno, Netscape, and others started using pre-compression to increase apparent throughput. This server-side compression can operate much more efficiently than the on-the-fly compression performed within modems, because the compression techniques are content-specific (JPEG, text, EXE, etc.). The drawback is a loss in quality, as they use lossy compression which causes images to become pixelated and smeared. ISPs employing this approach often advertised it as "accelerated dial-up".[36] These accelerated downloads are integrated into the Opera[37] and Amazon Silk[38] web browsers, using their own server-side text and image compression, requiring all data to pass through their own servers before reaching the user.[38]

Dial-up modems can attach in two different ways: with an acoustic coupler, or with a direct electrical connection. The case Hush-A-Phone Corp. v. United States, which legalized acoustic couplers, applied only to mechanical connections to a telephone set, not electrical connections to the telephone line. The Carterfone decision of 1968, however, permitted customers to attach devices directly to a telephone line as long as they followed stringent Bell-defined standards for non-interference with the phone network.[39] This opened the door to independent (non-AT&T) manufacture of direct-connect modems, which plugged directly into the phone line rather than attaching via an acoustic coupler.

While Carterfone required AT&T to permit connection of devices, AT&T successfully argued that they should be allowed to require the use of a special device to protect their network, placed between the third-party modem and the line, called a Data Access Arrangement, or DAA. The use of DAAs was mandatory from 1969 to 1975, when the new FCC Part 68 rules allowed the use of devices without a Bell-provided DAA, subject to equivalent circuitry being included in the third-party device.[40] Virtually all modems produced after the 1980s are direct-connect.

While Bell (AT&T) provided modems that attached via direct wire connection to the phone network as early as 1958, their regulations at the time did not permit the direct electrical connection of any non-Bell device to a telephone line. However, the Hush-a-Phone ruling allowed customers to attach any device to a telephone set as long as it did not interfere with its functionality. This allowed third-party (non-Bell) manufacturers to sell modems utilizing an acoustic coupler. [39]

With an acoustic coupler, an ordinary telephone handset was placed in a cradle containing a speaker and microphone positioned to match up with those on the handset. The tones used by the modem were transmitted and received into the handset, which then relayed them to the phone line.[41] Because the modem was not electrically connected, it was incapable of picking up, hanging up or dialing, all of which required direct control of the line. Touch-tone dialing would have been possible, but touch-tone was not universally available at this time. Consequently, the dialing process was executed by the user lifting the handset, dialing, then placing the handset on the coupler. To accelerate this process, a user could purchase a dialer or Automatic Calling Unit.
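The server-side image pre-compression described above can be sketched in a few lines. The fragment below uses the Pillow imaging library to re-encode a JPEG at a lower quality setting, the essential trick behind "accelerated dial-up"; the file name is a placeholder.

    # Sketch of "accelerated dial-up" style server-side recompression:
    # re-encode a JPEG at low quality, trading image fidelity for size.
    # The file name is a placeholder.
    import io
    from PIL import Image  # pip install Pillow

    with Image.open("photo.jpg") as im:
        buf = io.BytesIO()
        im.save(buf, format="JPEG", quality=30)  # aggressive lossy re-encode

    original = len(open("photo.jpg", "rb").read())
    print(f"{original} -> {buf.tell()} bytes "
          f"({buf.tell() / original:.0%} of original)")

Because the server knows it is handling a JPEG, it can apply this content-specific, lossy recompression far more effectively than the modem's generic, lossless V.44 compression ever could, at the cost of the pixelation and smearing noted above.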
Early modems could not place or receive calls on their own, but required human intervention for these steps. As early as 1964, Bell provided automatic calling units that connected separately to a second serial port on a host machine and could be commanded to open the line, dial a number, and even ensure the far end had successfully connected before transferring control to the modem.[42] Later on, third-party models would become available, sometimes known simply as dialers, with features such as the ability to automatically sign in to time-sharing systems.[43] Eventually this capability would be built into modems and no longer require a separate device.

Prior to the 1990s, modems contained all the electronics and intelligence to convert data in discrete form to an analog (modulated) signal and back again, and to handle the dialing process, as a mix of discrete logic and special-purpose chips. This type of modem is sometimes referred to as controller-based.[44] In 1993, Digicom introduced the Connection 96 Plus, a modem which replaced the discrete and custom components with a general-purpose digital signal processor, which could be reprogrammed to upgrade to newer standards.[45] Subsequently, USRobotics released the Sportster Winmodem, a similarly upgradable DSP-based design.[46] As this design trend spread, both terms – soft modem and Winmodem – acquired a negative connotation in non-Windows-based computing circles, because the drivers were either unavailable for non-Windows platforms, or were only available as unmaintainable closed-source binaries, a particular problem for Linux users.[47]

Later in the 1990s, software-based modems became available. These are essentially sound cards; in fact, a common design uses the AC'97 audio codec, which provides multichannel audio to a PC and includes three audio channels for modem signals. The audio sent and received on the line by a modem of this type is generated and processed entirely in software, often in a device driver. There is little functional difference from the user's perspective, but this design reduces the cost of a modem by moving most of the processing into inexpensive software instead of expensive hardware DSPs or discrete components.

Soft modems of both types are either internal cards or connect over external buses such as USB. They never utilize RS-232, because they require high-bandwidth channels to the host computer to carry the raw audio signals generated (sent) or analyzed (received) by software. Since the interface is not RS-232, there is no standard for communicating with the device directly. Instead, soft modems come with drivers which create an emulated RS-232 port, which standard modem software (such as an operating system dialer application) can communicate with.

"Voice" and "fax" are terms added to describe any dial modem that is capable of recording/playing audio or transmitting/receiving faxes. Some modems are capable of all three functions.[48] Voice modems are used for computer telephony integration applications, as simple as placing and receiving calls directly through a computer with a headset, and as complex as fully automated robocalling systems. Fax modems can be used for computer-based faxing, in which faxes are sent and received without inbound or outbound faxes ever needing to be printed on paper. This differs from efax, in which faxing occurs over the internet, in some cases involving no phone lines whatsoever.
The ITU-T V.150.1 Recommendation defines procedures for the inter-operation of PSTN-to-IP gateways.[49] In a classic example of this setup, each dial-up modem connects to a modem relay gateway. The gateways are then connected to an IP network (such as the Internet). The analog connection from the modem is terminated at the gateway and the signal is demodulated. The demodulated control signals are transported over the IP network in an RTP packet type defined as State Signaling Events (SSEs). The data from the demodulated signal is sent over the IP network via a transport protocol (also defined as an RTP payload) called Simple Packet Relay Transport (SPRT). Both the SSE and SPRT packet formats are defined in the V.150.1 Recommendation (Annex C and Annex B respectively). The gateway at the remote end that receives the packets uses the information to re-modulate the signal for the modem connected at that end. While the V.150.1 Recommendation is not widely deployed, a pared-down version of the recommendation called "Minimum Essential Requirements (MER) for V.150.1 Gateways" (SCIP-216) is used in Secure Telephony applications.[50]

While traditionally a hardware device, fully software-based modems that can be deployed in a cloud environment (such as Microsoft Azure or AWS) do exist.[51] Leveraging a Voice-over-IP (VoIP) connection through a SIP trunk, the modulated audio samples are generated and sent over an IP network via RTP and an uncompressed audio codec (such as G.711 μ-law or a-law).

A 1994 Software Publishers Association study found that although 60% of computers in US households had a modem, only 7% of households went online.[52] A CEA study in 2006 found that dial-up Internet access was declining in the US. In 2000, dial-up Internet connections accounted for 74% of all US residential Internet connections.[citation needed] The United States demographic pattern for dial-up modem users per capita has been more or less mirrored in Canada and Australia for the past 20 years. Dial-up modem use in the US had dropped to 60% by 2003, and stood at 36% in 2006.[citation needed] Voiceband modems were once the most popular means of Internet access in the US, but with the advent of new ways of accessing the Internet, the traditional 56K modem lost popularity. The dial-up modem is still widely used by customers in rural areas where DSL, cable, wireless broadband, satellite, or fiber-optic service is either not available, or where customers are unwilling to pay what the available broadband companies charge.[53] In its 2012 annual report, AOL showed it still collected around $700 million in fees from about three million dial-up users.

TDD devices are a subset of the teleprinter, intended for use by the deaf or hard of hearing; essentially a small teletype with a built-in dial-up modem and acoustic coupler. The first models, produced in 1964, utilized FSK modulation much like early computer modems.

A leased-line modem also uses ordinary phone wiring, like dial-up and DSL, but does not use the same network topology. While dial-up uses a normal phone line and connects through the telephone switching system, and DSL uses a normal phone line but connects to equipment at the telco central office, leased lines do not terminate at the telco. Leased lines are pairs of telephone wire that have been connected together at one or more telco central offices so that they form a continuous circuit between two subscriber locations, such as a business's headquarters and a satellite office.
They provide no power or dialtone – they are simply a pair of wires connected at two distant locations. A dial-up modem will not function across this type of line, because the line does not provide the power, dialtone and switching that such modems require. However, a modem with leased-line capability can operate over such a line, and in fact can achieve greater performance, because the line does not pass through the telco switching equipment and the signal is not filtered, so greater bandwidth is available. Leased-line modems can operate in 2-wire or 4-wire mode. The former uses a single pair of wires and can only transmit in one direction at a time, while the latter uses two pairs of wires and can transmit in both directions simultaneously. When two pairs are available, bandwidth can be as high as 1.5 Mbit/s, a full data T1 circuit.[54] While slower leased-line modems used interfaces such as RS-232, faster wideband modems used interfaces such as V.35.

The term broadband was previously[55][56] used to describe communications faster than what was available on voice-grade channels. The term gained widespread adoption in the late 1990s to describe internet access technology exceeding the 56 kilobit/s maximum of dial-up. There are many broadband technologies, such as various DSL (digital subscriber line) technologies and cable broadband.

DSL technologies such as ADSL, HDSL, and VDSL use telephone lines (wires that were installed by a telephone company and originally intended for use by a telephone subscriber) but do not utilize most of the rest of the telephone system. Their signals are not sent through ordinary phone exchanges, but are instead received by special equipment (a DSLAM) at the telephone company central office. Because the signal does not pass through the telephone exchange, no "dialing" is required, and the bandwidth constraints of an ordinary voice call are not imposed. This allows much higher frequencies, and therefore much faster speeds. ADSL in particular is designed to permit voice calls and data usage over the same line simultaneously. Similarly, cable modems use infrastructure originally intended to carry television signals, and like DSL, typically permit receiving television signals at the same time as broadband internet service. Other broadband modems include FTTx modems, satellite modems, and power-line modems.

Different terms are used for broadband modems, because they frequently contain more than just a modulation/demodulation component. Because high-speed connections are frequently used by multiple computers at once, many broadband modems do not have direct (e.g. USB) PC connections. Rather, they connect over a network such as Ethernet or Wi-Fi. Early broadband modems offered Ethernet handoff, allowing the use of one or more public IP addresses, but no other services such as NAT and DHCP that would allow multiple computers to share one connection. This led to many consumers purchasing separate "broadband routers", placed between the modem and their network, to perform these functions.[57][58] Eventually, ISPs began providing residential gateways which combined the modem and broadband router into a single package that provided routing, NAT, security features, and even Wi-Fi access in addition to modem functionality, so that subscribers could connect their entire household without purchasing any extra equipment. Even later, these devices were extended to provide "triple play" features such as telephony and television service.
Nonetheless, these devices are still often referred to simply as "modems" by service providers and manufacturers.[59] Consequently, the terms "modem", "router", and "gateway" are now used interchangeably in casual speech, but in a technical context "modem" may carry the specific connotation of basic functionality with no routing or other features, while the other terms describe a device with features such as NAT.[60][61]

Broadband modems may also handle authentication such as PPPoE. While it is often possible to authenticate a broadband connection from a user's PC, as was the case with dial-up internet service, moving this task to the broadband modem allows it to establish and maintain the connection itself, which makes sharing access between PCs easier, since each one does not have to authenticate separately. Broadband modems typically remain authenticated to the ISP as long as they are powered on.

Any communication technology sending digital data wirelessly involves a modem. This includes direct broadcast satellite, WiFi, WiMax, mobile phones, GPS, Bluetooth and NFC. Modern telecommunications and data networks also make extensive use of radio modems where long-distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fiber optic is not economical. Wireless modems come in a variety of types, bandwidths, and speeds. Wireless modems are often referred to as transparent or smart. They transmit information that is modulated onto a carrier frequency to allow many wireless communication links to work simultaneously on different frequencies.

Transparent modems operate in a manner similar to their phone-line modem cousins. Typically, they are half duplex, meaning that they cannot send and receive data at the same time. Typically, transparent modems are polled in a round-robin manner to collect small amounts of data from scattered locations that do not have easy access to wired infrastructure. Transparent modems are most commonly used by utility companies for data collection. Smart modems come with media access controllers inside, which prevent random data from colliding and resend data that is not correctly received. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short-range modulation scheme that is used on a large scale throughout the world.

Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, 5G, etc.) are known as mobile broadband modems (sometimes also called wireless modems). Wireless modems can be embedded inside a laptop, mobile phone or other device, or be connected externally. External wireless modems include connect cards, USB modems, and cellular routers. Most GSM wireless modems come with an integrated SIM card holder (e.g. the Huawei E220 and Sierra 881). Some models are also provided with a microSD memory slot and/or a jack for an additional external antenna (e.g. the Huawei E1762 and Sierra Compass 885).[62][63] The CDMA (EVDO) versions do not typically use R-UIM cards, but use an Electronic Serial Number (ESN) instead. Until the end of April 2011, worldwide shipments of USB modems surpassed embedded 3G and 4G modules by 3:1, because USB modems can be easily discarded.
Embedded modems were expected to overtake separate modems as tablet sales grew and the incremental cost of the modems shrank, so that by 2016 the ratio might change to 1:1.[64] Like mobile phones, mobile broadband modems can be SIM-locked to a particular network provider. Unlocking a modem is achieved the same way as unlocking a phone, by using an "unlock code".[citation needed]

A device that connects to a fiber-optic network is known as an optical network terminal (ONT) or optical network unit (ONU). These are commonly used in fiber-to-the-home installations, installed inside or outside a house to convert the optical medium to a copper Ethernet interface, after which a router or gateway is often installed to perform authentication, routing, NAT, and other typical consumer internet functions, in addition to "triple play" features such as telephony and television service. They are not modems, although they perform a similar function and are sometimes referred to as modems.

Fiber-optic systems can use quadrature amplitude modulation to maximize throughput. 16QAM uses a 16-point constellation to send four bits per symbol, with speeds on the order of 200 or 400 gigabits per second.[65][66] 64QAM uses a 64-point constellation to send six bits per symbol, with speeds up to 65 terabits per second. Although this technology has been announced, it may not yet be commonly used.[67][68][69]

Although the name modem is seldom used, some high-speed home networking applications do use modems, such as power-line Ethernet. The G.hn standard, for instance, developed by ITU-T, provides a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines, and coaxial cables). G.hn devices use orthogonal frequency-division multiplexing (OFDM) to modulate a digital signal for transmission over the wire. As described above, technologies like Wi-Fi and Bluetooth also use modems to communicate over radio at short distances.

A null modem cable is a specially wired cable connected between the serial ports of two devices, with the transmit and receive lines reversed. It is used to connect two devices directly without a modem. The same software or hardware typically used with modems (such as Procomm or Minicom) can be used with this type of connection. A null modem adapter is a small device with plugs at both ends which is placed on the end of a normal "straight-through" serial cable to convert it into a null-modem cable.

A "short-haul modem" is a device that bridges the gap between leased-line and dial-up modems. Like a leased-line modem, they transmit over "bare" lines with no power or telco switching equipment, but are not intended for the distances that leased lines can achieve. Ranges up to several miles are possible, but significantly, short-haul modems can be used for medium distances, greater than the maximum length of a basic serial cable but still relatively short, such as within a single building or campus. This allows a serial connection to be extended for perhaps only several hundred to several thousand feet, a case where obtaining an entire telephone or leased line would be overkill. While some short-haul modems do in fact use modulation, low-end devices (for reasons of cost or power consumption) are simple "line drivers" that increase the level of the digital signal but do not modulate it. These are not technically modems, but the same terminology is used for them.[70]
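The QAM figures above follow directly from the constellation size: an M-point constellation carries log2(M) bits per symbol, so throughput is the symbol rate times log2(M). A small Python check; the 50 Gbaud symbol rate used here is an assumed, illustrative figure, not a number from the text:

    import math

    def qam_throughput(constellation_points, symbol_rate):
        """Bits per second for an M-point QAM constellation at a given baud rate."""
        bits_per_symbol = math.log2(constellation_points)
        return bits_per_symbol * symbol_rate

    # 16QAM carries 4 bits/symbol, 64QAM carries 6 bits/symbol. The 50 Gbaud
    # symbol rate below is an assumed, illustrative figure.
    for m in (16, 64):
        print(f"{m}QAM: {math.log2(m):.0f} bits/symbol, "
              f"{qam_throughput(m, 50e9) / 1e9:.0f} Gbit/s at 50 Gbaud")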
https://en.wikipedia.org/wiki/Modem#Mobile_broadband
Open Garden, Inc. is an American mobile virtual network operator (MVNO) based in Miami, Florida, that sells eSIM-based prepaid mobile data subscriptions. Open Garden, Inc. was co-founded in 2011 by businessman Micha Benoliel, Internet architect Stanislav Shalunov, software developer Greg Hazel and systems architect Taylor Ongaro, in San Francisco, California, in the United States.[3][4] Between 2011 and 2015, Open Garden developed software of the same name, a proprietary community-based internet connection sharing software application[5][6] that shared Internet access with other devices via Wi-Fi or Bluetooth. When the person whose Internet connection was being shared left the network, the application automatically detected and connected to the next best available connection (a behaviour sketched below).[7] After raising $2 million seed money from a group of angel investors in September 2012,[8] Open Garden started developing and incorporating multi-hop connectivity and channel bonding into their application.[9] The new funding round was led by Allan Green, an early investor in Phone.com and Mobileway, David Ulevitch, then CEO of OpenDNS, Derek Parham, creator of Google Apps for Business, and Digital Garage, which also invested in Twitter and Path.[10] The application was free to download for Android and macOS, and at that time, Open Garden planned a freemium business model, with paid features like VPN access for business users.[11] Open Garden and its product were introduced on October 11, 2011 at Android Open 2011, where they won the Startup Showcase Award.[12] On May 26, 2012, Open Garden won the Most Innovative Startup Award at the TechCrunch Disrupt Conference 2012.[13] Toward the end of the conference, one of the judges, venture capitalist Fred Wilson from Union Square Ventures, said that Open Garden had been his favorite all along, stating that what the company does is the most worthy of the conference's name, Disrupt.[14] The following year, on October 23, 2013, Open Garden won the G-Startup Award at the Global Mobile Innovator's Conference.[15]
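Open Garden's actual selection logic was proprietary and is not documented here; the following Python sketch only illustrates the general failover idea described above. The scoring rule (signal strength minus a notional per-hop penalty) and all names are purely hypothetical assumptions for the example:

```python
# Toy illustration of connection-sharing failover. The real Open Garden
# algorithm is proprietary; this scoring rule is an assumption.
from dataclasses import dataclass

@dataclass
class SharedLink:
    owner: str          # peer sharing its internet connection
    signal_dbm: float   # received signal strength
    hops: int           # mesh hops to the internet gateway
    online: bool = True

def best_link(links):
    """Pick the best currently available shared connection."""
    candidates = [l for l in links if l.online]
    if not candidates:
        return None
    # Stronger signal is better; each extra hop costs a notional 10 dB.
    return max(candidates, key=lambda l: l.signal_dbm - 10 * l.hops)

links = [
    SharedLink("alice", -45, hops=1),
    SharedLink("bob", -60, hops=0),
]
active = best_link(links)   # alice wins: -55 beats -60
links[0].online = False     # alice leaves the mesh...
active = best_link(links)   # ...failover automatically selects bob
```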
https://en.wikipedia.org/wiki/Open_Garden
A smartbook was a class of mobile device that combined certain features of both a smartphone and a netbook computer, produced between 2009 and 2010.[1] Smartbooks were advertised with features such as always-on operation, all-day battery life, 3G or Wi-Fi connectivity, and GPS (all typically found in smartphones) in a laptop- or tablet-style body with a screen size of 5 to 10 inches and a physical or soft touchscreen keyboard.[2] A German company sold laptops under the brand Smartbook and held a trademark for the word in many countries (not including some big markets like the United States, China, Japan, or India). It acted to preempt others from using the term smartbook to describe their products.[3][4] Smartbooks tended to be designed more for entertainment purposes than for productivity and were typically targeted at working with online applications.[5] They were projected to be sold subsidized through mobile network operators, like mobile phones, along with a wireless data plan.[6] The advent of much more popular tablets like Android tablets and the iPad, coupled with the prevailing popularity of conventional desktop computers and laptops, displaced the smartbook.[7] The smartbook concept was mentioned by Qualcomm in May 2009 during marketing for its Snapdragon technology, with products expected later that year.[8] Difficulties in adapting key software (in particular, Adobe's proprietary Flash Player) to the ARM architecture[9] delayed releases until the first quarter of 2010.[10] Smartbooks would have been powered by processors that were more energy-efficient than the traditional ones typically found in desktop and laptop computers.[1] The first smartbooks were expected to use variants of the Linux operating system, such as Google's Android or ChromeOS. The ARM processor would have allowed them to achieve longer battery life than many larger devices using x86 processors.[8][9] In February 2010, ABI Research projected that 163 million smartbooks would ship in 2015.[11] In many countries the word Smartbook was a trademark registered by Smartbook AG.[12][13] In August 2009 a German court ruled that Qualcomm must block access from Germany to all its webpages containing the word Smartbook unless Smartbook AG was mentioned.[14] Smartbook AG defended its trademark.[4][15] A February 2010 ruling prevented Lenovo from using the term.[16] By the end of 2010, Qualcomm CEO Paul Jacobs admitted that tablet computers such as the iPad already occupied the niche of the smartbook, so the name was dropped.[7] In February 2011 Qualcomm won its legal battle when the German patent office ruled that the words "smart" and "book" could be used.[17] However, several trademarks have been registered.[18][19][20][21] In March 2009 the Always Innovating company announced the Touch Book.[22] It was based on the Texas Instruments OMAP 3530, which implemented the ARM Cortex-A8 architecture, and was originally developed from the Texas Instruments Beagle Board. It had a touchscreen and a detachable keyboard which contained a second battery. The device came with a Linux operating system, and the company offered to license its hardware designs.[22][23][24] Sharp Electronics introduced its PC-Z1 "Netwalker" device in August 2009 with a promised ship date of October 2009. It featured a 5.5" touchscreen, ran Ubuntu on an ARM Cortex-A8-based Freescale i.MX515, and was packaged in a small clamshell design. Sharp reported that the device weighed less than 500 grams and would run 10 hours on one battery charge. The device was said to run 720p video and have both 2D and 3D graphics acceleration.
It came with Adobe Flash Lite 3.1 installed.[25] Pegatron, an Asus company, showed a working prototype of a smartbook in August 2009. It was based on an ARM Cortex-A8-based Freescale i.MX515 supporting 2D/3D graphics as well as 720p HD video, had 512 MB of DDR2 RAM, a 1024x600 8.9" LCD screen, Bluetooth 2.0 and 802.11g, and ran off an SD card. It also featured one USB and one micro USB port, a VGA port, and a card reader. The smartbook ran Ubuntu Netbook 9.04 and contained a version of Adobe Flash Player which was out of date. The bill of materials for the Pegatron smartbook prototype was $120.[26] In November 2009 Pegatron said it had received a large number of orders for smartbooks that would launch in early 2010. The devices were rumored to sell for about $200 when subsidized. Asus announced plans to release its own smartbook in the first quarter of 2010.[27] Qualcomm was expected to announce a smartbook on November 12, 2009, at an analyst meeting.[28] A Lenovo device concept was shown, and announced in January 2010. In May 2010 the Skylight was cancelled.[29] In late January 2010 a U.S. Federal Communications Commission (FCC) listing featured a device from HP that was referred to as a smartbook, a prototype of which had already been shown earlier. In early February, at Mobile World Congress in Barcelona, HP announced it would bring this device to market; the likely specifications were published at the time.[30][31][32][33] At the end of March 2010 the smartbook made an appearance at the FCC again, this time with a listing of its 3G capabilities. According to the FCC, the device would support GSM 850 and 1900, as well as WCDMA II and V bands. These WCDMA bands may indicate usage on the AT&T network in the United States.[34][35] Details of the product were subsequently made available on the HP website.[36][37] In June 2010, a smartbook device from Toshiba was announced. It featured an Nvidia Tegra processor and was able to remain in stand-by mode for up to 7 days.[38][39] The device was officially available at the Toshiba United Kingdom site.[40] Originally delivered with Android v2.1 (upgradable to v2.2 since 2011[41]), it can also be modified to run a customized Linux distribution. In Japan, it was sold as the "Dynabook AZ". The Genesi company announced an MX Smartbook as part of their Efika line in August 2010.[42] It was originally priced at US$349, and some reviewers questioned whether it was small enough to fit this definition.[43][44] It is ostensibly a derivative of the above-mentioned Pegatron design. In September 2009, Foxconn announced it was working on smartbook development.[45] In November 2009, a Quanta Computer pre-production Snapdragon-powered sample smartbook device that ran Android was unveiled.[46][47] Companies like Acer Inc. planned to release a smartbook, but due to the popularity of tablets, the MacBook Air and Ultrabooks, those plans were scrapped.[48]
https://en.wikipedia.org/wiki/Smartbook
A smartphone is a mobile phone with advanced computing capabilities. It typically has a touchscreen interface, allowing users to access a wide range of applications and services, such as web browsing, email, and social media, as well as multimedia playback and streaming. Smartphones have built-in cameras, GPS navigation, and support for various communication methods, including voice calls, text messaging, and internet-based messaging apps. Smartphones are distinguished from older-design feature phones by their more advanced hardware capabilities and extensive mobile operating systems, access to the internet, business applications, mobile payments, and multimedia functionality, including music, video, gaming, radio, and television. Smartphones typically feature metal–oxide–semiconductor (MOS) integrated circuit (IC) chips, various sensors, and support for multiple wireless communication protocols. Examples of smartphone sensors include accelerometers, barometers, gyroscopes, and magnetometers; they can be used by both pre-installed and third-party software to enhance functionality. Wireless communication standards supported by smartphones include LTE, 5G NR, Wi-Fi, Bluetooth, and satellite navigation. By the mid-2020s, manufacturers began integrating satellite messaging and emergency services, expanding their utility in remote areas without reliable cellular coverage. Smartphones have largely replaced personal digital assistant (PDA) devices, handheld/palm-sized PCs, portable media players (PMP),[1] point-and-shoot cameras, camcorders, and, to a lesser extent, handheld video game consoles, e-reader devices, pocket calculators, and GPS tracking units. Following the rising popularity of the iPhone in the late 2000s, the majority of smartphones have featured thin, slate-like form factors with large, capacitive touch screens with support for multi-touch gestures rather than physical keyboards. Most modern smartphones have the ability for users to download or purchase additional applications from a centralized app store. They often have support for cloud storage and cloud synchronization, and virtual assistants. Since the early 2010s, improved hardware and faster wireless communication have bolstered the growth of the smartphone industry. As of 2014, over a billion smartphones are sold globally every year. In 2019 alone, 1.54 billion smartphone units were shipped worldwide.[2] As of 2020, 75.05 percent of the world population were smartphone users.[3] Early smartphones were marketed primarily towards the enterprise market, attempting to bridge the functionality of standalone PDA devices with support for cellular telephony, but were limited by their bulky form, short battery life, slow analog cellular networks, and the immaturity of wireless data services. These issues were eventually resolved with the exponential scaling and miniaturization of MOS transistors down to sub-micron levels (Moore's law), the improved lithium-ion battery, faster digital mobile data networks (Edholm's law), and more mature software platforms that allowed mobile device ecosystems to develop independently of data providers. In the 2000s, NTT DoCoMo's i-mode platform, BlackBerry, Nokia's Symbian platform, and Windows Mobile began to gain market traction, with models often featuring QWERTY keyboards or resistive touchscreen input and emphasizing access to push email and wireless internet.
In the early 1990s, IBM engineer Frank Canova considered that chip-and-wireless technology was becoming small enough to use in handheld devices.[5] The first commercially available device that could be properly referred to as a "smartphone" began as a prototype called "Angler" developed by Canova in 1992 while at IBM and demonstrated in November of that year at the COMDEX computer industry trade show.[6][7][8] A refined version was marketed to consumers in 1994 by BellSouth under the name Simon Personal Communicator. In addition to placing and receiving cellular calls, the touchscreen-equipped Simon could send and receive faxes and emails. It included an address book, calendar, appointment scheduler, calculator, world time clock, and notepad, as well as other visionary mobile applications such as maps, stock reports and news.[9] The IBM Simon was manufactured by Mitsubishi Electric, which integrated features with its own cellular radio technologies.[10] It featured a liquid-crystal display (LCD) and PC Card support.[11] The Simon was commercially unsuccessful, particularly due to its bulky form factor and limited battery life,[12] using NiCad batteries rather than the nickel–metal hydride batteries commonly used in mobile phones in the 1990s, or the lithium-ion batteries used in modern smartphones.[13] The term "smart phone" (in two words) was not coined until a year after the introduction of the Simon, appearing in print as early as 1995, describing AT&T's PhoneWriter Communicator.[14] The term "smartphone" (as one word) was first used by Ericsson in 1997 to describe a new device concept, the GS88.[15] Beginning in the mid-to-late 1990s, many people who had mobile phones carried a separate dedicated PDA device, running early versions of operating systems such as Palm OS, Newton OS, Symbian or Windows CE/Pocket PC. These operating systems would later evolve into early mobile operating systems. Most of the "smartphones" in this era were hybrid devices that combined these existing, familiar PDA OSes with basic phone hardware. The results were devices that were bulkier than either dedicated mobile phones or PDAs, but allowed a limited amount of cellular Internet access. PDA and mobile phone manufacturers competed in reducing the size of devices. The bulk of these smartphones, combined with their high cost and expensive data plans, plus other drawbacks such as expansion limitations and decreased battery life compared to separate standalone devices, generally limited their popularity to "early adopters" and business users who needed portable connectivity. In March 1996, Hewlett-Packard released the OmniGo 700LX, a modified HP 200LX palmtop PC with a Nokia 2110 mobile phone piggybacked onto it and ROM-based software to support it. It had a 640 × 200 resolution CGA-compatible four-shade gray-scale LCD screen and could be used to place and receive calls, and to create and receive text messages, emails and faxes. It was also 100% DOS 5.0 compatible, allowing it to run thousands of existing software titles, including early versions of Windows. In August 1996, Nokia released the Nokia 9000 Communicator, a digital cellular PDA based on the Nokia 2110 with an integrated system based on the PEN/GEOS 3.0 operating system from Geoworks. The two components were attached by a hinge in what became known as a clamshell design, with the display above and a physical QWERTY keyboard below. The PDA provided e-mail; calendar, address book, calculator and notebook applications; text-based Web browsing; and could send and receive faxes.
When closed, the device could be used as a digital cellular telephone. In June 1999, Qualcomm released the "pdQ Smartphone", a CDMA digital PCS smartphone with an integrated Palm PDA and Internet connectivity.[16] Several subsequent landmark devices followed. In 1999, Japanese wireless provider NTT DoCoMo launched i-mode, a new mobile internet platform which provided data transmission speeds up to 9.6 kilobits per second, and access to web services available through the platform such as online shopping. NTT DoCoMo's i-mode used cHTML, a language which restricted some aspects of traditional HTML in favor of increasing data speed for the devices. Limited functionality, small screens and limited bandwidth allowed for phones to use the slower data speeds available. The rise of i-mode helped NTT DoCoMo accumulate an estimated 40 million subscribers by the end of 2001, and it ranked first in market capitalization in Japan and second globally.[26] Japanese cell phones increasingly diverged from global standards and trends to offer other forms of advanced services and smartphone-like functionality that were specifically tailored to the Japanese market, such as mobile payments and shopping, near-field communication (NFC) allowing mobile wallet functionality to replace smart cards for transit fares, loyalty cards, identity cards, event tickets, coupons, money transfer, etc., downloadable content like musical ringtones, games, and comics, and 1seg mobile television.[27][28] Phones built by Japanese manufacturers used custom firmware, however, and did not yet feature standardized mobile operating systems designed to cater to third-party application development, so their software and ecosystems were akin to very advanced feature phones. As with other feature phones, additional software and services required partnerships and deals with providers. The degree of integration between phones and carriers, unique phone features, non-standardized platforms, and tailoring to Japanese culture made it difficult for Japanese manufacturers to export their phones, especially when demand was so high in Japan that the companies did not feel the need to look elsewhere for additional profits.[29][30][31] The rise of 3G technology in other markets, and non-Japanese phones with powerful standardized smartphone operating systems, app stores, and advanced wireless network capabilities, allowed non-Japanese phone manufacturers to finally break into the Japanese market, gradually adopting Japanese phone features like emojis, mobile payments, NFC, etc., and spreading them to the rest of the world. Phones that made effective use of any significant data connectivity were still rare outside Japan until the introduction of the Danger Hiptop in 2002, which saw moderate success among U.S. consumers as the T-Mobile Sidekick. Later, in the mid-2000s, business users in the U.S. started to adopt devices based on Microsoft's Windows Mobile, and then BlackBerry smartphones from Research In Motion. American users popularized the term "CrackBerry" in 2006 due to the BlackBerry's addictive nature.[32] In the U.S., the high cost of data plans and the relative rarity of devices with Wi-Fi capabilities that could avoid cellular data network usage kept adoption of smartphones mainly to business professionals and "early adopters". Outside the U.S. and Japan, Nokia was seeing success with its smartphones based on Symbian, originally developed by Psion for their personal organisers, and it was the most popular smartphone OS in Europe during the middle to late 2000s.
Initially, Nokia's Symbian smartphones were focused on business with the Eseries,[33] similar to Windows Mobile and BlackBerry devices at the time. From 2002 onwards, Nokia started producing consumer-focused smartphones, popularized by the entertainment-focused Nseries. Until 2010, Symbian was the world's most widely used smartphone operating system.[34] The touchscreen personal digital assistant (PDA)–derived nature of adapted operating systems like Palm OS, the "Pocket PC" versions of what was later Windows Mobile, and the UIQ interface that was originally designed for pen-based PDAs on Symbian OS devices resulted in some early smartphones having stylus-based interfaces. These allowed for virtual keyboards and handwriting input, thus also allowing easy entry of Asian characters.[35] By the mid-2000s, the majority of smartphones had a physical QWERTY keyboard. Most used a "keyboard bar" form factor, like the BlackBerry line, Windows Mobile smartphones, Palm Treos, and some of the Nokia Eseries. A few hid their full physical QWERTY keyboard in a sliding form factor, like the Danger Hiptop line. Some even had only a numeric keypad using T9 text input, like the Nokia Nseries and other models in the Nokia Eseries. Resistive touchscreens with stylus-based interfaces could still be found on a few smartphones, like the Palm Treos, which had dropped their handwriting input after a few early models that were available in versions with Graffiti instead of a keyboard. The late 2000s and early 2010s saw a shift in smartphone interfaces away from devices with physical keyboards and keypads to ones with large finger-operated capacitive touchscreens.[36] The first phone of any kind with a large capacitive touchscreen was the LG Prada, announced by LG in December 2006.[37] This was a fashionable feature phone created in collaboration with Italian luxury designer Prada with a 3" 240 x 400 pixel screen, a 2-megapixel digital camera with 144p video recording ability, an LED flash, and a miniature mirror for self-portraits.[38][39] In January 2007, Apple Computer introduced the iPhone.[40][41][42] It had a 3.5" capacitive touchscreen with twice the common resolution of most smartphone screens at the time,[43] and introduced multi-touch to phones, which allowed gestures such as "pinching" to zoom in or out on photos, maps, and web pages.
The iPhone was notable as the first device of its kind targeted at the mass market to abandon the use of a stylus, keyboard, or keypad typical of contemporary smartphones, instead using a large touchscreen for direct finger input as its main means of interaction.[35] The iPhone's operating system was also a shift away from older operating systems (which older phones supported and which were adapted from PDAs and feature phones) to an operating system powerful enough to avoid using a limited, stripped-down web browser that could only render pages specially formatted using technologies such as WML, cHTML, or XHTML; it instead ran a version of Apple's Safari browser that could render full websites[44][45][46] not specifically designed for mobile phones.[47] Later Apple shipped a software update that gave the iPhone a built-in on-device App Store allowing direct wireless downloads of third-party software.[48][49] This kind of centralized App Store and free developer tools[50][51] quickly became the new main paradigm for all smartphone platforms for software development, distribution, discovery, installation, and payment, in place of expensive developer tools that required official approval to use and a dependence on third-party sources providing applications for multiple platforms.[36] The advantages of a design with software powerful enough to support advanced applications and a large capacitive touchscreen affected the development of another smartphone OS platform, Android, with a more BlackBerry-like prototype device scrapped in favor of a touchscreen device with a slide-out physical keyboard, as Google's engineers thought at the time that a touchscreen could not completely replace a physical keyboard and buttons.[52][53][54] Android is based on a modified Linux kernel, again providing more power than mobile operating systems adapted from PDAs and feature phones. The first Android device, the horizontal-sliding HTC Dream, was released in September 2008.[55] In 2012, Asus started experimenting with a convertible docking system named PadFone, where the standalone handset can, when necessary, be inserted into a tablet-sized screen unit with an integrated supportive battery and used as such. In 2013 and 2014, Samsung experimented with the hybrid combination of compact camera and smartphone, releasing the Galaxy S4 Zoom and K Zoom, each equipped with an integrated 10× optical zoom lens and manual parameter settings (including manual exposure and focus) years before these were widely adopted among smartphones. The S4 Zoom additionally has a rotary knob ring around the lens and a tripod mount.
While screen sizes have increased, manufacturers have attempted to make smartphones thinner at the expense of utility and sturdiness, since a thinner frame is more vulnerable to bending and has less space for components, namely battery capacity.[56][57] The iPhone and later touchscreen-only Android devices together popularized the slate form factor, based on a large capacitive touchscreen as the sole means of interaction, and led to the decline of earlier, keyboard- and keypad-focused platforms.[36] Later, navigation keys such as the home, back, menu, task and search buttons were also increasingly replaced by nonphysical touch keys, and then by virtual, simulated on-screen navigation keys, commonly with access combinations such as a long press of the task key to simulate a short menu key press, or of the home button to trigger a search.[58] More recent "bezel-less" types have their screen surface space extended to the unit's front bottom to compensate for the display area lost to simulating the navigation keys. While virtual keys offer more potential customizability, their location may be inconsistent among systems, depending on screen rotation and the software used. Multiple vendors attempted to update or replace their existing smartphone platforms and devices to better compete with Android and the iPhone; Palm unveiled a new platform known as webOS for its Palm Pre in late 2009 to replace Palm OS, which featured a focus on a task-based "card" metaphor and seamless synchronization and integration between various online services (as opposed to the then-conventional concept of a smartphone needing a PC to serve as a "canonical, authoritative repository" for user data).[59][60] HP acquired Palm in 2010 and released several other webOS devices, including the Pre 3 and HP TouchPad tablet. As part of a proposed divestment of its consumer business to focus on enterprise software, HP abruptly ended development of future webOS devices in August 2011, and sold the rights to webOS to LG Electronics in 2013, for use as a smart TV platform.[61][62] Research in Motion introduced the vertical-sliding BlackBerry Torch and BlackBerry OS 6 in 2010, which featured a redesigned user interface, support for gestures such as pinch-to-zoom, and a new web browser based on the same WebKit rendering engine used by the iPhone.[63][64] The following year, RIM released BlackBerry OS 7 and new models in the Bold and Torch ranges, which included a new Bold with a touchscreen alongside its keyboard, and the Torch 9860, the first BlackBerry phone without a physical keyboard.[65] In 2013, it replaced the legacy BlackBerry OS with a revamped, QNX-based platform known as BlackBerry 10, with the all-touch BlackBerry Z10 and keyboard-equipped Q10 as launch devices.[66] In 2010, Microsoft unveiled a replacement for Windows Mobile known as Windows Phone, featuring a new touchscreen-centric user interface built around flat design and typography, a home screen with "live tiles" containing feeds of updates from apps, as well as integrated Microsoft Office apps.[67] In February 2011, Nokia announced that it had entered into a major partnership with Microsoft, under which it would exclusively use Windows Phone on all of its future smartphones, and integrate Microsoft's Bing search engine and Bing Maps (which, as part of the partnership, would also license Nokia Maps data) into all future devices.
The announcement led to the abandonment of both Symbian and MeeGo, a Linux-based mobile platform that Nokia was co-developing with Intel.[68][69][70] Nokia's low-end Lumia 520 saw strong demand and helped Windows Phone gain niche popularity in some markets,[71] overtaking BlackBerry in global market share in 2013.[72][73] In mid-June 2012, Meizu released its mobile operating system, Flyme OS. Many of these attempts to compete with Android and the iPhone were short-lived. Over the course of the decade, the two platforms became a clear duopoly in smartphone sales and market share, with BlackBerry, Windows Phone, and other operating systems eventually stagnating to little or no measurable market share.[74][75] In 2015, BlackBerry began to pivot away from its in-house mobile platforms in favor of producing Android devices, focusing on a security-enhanced distribution of the software. The following year, the company announced that it would also exit the hardware market to focus more on software and its enterprise middleware,[76] and began to license the BlackBerry brand and its Android distribution to third-party OEMs such as TCL for future devices.[77][78] In September 2013, Microsoft announced its intent to acquire Nokia's mobile device business for $7.1 billion, as part of a strategy under CEO Steve Ballmer for Microsoft to be a "devices and services" company.[79] Despite the growth of Windows Phone and the Lumia range (which accounted for nearly 90% of all Windows Phone devices sold),[80] the platform never had significant market share in the key U.S. market,[71] and Microsoft was unable to maintain Windows Phone's momentum in the years that followed, resulting in dwindling interest from users and app developers.[81] After Ballmer was succeeded as CEO of Microsoft by Satya Nadella (who placed a larger focus on software and cloud computing), the company took a $7.6 billion write-off on the Nokia assets in July 2015, and laid off nearly the entire Microsoft Mobile unit in May 2016.[82][83][79] Prior to the completion of the sale to Microsoft, Nokia released a series of Android-derived smartphones for emerging markets known as Nokia X, which combined an Android-based platform with elements of Windows Phone and Nokia's feature phone platform Asha, using Microsoft and Nokia services rather than Google's.[84] The first commercial camera phone was the Kyocera Visual Phone VP-210, released in Japan in May 1999.[85] It was called a "mobile videophone" at the time,[86] and had a 110,000-pixel front-facing camera.[85] It could send up to two images per second over Japan's Personal Handy-phone System (PHS) cellular network, and store up to 20 JPEG digital images, which could be sent over e-mail.[85] The first mass-market camera phone was the J-SH04, a Sharp J-Phone model sold in Japan in November 2000.[87][88] It could instantly transmit pictures via cell phone telecommunication.[89] By the mid-2000s, higher-end cell phones commonly had integrated digital cameras. In 2003 camera phones outsold stand-alone digital cameras, and in 2006 they outsold film and digital stand-alone cameras. Five billion camera phones were sold in five years, and by 2007 more than half of the installed base of all mobile phones were camera phones.
Sales of separate cameras peaked in 2008.[90] Many early smartphones did not have cameras at all, and earlier models that had them offered low performance and insufficient image and video quality that could not compete with budget pocket cameras or fulfill users' needs.[91] By the beginning of the 2010s almost all smartphones had an integrated digital camera. The decline in sales of stand-alone cameras accelerated due to the increasing use of smartphones with rapidly improving camera technology for casual photography, easier image manipulation, and the ability to directly share photos through the use of apps and web-based services.[92][93][94][95] By 2011, cell phones with integrated cameras were selling by the hundreds of millions per year. In 2015, digital camera sales were 35.395 million units, less than a third of digital camera sales at their peak and also slightly less than the number of film cameras sold at their peak.[96][97] Contributing to the rise in popularity of smartphones over dedicated cameras for photography, smaller pocket cameras have difficulty producing bokeh in images, but nowadays some smartphones have dual-lens cameras that reproduce the bokeh effect easily, and can even rearrange the level of bokeh after shooting. This works by capturing multiple images with different focus settings, then combining the background of the main image with a macro focus shot. In 2007, the Nokia N95 was notable as a smartphone that had a 5.0-megapixel (MP) camera, when most others had cameras with around 3 MP or less than 2 MP. Some specialized feature phones like the LG Viewty, Samsung SGH-G800, and Sony Ericsson K850i, all released later that year, also had 5.0 MP cameras. By 2010, 5.0 MP cameras were common; a few smartphones had 8.0 MP cameras, and the Nokia N8, Sony Ericsson Satio,[98] and Samsung M8910 Pixon12[99] feature phone had 12 MP. The main camera of the 2009 Nokia N86 uniquely features a three-level aperture lens.[100] The Altek Leo, a 14-megapixel smartphone with a 3x optical zoom lens and 720p HD video camera, was released in late 2010.[101] In 2011, the same year the Nintendo 3DS was released, HTC unveiled the Evo 3D, a 3D phone with a dual five-megapixel rear camera setup for spatial imaging, among the earliest mobile phones with more than one rear camera. The 2012 Samsung Galaxy S3 introduced the ability to capture photos using voice commands.[102] In 2012, Nokia announced and released the Nokia 808 PureView, featuring a 41-megapixel 1/1.2-inch sensor and a high-resolution f/2.4 Zeiss all-aspherical one-group lens. The high resolution enables four times lossless digital zoom at 1080p and six times at 720p resolution, using image sensor cropping.[103] The 2013 Nokia Lumia 1020 has a similar high-resolution camera setup, with the addition of optical image stabilization and manual camera settings years before these were common among high-end mobile phones, although it lacks expandable storage, which could be of use for the accordingly high file sizes. Mobile optical image stabilization was first introduced by Nokia in 2012 with the Lumia 920, and the earliest known smartphone with an optically stabilized front camera is the HTC 10 from 2016.[104] Optical image stabilization enables prolonged exposure times for low-light photography and smooths out handheld video shaking, since the appearance of shakes is magnified on a larger display such as a monitor or television set, which would be detrimental to the watching experience. Since 2012, smartphones have become increasingly able to capture photos while filming.
The resolution of those photos may vary between devices. Samsung has used the highest image sensor resolution at the video's aspect ratio, which at 16:9 is 6 megapixels (3264 × 1836) on the Galaxy S3 and 9.6 megapixels (4128 × 2322) on the Galaxy S4.[105][106] The earliest iPhones with such functionality, the iPhone 5 and 5s, captured simultaneous photos at 0.9 megapixels (1280 × 720) while filming.[107] Starting in 2013 on the Xperia Z1, Sony experimented with real-time augmented reality camera effects such as floating text, virtual plants, a volcano, and a dinosaur walking in the scenery.[108] Apple later did similarly in 2017 with the iPhone X.[109] In the same year, iOS 7 introduced the later widely implemented viewfinder intuition, where exposure value can be adjusted through vertical swiping after focus and exposure have been set by tapping, and even while locked after holding down for a brief moment.[110] On some devices, this intuition may be restricted by software in video/slow-motion modes and for the front camera. In 2013, Samsung unveiled the Galaxy S4 Zoom smartphone with the grip shape of a compact camera and a 10× optical zoom lens, as well as a rotary knob ring around the lens, as used on higher-end compact cameras, and an ISO 1222 tripod mount. It is equipped with manual parameter settings, including for focus and exposure. The successor, the 2014 Samsung Galaxy K Zoom, brought resolution and performance enhancements, but lacks the rotary knob and tripod mount to allow for a more smartphone-like shape with a less protruding lens.[111] The 2014 Panasonic Lumix DMC-CM1 was another attempt at mixing a mobile phone with a compact camera, so much so that it inherited the Lumix brand. While lacking optical zoom, its image sensor has a format of 1", as used in high-end compact cameras such as the Lumix DMC-LX100 and Sony CyberShot DSC-RX100 series, with multiple times the surface size of a typical mobile camera image sensor, as well as support for light sensitivities of up to ISO 25600, well beyond the typical mobile camera light sensitivity range. As of 2021, no successor had been released.[112][113] In 2013 and 2014, HTC experimentally traded pixel count for pixel surface size on their One M7 and M8, both with only four megapixels, marketed as UltraPixel, citing improved brightness and less noise in low light, though the more recent One M8 lacks optical image stabilization.[114] The One M8 additionally was one of the earliest smartphones to be equipped with a dual camera setup. Its software allows generating visual spatial effects such as 3D panning, weather effects, and focus adjustment ("UFocus"), simulating the post-photographic selective focusing capability of images produced by a light-field camera.[115] HTC returned to a high-megapixel single-camera setup on the 2015 One M9. Meanwhile, in 2014, LG Mobile started experimenting with time-of-flight camera functionality, where a rear laser beam that measures distance accelerates autofocus. Phase-detection autofocus was increasingly adopted throughout the mid-2010s, allowing for quicker and more accurate focusing than contrast detection. In 2016, Apple introduced the iPhone 7 Plus, one of the phones to popularize a dual camera setup. The iPhone 7 Plus included a main 12 MP camera along with a 12 MP telephoto camera.[116] In early 2018 Huawei released a new flagship phone, the Huawei P20 Pro, one of the first triple-camera lens setups with Leica optics.[117] In late 2018, Samsung released a new mid-range smartphone, the Galaxy A9 (2018), with the world's first quad camera setup.
The Nokia 9 PureView was released in 2019 featuring a penta-lens camera system.[118] 2019 saw the commercialization of high-resolution sensors, which use pixel binning to capture more light. 48 MP and 64 MP sensors developed by Sony and Samsung are commonly used by several manufacturers. 108 MP sensors were first implemented in late 2019 and early 2020. With chipsets growing strong enough to handle the computing workloads of higher pixel rates, mobile video resolution and framerate have caught up with dedicated consumer-grade cameras over the years. In 2009, the Samsung Omnia HD became the first mobile phone with 720p HD video recording. In the same year, Apple first brought video recording to the iPhone 3GS, at 480p, whereas the 2007 original iPhone and 2008 iPhone 3G lacked video recording entirely. 720p was more widely adopted in 2010, on smartphones such as the original Samsung Galaxy S, Sony Ericsson Xperia X10, iPhone 4, and HTC Desire HD. The early 2010s brought a steep increase in mobile video resolution. 1080p mobile video recording was achieved in 2011 on the Samsung Galaxy S2, HTC Sensation, and iPhone 4s. In 2012 and 2013, select devices with 720p filming at 60 frames per second were released, namely the Asus PadFone 2 and HTC One M7, unlike the contemporary flagships of Samsung, Sony, and Apple; the 2013 Samsung Galaxy S4 Zoom, however, does support it. In 2013, the Samsung Galaxy Note 3 introduced 2160p (4K) video recording at 30 frames per second, as well as 1080p doubled to 60 frames per second for smoothness. Other vendors adopted 2160p recording in 2014, including the optically stabilized LG G3. Apple first implemented it in late 2015 on the iPhone 6s and 6s Plus. The framerate at 2160p was widely doubled to 60 in 2017 and 2018, starting with the iPhone 8, Galaxy S9, LG G7, and OnePlus 6. Sufficient computing performance of chipsets, image sensor resolution, and sensor reading speeds enabled mobile 4320p (8K) filming in 2020, introduced with the Samsung Galaxy S20 and Redmi K30 Pro, though some intermediate resolution levels were skipped throughout this development, including 1440p (2.5K), 2880p (5K), and 3240p (6K), except 1440p on Samsung Galaxy front cameras. Among mid-range smartphone series, the introduction of higher video resolutions was initially delayed by two to three years compared to flagship counterparts. 720p was widely adopted in 2012, including with the Samsung Galaxy S3 Mini and Sony Xperia go, and 1080p in 2013 on the Samsung Galaxy S4 Mini and HTC One mini. The proliferation of video resolutions beyond 1080p was postponed by several years. The mid-class Sony Xperia M5 supported 2160p filming in 2016, whereas Samsung's mid-class series such as the Galaxy J and A series were strictly limited to 1080p in resolution, and to 30 frames per second at any resolution, for six years until around 2019; whether and to what extent this was for technical reasons is unclear. A lower video resolution setting may be desirable to extend recording time by reducing storage space and power consumption. The camera software of some smartphones is equipped with separate controls for resolution, frame rate, and bit rate; an example of a smartphone with these controls is the LG V10.[119] A distinction between different camera software is the method used to store high-frame-rate video footage, with more recent phones[a] retaining both the image sensor's original output frame rate and audio, while earlier phones do not record audio and stretch the video so that it can be played back slowly at default speed.
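The difference between the two storage methods just described comes down to simple frame-rate arithmetic. A minimal Python sketch, assuming an illustrative 120 fps sensor rate and a 30 fps default container rate (neither figure is tied to a specific device):

```python
# Sketch of the two slow-motion storage methods. Rates are illustrative.
SENSOR_FPS = 120      # frames captured per real-time second
CONTAINER_FPS = 30    # default playback rate of the video file

def stretched_duration(clip_seconds: float) -> float:
    """Earlier method: frames are re-timed to the default container rate,
    so the stored clip is longer and always plays back slowed (no audio)."""
    return clip_seconds * SENSOR_FPS / CONTAINER_FPS

def realtime_slowdown_factor() -> float:
    """Newer method: the file keeps the sensor frame rate (and audio);
    slowing selected portions to the container rate in an editor
    yields the same effect on demand."""
    return SENSOR_FPS / CONTAINER_FPS

print(stretched_duration(5))       # a 5 s take is stored as a 20 s clip
print(realtime_slowdown_factor())  # 4.0x slow motion when re-timed
```

Either way the slow-motion factor is 120/30 = 4; the methods differ only in whether the re-timing is baked into the file or applied later in editing, which is exactly the versatility difference discussed next.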
While the stretched encoding method used on earlier phones enables slow-motion playback even on video player software that lacks manual playback speed control, typically found on older devices, the real-time method used by more recent phones offers greater versatility for video editing when the aim is a slow-motion effect: slowed-down portions of the footage can be freely selected by the user and exported into a separate video. Rudimentary video editing software for this purpose is usually pre-installed, and the video can optionally be played back at normal (real-time) speed, acting as a usual video. The earliest smartphone known to feature a slow-motion mode is the 2009 Samsung i8000 Omnia II, which can record at QVGA (320×240) at 120 fps (frames per second). Slow motion is not available on the Galaxy S1, Galaxy S2, Galaxy Note 1, and Galaxy S3 flagships. In early 2012, the HTC One X allowed 768×432-pixel slow-motion filming at an undocumented frame rate. The output footage has been measured as a third of real-time speed.[120] In late 2012, the Galaxy Note 2 brought back slow motion, with D1 (720 × 480) at 120 fps. In early 2013, the Galaxy S4 and HTC One M7 recorded at that frame rate with 800 × 450, followed by the Note 3 and iPhone 5s with 720p (1280 × 720) in late 2013, the latter of which retains audio and the original sensor frame rate, as with all later iPhones. In early 2014, the Sony Xperia Z2 and HTC One M8 adopted this resolution as well. In late 2014, the iPhone 6 doubled the frame rate to 240 fps, and in late 2015, the iPhone 6s added support for 1080p (1920 × 1080) at 120 frames per second. In early 2015, the Galaxy S6 became the first Samsung mobile phone to retain the sensor framerate and audio, and in early 2016, the Galaxy S7 became the first Samsung mobile phone with 240 fps recording, also at 720p. In early 2015, the MT6795 chipset by MediaTek promised 1080p@480 fps video recording; the project's status remains indefinite.[121] Since early 2017, starting with the Sony Xperia XZ, smartphones have been released with a slow-motion mode that unsustainably records at framerates multiple times as high, by temporarily storing frames on the image sensor's internal burst memory. Such a recording lasts a few real-time seconds at most. In late 2017, the iPhone 8 brought 1080p at 240 fps, as well as 2160p at 60 fps, followed by the Galaxy S9 in early 2018. In mid-2018, the OnePlus 6 brought 720p at 480 fps, sustainable for one minute. In early 2021, the OnePlus 9 Pro became the first phone with 2160p at 120 fps. The first smartphones to record HDR video were the early 2013 Sony Xperia Z and mid-2013 Xperia Z Ultra, followed by the early 2014 Galaxy S5, all at 1080p. Mobile phones with multiple microphones usually allow video recording with stereo audio for spaciality, with Samsung, Sony, and HTC initially implementing it in 2012 on their Samsung Galaxy S3, Sony Xperia S, and HTC One X.[105][122][123] Apple implemented stereo audio starting with the 2018 iPhone Xs family and iPhone XR.[124] Emphasis has been put on the front camera since the mid-2010s, with front cameras reaching resolutions as high as those of typical rear cameras, such as on the 2015 LG G4 (8 megapixels), Sony Xperia C5 Ultra (13 megapixels), and 2016 Sony Xperia XA Ultra (16 megapixels, optically stabilized). The 2015 LG V10 brought a dual front camera system where the second camera has a wider angle for group photography. Samsung has implemented a front-camera sweep panorama (panorama selfie) feature since the Galaxy Note 4 to extend the field of view.
In 2012, the Galaxy S3 and iPhone 5 brought 720p HD front video recording (at 30 fps). In early 2013, the Samsung Galaxy S4, HTC One M7 and Sony Xperia Z brought 1080p Full HD at that framerate, and in late 2014, the Galaxy Note 4 introduced 1440p video recording on the front camera. Apple adopted 1080p front camera video with the late 2016 iPhone 7. In 2019, smartphones started adopting 2160p 4K video recording on the front camera, six years after rear camera 2160p commenced with the Galaxy Note 3. In the early 2010s, larger smartphones with screen sizes of at least 140 millimetres (5.5 in) diagonal, dubbed "phablets", began to achieve popularity, with the 2011 Samsung Galaxy Note series gaining notably wide adoption.[125][126] In 2013, Huawei launched the Huawei Mate series, sporting a 155 millimetres (6.1 in) HD (1280 x 720) IPS+ LCD display, which was considered to be quite large at the time.[127] Some companies began to release smartphones in 2013 incorporating flexible displays to create curved form factors, such as the Samsung Galaxy Round and LG G Flex.[128][129][130] By 2014, 1440p displays began to appear on high-end smartphones.[131] In 2015, Sony released the Xperia Z5 Premium, featuring a 4K resolution display, although only images and videos could actually be rendered at that resolution (all other software was shown at 1080p).[132] New trends for smartphone displays began to emerge in 2017, with both LG and Samsung releasing flagship smartphones (LG G6 and Galaxy S8) utilizing displays with taller aspect ratios than the common 16:9 ratio, and a high screen-to-body ratio, also known as a "bezel-less design". These designs allow the display to have a larger diagonal measurement, but with a slimmer width than 16:9 displays with an equivalent screen size.[133][134][135] Another trend popularized in 2017 was displays containing tab-like cut-outs at the top centre (colloquially known as a "notch") to contain the front-facing camera, and sometimes other sensors typically located along the top bezel of a device.[136][137] These designs allow for "edge-to-edge" displays that take up nearly the entire height of the device, with little to no bezel along the top, and sometimes a minimal bottom bezel as well.
This design characteristic appeared almost simultaneously on the Sharp Aquos S2 and the Essential Phone,[138] which featured small circular tabs for their cameras, followed just a month later by the iPhone X, which used a wider tab to contain a camera and a facial scanning system known as Face ID.[139] The 2015 LG V10 had a precursor to the concept, with a portion of the screen wrapped around the camera area in the top-left corner, and the resulting area marketed as a "second" display that could be used for various supplemental features.[140] Other variations of the practice later emerged, such as a "hole-punch" camera (such as those of the Honor View 20, and Samsung's Galaxy A8s and Galaxy S10), eschewing the tabbed "notch" for a circular or rounded-rectangular cut-out within the screen instead,[141] while Oppo released the first "all-screen" phones with no notches at all,[142] including one with a mechanical front camera that pops up from the top of the device (Find X),[143] and a 2019 prototype for a front-facing camera that can be embedded and hidden below the display, using a special partially translucent screen structure that allows light to reach the image sensor below the panel.[144] The first implementation was the ZTE Axon 20 5G, with a 32 MP sensor manufactured by Visionox.[145] Displays supporting refresh rates higher than 60 Hz (such as 90 Hz or 120 Hz) also began to appear on smartphones in 2017; initially confined to "gaming" smartphones such as the Razer Phone (2017) and Asus ROG Phone (2018), they later became more common on flagship phones such as the Pixel 4 (2019) and Samsung Galaxy S21 series (2021). Higher refresh rates allow for smoother motion and lower input latency, but often at the cost of battery life. As such, the device may offer a means to disable high refresh rates, or be configured to automatically reduce the refresh rate when there is little on-screen motion.[146][147] Early implementations of multiple simultaneous tasks on a smartphone display are the picture-in-picture video playback mode ("pop-up play") and the "live video list" with playing video thumbnails of the 2012 Samsung Galaxy S3, the former of which was later delivered to the 2011 Samsung Galaxy Note through a software update.[148][149] Later that year, a split-screen mode was implemented on the Galaxy Note 2, and later retrofitted to the Galaxy S3 through the "premium suite upgrade".[150] The earliest implementation of desktop- and laptop-like windowing was on the 2013 Samsung Galaxy Note 3.[151] Smartphones utilizing flexible displays were theorized as possible once manufacturing costs and production processes were feasible.[152] In November 2018, the startup company Royole unveiled the first commercially available foldable smartphone, the Royole FlexPai. Also that month, Samsung presented a prototype phone featuring an "Infinity Flex Display" at its developers conference, with a smaller outer display on its "cover", and a larger, tablet-sized display when opened.
Samsung stated that it also had to develop a new polymer material to coat the display, as opposed to glass.[153][154][155] Samsung officially announced the Galaxy Fold, based on the previously demonstrated prototype, in February 2019, for an originally scheduled release in late April.[156] Due to various durability issues with the display and hinge systems encountered by early reviewers, the release of the Galaxy Fold was delayed to September to allow for design changes.[157] In November 2019, Motorola unveiled a variation of the concept with its re-imagining of the Razr, using a horizontally folding display to create a clamshell form factor inspired by its previous feature phone range of the same name.[158] Samsung would unveil a similar device known as the Galaxy Z Flip the following February.[159] The first smartphone with a fingerprint reader was the Motorola Atrix 4G in 2011.[160] In September 2013, the iPhone 5S was unveiled as the first smartphone on a major U.S. carrier since the Atrix to feature this technology.[161] Once again, the iPhone popularized this concept. One of the barriers to fingerprint reading amongst consumers was security concerns; however, Apple was able to address these concerns by encrypting the fingerprint data on the A7 processor located inside the phone, as well as by ensuring that this information could not be accessed by third-party applications and was not stored in iCloud or on Apple servers (a schematic sketch of this "match on device" idea follows below).[162] In 2012, Samsung introduced the Galaxy S3 (GT-i9300) with retrofittable wireless charging and pop-up video playback, along with a 4G LTE variant (GT-i9305) with a quad-core processor. In 2013, Fairphone launched its first "socially ethical" smartphone at the London Design Festival to address concerns regarding the sourcing of materials in manufacturing,[163] followed by Shiftphone in 2015.[164] In late 2013, QSAlpha commenced production of a smartphone designed entirely around security, encryption and identity protection.[165] In October 2013, Motorola Mobility announced Project Ara, a concept for a modular smartphone platform that would allow users to customize and upgrade their phones with add-on modules that attached magnetically to a frame.[166][167] Ara was retained by Google following its sale of Motorola Mobility to Lenovo,[168] but was shelved in 2016.[169] That year, LG and Motorola both unveiled smartphones featuring a limited form of modularity for accessories; the LG G5 allowed accessories to be installed via the removal of its battery compartment,[170] while the Moto Z utilizes accessories attached magnetically to the rear of the device.[171] Microsoft, expanding upon the concept of Motorola's short-lived "Webtop", unveiled functionality for its Windows 10 operating system for phones that allows supported devices to be docked for use with a PC-styled desktop environment.[172][173] Samsung and LG used to be the "last standing" manufacturers to offer flagship devices with user-replaceable batteries. But in 2015, Samsung succumbed to the minimalism trend set by Apple, introducing the Galaxy S6 without a user-replaceable battery. In addition, Samsung was criticised for pruning long-standing features such as MHL, MicroUSB 3.0, water resistance and MicroSD card support, of which the latter two came back in 2016 with the Galaxy S7 and S7 Edge.
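The "match on device" idea mentioned above in connection with the iPhone 5S can be illustrated schematically. The Python sketch below is not Apple's implementation; the class and all names are invented for illustration. The point is only the architecture: the enrolled template stays inside the secure processor, and applications receive nothing but a yes/no verdict:

```python
# Schematic illustration of match-on-device fingerprint checking.
# Not Apple's implementation; names and matching logic are invented.

class SecureEnclave:
    """Stands in for an isolated secure processor."""

    def __init__(self):
        self._template = None  # enrolled template; never leaves this object

    def enroll(self, fingerprint_features: dict) -> None:
        self._template = fingerprint_features

    def verify(self, candidate_features: dict) -> bool:
        # Real matchers compare noisy feature vectors with a threshold;
        # simple equality stands in for that here.
        return self._template is not None and candidate_features == self._template

enclave = SecureEnclave()
enclave.enroll({"minutiae": (12, 37, 81)})
# A third-party app only ever sees the boolean result:
unlocked = enclave.verify({"minutiae": (12, 37, 81)})  # True
```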
As of 2015, the global median for smartphone ownership was 43%.[174] Statista forecast that 2.87 billion people would own smartphones in 2020.[175] Within the same decade, the rapid deployment of LTE cellular networks and the general availability of smartphones increased the popularity of streaming television services and the corresponding mobile TV apps.[176] Major technologies that began to trend in 2016 included a focus on virtual reality and augmented reality experiences catered towards smartphones, the newly introduced USB-C connector, and improving LTE technologies.[177] In 2016, adjustable screen resolution, known from desktop operating systems, was introduced to smartphones for power saving, whereas variable screen refresh rates were popularized in 2020.[178][179] In 2018, the first smartphones featuring fingerprint readers embedded within OLED displays were announced, followed in 2019 by an implementation using an ultrasonic sensor on the Samsung Galaxy S10.[180][181] In 2019, the majority of smartphones released had more than one camera, were waterproof with IP67 and IP68 ratings, and unlocked using facial recognition or fingerprint scanners.[182] Designs first implemented by Apple have been replicated by other vendors several times. These include a sealed body that does not allow replacing the battery, a lack of the physical audio connector (since the iPhone 7 from 2016), a screen with a cut-out area at the top for the earphone and front-facing camera and sensors (colloquially known as a "notch"; since the iPhone X from 2017), the exclusion of a charging wall adapter from the scope of delivery (since the iPhone 12 from 2020), and a camera user interface with a circular and usually solid-colour shutter button and a camera mode selector using perpendicular text and separate camera modes for photo and video (since iOS 7 from 2013).[183][184][185][186][187][188] In 2020, the first smartphones featuring high-speed 5G network capability were announced.[189] Since 2020, smartphones have decreasingly been shipped with rudimentary accessories like a power adapter and headphones, which had historically been almost invariably within the scope of delivery. This trend was initiated with Apple's iPhone 12, followed by Samsung and Xiaomi on the Galaxy S21 and Mi 11 respectively, months after having mocked the same through advertisements. The reason cited is reducing the environmental footprint, though reaching the raised charging rates supported by newer models demands a new charger shipped in separate packaging with its own environmental footprint.[190] With the development of the PinePhone and Librem 5 in the 2020s, there have been intensified efforts to make open source GNU/Linux for smartphones a major alternative to iOS and Android.[191][192][193] Moreover, associated software enabled convergence (beyond convergent[194] and hybrid apps) by allowing the smartphones to be used like a desktop computer when connected to a keyboard, mouse and monitor.[195][196][197][198] In the early 2020s, manufacturers began to integrate satellite connectivity into smartphone devices for use in remote areas, where local terrestrial communication infrastructures, such as landline and cellular networks, are not available.
Due to the antenna limitations of conventional phones, in the early stages of implementation satellite connectivity was limited to satellite messaging and satellite emergency services.[199][200] A typical smartphone contains a number of metal–oxide–semiconductor (MOS) integrated circuit (IC) chips,[201] which in turn contain billions of tiny MOS field-effect transistors (MOSFETs),[202] spread across several classes of MOS IC chips.[201] Some phones are also equipped with an FM radio receiver, a hardware notification LED, and an infrared transmitter for use as a remote control. A few models have additional sensors such as a thermometer for measuring ambient temperature, a hygrometer for humidity, and a sensor for ultraviolet ray measurement. A few smartphones designed around specific purposes are equipped with uncommon hardware such as a projector (Samsung Beam i8520 and Samsung Galaxy Beam i8530), optical zoom lenses (Samsung Galaxy S4 Zoom and Samsung Galaxy K Zoom), a thermal camera, and even a PMR446 (walkie-talkie radio) transceiver.[208][209] Smartphones have central processing units (CPUs), similar to those in computers, but optimised to operate in low-power environments. In smartphones, the CPU is typically integrated in a CMOS (complementary metal–oxide–semiconductor) system-on-a-chip (SoC) application processor.[201] The performance of a mobile CPU depends not only on the clock rate (generally given in multiples of hertz)[210] but also on the memory hierarchy. Because of these challenges, the performance of mobile phone CPUs is often more appropriately given by scores derived from various standardized tests measuring the real effective performance in commonly used applications. Smartphones are typically equipped with a power button and volume buttons. Some pairs of volume buttons are unified. Some phones are equipped with a dedicated camera shutter button. Units for outdoor use may be equipped with "SOS" (emergency call) and "PTT" (push-to-talk) buttons. The presence of physical front-side buttons such as the home and navigation buttons has decreased throughout the 2010s, increasingly becoming replaced by capacitive touch sensors and simulated (on-screen) buttons.[211] As with classic mobile phones, early smartphones such as the Samsung Omnia II were equipped with buttons for accepting and declining phone calls. Due to the advancement of functionality besides phone calls, these have increasingly been replaced by navigation buttons such as "menu" (also known as "options"), "back", and "tasks". Some early-2010s smartphones such as the HTC Desire were additionally equipped with a "Search" button (🔍) for quick access to a web search engine or an app's internal search feature.[212] Since 2013, smartphones' home buttons have integrated fingerprint scanners, starting with the iPhone 5s and Samsung Galaxy S5. Functions may be assigned to button combinations. For example, screenshots can usually be taken using the home and power buttons, with a short press on iOS and a one-second hold on Android OS, the two most popular mobile operating systems. On smartphones with no physical home button, usually the volume-down button is instead pressed together with the power button. Some smartphones have screenshot and possibly screencast shortcuts in the navigation button bar or the power button menu.[213][214][215] One of the main characteristics of smartphones is the screen. Depending on the device's design, the screen fills most or nearly all of the space on a device's front surface.
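Quoted screen sizes relate to physical width and height through basic geometry, which also underlies the aspect-ratio discussion that follows. A minimal Python sketch; the example diagonals are illustrative rather than tied to particular models:

```python
import math

def screen_dimensions(diagonal_in: float, ratio_long: float, ratio_short: float):
    """Long and short side (in inches) of a display, from its diagonal
    and aspect ratio, via the Pythagorean theorem."""
    unit = diagonal_in / math.hypot(ratio_long, ratio_short)
    return ratio_long * unit, ratio_short * unit

# A classic 16:9 panel versus a taller modern panel:
long_169, short_169 = screen_dimensions(5.5, 16, 9)      # ~4.79" x 2.70"
long_195, short_195 = screen_dimensions(6.4, 19.5, 9)    # ~5.81" x 2.68"
```

With these illustrative numbers, the 6.4-inch 19.5:9 display is actually very slightly narrower across its short side than the 5.5-inch 16:9 one, which is why taller aspect ratios allow larger diagonals without hurting one-handed ergonomics.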
Many smartphone displays have an aspect ratio of 16:9, but taller aspect ratios became more common in 2017, as did the aim of eliminating bezels by extending the display surface as close to the edges as possible.

Screen sizes are measured in diagonal inches. Phones with screens larger than 5.2 inches are often called "phablets". Smartphones with screens over 4.5 inches are commonly difficult to use with a single hand, since most thumbs cannot reach the entire screen surface; they may need to be shifted around in the hand, held in one hand and manipulated by the other, or used in place with both hands. Due to design advances, some modern smartphones with large screens and "edge-to-edge" designs have compact builds that improve their ergonomics, while the shift to taller aspect ratios has resulted in phones with larger screen sizes that maintain the ergonomics associated with smaller 16:9 displays.[216][217][218]

Liquid-crystal displays (LCDs) and organic light-emitting diode (OLED) displays are the most common. Some displays are integrated with pressure-sensitive digitizers, such as those developed by Wacom and Samsung,[219] and Apple's Force Touch system. A few phones, such as the YotaPhone prototype, are equipped with a low-power electronic paper rear display, as used in e-book readers.

Some devices are equipped with additional input methods, such as a stylus for higher-precision input and hovering detection, or a self-capacitive touchscreen layer for floating finger detection. The latter has been implemented on a few phones, such as the Samsung Galaxy S4, Note 3, S5, Alpha, and Sony Xperia Sola, making the Galaxy Note 3 the only smartphone so far with both. Hovering can enable preview tooltips, such as on the video player's seek bar, in text messages, and for quick contacts on the dial pad, as well as lock screen animations and the simulation of a hovering mouse cursor on web sites.[220][221][222] Some styluses support hovering as well and are equipped with a button for quick access to relevant tools, such as digital post-it notes and highlighting of text and elements when dragging while pressed, resembling drag selection using a computer mouse. Some series, such as the Samsung Galaxy Note series and LG G Stylus series, have an integrated tray to store the stylus in.[223]

A few devices, from the iPhone 6s through the iPhone XS, and the Huawei Mate S, are equipped with a pressure-sensitive touch screen, where the pressure may be used to simulate a gas pedal in video games, access preview windows and shortcut menus, control the typing cursor, and act as a weight scale, the last of which was rejected by Apple from the App Store.[224][225] Some early 2010s HTC smartphones, such as the HTC Desire (Bravo) and HTC Legend, are equipped with an optical track pad for scrolling and selection.[226]

Many smartphones, with the exception of Apple iPhones, are equipped with low-power light-emitting diodes beside the screen that can notify the user about incoming messages, missed calls, and low battery levels, and can facilitate locating the phone in darkness, all with marginal power consumption. To distinguish between the sources of notifications, the colour combination and blinking pattern can vary; typically three diodes in red, green, and blue (RGB) can create a multitude of colour combinations.

Smartphones are equipped with a multitude of sensors to enable system features and third-party applications. Accelerometers and gyroscopes enable automatic control of screen rotation; uses by third-party software include bubble level simulation.
An ambient light sensor allows for automatic screen brightness and contrast adjustment, and an RGB sensor enables the adaptation of screen colour. Many mobile phones are also equipped with a barometer to measure air pressure, such as Samsung's since 2012 with the Galaxy S3, and Apple's since 2014 with the iPhone 6. A barometer allows estimating and detecting changes in altitude (see the sketch below). A magnetometer can act as a digital compass by measuring Earth's magnetic field.

Samsung has equipped its flagship smartphones since the 2014 Galaxy S5 and Galaxy Note 4 with a heart rate sensor, which assists in fitness-related uses and can act as a shutter key for the front-facing camera.[227] So far, only the 2013 Samsung Galaxy S4 and Note 3 are equipped with an ambient temperature sensor and a humidity sensor, and only the Note 4 with an ultraviolet radiation sensor, which could warn the user about excessive exposure.[228][229] A rear infrared laser beam for distance measurement can enable time-of-flight camera functionality with accelerated autofocus, as implemented on select LG mobile phones starting with the LG G3 and LG V10. Because such sensors are still rare among smartphones, little software has been developed to utilize them.

While eMMC (embedded multi media card) flash storage was most commonly used in mobile phones, its successor, UFS (Universal Flash Storage), with higher transfer rates, emerged throughout the 2010s for upper-class devices.[230] While the internal storage capacity of mobile phones was near-stagnant during the first half of the 2010s, it increased more steeply during the second half, with Samsung, for example, increasing the internal storage options of its flagship-class units from 32 GB to 512 GB within only two and a half years, between 2016 and 2018.[231][232][233][234]

The data storage of some mobile phones can be expanded using MicroSD memory cards, whose capacity multiplied throughout the 2010s (→ SD card § 2009–2019: SDXC). Benefits over USB On-The-Go storage and cloud storage include offline availability and privacy, not reserving and protruding from the charging port, no connection instability or latency, no dependence on voluminous data plans, and preservation of the limited rewriting cycles of the device's permanent internal storage. Large amounts of data can be moved immediately between devices by changing memory cards, large-scale data backups can be created offline, and data can be read externally should the smartphone be inoperable.[235][236][237] In case of technical defects which make the device unusable or unbootable as a result of liquid damage, fall damage, screen damage, bending damage, malware, or bogus system updates,[238] etc., data stored on the memory card is likely rescuable externally, while data on the inaccessible internal storage would be lost. A memory card can usually[b] be re-used immediately in a different memory-card-enabled device with no need for prior file transfers.
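As referenced above, a phone's barometer reading can be converted into an altitude estimate with the international barometric formula. A minimal illustrative sketch in Python, assuming the standard sea-level reference pressure of 1013.25 hPa (the function name and constants are standard textbook values, not taken from any particular phone's software):

def altitude_m(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
    """Estimate altitude from air pressure using the international
    barometric formula (valid for the lower troposphere)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

# Example: a reading of 1000 hPa corresponds to roughly 110 m above sea level.
print(round(altitude_m(1000.0), 1))

Because absolute pressure drifts with the weather, phones typically use such a formula for detecting relative changes in altitude (e.g., stair counting) rather than absolute elevation.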
Some dual-SIM mobile phones are equipped with a hybrid slot, where one of the two slots can be occupied by either a SIM card or a memory card. Some models, typically higher-end ones, are equipped with three slots, including one dedicated memory card slot, for simultaneous dual-SIM and memory card usage.[239] The location of the SIM and memory card slots varies among devices; they might be accessibly located behind the back cover, or else behind the battery, the latter of which denies hot swapping.[240][241] Mobile phones with a non-removable rear cover typically house SIM and memory cards in a small tray on the handset's frame, ejected by inserting a needle tool into a pinhole.[242] Some earlier mid-range phones, such as the 2011 Samsung Galaxy Fit and Ace, have a sideways memory card slot on the frame, covered by a cap that can be opened without a tool.[243]

Originally, mass storage access was commonly provided to computers through USB. Over time, mass storage access was removed, leaving the Media Transfer Protocol (MTP) as the protocol for USB file transfer. Its advantages are non-exclusive access, whereby the computer can access the storage without it being locked away from the mobile phone's software for the duration of the connection, and no need for common file system support, as communication is done through an abstraction layer. However, unlike mass storage, the Media Transfer Protocol lacks parallelism, meaning that only a single transfer can run at a time, and other transfer requests must wait for it to finish. This, for example, prevents browsing photos or playing back videos from the device during an active file transfer. Some programs and devices lack support for MTP. In addition, direct access and random access to files through MTP is not supported: any file is downloaded wholly from the device before being opened.[244]

Some audio-quality-enhancing features, such as Voice over LTE and HD Voice, have appeared and are often available on newer smartphones. Sound quality can remain a problem due to the design of the phone, the quality of the cellular network, and compression algorithms used in long-distance calls.[245][246] Audio quality can be improved using a VoIP application over Wi-Fi.[247] Cellphones have small speakers so that the user can use a speakerphone feature and talk to a person on the phone without holding it to their ear. The small speakers can also be used to listen to digital audio files of music or speech, or to watch videos with an audio component, without holding the phone close to the ear. However, integrated speakers may be small and of restricted sound quality to conserve space. Some mobile phones, such as the HTC One M8 and the Sony Xperia Z2, are equipped with stereophonic speakers to create spatial sound when in horizontal orientation.[248]

The 3.5 mm headphone receptacle (colloquially, the "headphone jack") allows the immediate operation of passive headphones, as well as connection to other external auxiliary audio appliances. Among devices equipped with the connector, it is more commonly located at the bottom (charging port side) than on the top of the device. The decline of the connector's availability among newly released mobile phones from all major vendors commenced in 2016 with its absence from the Apple iPhone 7. An adapter, which occupies the charging port, can retrofit the plug. Battery-powered wireless Bluetooth headphones are an alternative; those tend to be costlier, however, due to their need for internal hardware such as a Bluetooth transceiver and a battery with a charging controller, and Bluetooth pairing is required before each use.[249]

Smartphones typically feature lithium-ion or lithium-polymer batteries due to their high energy densities.
Batteries chemically wear down as a result of repeated charging and discharging throughout ordinary usage, losing both energy capacity and output power, which results in loss of processing speed followed by system outages.[250] Battery capacity may be reduced to 80% after a few hundred recharges, and the drop in performance accelerates with time.[251][252] Some mobile phones are designed with batteries that can be exchanged upon expiration by the end user, usually by opening the back cover. While such a design had initially been used in most mobile phones, including touch-screen phones that were not Apple iPhones, it has largely been superseded throughout the 2010s by permanently built-in, non-replaceable batteries, a design practice criticized as planned obsolescence.[253]

Due to limits on the electrical current that existing USB cables' copper wires can handle, charging protocols which make use of elevated voltages, such as Qualcomm Quick Charge and MediaTek Pump Express, have been developed to increase the power throughput for faster charging, maximizing usage time and minimizing the time a device needs to be attached to a power source. The smartphone's integrated charge controller (IC) requests the elevated voltage from a supported charger. "VOOC" by Oppo, also marketed as "Dash Charge", took the opposite approach and increased the current, cutting out some of the heat produced by internally regulating the arriving voltage down to the battery's charging terminal voltage in the end device; it is, however, incompatible with existing USB cables, as it requires the thicker copper wires of high-current USB cables. Later, USB Power Delivery (USB-PD) was developed with the aim of standardizing the negotiation of charging parameters across devices of up to 100 watts, but it is only supported on cables with USB-C on both ends, due to the connector's dedicated PD channels.[254] A sketch of the underlying power arithmetic follows below.

While charging rates have been increasing, with 15 watts in 2014,[255] 20 watts in 2016,[256] and 45 watts in 2018,[257] the power throughput may be throttled down significantly during operation of the device.[258][c]

Wireless charging has been widely adopted, allowing for intermittent recharging without wearing down the charging port through frequent reconnection, with Qi being the most common standard, followed by Powermat. Due to the lower efficiency of wireless power transmission, charging rates are below those of wired charging, and more heat is produced at similar charging rates.

By the end of 2017, smartphone battery life had become generally adequate;[259] earlier smartphone battery life was poor, due to weak batteries that could not handle the significant power requirements of the smartphones' computer systems and color screens.[260][261][262] Smartphone users purchase additional chargers for use outside the home, at work, and in cars, and buy portable external "battery packs". External battery packs include generic models which are connected to the smartphone with a cable, and custom-made models that "piggyback" onto a smartphone's case.
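The voltage-versus-current trade-off described above follows from P = V × I and from the resistive loss I²R in the cable. A short Python sketch with illustrative numbers (the 0.2 Ω round-trip cable resistance is an assumed value for a thin USB cable, not a specification):

def charge_power_w(volts: float, amps: float) -> float:
    """Electrical power delivered over the cable: P = V * I."""
    return volts * amps

def cable_loss_w(amps: float, cable_resistance_ohm: float = 0.2) -> float:
    """Heat dissipated in the cable itself: P_loss = I^2 * R."""
    return amps ** 2 * cable_resistance_ohm

# High-voltage scheme (Quick Charge style): 9 V * 2 A = 18 W, cable loss ~0.8 W.
print(charge_power_w(9, 2), cable_loss_w(2))
# High-current scheme (VOOC style): 5 V * 4 A = 20 W, but cable loss quadruples
# to ~3.2 W, which is why high-current charging needs thicker copper wires.
print(charge_power_w(5, 4), cable_loss_w(4))

Doubling the current quadruples the heat in the cable while only doubling delivered power, which is the reason high-voltage schemes can reuse ordinary cables whereas high-current schemes cannot.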
In 2016, Samsung had to recall millions of Galaxy Note 7 smartphones due to an explosive battery issue.[263] For consumer convenience, wireless charging stations have been introduced in some hotels, bars, and other public spaces.[264]

A technique to minimize power consumption is panel self-refresh, whereby the image to be shown on the display is not sent at all times from the processor to the display component's integrated controller (IC), but only when the information on screen changes; the display's integrated controller instead memorizes the last screen contents and refreshes the screen by itself. This technology was introduced around 2014 and has reduced power consumption by a few hundred milliwatts.[265]

Cameras have become standard features of smartphones. As of 2019[update], phone cameras are a highly competitive area of differentiation between models, with advertising campaigns commonly focused on the quality or capabilities of a device's main cameras. Images are usually saved in the JPEG file format; some high-end phones since the mid-2010s also have RAW imaging capability.[266][267]

Typically smartphones have at least one main rear-facing camera and a lower-resolution front-facing camera for "selfies" and video chat. Owing to the limited depth available in smartphones for image sensors and optics, rear-facing cameras are often housed in a "bump" that is thicker than the rest of the phone. Since increasingly thin mobile phones offer more horizontal space than the depth that dedicated cameras use for better lenses, there is additionally a trend for phone manufacturers to include multiple cameras, each optimized for a different purpose (telephoto, wide angle, etc.). Viewed from the back, rear cameras are commonly located at the top centre or the top left corner. A cornered location has the benefit of not requiring other hardware to be packed around the camera module, while improving ergonomics, as the lens is less likely to be covered when the phone is held horizontally.

Modern advanced smartphones have cameras with optical image stabilisation (OIS), larger sensors, bright lenses, and even optical zoom plus RAW images. HDR, "Bokeh mode" with multiple lenses, and multi-shot night modes are now also familiar.[268] Many new smartphone camera features are being enabled via computational photography image processing and multiple specialized lenses, rather than larger sensors and lenses, due to the constrained space available inside phones that are being made as slim as possible. Some mobile phones, such as the Samsung i8000 Omnia 2, some Nokia Lumias and some Sony Xperias, are equipped with a physical camera shutter button; those with two pressure levels resemble the point-and-shoot intuition of dedicated compact cameras. The camera button may be used as a shortcut to quickly and ergonomically launch the camera software, as it is more accessible inside a pocket than the power button.

Back covers of smartphones are typically made of polycarbonate, aluminium, or glass. Polycarbonate back covers may be glossy or matte, and possibly textured, like the dotted Galaxy S5 or the leathered Galaxy Note 3 and Note 4.
While polycarbonate back covers may be perceived as less "premium" among fashion- and trend-oriented users, their utilitarian strengths and technical benefits include durability and shock absorption; greater elasticity than metal, resisting permanent bending; inability to shatter like glass, which facilitates removable designs; better manufacturing cost efficiency; and, unlike metal, no blockage of radio signals or wireless power.[269][270][271][272]

A wide range of accessories are sold for smartphones, including cases, memory cards, screen protectors, chargers, wireless power stations, USB On-The-Go adapters (for connecting USB drives and, in some cases, an HDMI cable to an external monitor), MHL adapters, add-on batteries, power banks, headphones, combined headphone-microphones (which, for example, allow a person to privately conduct calls on the device without holding it to the ear), and Bluetooth-enabled powered speakers that enable users to listen to media from their smartphones wirelessly. Cases range from relatively inexpensive rubber or soft plastic cases, which provide moderate protection from bumps and good protection from scratches, to more expensive, heavy-duty cases that combine a rubber padding with a hard outer shell. Some cases have a "book"-like form, with a cover that the user opens to use the device; when the cover is closed, it protects the screen. Some "book"-like cases have additional pockets for credit cards, enabling people to use them as wallets. Accessories include products sold by the manufacturer of the smartphone and compatible products made by other manufacturers. However, some companies, like Apple, stopped including chargers with smartphones in order to "reduce carbon footprint", causing many customers to pay extra for charging adapters.

A mobile operating system (or mobile OS) is an operating system for phones, tablets, smartwatches, or other mobile devices. Globally, Android and iOS are the two most used mobile operating systems by usage share, with the former having been the best-selling OS globally on all devices since 2013. Mobile operating systems combine features of a personal computer operating system with other features useful for mobile or handheld use, usually including a touchscreen, cellular connectivity, Bluetooth, Wi-Fi Protected Access, Wi-Fi, Global Positioning System (GPS) navigation, video and single-frame picture cameras, speech recognition, a voice recorder, a music player, near-field communication, and an infrared blaster, most of which are considered essential in modern mobile systems. By Q1 2018, over 383 million smartphones were sold, with 85.9 percent running Android, 14.1 percent running iOS, and a negligible number of smartphones running other OSes.[273] Android alone is more popular than the popular desktop operating system Windows, and in general smartphone use (even without tablets) exceeds desktop use. Other well-known mobile operating systems are Flyme OS and Harmony OS.

Mobile devices with mobile communications abilities (e.g., smartphones) contain two mobile operating systems: the main user-facing software platform is supplemented by a second, low-level, proprietary real-time operating system which operates the radio and other hardware. Research has shown that these low-level systems may contain a range of security vulnerabilities permitting malicious base stations to gain high levels of control over the mobile device.[274]

A mobile app is a computer program designed to run on a mobile device, such as a smartphone.
The term "app" is a short-form of the term "software application".[275] The introduction of Apple's App Store for the iPhone and iPod Touch in July 2008 popularized manufacturer-hostedonline distributionfor third-party applications (softwareandcomputer programs) focused on a single platform. There are a huge variety of apps, includingvideo games, music products and business tools. Up until that point, smartphone application distribution depended onthird-party sourcesproviding applications for multiple platforms, such asGetJar,Handango,Handmark, andPocketGear. Following the success of the App Store, other smartphone manufacturers launched application stores, such as Google's Android Market (later renamed to the Google Play Store) and RIM'sBlackBerry App World, Android-related app stores likeAptoide,Cafe Bazaar,F-Droid,GetJar, andOpera Mobile Store. In February 2014, 93% ofmobile developerswere targeting smartphones first for mobile app development.[276] Since 1996, smartphone shipments have had positive growth. In November 2011, 27% of all photographs created were taken with camera-equipped smartphones.[277]In September 2012, a study concluded that 4 out of 5 smartphone owners use the device to shop online.[278]Global smartphone sales surpassed the sales figures for feature phones in early 2013.[279]Worldwide shipments of smartphones topped 1 billion units in 2013, up 38% from 2012's 725 million, while comprising a 55% share of the mobile phone market in 2013, up from 42% in 2012. In 2013, smartphone sales began to decline for the first time.[280][281]In Q1 2016 for the first time the shipments dropped by 3 percentyear on year. The situation was caused by the maturing China market.[282]A report by NPD shows that fewer than 10% of US citizens have spent $1,000 or more on smartphones, as they are too expensive for most people, without introducing particularly innovative features, and amidHuawei,OppoandXiaomiintroducing products with similar feature sets for lower prices.[283][284][285]In 2019, smartphone sales declined by 3.2%, the largest in smartphone history, while China and India were credited with driving most smartphone sales worldwide.[286]It is predicted that widespread adoption of 5G will help drive new smartphone sales.[287][288] In 2011,Samsunghad the highest shipmentmarket shareworldwide, followed byApple. In 2013, Samsung had 31.3% market share, a slight increase from 30.3% in 2012, while Apple was at 15.3%, a decrease from 18.7% in 2012.Huawei,LGandLenovowere at about 5% each, significantly better than 2012 figures, while others had about 40%, the same as the previous years figure. Only Apple lost market share, although their shipment volume still increased by 12.9%; the rest had significant increases in shipment volumes of 36 to 92%.[289] In Q1 2014, Samsung had a 31% share and Apple had 16%.[290]In Q4 2014, Apple had a 20.4% share and Samsung had 19.9%.[291]In Q2 2016, Samsung had a 22.3% share and Apple had 12.9%.[292]In Q1 2017, IDC reported that Samsung was first placed, with 80 million units, followed by Apple with 50.8 million, Huawei with 34.6 million,Oppowith 25.5 million andVivowith 22.7 million.[293] Samsung's mobile business is half the size of Apple's, by revenue. Apple business increased very rapidly in the years 2013 to 2017.[294]Realme, a brand owned by Oppo, is the fastest-growing phone brand worldwide since Q2 2019. 
In China, Huawei and Honor, a brand owned by Huawei, had 46% of market share combined and posted 66% annual growth as of 2019[update], amid growing Chinese nationalism.[295] In 2019, Samsung had a 74% market share of 5G smartphones in South Korea.[296] In the first quarter of 2024, global smartphone shipments rose by 7.8% to 289.4 million units. Samsung, with a 20.8% market share, overtook Apple to become the leading smartphone manufacturer. Apple's smartphone shipments dropped 10%. Xiaomi secured the third spot with a 14.1% market share.[297]

The rise in popularity of touchscreen smartphones and of mobile apps distributed via app stores, along with rapidly advancing network, mobile processor, and storage technologies, led to a convergence where separate mobile phones, organizers, and portable media players were replaced by a smartphone as the single device most people carried.[298][299][300][301][1][302] Advances in digital camera sensors and on-device image processing software more gradually led to smartphones replacing simpler cameras for photographs and video recording.[92] The built-in GPS capabilities and mapping apps on smartphones largely replaced stand-alone satellite navigation devices, and paper maps became less common.[90] Mobile gaming on smartphones greatly grew in popularity,[303] allowing many people to use them in place of handheld game consoles, and some companies tried creating game console/phone hybrids based on phone hardware and software.[304][305] People have frequently chosen not to get fixed-line telephone service in favor of smartphones.[306][307] Music streaming apps and services have grown rapidly in popularity, serving the same use as listening to music stations on terrestrial or satellite radio. Streaming video services are easily accessed via smartphone apps and can be used in place of watching television. People have often stopped wearing wristwatches in favor of checking the time on their smartphones, and many use the clock features on their phones in place of alarm clocks.[308] Mobile phones can also be used as digital note-taking, text-editing and memorandum devices, whose computerization facilitates searching of entries.

Additionally, in many less technologically developed regions, smartphones are people's first and only means of Internet access, due to their portability,[309][failed verification] with personal computers being relatively uncommon outside of business use. The cameras on smartphones can be used to photograph documents and send them via email or messaging in place of using fax (facsimile) machines. Payment apps and services on smartphones allow people to make less use of wallets, purses, credit and debit cards, and cash. Mobile banking apps can allow people to deposit checks simply by photographing them, eliminating the need to take the physical check to an ATM or teller. Guide book apps can take the place of paper travel and restaurant/business guides, museum brochures, and dedicated audio guide equipment.

In many countries, mobile phones are used to provide mobile banking services, which may include the ability to transfer cash payments by secure SMS text message. Kenya's M-PESA mobile banking service, for example, allows customers of the mobile phone operator Safaricom to hold cash balances which are recorded on their SIM cards. Cash can be deposited or withdrawn from M-PESA accounts at Safaricom retail outlets located throughout the country, and can be transferred electronically from person to person and used to pay bills to companies. Branchless banking has been successful in South Africa and the Philippines.
A pilot project in Bali was launched in 2011 by the International Finance Corporation and an Indonesian bank, Bank Mandiri.[310] Another application of mobile banking technology is Zidisha, a US-based nonprofit micro-lending platform that allows residents of developing countries to raise small business loans from Web users worldwide. Zidisha uses mobile banking for loan disbursements and repayments, transferring funds from lenders in the United States to borrowers in rural Africa who have mobile phones and can use the Internet.[311]

Mobile payments were first trialled in Finland in 1998, when two Coca-Cola vending machines in Espoo were enabled to work with SMS payments. Eventually the idea spread, and in 1999 the Philippines launched the country's first commercial mobile payment systems with mobile operators Globe and Smart. Some mobile phones can make mobile payments via direct mobile billing schemes, or through contactless payments if the phone and the point of sale support near-field communication (NFC).[312] Enabling contactless payments through NFC-equipped mobile phones requires the co-operation of manufacturers, network operators, and retail merchants.[313][314]

Some apps allow sending and receiving facsimiles (fax) over a smartphone, including facsimile data (composed of raster bi-level graphics) generated directly and digitally from document and image file formats.

Films are increasingly made using smartphones and tablets, leading to the rise of dedicated film festivals for such films, including the SmartFone Flick Fest in Sydney, Australia;[315][316] the Dublin Smartphone Film Festival; the International Mobil Film Festival, based in San Diego; the Spanish festival Cinephone – Festival Internacional de Cine con Smartphone; the African Smartphone International Film Festival;[317] the Toronto Smartphone Film Festival; the New York Mobile Film Festival; and others.[318]

Cobalt is needed to manufacture smartphones' rechargeable batteries.
Workers, including children, suffer injuries, amputations, and death as a result of the hazardous working conditions and mine tunnel collapses in the Democratic Republic of the Congo during artisanal mining of cobalt.[319] In 2019, a lawsuit was filed against Apple and other tech companies over the use of child labor in mining cobalt;[320][321] in 2024, the court ruled that the companies were not liable.[322] Apple announced it would convert to using recycled cobalt by 2025.[323]

In 2012, a University of Southern California study found that unprotected adolescent sexual activity was more common among owners of smartphones.[324] A study conducted by the Rensselaer Polytechnic Institute's (RPI) Lighting Research Center (LRC) concluded that smartphones, or any backlit devices, can seriously affect sleep cycles.[325] Some persons might become psychologically attached to smartphones, resulting in anxiety when separated from the devices.[326]

A "smombie" (a combination of "smartphone" and "zombie") is a walking person using a smartphone and not paying attention as they walk, possibly risking an accident in the process; it is an increasing social phenomenon.[327] The issue of slow-moving smartphone users led to the temporary creation of a "mobile lane" for walking in Chongqing, China.[328] The issue of distracted smartphone users led the city of Augsburg, Germany, to embed pedestrian traffic lights in the pavement.[329]

Mobile phone use while driving—including calling, text messaging, playing media, web browsing, gaming, using mapping apps or operating other phone features—is common but controversial, since it is widely considered dangerous due to what is known as distracted driving. Being distracted while operating a motor vehicle has been shown to increase the risk of accidents. In September 2010, the US National Highway Traffic Safety Administration (NHTSA) reported that 995 people were killed by drivers distracted by phones. In March 2011, a US insurance company, State Farm Insurance, announced the results of a study which showed 19% of drivers surveyed accessed the Internet on a smartphone while driving.[330] Many jurisdictions prohibit the use of mobile phones while driving. In Egypt, Israel, Japan, Portugal and Singapore, both handheld and hands-free calling on a mobile phone (which uses a speakerphone) is banned. In other countries, including the UK and France, and in many US states, calling is banned only on handheld phones, while hands-free calling is permitted.

A 2011 study reported that over 90% of college students surveyed text (initiate, reply or read) while driving.[331] The scientific literature on the danger of driving while sending a text message from a mobile phone, or texting while driving, is limited. A simulation study at the University of Utah found a sixfold increase in distraction-related accidents when texting.[332] As smartphones grew more complex, it became more difficult for law enforcement officials to distinguish one usage from another in drivers using their devices. This is more apparent in countries which ban both handheld and hands-free usage, rather than those which ban handheld use only, as officials cannot easily tell which function of the phone is being used simply by looking at the driver. This can lead to drivers being stopped for using their device illegally for a call when, in fact, they were using the device legally, for example when using the phone's incorporated controls for the car stereo, GPS or satnav.
A 2010 study reviewed the incidence of phone use while cycling and its effects on behavior and safety.[333] In 2013, a national survey in the US reported that the number of drivers who reported using their phones to access the Internet while driving had risen to nearly one in four.[334] A study conducted by the University of Vienna examined approaches for reducing inappropriate and problematic use of mobile phones, such as using phones while driving.[335]

Accidents involving a driver being distracted by being in a call on a phone have begun to be prosecuted as negligence, similar to speeding. In the United Kingdom, from 27 February 2007, motorists who are caught using a handheld phone while driving have three penalty points added to their license in addition to a fine of £60.[336] This increase was introduced to try to stem the rise in drivers ignoring the law.[337] Japan prohibits all use of phones while driving, including use of hands-free devices. New Zealand has banned handheld phone use since 1 November 2009. Many states in the United States have banned text messaging on phones while driving. Illinois became the 17th American state to enforce this law.[338] As of July 2010[update], 30 states had banned texting while driving, with Kentucky becoming the most recent addition on July 15.[339]

Public Health Law Research maintains a list of distracted driving laws in the United States. This database of laws provides a comprehensive view of the provisions of laws that restrict the use of mobile devices while driving for all 50 states and the District of Columbia between 1992, when the first law was passed, and December 1, 2010. The dataset contains information on 22 dichotomous, continuous or categorical variables including, for example, activities regulated (e.g., texting versus talking, hands-free versus handheld calls, web browsing, gaming), targeted populations, and exemptions.[340]

A "patent war" between Samsung and Apple started when the latter claimed that the original Galaxy S Android phone copied the interface‍—‌and possibly the hardware‍—‌of Apple's iOS for the iPhone 3GS. There has also been smartphone patent licensing and litigation involving Sony Mobile, Google, Apple Inc., Samsung, Microsoft, Nokia, Motorola, HTC, Huawei and ZTE, among others. The conflict is part of the wider "patent wars" between multinational technology and software corporations. To secure and increase market share, companies granted a patent can sue to prevent competitors from using the methods the patent covers. Since the 2010s, the number of lawsuits, counter-suits, and trade complaints based on patents and designs in the market for smartphones, and devices based on smartphone OSes such as Android and iOS, has increased significantly. Initial suits, countersuits, rulings, license agreements, and other major events began in 2009, as the smartphone market started to grow more rapidly towards 2012.

With the rise in the number of mobile medical apps in the marketplace, government regulatory agencies raised concerns about the safety of the use of such applications.
These concerns were transformed into regulation initiatives worldwide, with the aim of safeguarding users from untrusted medical advice.[341] According to medical experts, excessive smartphone use may lead to headaches, sleep disorders and insufficient sleep, while severe smartphone addiction may lead to physical health problems such as hunched posture, muscle weakness and poor nutrition.[342] There is a debate about the beneficial and detrimental impacts of smartphones, or of smartphone use, on cognition and mental health.

Smartphone malware is easily distributed through insecure app stores.[343][344] Often, malware is hidden in pirated versions of legitimate apps, which are then distributed through third-party app stores.[345][346] Malware risk also comes from what is known as an "update attack", where a legitimate application is later changed to include a malware component, which users then install when they are notified that the app has been updated.[347] As well, one out of three robberies in 2012 in the United States involved the theft of a mobile phone. An online petition has urged smartphone makers to install kill switches in their devices.[348] Since 2014, Apple's "Find My iPhone" and Google's "Android Device Manager" have been able to locate, disable, and wipe the data from phones that have been lost or stolen. With BlackBerry Protect in OS version 10.3.2, devices can be rendered unrecoverable even to BlackBerry's own operating system recovery tools if incorrectly authenticated or dissociated from their account.[349]

Leaked documents from 2013 to 2016, codenamed Vault 7, detail the capabilities of the United States Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones (including iOS and Android).[350][351] In 2021, journalists and researchers reported the discovery of spyware called Pegasus, developed and distributed by a private company, which can and has been used to infect iOS and Android smartphones, often—partly via use of 0-day exploits—without the need for any user interaction or significant clues to the user, and then be used to exfiltrate data, track user locations, capture film through the camera, and activate the microphone at any time.[352] Analysis of data traffic by popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by this pre-installed software.[353][354] Guidelines for mobile device security have been issued by NIST[355] and many other organizations. For conducting a private, in-person meeting, at least one site recommends that the user switch the smartphone off and disconnect the battery.[356]

Using smartphones late at night can disturb sleep, due to the blue light and brightly lit screen, which affect melatonin levels and sleep cycles. In an effort to alleviate these issues, "night mode" functionality, which shifts the colour temperature of the screen to a warmer hue based on the time of day to reduce the amount of blue light generated, became available through several apps for Android and through the f.lux software for jailbroken iPhones (a simplified sketch of such a mapping follows below).[357] iOS 9.3 integrated a similar, system-level feature known as "Night Shift".
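Night-mode features of the kind described above map the local time to a target colour temperature. A simplified Python sketch (the daytime window and the 6500 K / 3400 K values are illustrative assumptions, not taken from f.lux, Night Shift, or any vendor's implementation):

def screen_color_temp_k(hour: int, day_k: int = 6500, night_k: int = 3400) -> int:
    """Return a target colour temperature in kelvin for the given hour:
    neutral white by day, warmer (less blue) in the evening and at night."""
    if 8 <= hour < 20:   # assumed daytime window
        return day_k
    return night_k       # evening/night: reduce blue light output

print(screen_color_temp_k(14))  # 6500
print(screen_color_temp_k(23))  # 3400

Real implementations typically interpolate gradually around sunset rather than switching abruptly, but the principle of a time-driven colour temperature target is the same.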
Several Android device manufacturers bypassed Google's initial reluctance to make night mode a standard feature in Android and included software for it on their hardware under varying names, before Android Oreo added it to the OS for compatible devices.[358] It has also been theorized that, for some users, addiction to their phones, especially before they go to bed, can result in "ego depletion". Many people also use their phones as alarm clocks, which can also lead to loss of sleep.[359][360][361][362][363]

As the 2010s commenced, sales of dedicated compact cameras decreased sharply, as mobile phone cameras were increasingly perceived as a sufficient surrogate.[364] Increases in the computing power of mobile phones enabled fast image processing and high-resolution filming, with 1080p Full HD being achieved in 2011 and the barrier to 2160p 4K being breached in 2013. However, due to design and space limitations, smartphones lack several features found even on low-budget compact cameras, including a hot-swappable memory card and battery for nearly uninterrupted operation, physical buttons and knobs for focusing, capturing and zooming, a bolt-thread tripod mount, a capacitor-charged xenon flash that exceeds the brightness of smartphones' LED flashlights, and an ergonomic grip for steadier holding during handheld shooting, which enables longer exposure times. Since dedicated cameras can be more spacious, they can house larger image sensors and feature optical zooming. Since the late 2010s, smartphone manufacturers have bypassed the lack of optical zoom to a limited extent by incorporating additional rear cameras with fixed magnification levels.[365][366]

In mobile phones released since the second half of the 2010s, operational life span is commonly limited by built-in batteries which are not designed to be interchangeable. The life expectancy of a battery depends on the usage intensity of the powered device: activity (longer usage) and tasks demanding more energy wear the battery out sooner. Lithium-ion and lithium-polymer batteries, those commonly powering portable electronics, additionally wear down more from fuller charge and deeper discharge cycles, and when left unused for an extended amount of time while depleted, where self-discharge may lead to a harmful depth of discharge.[367][368][369]

Manufacturers have prevented some smartphones from operating after repairs by associating components' unique serial numbers with the device, so that it refuses to operate or disables some functionality upon detecting a mismatch that would occur after a replacement. Locking of the serial number was first documented in 2015 on the iPhone 6, which would become inoperable upon a detected replacement of the "home" button. Later, some functionality was restricted on Apple and Samsung smartphones when a battery replacement not authorized by the vendor was detected.[370][371]
https://en.wikipedia.org/wiki/Smartphone
Home automation or domotics[1] is building automation for a home. A home automation system will monitor and/or control home attributes such as lighting, climate, entertainment systems, and appliances. It may also include home security, such as access control and alarm systems. The phrase smart home refers to home automation devices that have internet access. Home automation, a broader category, includes any device that can be monitored or controlled via wireless radio signals, not just those having internet access. When connected with the Internet, home sensors and activation devices are an important constituent of the Internet of Things ("IoT").[2]

A home automation system typically connects controlled devices to a central smart home hub (sometimes called a "gateway"). The user interface for control of the system uses either wall-mounted terminals, tablet or desktop computers, a mobile phone application, or a Web interface that may also be accessible off-site through the Internet.

Early home automation began with labor-saving machines. Self-contained electric or gas powered home appliances became viable in the 1900s with the introduction of electric power distribution[3] and led to the introduction of washing machines (1904), water heaters (1889), refrigerators (1913), sewing machines, dishwashers, and clothes dryers. In 1975, the first general-purpose home automation network technology, X10, was developed. It is a communication protocol for electronic devices. It primarily uses electric power transmission wiring for signalling and control, where the signals involve brief radio frequency bursts of digital data, and it remains the most widely available.[4]

By 2012, according to ABI Research, 1.5 million home automation systems were installed in the United States.[5] Per research firm Statista,[6] more than 45 million smart home devices were forecast to be installed in U.S. homes by the end of 2018.[7] From 2018 to 2023, the number of U.S. homes equipped with smart devices grew at 10.2% per year, reaching 63.43 million by 2023.[8]

The word "domotics" is a contraction of the Latin word for a home (domus) and the word robotics.[1] The word "smart" in "smart home" refers to the system being aware of the state of its devices, which is done through the information and communication technologies (ICT) protocol and the Internet of Things (IoT).[9] Home automation is prevalent in a variety of different realms.

In 2011, Microsoft Research found that home automation could involve a high cost of ownership, inflexibility of interconnected devices, and poor manageability.[21] When designing and creating a home automation system, engineers take into account several factors, including scalability, how well the devices can be monitored and controlled, ease of installation and use for the consumer, affordability, speed, security, and the ability to diagnose issues.[22] Findings from iControl showed that consumers prioritize ease of use over technical innovation, and although consumers recognize that new connected devices have an unparalleled cool factor, they are not quite ready to use them in their own homes yet.[23]

Historically, systems have been sold as complete packages where the consumer relies on one vendor for the entire system, including the hardware, the communications protocol, the central hub, and the user interface.
However, there are now open hardware and open source software systems which can be used instead of, or with, proprietary hardware.[21] Many of these systems interface with consumer electronics such as the Arduino or Raspberry Pi, which are easily accessible online and in most electronics stores.[24] In addition, home automation devices are increasingly interfaced with mobile phones through Bluetooth, allowing for increased affordability and customizability for the user.[9]

Home automation suffers from platform fragmentation and a lack of technical standards,[25][26][27][28][29][30] a situation where the variety of home automation devices, in terms of both hardware variations and differences in the software running on them, makes it hard to develop applications that work consistently between inconsistent technology ecosystems.[31] Customers may hesitate to bet their IoT future on proprietary software or hardware devices that use proprietary protocols that may fade or become difficult to customize and interconnect.[32]

The nature of home automation devices can also be a problem for security, data security and data privacy, since patches to bugs found in the core operating system often do not reach users of older and lower-priced devices.[33][34] One set of researchers says that the failure of vendors to support older devices with patches and updates leaves more than 87% of active devices vulnerable.[35][36]

Concerns have been raised by tenants renting from landlords who decide to upgrade units with smart home technology.[37] These concerns include weak wireless connections that render the door or appliance unusable or impractical; the security of door passcodes kept by the landlord; and the potential invasion of privacy that comes with connecting smart home technologies to home networks.[38]

Researchers have also conducted user studies to determine what the barriers are for consumers when integrating home automation devices or systems into their daily lifestyle. One of the main takeaways regarded ease of use, as consumers tend to steer towards "plug and play" solutions over more complicated setups.[39] One study found that there were large gaps in the mental models users formed of how the devices actually work.[39] Specifically, the findings showed that there was a lot of misunderstanding related to where the data collected by smart devices is stored and how it is used.[39] For example, in a smart light setup, one participant thought that her iPad communicated directly with the light, telling it to turn off or on.[39] In reality, the iPad sends a signal to the system that the company uses (in this case, the Hue Bridge), which then signals directly to the device (a sketch of this indirection follows below).[39]
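The indirection the study participant misunderstood can be made concrete with the local REST interface exposed by Hue-style bridges: the app sends an HTTP request to the bridge, which relays the command to the bulb over its own radio network. A hedged Python sketch, assuming a v1-style Hue endpoint; the bridge address and application key are placeholders, and running it requires a reachable bridge:

import json
import urllib.request

BRIDGE_IP = "192.168.1.50"     # assumed local address of the bridge
APP_KEY = "your-app-key-here"  # obtained by registering an app with the bridge

def set_light(light_id: int, on: bool) -> None:
    # The app never talks to the bulb; it PUTs a state change to the bridge.
    url = f"http://{BRIDGE_IP}/api/{APP_KEY}/lights/{light_id}/state"
    body = json.dumps({"on": on}).encode()
    req = urllib.request.Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())  # bridge reports success/failure per field

set_light(1, False)  # turn light 1 off via the bridge, not directly

This hub-and-spoke design is why the light keeps working for other controllers even while one app is offline, and why the bridge, not the bulb, is the component that holds network credentials.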
Overall, this field is still evolving and the nature of each device is constantly changing. While technologists work to create more secure, streamlined, and standardized security protocols, consumers also need to learn more about how these devices work and what the implications of putting them in their homes can be. The growth of this field is currently limited not only by technology but also by a user's ability to trust a device and integrate it successfully into their daily life.

Utilizing home automation could lead to more efficient and intelligent energy-saving techniques.[40] By integrating information and communication technologies (ICT) with renewable energy systems such as solar or wind power, homes can autonomously decide whether to store energy or expend it for a given appliance,[40] leading to overall positive environmental impacts and lower electricity bills for the consumers using the system (a toy decision rule is sketched below). To do this, researchers propose using data from sensors regarding consumer activity within the home to anticipate consumer needs and balance that against energy consumption.[41]

Furthermore, home automation has large potential regarding family safety and security. According to a 2015 survey by iControl, the primary drivers of the demand for smart and connected devices are, first, "personal and family security", and second, "excitement about energy savings".[42] Home automation includes a variety of smart security systems and surveillance setups. This allows consumers to monitor their homes while away, and to give trusted family members access to that information in case anything bad happens.

While there are many competing vendors, there are increasing efforts towards open source systems. However, there are issues with the current state of home automation, including a lack of standardized security measures and the deprecation of older devices without backwards compatibility. Home automation has high potential for sharing data between family members or trusted individuals for personal security purposes, and could lead to energy-saving measures with a positive environmental impact in the future.

The home automation market was worth US$64 billion in 2022 and is projected to grow to over $163 billion in 2028.[citation needed]
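In its simplest form, the store-or-expend decision proposed above reduces to a dispatch rule. A toy Python sketch (the thresholds, names, and example figures are illustrative assumptions, not from the cited research):

def dispatch(solar_w: float, load_w: float, battery_soc: float) -> str:
    """Toy decision rule for a home energy-management system:
    surplus solar charges the battery, deficits are covered by the
    battery while its state of charge (0.0-1.0) allows, else by the grid."""
    if solar_w >= load_w:
        return "charge battery" if battery_soc < 1.0 else "export to grid"
    return "discharge battery" if battery_soc > 0.2 else "draw from grid"

print(dispatch(solar_w=3000, load_w=1200, battery_soc=0.6))  # charge battery
print(dispatch(solar_w=200, load_w=1500, battery_soc=0.1))   # draw from grid

The research cited above goes further by predicting the load from activity sensors, but the underlying control decision at each time step has this shape.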
https://en.wikipedia.org/wiki/Home_automation
The Wi-Fi Alliance is a non-profit[1] organization that owns the Wi-Fi trademark. Manufacturers may use the trademark to brand products certified for Wi-Fi interoperability. It is based in Austin, Texas.

Early 802.11 products suffered from interoperability problems because the Institute of Electrical and Electronics Engineers (IEEE) had no provision for testing equipment for compliance with its standards. In 1999, pioneers of a new, higher-speed variant endorsed the IEEE 802.11b specification to form the Wireless Ethernet Compatibility Alliance (WECA) and branded the new technology Wi-Fi.[2][3] The group of companies included 3Com, Aironet (acquired by Cisco), Harris Semiconductor (now Intersil), Lucent Technologies (whose WLAN division was renamed Orinoco, became part of Avaya, and was then acquired by Extreme Networks), Nokia and Symbol Technologies (acquired by Motorola, then Zebra Technologies, and now Extreme Networks).[4] The alliance lists Apple, Comcast, Samsung, Sony, LG, Intel, Dell, Broadcom, Cisco, Qualcomm, Motorola, Microsoft, Texas Instruments, and T-Mobile as key sponsors. The charter of this independent organization was to perform testing, certify the interoperability of products, and promote the technology.[5] WECA renamed itself the Wi-Fi Alliance in 2002.[6] Most producers of 802.11 equipment became members, and as of 2012,[update] the Wi-Fi Alliance included over 550 member companies. The Wi-Fi Alliance extended Wi-Fi beyond wireless local area network applications into point-to-point and personal area networking, and enabled specific applications such as Miracast.

The Wi-Fi Alliance owns and controls the "Wi-Fi Certified" logo, a registered trademark, which is permitted only on equipment which has passed testing. Purchasers relying on that trademark may have greater chances of interoperation than otherwise. Testing involves not only radio and data format interoperability but also security protocols, as well as optional testing for quality-of-service and power management protocols.[7] Wi-Fi Certified products have to demonstrate that they can perform well in networks with other Wi-Fi Certified products, running common applications, in situations similar to those encountered in everyday use. The Wi-Fi Alliance definition of interoperability demands that products show satisfactory performance levels in typical network configurations and support both established and emerging applications. Certification testing is provided at two levels, mandatory and optional,[8] and there are a number of certification programs run by the Wi-Fi Alliance.[14]

The 802.11 protocols are IEEE standards, identified as 802.11b, 11g, 11n, 11ac, etc. In 2018, the Wi-Fi Alliance created simpler generation labels, beginning with Wi-Fi 5 (802.11ac), retroactively adding Wi-Fi 4 (802.11n), and later adding Wi-Fi 6 and Wi-Fi 6E.[18][19][20] Wi-Fi 5 had Wave 1 and Wave 2 phases. Wi-Fi 6E extends the 2.4/5 GHz range to 6 GHz, where licensed. See the individual 802.11 articles for version details, or 802.11 for a composite summary.

WiGig refers to 60 GHz wireless local area network connections. It was initially developed by the Wireless Gigabit Alliance and was adopted by the Wi-Fi Alliance in 2013, which started certifying WiGig products in 2016.
The first version of WiGig is IEEE 802.11ad; a newer version, IEEE 802.11ay, was released in 2021.[21][22][23]

In October 2010, the Alliance began to certify Wi-Fi Direct, which allows Wi-Fi-enabled devices to communicate directly with each other by setting up ad hoc networks, without going through a wireless access point or hotspot.[24][25] Since it was first announced in 2009, some have suggested Wi-Fi Direct might replace the need for Bluetooth in applications that do not rely on Bluetooth Low Energy.[26][27]

Wi-Fi Protected Access is a security mechanism, based on the IEEE 802.11i amendment to the standard, that the Wi-Fi Alliance has certified since 2003.[28] IBSS with Wi-Fi Protected Setup enables the creation of an ad hoc network between devices directly, without a central access point.[29]

Wi-Fi Passpoint, alternatively known as Hotspot 2.0, is a solution for enabling inter-carrier roaming.[30] It utilizes IEEE 802.11u.

Wi-Fi Easy Connect is a protocol that enables connections to be established easily, for example via QR codes (a sketch of QR-based provisioning follows at the end of this section).[31] Wi-Fi Protected Setup (WPS) is a network security standard for simply creating a secure wireless home network, created and introduced by the Wi-Fi Alliance in 2006.

Miracast, introduced in 2012, is a standard for wireless display connections from devices such as laptops, tablets, or smartphones. Its goal is to replace the cables connecting the device to the display.[32]

Wi-Fi Aware is an interoperability certification program, announced in January 2015, that enables device users, when in the range of a particular access point or another compatible device, to receive notifications of applications or services available in the proximity.[33][34] Later versions of this standard included new features, such as the capability to establish a peer-to-peer data connection for file transfer.[35] Fears were voiced immediately in the media that it would be used predominantly for proximity marketing.[36]

Wi-Fi Location is a type of Wi-Fi positioning system, and the certification could help provide accuracy for indoor positioning.[37]

TDLS, or Tunneled Direct Link Setup, is "a seamless way to stream media and other data faster between devices already on the same Wi-Fi network", based on IEEE 802.11z and added to the Wi-Fi Alliance certification program in 2012. Devices using it communicate directly with one another, without involving the wireless network's router.[38]

The Wi-Fi Agile Multiband certification indicates that devices can automatically connect and maintain connections in the most suitable way. It covers the IEEE 802.11k standard on access point information reports, the IEEE 802.11v standard enabling the exchange of information about the state of the network, the IEEE 802.11u standard on additional information about a Wi-Fi network, and IEEE 802.11r on fast-transition roaming between different access points, as well as other technologies specified by the Wi-Fi Alliance.

Wi-Fi EasyMesh is a certification program based on the Alliance's Multi-Access Point specification for creating Wi-Fi meshes from products by different vendors,[39] based on IEEE 1905.1.
It is intended to address the problem of Wi-Fi systems that need to cover large areas, where several routers serve as multiple access points, working together to form a larger, extended, unified network.[40][41][42]

Formerly known as Carrier Wi-Fi, Wi-Fi Vantage is a certification program for operators to maintain and manage quality Wi-Fi connections in high-usage environments.[43] It includes a number of certifications, such as Wi-Fi Certified ac (as in 802.11ac), Passpoint, Agile Multiband, and Optimized Connectivity.[44]

Wi-Fi Multimedia (WMM), also known as Wireless Multimedia Extensions, is a Wi-Fi Alliance interoperability certification based on the IEEE 802.11e standard. It provides basic quality of service (QoS) features to IEEE 802.11 networks.

Wi-Fi Home Design is a set of guidelines released by the Wi-Fi Alliance for the inclusion of wireless networks in home design.[45]

Wi-Fi HaLow is a low-power wide-area (LPWA) connection standard using sub-1 GHz spectrum for IoT devices. It is based on IEEE 802.11ah.[46]
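QR-based provisioning of the kind mentioned under Wi-Fi Easy Connect above can be illustrated with the widely used "WIFI:" QR payload format. Note that Easy Connect itself specifies "DPP:" bootstrapping URIs; the simpler legacy format below is the one most phone cameras recognise. A minimal Python parser, which for brevity ignores the escaping of special characters:

def parse_wifi_qr(payload: str) -> dict:
    """Parse a 'WIFI:' QR provisioning string, e.g.
    'WIFI:T:WPA;S:HomeNet;P:hunter2;;' -> network parameters."""
    if not payload.startswith("WIFI:"):
        raise ValueError("not a Wi-Fi provisioning QR payload")
    fields = {}
    for part in payload[5:].rstrip(";").split(";"):
        if part:
            key, _, value = part.partition(":")
            fields[key] = value
    return {"ssid": fields.get("S"), "security": fields.get("T"),
            "password": fields.get("P")}

print(parse_wifi_qr("WIFI:T:WPA;S:HomeNet;P:hunter2;;"))
# {'ssid': 'HomeNet', 'security': 'WPA', 'password': 'hunter2'}

Easy Connect's DPP URIs additionally carry a public key, so the scanning device can authenticate the enrollee rather than merely receive a shared passphrase.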
https://en.wikipedia.org/wiki/Wi-Fi_Alliance#Wi-Fi_EasyMesh
KNX is an open standard (see EN 50090, ISO/IEC 14543) for commercial and residential building automation. KNX devices can manage lighting, blinds and shutters, HVAC, security systems, energy management, audio/video, domestic appliances, displays, remote control, etc. KNX evolved from three earlier standards: the European Home Systems Protocol (EHS), BatiBUS, and the European Installation Bus (EIB or Instabus). It can use twisted pair (in a tree, line or star topology), powerline, RF, or IP links. On this network, the devices form distributed applications and tight interaction is possible. This is implemented via interworking models with standardised datapoint types and objects, modelling logical device channels. The KNX standard has been built on the OSI-based EIB communication stack extended with the physical layers, configuration modes and application experience of BatiBUS and EHS. KNX installations can use several physical communication media (twisted pair, powerline, RF, and IP). KNX is not based on a specific hardware platform and a network can be controlled by anything from an 8-bit microcontroller to a PC, according to the demands of a particular building. The most common form of installation is over the twisted pair medium. KNX is an approved standard by several organisations, inter alia ISO/IEC and CENELEC.[1] It is administered by the KNX Association cvba, a non-profit organisation governed by Belgian law which was formed in 1999. The KNX Association had 500 registered hardware and software vendor members from 45 nations as at 1 July 2021. It had partnership agreements with 100,000 installer companies in 172 countries and more than 500 registered training centres.[2] This is a royalty-free open standard and thus access to the KNX specifications is unrestricted.[3] KNX devices are commonly connected by a twisted pair bus and can be modified from a controller. The bus is routed in parallel to the electrical power supply to all devices and systems on the network.[4] Classifying devices as either "sensor" or "actuator" is outdated and simplistic: many actuators include controller functionality as well as sensor functionality (for instance measuring operating hours, number of switch cycles, current, electrical power consumption, and more). Application software, together with system topology and commissioning software, is loaded onto the devices via a system interface component. Installed systems can be accessed via LAN, point-to-point links, or phone networks for central or distributed control of the system via computers, tablets, touch screens, and smartphones. Central to the KNX architecture concepts are datapoints (inputs, outputs, parameters, and diagnostic data) which represent process and control variables in the system. The standardised containers for these datapoints are group objects and interface object properties. The communication system offers a reduced instruction set to read and write datapoint values. Datapoints have to conform to standardised datapoint types, themselves grouped into functional blocks. These functional blocks and datapoint types are related to application fields, but some of them are of general use (such as date and time). Datapoints may be accessed through unicast or multicast mechanisms. To logically link applications' datapoints across the network, KNX has three underlying binding schemes: free, structured, and tagged binding. The common kernel sits on top of the physical layers and the medium-specific data link layer and is shared by all the devices on the KNX network.
It is OSI 7-layer model compliant. An installation has to be configured at the network topology level and at individual nodes or devices. The first level is a precondition or "bootstrap" phase, prior to the configuration of the distributed applications, i.e. binding and parameter setting. Configuration may be achieved through a combination of local activity on the devices (such as pushing a button) and active network management communication over the bus (peer-to-peer, or more centralized master-slave). KNX supports several configuration modes: some require more active management over the bus, whereas others are mainly oriented towards local configuration. There are three categories of KNX devices. KNX encompasses tools for project engineering tasks such as linking a series of individual devices into a functioning installation and integrating different media and configuration modes. This is embodied in an Engineering Tool Software (ETS) suite. A KNX installation always consists of a set of devices connected to the bus or network. Device models vary according to node roles, capabilities, management features and configuration modes, and are all laid down in the profiles. There are also general-purpose device models, such as for bus coupling units (BCUs) or bus interface modules (BIMs). Devices may be identified and subsequently accessed throughout the network either by their individual address or by their unique serial number, depending on the configuration mode. (Unique serial numbers are allocated by the KNX Association Certification Department.) Devices can also disclose both a manufacturer-specific reference and functional (manufacturer-independent) information when queried. A KNX wired network can be formed with tree, line and star topologies, which can be mixed as needed; ring topologies are not supported. A tree topology is recommended for a large installation. KNX can link up to 57,375 devices using 16-bit addresses. Coupling units allow address filtering, which helps to improve performance given the limited bus signal speed. An installation based on KNXnet/IP allows the integration of KNX subnetworks via IP, as the KNX address structure is similar to an IP address. The TP1 twisted pair bus (inherited from EIB) provides asynchronous, character-oriented data transfer and half-duplex bidirectional differential signaling with a signaling speed of 9600 bit/s. Medium access control is via CSMA/CA. Every bus user has equal data transmission rights and data is exchanged directly (peer-to-peer) between bus users. SELV power is distributed via the same pair for low-power devices. A deprecated specification, TP0, running at a slower signalling speed of 4800 bit/s, has been retained from the BatiBUS standard, but KNX products cannot exchange information with BatiBUS devices. PL 110 power-line transmission is delivered using spread frequency shift keying signalling with asynchronous transmission of data packets and half-duplex bidirectional communication. It uses the central frequency 110 kHz (CENELEC B-band) and has a data rate of 1200 bit/s. It also uses CSMA. KNX Powerline is aimed at smart white goods, but take-up has been low. An alternative variant, PL 132, has a carrier frequency centred on 132.5 kHz (CENELEC C-band). RF enables communication in the 868.3 MHz band using frequency shift keying with Manchester data encoding. KNXnet/IP (UDP port 3671) provides integration solutions for IP-enabled media like Ethernet (IEEE 802.2), Bluetooth, Wi-Fi/Wireless LAN (IEEE 802.11), FireWire (IEEE 1394), etc.
Ignoring any preamble for medium-specific access and collision control, a frame generally comprises a control field, the source (individual) address, the destination (individual or group) address, routing and length information, the transport- and application-layer control fields with the user payload, and a checksum. KNX telegrams can be signed or encrypted thanks to extensions of the protocol developed starting in 2013: KNX Data Secure for securing telegrams on the traditional KNX media TP and RF, and KNX IP Secure for securing KNX telegrams tunnelled via IP. KNX Data Secure became an EN standard (EN 50090-3-4) in 2018, and KNX IP Secure an ISO standard (ISO 22510) in 2019. Any product labeled with the KNX trademark must be certified to conform with the standards (and thus be interoperable with other devices) by accredited third-party test labs. All products bearing the KNX logo are programmed through a common interface using the vendor-independent ETS software.
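To make the addressing carried in those frames concrete, the sketch below packs a three-level KNX individual address (area.line.device) and a three-level group address (main/middle/sub) into the 16-bit fields used in a telegram. It is a minimal illustration of the documented address structure, not an implementation of the KNX stack; the helper names are invented for this example.

# Minimal sketch of KNX 16-bit address packing (names are illustrative).
# Individual address: 4-bit area, 4-bit line, 8-bit device -> "1.1.10"
# Group address (3-level): 5-bit main, 3-bit middle, 8-bit sub -> "1/2/3"

def individual_address(area: int, line: int, device: int) -> int:
    assert 0 <= area <= 0xF and 0 <= line <= 0xF and 0 <= device <= 0xFF
    return (area << 12) | (line << 8) | device

def group_address(main: int, middle: int, sub: int) -> int:
    assert 0 <= main <= 0x1F and 0 <= middle <= 0x7 and 0 <= sub <= 0xFF
    return (main << 11) | (middle << 8) | sub

if __name__ == "__main__":
    print(f"individual 1.1.10 -> 0x{individual_address(1, 1, 10):04X}")  # 0x110A
    print(f"group 1/2/3       -> 0x{group_address(1, 2, 3):04X}")        # 0x0A03

The 16-bit field gives the address space behind the figure of up to 57,375 linkable devices quoted above; the split into area/line reflects the coupled line topology.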
https://en.wikipedia.org/wiki/KNX
LonWorksorLocal Operating Networkis an open standard (ISO/IEC 14908) for networking platforms specifically created to address the needs of control applications. The platform is built on a protocol created byEchelon Corporationfor networking devices over media such astwisted pair,power lines,fiber optics, andwireless. It is used for the automation of various functions within buildings such aslightingandHVAC; seebuilding automation. The technology had its origins with chip designs, power lines,twisted pairs, signaling technology,routers, network management software, and other products fromEchelon Corporation. In 1999 the communications protocol (then known asLonTalk) was submitted toANSIand accepted as a standard for control networking (ANSI/CEA-709.1-B). Echelon's power line andtwisted pairsignaling technology were also submitted toANSIfor standardization and acceptance. Since then, ANSI/CEA-709.1 has been accepted as the basis for IEEE 1473-L (in-train controls),AARelectro-pneumatic braking systems for freight trains,IFSF(European petrol station control),SEMI(semiconductor equipment manufacturing), and in 2005 asEN 14908(European building automation standard). The protocol is also one of several data link/physical layers of theBACnetASHRAE/ANSIstandard forbuilding automation. Chinaratified the technology as a national controls standard, GB/Z 20177.1-2006, and as a building and intelligent community standard, GB/T 20299.4-2006; and in 2007 CECED, the European Committee of Domestic Equipment Manufacturers, adopted the protocol as part of its Household Appliances Control and Monitoring – Application Interworking Specification (AIS) standards. In 2008,ISOandIECgranted the communications protocol, twisted pair signaling technology, power line signaling technology, andInternet Protocol(IP) compatibility standard numbers ISO/IEC 14908-1, -2, -3, and -4.[1] By 2010, approximately 90 million devices were installed with LonWorks technology. Manufacturers in a variety of industries including building, home, street lighting, transportation, utility, and industrial automation have adopted the platform as the basis for their product and service offerings. Statistics as to the number of locations using the LonWorks technology are scarce, but products and applications built on top of the platform include such diverse functions as embedded machine control, municipal and highway/tunnel/street lighting, heating and air conditioning systems, intelligent electricity metering, subway train control, building lighting, stadium lighting and speaker control, security systems, fire detection and suppression, and newborn location monitoring and alarming, as well as remote power generation load control. Two physical-layer signaling technologies,twisted pairfree topologyand power-line carrier, are typically included in each of the standards created around the LonWorks technology. The two-wire layer operates at 78 kbit/s usingdifferential Manchester encoding, while the power line achieves either 5.4 or 3.6 kbit/s, depending on frequency.[2] Additionally, the LonWorks platform uses an affiliated IP tunneling standard—ISO/IEC 14908-4[3](ANSI/CEA-852)[4]—in use by a number of manufacturers[5]to connect the devices on previously deployed and new LonWorks platform-based networks to IP-aware applications or remote network-management tools. Many LonWorks platform-based control applications are being implemented with some sort of IP integration, either at the UI/application level or in the controls infrastructure. 
This is accomplished with Web services or IP-routing products available on the market. The Neuron chip, an Echelon Corporation-designed IC consisting of several 8-bit processors, was initially the only way to implement a LonTalk protocol node, and it is used in the large majority of LonWorks platform-based hardware. Since 1999, the protocol has also been available for general-purpose processors, as a port of the ANSI/CEA-709.1 standard to IP-based or 32-bit chips.[6] On 14 September 2018, Echelon Corporation was acquired by Adesto Technologies Corporation.[7] Adesto was then acquired by Dialog Semiconductor,[8] who were in turn acquired by Renesas Electronics.[9] As of 2024, Renesas continues to offer LonWorks (and BACnet) products.[10] One of the keys to the interoperability of the system is the standardisation of the variables used to describe physical things to LonWorks. This standards list is maintained by LonMark International, and each standard parameter is known as a Standard Network Variable Type (SNVT, pronounced "sniv-it"). For example, a thermostat might report temperature using SNVT_temp, defined as a 2-byte unsigned integer between 0 and 65535, representing a temperature between -274.0 and 6279.5 degrees Celsius at a precision of 0.1 °C.[12]
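The SNVT_temp encoding just described is a fixed-point scheme: the raw 16-bit value is scaled by 0.1 °C and offset by -274.0 °C. A small sketch of the conversion in both directions follows; the function names are illustrative and derive directly from the definition above.

# SNVT_temp: raw 16-bit unsigned value 0..65535 maps linearly to
# -274.0 .. 6279.5 degrees Celsius in 0.1 degC steps (raw*0.1 - 274.0).

def snvt_temp_to_celsius(raw: int) -> float:
    if not 0 <= raw <= 0xFFFF:
        raise ValueError("SNVT_temp raw value must fit in 16 bits")
    return raw * 0.1 - 274.0

def celsius_to_snvt_temp(temp_c: float) -> int:
    raw = round((temp_c + 274.0) * 10)
    if not 0 <= raw <= 0xFFFF:
        raise ValueError("temperature out of SNVT_temp range")
    return raw

if __name__ == "__main__":
    print(snvt_temp_to_celsius(0))       # -274.0 (lower bound)
    print(snvt_temp_to_celsius(65535))   # 6279.5 (upper bound)
    print(celsius_to_snvt_temp(21.5))    # 2955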
https://en.wikipedia.org/wiki/LonWorks
BACnet is a communication protocol for building automation and control (BAC) networks, defined by ASHRAE/ANSI Standard 135 and ISO 16484-5.[1] BACnet was designed to allow communication of building automation and control systems for applications such as heating, ventilating, and air-conditioning control (HVAC), lighting control, access control, and fire detection systems and their associated equipment. The BACnet protocol provides mechanisms for computerized building automation devices to exchange information, regardless of the particular building service they perform. The development of the BACnet protocol began in June 1987, in Nashville, Tennessee, at the inaugural meeting of the ASHRAE BACnet committee, known at that time as SPC 135P, "EMCS Message Protocol".[2] The committee worked at reaching consensus using working groups to divide up the task of creating a standard. The working groups focused on specific areas and provided information and recommendations to the main committee. The first three working groups were the Data Type and Attribute Working Group, the Primitive Data Format Working Group, and the Application Services Working Group. BACnet became ASHRAE/ANSI Standard 135 in 1995, and ISO 16484-5 in 2003. The Method of Test for Conformance to BACnet was published in 2003 as BSR/ASHRAE Standard 135.1. BACnet is under continuous maintenance by the ASHRAE Standing Standard Project Committee 135. BACnet had an almost immediate impact on the HVAC controls industry. In 1996, Alerton announced a BACnet product line for HVAC controls, from the operator's workstation to small variable air volume (VAV) controllers.[3] Automated Logic Corporation and Delta Controls soon followed suit. On July 12, 2017, BACnet reached a milestone with the issuance of the 1000th Vendor ID. Vendor IDs are assigned by ASHRAE and are distributed internationally; they can be viewed at the BACnet website. H. Michael (Mike) Newman, Manager of the Computer Section of the Utilities and Energy Management Department at Cornell University, served as the BACnet committee chairman until June 2000, when he was succeeded by his vice-chair of 13 years, Steven (Steve) Bushby from NIST. During Steve Bushby's four-year term as committee chair the BACnet standard was republished twice, in 2001 and 2004, each time with new capabilities added to the standard. The 2001 version featured, among other things, extensions to support fire/life-safety systems. In June 2004, 17 years after the first BACnet meeting and back in Nashville, William (Bill) Swan (a.k.a. "BACnet Bill") from Alerton began his four-year stint as committee chair. During his term the number of committee working groups grew to 11, pursuing areas such as support for lighting, access control, energy utility/building integration, and wireless communications. In January 2006 the BACnet Manufacturers Association and the BACnet Interest Group of North America combined their operations in a new organization called BACnet International. In June 2008, in Salt Lake City, Dave Robin from Automated Logic Corporation took over the reins as the new committee chair after serving four years as vice chair. During Dave's term, 22 addenda were published for the 135-2008 standard, which was republished as 135-2010. Several addenda were published for 135-2010 and the standard was republished as 135-2012.
In June 2012, in San Antonio, Carl Neilson from Delta Controls took over the reins as the new committee chair after serving four years as vice chair. During Carl's term, 12 addenda were published for the 135-2012 standard and it was republished as 135-2016. Carl stepped down as chair in June 2015. In June 2015, Bernhard Isler, from Siemens, became chair after serving three years as vice chair and four years as secretary. During Bernhard's term, 10 addenda were published for the 135-2016 standard. One addendum to 135.1-2013 was also published. Bernhard stepped down as chair in June 2018. In June 2018, Michael Osborne from Reliable Controls became chair after serving three years as secretary and three years as vice chair. The BACnet protocol defines a number of services that are used to communicate between building devices. The protocol services include Who-Is, I-Am, Who-Has, and I-Have, which are used for device and object discovery. Services such as Read-Property and Write-Property are used for data sharing. As of ANSI/ASHRAE 135-2016, the BACnet protocol defines 60 object types that are acted upon by the services. The BACnet protocol defines a number of data link and physical layers, including ARCNET, Ethernet, BACnet/IP, BACnet/IPv6, BACnet/MSTP, point-to-point over RS-232, multidrop serial bus with token passing over RS-485, Zigbee, and LonTalk. ANSI/ASHRAE 135-2020 specifies 62 standard object types. BACnet Testing Laboratories ("BTL") was established by BACnet International to test products to BACnet standards and to support compliance and interoperability testing activities; it consists of the BTL Manager and the BTL working group ("BTL-WG"). The BTL also provides testing services through BACnet Laboratories. The BTL Managers and BTL working groups of BACnet International administer the test laboratories. All BTL-recognized BACnet Test Organizations are ISO 17025 accredited. In January 2017, a new BTL certification program was announced. Under this program, the work of the BTL and WSPCert (the European BACnet certification body) is merged. This merger forms a single point of testing for both the BTL Mark and the Certificate of Conformance.
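As a concrete illustration of the discovery services mentioned above, the sketch below broadcasts a global BACnet/IP Who-Is request on UDP port 47808 (0xBAC0) and hex-dumps whatever raw I-Am frames come back. The 12 bytes are the conventional encoding (BVLC Original-Broadcast-NPDU, NPDU with global-broadcast DNET 0xFFFF, unconfirmed Who-Is APDU); fully decoding I-Am responses would require a real BACnet stack such as BACpypes, so this is a minimal probe, not an implementation of the protocol.

import socket

# Global BACnet/IP Who-Is broadcast (12 bytes):
#   BVLC:  0x81 (BACnet/IP), 0x0B (Original-Broadcast-NPDU), length 0x000C
#   NPDU:  version 0x01, control 0x20, DNET 0xFFFF (global), DLEN 0x00, hops 0xFF
#   APDU:  0x10 (Unconfirmed-Request), 0x08 (Who-Is service choice)
WHO_IS = bytes.fromhex("810b000c0120ffff00ff1008")
BACNET_PORT = 47808  # 0xBAC0

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.bind(("", BACNET_PORT))
sock.settimeout(3.0)
sock.sendto(WHO_IS, ("255.255.255.255", BACNET_PORT))

# Print raw I-Am replies; parsing them properly needs a full BACnet stack.
try:
    while True:
        data, addr = sock.recvfrom(1500)
        print(addr[0], data.hex())
except socket.timeout:
    pass
finally:
    sock.close()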
https://en.wikipedia.org/wiki/BACnet
WaveLAN was a brand name for a family of wireless networking technology sold by NCR, AT&T, Lucent Technologies, and Agere Systems, as well as being sold by other companies under OEM agreements. The WaveLAN name debuted on the market in 1990 and was in use until 2000, when Agere Systems renamed their products to ORiNOCO. WaveLAN laid an important foundation for the formation of the IEEE 802.11 working group and the resultant creation of Wi-Fi. The WaveLAN name has been used for two different families of wireless technology: the original proprietary design (later called Classic WaveLAN) and the 802.11-compliant WaveLAN IEEE products. WaveLAN was originally designed by NCR Systems Engineering, later renamed the Wireless Communication and Networking Division (WCND), at Nieuwegein, in the province of Utrecht in the Netherlands, a subsidiary of NCR Corporation, in 1986–87, and introduced to the market in 1990 as a wireless alternative to Ethernet and Token Ring.[1] The next year NCR contributed the WaveLAN design to the IEEE 802 LAN/MAN Standards Committee.[2] This led to the founding of the 802.11 Wireless LAN Working Committee, which produced the original IEEE 802.11 standard that eventually became the basis of the certification mark Wi-Fi. When NCR was acquired by AT&T in 1991, becoming the AT&T GIS (Global Information Solutions) business unit, the product name was retained, as happened two years later when the product was transferred to the AT&T GBCS (Global Business Communications Systems) business unit, and again when AT&T spun off their GBCS business unit as Lucent in 1995. The technology was also sold as WaveLAN under an OEM agreement by Epson, Hitachi, and NEC, and as the RoamAbout DS by DEC.[3] It competed directly with Aironet's non-802.11 ARLAN lineup,[4] which offered similar speeds, frequency ranges and hardware. Several companies also marketed wireless bridges and routers based on the WaveLAN ISA and PC cards, like the C-Spec OverLAN, KarlNet KarlBridge, Persoft Intersect Remote Bridge, and Solectek AIRLAN/Bridge Plus. Lucent's WavePoint II access point could accommodate both the classic WaveLAN PC cards as well as the WaveLAN IEEE cards.[5][6] Also, there were a number of compatible third-party products available to address niche markets, such as: Digital Ocean's Grouper, Manta, and Starfish offerings for the Apple Newton and Macintosh; Solectek's 915 MHz WaveLAN parallel port adapter; Microplex's M204 WaveLAN-compatible wireless print server; NEC's Japanese-market-only C&C-Net 2.4 GHz adapter for the NEC-bus; Toshiba's Japanese-market-only WaveCOM 2.4 GHz adapter for the Toshiba-Bus; and Teklogix's WaveLAN-compatible pen-based and notebook terminals.[7] During this time frame, networking professionals also realized that since NetWare 3.x and 4.x supported the WaveLAN cards and came with a Multi Protocol Router module that supported the IP/IPX RIP and OSPF routing protocols, one could construct a wireless routed network using NetWare servers and WaveLAN cards for a fraction of the cost of building a wireless bridged network using WaveLAN access points.[8] Many NetWare classes and textbooks of the time included a NetWare OS CD with a 2-person license, so potentially the only cost incurred came from hardware.[9] When the 802.11 protocol was ratified, Lucent began producing chipsets and PC cards to support this new standard under the name of WaveLAN IEEE. WaveLAN was among the first products certified by the Wi-Fi Alliance, originally called the Wireless Ethernet Compatibility Association (WECA). Shortly thereafter, Lucent spun off its semiconductor division that also produced the WaveLAN chipsets as Agere Systems.
On June 17, 2002, Proxim acquired the IEEE 802.11 LAN equipment business, including the ORiNOCO trademark, from Agere Systems. Proxim later renamed its entire 802.11 wireless networking lineup to ORiNOCO, including products based on Atheros chipsets.[10] Classic WaveLAN operates in the 900 MHz or 2.4 GHz ISM bands. Being a proprietary pre-802.11 protocol, it is completely incompatible with the 802.11 standard. Soon after the publication of the IEEE 802.11 standard on November 18, 1997, WaveLAN IEEE was placed on the market. The pre-802.11 WaveLAN cards were based on the Intel 82586 Ethernet PHY controller, which was a commonly used controller in its time and was found in many ISA and MCA Ethernet cards, such as the Intel EtherExpress 16 and the 3COM 3C523.[11] The WaveLAN IEEE ISA, MCA and PCMCIA cards used a Medium Access Controller (MAC), HERMES, designed specifically for 802.11 protocol support. The radio modem section was hidden from the OS, thus making the WaveLAN card appear to be a typical Ethernet card, with the radio-specific features taken care of behind the scenes.[12] While the 900 MHz models and the early 2.4 GHz models operated on one fixed frequency, the later 2.4 GHz cards as well as some 2.4 GHz WavePoint access points had the hardware capacity to operate over ten channels, ranging from 2.412 GHz to 2.484 GHz, with the channels available being determined by the region-specific firmware.[13] For security, WaveLAN used a 16-bit NWID (NetWork ID) field, which yielded 65,536 potential combinations; the radio portion of the device could receive radio traffic tagged with another NWID, but the controller would discard the traffic. DES encryption (56-bit) was an option in some of the ISA and MCA cards and in all of the WavePoint access points. The full-length ISA and MCA cards had a socket for an encryption chip, the half-length 915 MHz ISA cards had solder pads for a socket which was never added, and the 2.4 GHz half-length ISA cards had the chip soldered directly to the board. For the IEEE 802.11 standard the goal was to provide data confidentiality comparable to that of a traditional wired network, using 64- and 128-bit data encryption technology. This first implementation was called "Wired Equivalent Privacy" (WEP). The security strategy of WaveLAN and the initial 802.11-compatible devices had shortcomings; these were addressed by Wi-Fi Protected Access (WPA), defined in IEEE 802.11i, which replaced WEP in the standard. Linux has included support for ISA Classic WaveLAN cards since the 2.0.37 kernel, while full support for the PC card Classic WaveLAN cards came later. The status of support for MCA Classic WaveLAN cards is unknown.[18][19] FreeBSD version 2.2.1 and up[20] and the Mach4 kernel[21] have had native support for the ISA Classic WaveLAN cards for several years. OpenBSD[22] and NetBSD[23] do not natively support any of the Classic WaveLAN cards. Several open-source projects, such as NdisWrapper and Project Evil, exist that allow the use of NDIS drivers via a "wrapper". This allows non-Windows operating systems to utilize the near-universal nature of drivers written for the Windows platform, to the benefit of systems such as Linux, FreeBSD, and ZETA. Classic WaveLAN technology was available for the MCA, ISA/EISA, and PCMCIA interfaces.
https://en.wikipedia.org/wiki/WaveLAN
The Institute of Electrical and Electronics Engineers (IEEE)[a] is an American 501(c)(3) public charity professional organization for electrical engineering, electronics engineering, and other related disciplines. The IEEE has a corporate office in New York City and an operations center in Piscataway, New Jersey. The IEEE was formed in 1963 as an amalgamation of the American Institute of Electrical Engineers and the Institute of Radio Engineers.[5] The IEEE traces its founding to 1884 and the American Institute of Electrical Engineers. In 1912, the rival Institute of Radio Engineers was formed.[6] Although the AIEE was initially larger, the IRE attracted more students and was larger by the mid-1950s. The AIEE and IRE merged in 1963.[7] The IEEE is headquartered in New York City, but most business is done at the IEEE Operations Center[8] in Piscataway, New Jersey, opened in 1975.[9] The Australian Section of the IEEE existed between 1972 and 1985, after which it split into state- and territory-based sections.[10] As of 2023, IEEE has over 460,000 members in 190 countries, with more than 66 percent from outside the United States.[11] IEEE claims to produce over 30% of the world's literature in the electrical, electronics, and computer engineering fields, publishing approximately 200 peer-reviewed journals[12] and magazines. IEEE publishes more than 1,700 conference proceedings every year.[13] The published content in these journals, as well as the content from several hundred annual conferences sponsored by the IEEE, is available in the IEEE Electronic Library (IEL)[14] through the IEEE Xplore[15] platform, for subscription-based access and individual publication purchases.[16] In addition to journals and conference proceedings, the IEEE also publishes tutorials and standards that are produced by its standardization committees. The organization also has its own IEEE paper format.[17] IEEE provides the IEEE Editorial Style Manual for Authors, a style guide for article authors, along with basic templates in Microsoft Word and LaTeX file formats.[18][19] It is based on The Chicago Manual of Style and does not cover grammar and usage, for which it defers to the Chicago guideline.[20][21] In April 2024 IEEE banned Lenna test images and stated that it would decline papers containing them.[22][23] IEEE has 39 technical societies, each focused on a certain knowledge area, which provide specialized publications, conferences, business networking and other services.[24] In September 2008, the IEEE History Committee founded the IEEE Global History Network,[25][26][27] which now redirects to the Engineering and Technology History Wiki.[28][25] The IEEE Foundation is a charitable foundation established in 1973[29] to support and promote technology education, innovation, and excellence.[30] It is incorporated separately from the IEEE, although it has a close relationship to it. Members of the Board of Directors of the foundation are required to be active members of IEEE, and one third of them must be current or former members of the IEEE Board of Directors. Initially, the role of the IEEE Foundation was to accept and administer donations for the IEEE Awards program, but donations increased beyond what was necessary for this purpose, and the scope was broadened.
In addition to soliciting and administering unrestricted funds, the foundation also administers donor-designated funds supporting particular educational, humanitarian, historical preservation, and peer recognition programs of the IEEE.[30]As of the end of 2014, the foundation's total assets were nearly $45 million, split equally between unrestricted and donor-designated funds.[31] In May 2019, IEEE restrictedHuaweiemployees from peer reviewing papers or handling papers as editors due to the "severe legal implications" of U.S. government sanctions against Huawei.[32]As members of its standard-setting body, Huawei employees could continue to exercise their voting rights, attend standards development meetings, submit proposals and comment in public discussions on new standards.[33][34]The ban sparked outrage among Chinese scientists on social media. Some professors in China decided to cancel their memberships.[35][36] On June 3, 2019, IEEE lifted restrictions on Huawei's editorial and peer review activities after receiving clearance from the United States government.[37][38][39] On February 26, 2022, the chair of the IEEE Ukraine Section, Ievgen Pichkalov, publicly appealed to the IEEE members to "freeze [IEEE] activities and membership in Russia" and requested "public reaction and strict disapproval of Russia's aggression" from the IEEE and IEEE Region 8.[40]On March 17, 2022, an article in the form of Q&A interview with IEEE Russia (Siberia) senior member Roman Gorbunov titled "A Russian Perspective on the War in Ukraine" was published inIEEE Spectrumto demonstrate "the plurality of views among IEEE members" and the "views that are at odds with international reporting on the war in Ukraine".[41]On March 30, 2022, activist Anna Rohrbach created an open letter to the IEEE in an attempt to have them directly address the article, stating that the article used "common narratives in Russian propaganda" on the2022 Russian invasion of Ukraineand requesting theIEEE Spectrumto acknowledge "that they have unwittingly published a piece furthering misinformation and Russian propaganda."[42]A few days later a note from the editors was added on April 6[43]with an apology "for not providing adequate context at the time of publication", though the editors did not revise the original article.[44]
https://en.wikipedia.org/wiki/IEEE
Wireless LAN (WLAN) channels are frequently accessed using IEEE 802.11 protocols. The 802.11 standard provides several radio frequency bands for use in Wi-Fi communications, each divided into a multitude of channels whose centre frequencies are numbered at 5 MHz spacing (except in the 45/60 GHz bands, where they are 0.54/1.08/2.16 GHz apart). The standards allow for channels to be bonded together into wider channels for faster throughput. 802.11ah operates in sub-gigahertz unlicensed bands. Each world region supports different sub-bands, and the channel numbers depend on the starting frequency of the sub-band they belong to. Therefore there is no global channel numbering plan, and channel numbers are incompatible between world regions (and even between sub-bands of a single world region). Several sub-bands are defined in the 802.11ah specifications. Fourteen channels are designated in the 2.4 GHz range, spaced 5 MHz apart from each other except for a 12 MHz space before channel 14.[2] The abbreviation F0 designates each channel's fundamental frequency. ^A In the 2.4 GHz bands, bonded 40 MHz channels are uniquely named by the primary and secondary 20 MHz channels, e.g. 9+13. In the 5 GHz bands they are denoted by the centre of the wider band and the primary 20 MHz channel, e.g. 42.[40] ^B In the US, 802.11 operation on channels 12 and 13 is allowed under low-power conditions. The 2.4 GHz Part 15 band in the US allows spread-spectrum operation as long as the 50 dB bandwidth of the signal is within the range of 2,400–2,483.5 MHz,[11] which fully encompasses channels 1 through 13. A Federal Communications Commission (FCC) document clarifies that only channel 14 is forbidden and that low-power transmitters with low-gain antennas may operate legally in channels 12 and 13.[12] Channels 12 and 13 are nevertheless not normally used, in order to avoid any potential interference in the adjacent restricted frequency band, 2,483.5–2,500 MHz,[13] which is subject to strict emission limits set out in 47 CFR § 15.205.[14] Per recent FCC Order 16–181, "an authorized access point device can only operate in the 2483.5–2495 MHz band when it is operating under the control of a Globalstar Network Operating Center and that a client device can only operate in the 2483.5–2495 MHz band when it is operating under the control of an authorized access point".[15] ^C Channel 14 is valid only for DSSS and CCK modes (Clause 18, a.k.a. 802.11b) in Japan. OFDM (i.e., 802.11g) may not be used. (IEEE 802.11-2007 § 19.4.2) Nations apply their own RF emission regulations to the allowable channels, allowed users and maximum power levels within these frequency ranges. Network operators should consult their local authorities, as these regulations are subject to change at any time. Most of the world allows the first thirteen channels in the spectrum. Interference happens when two networks try to operate in the same band, or when their bands overlap. The two modulation methods used have different characteristics of band usage and therefore occupy different widths. While overlapping frequencies can be configured at a location and will usually work, they can cause interference resulting in slowdowns, sometimes severe, particularly in heavy use. Certain subsets of frequencies can be used simultaneously at any one location without interference (see diagrams for typical allocations).
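The 2.4 GHz numbering just described is regular enough to compute: channels 1–13 have centre frequencies of 2407 + 5 × n MHz, while channel 14 sits at 2484 MHz after the 12 MHz gap. A minimal sketch, assuming only those published constants:

# Centre frequency (MHz) for 2.4 GHz band channels per the spacing
# described above: 5 MHz steps from 2412 MHz (ch 1), with a 12 MHz
# jump before channel 14 (2484 MHz, Japan, DSSS/CCK only).

def channel_24ghz_mhz(channel: int) -> int:
    if channel == 14:
        return 2484
    if 1 <= channel <= 13:
        return 2407 + 5 * channel
    raise ValueError("2.4 GHz band defines channels 1-14 only")

if __name__ == "__main__":
    for ch in (1, 6, 11, 13, 14):
        print(ch, channel_24ghz_mhz(ch), "MHz")  # 2412, 2437, 2462, 2472, 2484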
The consideration of spacing stems both from the basic bandwidth occupation (described above), which depends on the protocol, and from the attenuation of interfering signals over distance. In the worst case, using every fourth or fifth channel, leaving three or four channels clear between used channels, causes minimal interference, and still-narrower spacing can be used at greater distances.[18][19] The "interference" is usually not actual bit errors, but rather the wireless transmitters making space for each other; interference resulting in bit errors is rare.[19] The requirement of the standard is for a transmitter to yield when it decodes another at a level of 3 dB above the noise floor,[20] or when the non-decoded noise level is higher than a threshold Pth which, for Wi-Fi 5 and earlier, is between -76 and -80 dBm.[19] As shown in the diagram, bonding two 20 MHz channels to form a 40 MHz channel is permitted in the 2.4 GHz bands. These are generally referred to by the centres of the primary 20 MHz channel and the adjacent secondary 20 MHz channel (e.g. 1+5, 9+13, 13–9, 5–1). The primary 20 MHz channel is used for signalling and backwards compatibility; the secondary is only used when sending data at full speed. Except where noted, all information is taken from Annex J of IEEE 802.11y-2008. The 3.65 GHz range is documented as only being allowed as a licensed band in the United States. However, though not in the original specification, under newer frequency allocations from the FCC it falls under the 3.55–3.7 GHz Citizens Broadband Radio Service band. This allows for unlicensed use, under Tier 3 GAA rules, provided that the user does not cause harmful interference to Incumbent Access users or Priority Access Licensees, accepts all interference from these users,[21] and follows all the technical requirements in CFR 47 Part 96 Subpart E. A 40 MHz band is available from 3655 to 3695 MHz. It may be divided into eight 5 MHz channels, four 10 MHz channels, or two 20 MHz channels. The division into 5 MHz channels consumes all eight possible channel numbers, and so (unlike other bands) it is not possible to infer the width of a channel from its number. Instead, each wider channel shares its channel number with the 5 MHz channel just above its mid frequency, and so on. In Japan since 2002, 80 MHz of spectrum from 4910 to 4990 MHz has been available for both indoor and outdoor use, once registered. Until 2017, an additional 60 MHz of spectrum from 5030 to 5090 MHz was available for registered use; however, it has since been re-purposed and can no longer be used.[22] 50 MHz of spectrum from 4940 to 4990 MHz (WLAN channels 20–26) is in use by public safety entities in the United States. Within this spectrum there are two non-overlapping channels allocated, each 20 MHz wide. The most commonly used channels are 22 and 24. Source:[55] In 2007, the FCC (United States) began requiring that devices operating in the bands of 5.250–5.350 GHz and 5.470–5.725 GHz must employ dynamic frequency selection (DFS) and transmit power control (TPC) capabilities. This is to avoid interference with weather-radar and military applications.[56] In 2010, the FCC further clarified the use of channels in the 5.470–5.725 GHz band to avoid interference with TDWR, a type of weather radar system.[57] In FCC parlance, these restrictions are now referred to collectively as the Old Rules.
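For the 5 GHz channels discussed in the following paragraphs, the published numbering is again arithmetic: the centre frequency is 5000 + 5 × n MHz (channel 36 → 5180 MHz, channel 149 → 5745 MHz). A minimal sketch of the mapping, assuming that convention:

# Centre frequency for 5 GHz band channel numbers: 5000 + 5*n MHz.
# Channel width is not encoded in the number itself; an 80 MHz channel
# such as 42 is simply numbered by its centre (5210 MHz).

def channel_5ghz_mhz(channel: int) -> int:
    if not 1 <= channel <= 199:
        raise ValueError("implausible 5 GHz channel number")
    return 5000 + 5 * channel

if __name__ == "__main__":
    print(channel_5ghz_mhz(36))   # 5180 (20 MHz)
    print(channel_5ghz_mhz(42))   # 5210 (centre of bonded 36+40+44+48)
    print(channel_5ghz_mhz(149))  # 5745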
On 10 June 2015, the FCC approved a new ruleset for 5 GHz device operation (called the New Rules), which adds 160 and 80 MHz channel identifiers and re-enables previously prohibited DFS channels, in Publication Number 905462.[58] This FCC publication eliminated the ability for manufacturers to have devices approved or modified under the Old Rules in phases; the New Rules apply in all circumstances as of 2 June 2016.[58] Source:[55] The UK's Ofcom regulations for unlicensed use of the 5 GHz band are similar to Europe's, except that DFS is not required for the frequency range 5.725–5.850 GHz and the SRD maximum mean e.i.r.p. is 200 mW instead of 25 mW.[59] Additionally, 5.925–6.425 GHz is also available for unlicensed use, as long as it is used indoors with an SRD of 250 mW. Germany requires DFS and TPC capabilities on 5.250–5.350 GHz and 5.470–5.725 GHz as well; in addition, the frequency range 5.150–5.350 GHz is allowed only for indoor use, leaving only 5.470–5.725 GHz for outdoor and indoor use.[60] Since this is the German implementation of EU Rule 2005/513/EC, similar regulations must be expected throughout the European Union.[27][28] European standard EN 301 893 covers 5.15–5.725 GHz operation, and as of 23 May 2017 v2.1.1 has been adopted.[61] 6 GHz can now be used.[62] Austria adopted Decision 2005/513/EC directly into national law.[63] The same restrictions as in Germany apply: only 5.470–5.725 GHz is allowed to be used outdoors and indoors. Japan's use of 10 and 20 MHz-wide 5 GHz wireless channels is codified by the Association of Radio Industries and Businesses (ARIB) document STD-T71, Broadband Mobile Access Communication System (CSMA).[64] Additional rule specifications relating to 40, 80, and 160 MHz channel allocation have been taken on by Japan's Ministry of Internal Affairs and Communications (MIC).[65] In Brazil, the use of TPC is required in the 5.150–5.350 GHz and 5.470–5.725 GHz bands, but devices without TPC are allowed with a reduction of 3 dB.[66] DFS is required in the 5.250–5.350 GHz and 5.470–5.725 GHz bands, and optional in the 5.150–5.250 GHz band.[67] As of 2015, some of the Australian channels require DFS to be utilised (a significant change from the 2000 regulations, which allowed lower-power operation without DFS).[8] As per AS/NZS 4268 B1 and B2, transmitters designed to operate in any part of the 5250–5350 MHz and 5470–5725 MHz bands shall implement DFS in accordance with sections 4.7 and 5.3.8 and Annex D of ETSI EN 301 893, or alternatively in accordance with FCC paragraph 15.407(h)(2). Also, as per AS/NZS 4268 B3 and B4, transmitters designed to operate in any part of the 5250–5350 MHz and 5470–5725 MHz bands shall implement TPC in accordance with sections 4.4 and 5.3.4 of ETSI EN 301 893, or alternatively in accordance with FCC paragraph 15.407(h)(1). New Zealand regulation differs from Australian.[68] In the Philippines, the National Telecommunications Commission (NTC) allows the use of the 5150–5350 MHz and 5470–5850 MHz frequency bands indoors with an effective radiated power (ERP) not exceeding 250 mW. Indoor Wireless Data Network (WDN) equipment and devices shall not use external antennas.
All outdoor equipment/radio stations, whether for private or public WDNs, shall be covered by the appropriate permits and licenses required under existing rules and regulations.[69] Singapore regulation requires DFS and TPC to be used in the 5.250–5.350 GHz band to transmit at more than 100 mW effective isotropic radiated power (EIRP), but no more than 200 mW; it requires DFS capability on 5.250–5.350 GHz at or below 100 mW EIRP, and requires DFS and TPC capabilities on 5.470–5.725 GHz at or below 1000 mW EIRP. Operating 5.725–5.850 GHz above 1000 mW and at or below 4000 mW EIRP shall be approved on an exceptional basis.[41] In South Korea, the Ministry of Science and ICT's public notice "Technical standard for radio equipment for radio stations that can be opened without reporting" has allowed 160 MHz channel bandwidth since 2018.[70] China's MIIT expanded the allowed channels as of 31 December 2012 to add UNII-1, 5150–5250 MHz, and UNII-2, 5250–5350 MHz (DFS/TPC), similar to European standard EN 301 893 V1.7.1.[71] China's MIIT further expanded the allowed channels as of 3 July 2017 to add U-NII-3, 5725–5850 MHz.[72] Indonesia allows use of the band 5150–5350 MHz with a maximum EIRP of 200 mW (23 dBm) and a maximum bandwidth of 160 MHz, and the band 5725–5825 MHz with the same maximum EIRP and a maximum bandwidth of 80 MHz, for indoor use. Outdoors, use of the band 5725–5825 MHz with a maximum EIRP of 4 W (36 dBm) is allowed, with a maximum bandwidth of 20 MHz.[73][74] In exercise of the powers conferred by sections 4 and 7 of the Indian Telegraph Act, 1885 (13 of 1885) and sections 4 and 10 of the Indian Wireless Telegraphy Act, 1933 (17 of 1933), and in supersession of the notifications under G.S.R. 46(E) dated 28 January 2005, G.S.R. 36(E) dated 10 January 2007, and G.S.R. 38(E) dated 19 January 2007, the Central Government made the rules called the Use of Wireless Access System including Radio Local Area Network in 5 GHz band (Exemption from Licensing Requirement) Rules, 2018. The rules include criteria such as the 26 dB bandwidth of the modulated signal measured relative to the maximum level of the modulated carrier, and the maximum power within the specified measurement bandwidth, within the device operating band; measurements in the 5725–5875 MHz band are made over a bandwidth of 500 kHz; measurements in the 5150–5250 MHz, 5250–5350 MHz, and 5470–5725 MHz bands are made over a bandwidth of 1 MHz or the 26 dB emission bandwidth of the device. No licence shall be required, in indoor and outdoor environments, to establish, maintain, work, possess or deal in any wireless equipment for the purpose of low-power wireless access systems. For transmitters operating in 5725–5875 MHz, all emissions within the frequency range from the band edge to 10 MHz above or below the band edge shall not exceed an EIRP of −17 dBm/MHz; for frequencies 10 MHz or more above or below the band edge, emissions shall not exceed an EIRP of −27 dBm/MHz.[75][76] The 802.11p amendment, published on 15 July 2010, specifies WLAN in the licensed band of 5.9 GHz (5.850–5.925 GHz). The Wi-Fi Alliance has introduced the term Wi‑Fi 6E to identify and certify IEEE 802.11ax devices that support this new band, which is also used by Wi-Fi 7 (IEEE 802.11be). On 23 April 2020, the FCC voted on and ratified a Report and Order[82][83] to allocate 1.2 GHz of unlicensed spectrum in the 6 GHz band (5.925–7.125 GHz) for Wi-Fi use.
Standard-power access points are permitted indoors and outdoors at a maximum EIRP of 36 dBm in the U-NII-5 and U-NII-7 sub-bands with automatic frequency coordination (AFC). Note: partial channels are channels that span U-NII boundaries, which is permitted in 6 GHz LPI operation. Under the proposed channel numbers, the U-NII-7/U-NII-8 boundary is spanned by channels 185 (20 MHz), 187 (40 MHz), 183 (80 MHz), and 175 (160 MHz). The U-NII-6/U-NII-7 boundary is spanned by channels 115 (40 MHz), 119 (80 MHz), and 111 (160 MHz). For use in indoor environments, access points are limited to a maximum EIRP of 30 dBm and a maximum power spectral density of 5 dBm/MHz. They can operate in this mode on all four U-NII bands (5, 6, 7, 8) without the use of automatic frequency coordination. To help ensure they are used only indoors, these types of access points are not permitted to be connectorized for external antennas, weather-resistant, or run on battery power.[83]: 41 The FCC may issue a ruling in the future on a third class of very-low-power devices such as hotspots and short-range applications. In November 2020, Innovation, Science and Economic Development (ISED) Canada published "Consultation on the Technical and Policy Framework for Licence-Exempt Use in the 6 GHz Band".[84] It proposed to allow licence-exempt operation in the 6 GHz spectrum for three classes of radio local area networks (RLANs): standard-power devices for indoor and outdoor use, with a maximum EIRP of 36 dBm and a maximum power spectral density (PSD) of 23 dBm/MHz, which should employ Automated Frequency Coordination (AFC) control; low-power devices for indoor use only, with a maximum EIRP of 30 dBm and a maximum PSD of 5 dBm/MHz; and very-low-power devices for indoor and outdoor use, with a maximum EIRP of 14 dBm and a maximum PSD of -8 dBm/MHz. ECC Decision (20)01 of 20 November 2020[85] allocated the frequency band from 5945 to 6425 MHz (corresponding almost exactly to the US U-NII-5 band) for use by low-power indoor and very-low-power devices for Wireless Access Systems/Radio Local Area Networks (WAS/RLAN), with a portion specifically reserved for rail networks and intelligent transport systems.[86] Since July 2020, the UK's Ofcom has permitted unlicensed use of the lower 6 GHz band (5945 to 6425 MHz, corresponding to the US U-NII-5 band) by low-power indoor and very-low-power indoor and mobile outdoor devices.[87][88] In April 2021, Australia's ACMA opened consultations for the 6 GHz band. The lower 6 GHz band (5925 to 6425 MHz, corresponding to the US U-NII-5 band) was approved for 250 mW EIRP indoors and 25 mW outdoors on March 4, 2022.[89] Further consideration is also being given to releasing the upper 6 GHz band (6425 to 7125 MHz) for WLAN use as well, although nothing has been officially proposed at this time. In March 2024, it was reported that the ACMA had begun industry consultation to lay the groundwork to release the upper 6 GHz band in the near future.[90] As of August 2024, the proposed options for the use of the upper 6 GHz band had been published by the ACMA.[91] In September 2022, Japan's Ministry of Internal Affairs and Communications announced amendments to the ministerial order and notices related to the Radio Act,[92] permitting two classes of use: indoor-only operation with a maximum EIRP of 200 mW, and indoor and outdoor operation with a maximum EIRP of 25 mW.
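The 6 GHz channel numbers quoted above follow the 802.11ax convention of centre frequency = 5950 + 5 × n MHz, with 20 MHz channels at n = 1, 5, 9, ..., 233. A sketch under that assumption, reproducing the boundary-spanning channels mentioned in the text (the U-NII-7/U-NII-8 boundary sits at 6875 MHz):

# 6 GHz (Wi-Fi 6E) channelization: centre frequency = 5950 + 5*n MHz.
# 20 MHz channels use n = 1, 5, 9, ..., 233; wider channels are numbered
# by their centres (e.g. 160 MHz channel 175 -> 6825 MHz).

def channel_6ghz_mhz(channel: int) -> int:
    if not 1 <= channel <= 233:
        raise ValueError("6 GHz band defines channels 1-233")
    return 5950 + 5 * channel

if __name__ == "__main__":
    # Channels said above to span the U-NII-7/U-NII-8 boundary (6875 MHz):
    for ch, width in ((185, 20), (187, 40), (183, 80), (175, 160)):
        f0 = channel_6ghz_mhz(ch)
        print(f"ch {ch}: {f0 - width // 2}-{f0 + width // 2} MHz")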
In December 2022, the Russian State Commission for Radio Frequencies authorised 6 GHz operation for low-power indoor (LPI) use with transmitter power control (TPC), limited to a maximum EIRP of 200 mW and a maximum PSD of 10 mW/MHz, and for very-low-power (VLP) indoor and mobile outdoor use with a maximum EIRP of 25 mW and a maximum PSD of 1.3 mW/MHz.[93] In May 2023, Singapore's IMDA announced that it would amend its regulations to allocate the radio frequency spectrum 5,925–6,425 MHz for Wi-Fi use in Singapore.[94] On May 23, 2024, the Philippines' National Telecommunications Commission (NTC) announced that it was considering the use of the 5925–6425 MHz frequency band indoors with an effective radiated power (ERP) not exceeding 250 mW and outdoors with an effective radiated power not exceeding 25 mW.[95] On July 5, 2024, the NTC released Memorandum Circular No. 002-07-2024, allowing 6 GHz Wi-Fi use, with the added restriction that use on unmanned aircraft systems is prohibited.[96] The 802.11aj standard, also known as WiGig, operates in the 45 GHz spectrum. The 802.11ad/aj/ay standards, also known as WiGig, operate in the 60 GHz V band unlicensed ISM band spectrum. Indonesia allows the use of the band 57–64 GHz with a maximum EIRP of 10 W (40 dBm) and a maximum bandwidth of 2.16 GHz, for indoor use.[97][98]
https://en.wikipedia.org/wiki/List_of_WLAN_channels
An 802.15.4 radio module is a small device used to communicate wirelessly with other devices according to the IEEE 802.15.4 protocol. This table lists production, ready-to-use, certified modules only, not radio chips. A ready-to-use module is a complete system with a transceiver, and optionally an MCU and antenna, on a printed circuit board. While most of the modules in this list are Zigbee, Thread, ISA100.11a, or WirelessHART modules, some don't contain enough flash memory to implement a Zigbee stack and instead run the plain 802.15.4 protocol, sometimes with a lighter wireless protocol on top. Transceiver-only modules include the RF transceiver but no microprocessor, so the protocol stack must be handled by an external IC. They are lower in price than modules which contain a microprocessor, and they enable the integrator to choose any microprocessor; however, potentially more work is required to integrate the MCU and module. The tables list vendors in alphabetical order.[94][95] The following is a list of companies producing modules yet to be added to the table.
https://en.wikipedia.org/wiki/Comparison_of_802.15.4_radio_modules
Matter is a technical standard for smart home and IoT (Internet of Things) devices.[2][3][4] It aims to improve interoperability and compatibility between devices from different manufacturers, to improve security, and to always allow local control as an option.[5][6][7] Matter originated in December 2019 as the Project Connected Home over IP (CHIP) working group, founded by Amazon, Apple, Google and the Zigbee Alliance, now called the Connectivity Standards Alliance (CSA).[3][5] Subsequent members include IKEA, Huawei, and Schneider.[8][9] Version 1.0 of the specification was published on 4 October 2022.[1][10][11] The Matter software development kit is open-source under the Apache License.[12] The software development kit (SDK) is provided royalty-free,[13][14] though the ability to commission a finished product into a Matter network in the field mandates certification and membership fees,[15][16] entailing one-time, recurring, and per-product costs.[17] This is enforced using a public key infrastructure (PKI) and so-called device attestation certificates.[15] Matter-compatible software updates for many existing hubs became available in late 2022,[18][19][20] with Matter-enabled devices and software updates starting to release in 2023.[21] In December 2019, Amazon, Apple, Google, Samsung SmartThings and the Zigbee Alliance announced the collaboration and formation of the Project Connected Home over IP working group. The goal of the project is to simplify development for smart home product brands and manufacturers while increasing the compatibility of the products for consumers.[22][23] The standard operates on the Internet Protocol (IP) and functions via one or more controllers that connect and manage devices within the local network, eliminating the need for multiple proprietary hubs. Matter-certified products are engineered to operate locally and do not depend on an internet connection for their core functions.[24] Leveraging IPv6 addressing,[25] the standard facilitates seamless communication with cloud services. Its goal is to facilitate interoperability among smart home devices, mobile apps, and cloud services, employing a specific suite of IP-based networking technologies such as mDNS and IPv6.[26] By adhering to a network design that operates at the application layer of the OSI 7-layer model, Matter differs from protocols like Zigbee or Z-Wave and can, in theory, function on any IPv6-enabled network. Presently, official support is limited to Wi-Fi, Ethernet, and the wireless mesh network Thread.[27] Updates to the standard are planned to occur biannually.[28] For future versions, the working group has been working on support for ambient motion and presence sensing, environmental sensing and controls, closure sensors, energy management, Wi-Fi access points, cameras, and major appliances.[28] CSA maintains the official list of Matter-certified products[36] and restricts use of the Matter logo to certified devices. Matter product certification is also stored on the CSA's Distributed Compliance Ledger (DCL),[37] which publishes attestation information about certified devices. The primary goal of Matter is to improve interoperability in the current smart home ecosystem. CSA and its members aim for the Matter logo to become ubiquitous and for consumers to instantly recognise it as marking a smart home device that will "just work".[48] Matter brings numerous other benefits when compared to the earlier smart home ecosystem.
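Because Matter devices are ordinary IP nodes advertised over mDNS/DNS-SD, discovery can be sketched with a generic service browser. The snippet below uses the python-zeroconf library to browse for the service types the Matter specification registers (operational devices under _matter._tcp, commissionable ones under _matterc._udp); treat the service names and the library choice as assumptions of this sketch rather than an official tool.

from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

# Service types registered by the Matter spec (assumed here):
#   _matter._tcp  - operational (already-commissioned) nodes
#   _matterc._udp - commissionable nodes advertising for pairing
SERVICES = ["_matter._tcp.local.", "_matterc._udp.local."]

class MatterListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        addrs = info.parsed_addresses() if info else []
        print(f"found {name} ({type_}) at {addrs}")

    def remove_service(self, zc, type_, name): print(f"gone: {name}")
    def update_service(self, zc, type_, name): pass

zc = Zeroconf()
browsers = [ServiceBrowser(zc, s, MatterListener()) for s in SERVICES]
try:
    input("Browsing for Matter nodes; press Enter to stop...\n")
finally:
    zc.close()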
https://en.wikipedia.org/wiki/Matter_(standard)
A short-range device (SRD), described by ECC Recommendation 70-03, is a radio-frequency transmitter device used in telecommunication that has little capability of causing harmful interference to other radio equipment. Short-range devices are low-power transmitters, typically limited to 25–100 mW effective radiated power (ERP) or less depending on the frequency band, which limits their useful range to a few hundred metres; they do not require licenses to use. Short-range wireless technologies include Bluetooth, Wi-Fi, NearLink, near-field communication (NFC), LPWAN, ultra-wideband (UWB) and IEEE 802.15.4. They are implemented by chips fabricated as RF CMOS integrated circuits (RF circuits).[1][2] As of 2009, short-range wireless chips shipped approximately 1.7 billion units annually, with Bluetooth accounting for over 55% of shipments and Wi-Fi around 35%.[1] Applications for short-range wireless devices include power meters and other remote instrumentation, RFID applications, radio-controlled models, fire, security and social alarms, vehicle radars, wireless microphones and earphones, traffic signs and signals (including control signals), remote garage door openers and car keys, barcode readers, motion detectors, and many others. The European Commission mandates through CEPT and ETSI the allocation of several device bands for these purposes, restricts the parameters of their use, and provides guidelines for avoiding radio interference.[3][4][5] According to ECC Rec. 70-03, there are several annexes which encapsulate specific usage patterns, maximum emission power and duty cycle requirements. In Europe, the 863 to 870 MHz band has been allocated for license-free operation using FHSS, DSSS, or analog modulation, with either a transmission duty cycle of 0.1%, 1% or 10% depending on the band, or Listen Before Talk (LBT) with Adaptive Frequency Agility (AFA).[3][4] Although this band falls under the short-range device umbrella, it is being used in Low-Power Wide-Area Network (LPWAN) wireless telecommunication networks, designed to allow long-range communications at a low bit rate among things (connected objects). As of December 2011, unrestricted voice communications are allowed in the 869.7–870.0 MHz band with channel spacing of 25 kHz or less and a maximum power output of 5 mW ERP.[6][7][8] SRD860 handheld transceivers were briefly available in the mid-2000s; however, they did not offer dual-band compatibility with the PMR446 and LPD433 bands, and as of 2012 they had been taken off the market. From January 2018, the four RFID frequencies are also available for data networks, with a power of up to 500 mW and a bandwidth of 200 kHz. The center frequencies are 865.7, 866.3, 866.9 and 867.5 MHz. Specific restrictions on usage apply, such as a low duty cycle, LBT (listen before transmit) and APC (adaptive power control).[9]
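The duty-cycle limits quoted above translate directly into cumulative on-air time: a 1% duty cycle allows at most 36 seconds of transmission per hour, and each transmission implies a proportional silent period. A small worked sketch of that arithmetic follows (the one-hour observation window is the commonly used convention and is an assumption here):

# Duty-cycle arithmetic for SRD sub-bands (0.1%, 1% or 10% per the text).
# Assumes the usual one-hour observation window.

def airtime_budget_s(duty_cycle: float, window_s: float = 3600.0) -> float:
    """Maximum cumulative transmit time allowed in the window."""
    return duty_cycle * window_s

def min_silence_s(frame_airtime_s: float, duty_cycle: float) -> float:
    """Minimum average off-time after a frame to stay within the duty cycle."""
    return frame_airtime_s * (1.0 / duty_cycle - 1.0)

if __name__ == "__main__":
    for dc in (0.001, 0.01, 0.10):
        print(f"{dc:.1%} duty cycle -> {airtime_budget_s(dc):.0f} s/hour")
    # A 100 ms frame under a 1% duty cycle needs ~9.9 s of average silence:
    print(f"{min_silence_s(0.1, 0.01):.1f} s")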
https://en.wikipedia.org/wiki/Short-range_device
Frequency allocation (or spectrum allocation) is the part of spectrum management dealing with the designation and regulation of the electromagnetic spectrum into frequency bands, normally done by governments in most countries.[1] Because radio propagation does not stop at national boundaries, governments have sought to harmonise the allocation of RF bands and their standardization. The International Telecommunication Union defines frequency allocation as being of "a given frequency band for the purpose of its use by one or more terrestrial or space radiocommunication services or the radio astronomy service under specified conditions".[2] Frequency allocation is also a special term used in national frequency administration, alongside other related terms. Several bodies set standards for frequency allocation, including the ITU and national administrations. To improve harmonisation in spectrum utilisation, most service allocations are incorporated in national Tables of Frequency Allocations and Utilisations within the responsibility of the appropriate national administration. Allocations are made in various categories (such as primary and secondary). Allocations for military usage will be in accordance with the ITU Radio Regulations. In NATO countries, military mobile utilizations are made in accordance with the NATO Joint Civil/Military Frequency Agreement (NJFA). Some of the bands listed (e.g., amateur 1.8–29.7 MHz) have gaps and are not continuous allocations.
https://en.wikipedia.org/wiki/Frequency_allocation
Fixed wireless is the operation of wireless communication devices or systems used to connect two fixed locations (e.g., building to building or tower to building) with a radio or other wireless link, such as a laser bridge.[1] Usually, fixed wireless is part of a wireless LAN infrastructure. The purpose of a fixed wireless link is to enable data communications between the two sites or buildings. Fixed wireless data (FWD) links are often a cost-effective alternative to leasing fiber or installing cables between the buildings. The point-to-point signal transmissions occur through the air over a terrestrial microwave platform rather than through copper or optical fiber; therefore, fixed wireless does not require satellite feeds or local telephone service. The advantages of fixed wireless include the ability to connect with users in remote areas without the need for laying new cables and the capacity for broad bandwidth that is not impeded by fiber or cable capacities. Fixed wireless devices usually derive their electrical power from the public utility mains, unlike mobile wireless or portable wireless devices, which tend to be battery powered.

Fixed wireless services typically use a directional radio antenna on each end of the signal (e.g., on each building). These antennas are generally larger than those seen in Wi-Fi setups and are designed for outdoor use. Several types of radio antennas are available that accommodate various weather conditions, signal distances and bandwidths. They are usually selected to make the beam as narrow as possible and thus focus transmit power on their destination, increasing reliability and reducing the chance of eavesdropping or data injection. The links are usually arranged as a point-to-point setup to permit the use of these antennas. This also permits the link to have better speed and/or better reach for the same amount of power. These antennas are typically designed to be used in the unlicensed ISM radio frequency bands (900 MHz, 1.8 GHz, 2.4 GHz and 5 GHz); however, in most commercial installations, licensed frequencies may be used to ensure quality of service (QoS) or to provide higher connection speeds.

Businesses and homes can use fixed-wireless antenna technology to access broadband Internet and Layer 2 networks using fixed wireless broadband. Networks which have redundancy and saturation, and antennas that can aggregate signal from multiple carriers, are able to offer fail-over and redundancy for connectivity not generally afforded by wired connections. In rural areas where wired infrastructure is not yet available, fixed-wireless broadband can be a viable option for Internet access.[2]
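The value of narrow-beam antennas follows from free-space path loss, which grows with the square of both distance and frequency. A minimal Python sketch of the standard free-space path loss formula; a real link budget would add antenna gains, cable losses and a fade margin, and the 10 km / 5.8 GHz figures are made-up examples:

    import math

    def fspl_db(distance_m: float, freq_hz: float) -> float:
        """Free-space path loss in dB for an ideal line-of-sight link:
        20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
        return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

    # Hypothetical 10 km building-to-building link at 5.8 GHz:
    print(f"{fspl_db(10_000, 5.8e9):.1f} dB")  # about 127.7 dB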
https://en.wikipedia.org/wiki/Fixed_wireless
LPD433(low power device 433 MHz) is aUHFband in which license free communication devices are allowed to operate in some regions. The frequencies correspond with theITU region 1ISM bandof 433.050MHzto 434.790 MHz. The frequencies used are within the70-centimeter band, which is currently otherwise reserved for government andamateur radiooperations in the United States and most nations worldwide. LPD hand-held radios are authorized for licence-free voice communications use in most of Europe using analogfrequency modulation(FM) as part ofshort range deviceregulations,[1]with 25kHzchannel spacing, for a total of 69 channels. In some countries, LPD devices may only be used with an integral and non-removableantennawith a maximum legal power output of 10mW. Voice communication in the LPD band was introduced to reduce the burden on the eight (now sixteen)[2]PMR446channels over shorter ranges (less than 1 km).[3] LPD is also used in vehicle key-less entry device, garage or gate openers and some outdoor home weather station products. In the UK, LPD433 equipment that meets the respectiveOfcomInterface Requirement can be used for model control, analogue/digitised voice andremote keyless entry systems.[4]There is significant scope for interference however, both on frequency and on adjacent frequencies, as the band is far from free. The frequencies from 430 to 440 MHz are allocated on a secondary basis to licensed radio amateurs who are allowed to use up to 40 W (16 dBW) between 430 and 432 MHz and 400 W (26 dBW) between 432 and 440 MHz. Channels 1 to 14 are UK amateur repeater outputs and channels 62 to 69 are UK amateur repeater inputs. This band is shared on a secondary basis for both licensed and licence exempt users, with the primary user being theMinistry of Defence.[5] Ofcom, together with theRSGBEmerging Technology Co-ordination Committee have produced guidelines to help mitigate the side effects of interference to an extent.[6][7] Switzerland permits the use of all 69 LPD433 channels with a maximum power output of 10 mW.[8][9] According to a recently published (June 2021) resolution of the Spanish government,[10]where it defines 'interface IR-266', non-specific mobile short-range devices may be used without authorization for voice applications with 'advanced mitigation techniques' (such as listening before talking[11]) from 434.040 to 434.790 MHz, with channels narrower than 25 kHz and with a maximum 'apparent radiated power' of 10 mW. This would make the use of LPD433 channels 40 to 69 possible in Spain. Europeanremote keyless entry systemsoften use the 433 MHz band, although, as in all of Europe, these frequencies are within the70-centimeter bandallocated toamateur radio, and interference results. In Germany, before the end of 2008,[12]radio control enthusiasts were able to use frequencies from channel 03 through 67 for radio control of any form of model (air or ground-based), all with odd channel numbers (03, 05, etc. up to ch. 67),[13]with each sanctioned frequency having 50 kHz of bandwidth separation between each adjacent channel. InITU region 2(the Americas), the frequencies that LPD433 uses are also within the70-centimeter bandallocated toamateur radio. In the United States LPD433 radios can only be used underFCC amateur regulationsbyproperly licensed amateur radio operators. In Malaysia, this band is also within the 70-centimeter band (430.000 – 440.000 MHz) allocated to amateur radio. 
Class B amateur radio holders are permitted to transmit at up to 50 watts PEP power level.[14] There is no licence requirement for LPD use as long as the equipment complies with the requirements regulated by the Malaysian Communications and Multimedia Commission (MCMC). As regulated by MCMC in the Technical Code for Short Range Devices,[15] remote control and security devices are allowed up to 50 mW ERP, and Short Range Communication (SRC) devices up to 100 mW ERP. RFID is allowed up to 100 mW EIRP.

ITU radio bands (band; frequency range; wavelength range): ELF (3–30 Hz; 100–10 Mm), SLF (30–300 Hz; 10–1 Mm), ULF (300 Hz–3 kHz; 1 Mm–100 km), VLF (3–30 kHz; 100–10 km), LF (30–300 kHz; 10–1 km), MF (300 kHz–3 MHz; 1 km–100 m), HF (3–30 MHz; 100–10 m), VHF (30–300 MHz; 10–1 m), UHF (300 MHz–3 GHz; 1 m–100 mm), SHF (3–30 GHz; 100–10 mm), EHF (30–300 GHz; 10–1 mm), THF (300 GHz–3 THz; 1 mm–0.1 mm).
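For illustration, the 69-channel LPD433 plan can be generated from the 25 kHz spacing. A minimal Python sketch, assuming the commonly published anchor of 433.075 MHz for channel 1:

    # LPD433 channel plan: 69 channels at 25 kHz spacing inside
    # 433.050-434.790 MHz; channel 1 is commonly listed at 433.075 MHz
    # (that anchor is an assumption of this sketch).
    def lpd433_channel_mhz(ch: int) -> float:
        if not 1 <= ch <= 69:
            raise ValueError("LPD433 defines channels 1-69")
        return 433.075 + (ch - 1) * 0.025

    print(lpd433_channel_mhz(1))    # 433.075
    print(lpd433_channel_mhz(69))   # 434.775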
https://en.wikipedia.org/wiki/LPD433
There are several uses of the 2.4 GHz ISM radio band, and interference may occur between devices operating at 2.4 GHz. This article details the different users of the 2.4 GHz band, how they cause interference to other users, and how they are prone to interference from other users.

Many of the cordless telephones and baby monitors in the United States and Canada use the 2.4 GHz frequency,[1] the same frequency at which the Wi-Fi standards 802.11b, 802.11g, 802.11n and 802.11ax operate. This can cause a significant decrease in speed, or sometimes the total blocking of the Wi-Fi signal, when a conversation on the phone takes place.[2] There are several ways to avoid this, some simple and some more complicated, but they will sometimes fail, as numerous cordless phones use Digital Spread Spectrum: this technology was designed to ward off eavesdroppers, but the phone changes channels at random, leaving no Wi-Fi channel safe from phone interference.

Bluetooth devices intended for use in short-range personal area networks operate from 2.4 to 2.4835 GHz. To reduce interference with other protocols that use the 2.45 GHz band, the Bluetooth protocol divides the band into 80 channels, numbered 0 to 79 and each 1 MHz wide (Bluetooth Low Energy has half the number of channels, with each channel twice as wide as Bluetooth Classic), and changes channels up to 1600 times per second. Bluetooth also features Adaptive Frequency Hopping, which attempts to detect existing signals in the ISM band, such as Wi-Fi channels, and avoid them by negotiating a channel map between the communicating Bluetooth devices. The USB 3.0 computer cable standard has been proven to generate significant amounts of electromagnetic interference that can interfere with any Bluetooth devices a user has connected to the same computer.[3]

Wi-Fi (/ˈwaɪfaɪ/)[4] is technology for radio wireless local area networking of devices based on the IEEE 802.11 standards. Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing.[5] Devices that can use Wi-Fi technologies include desktops and laptops, video game consoles, smartphones and tablets, smart TVs, digital audio players, cars and modern printers. Wi-Fi compatible devices can connect to the Internet via a WLAN and a wireless access point. Such an access point (or hotspot) has a range of about 20 meters (66 feet) indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres achieved by using multiple overlapping access points. Different versions of Wi-Fi exist, with different ranges, radio bands and speeds. Wi-Fi most commonly uses the 2.4 gigahertz (12 cm) UHF and 5.8 gigahertz (5 cm) SHF ISM radio bands; these bands are subdivided into multiple channels. Each channel can be time-shared by multiple networks. These wavelengths work best for line-of-sight. Many common materials absorb or reflect them, which further restricts range but can tend to help minimise interference between different networks in crowded environments. At close range, some versions of Wi-Fi, running on suitable hardware, can achieve speeds of over 1 Gbit/s.
Anyone within range with a wireless network interface controller can attempt to access a network; because of this, Wi-Fi is more vulnerable to attack (called eavesdropping) than wired networks. Wi-Fi Protected Access (WPA) is a family of technologies created to protect information moving across Wi-Fi networks and includes solutions for personal and enterprise networks. Security features of WPA have included stronger protections and new security practices as the security landscape has changed over time.

To guarantee no interference in any circumstances, the Wi-Fi protocol requires 16.25 MHz (802.11b) or 22 MHz (802.11g/n) of channel separation. Any remaining gap is used as a guard band to allow sufficient attenuation along the edge channels. This guard band mainly accommodates older routers with modem chipsets prone to full channel occupancy; most modern Wi-Fi modems are not prone to excessive channel occupancy. While overlapping frequencies can be configured and will usually work, they can cause interference resulting in slowdowns, sometimes severe, particularly in heavy use. Certain subsets of frequencies can be used simultaneously at any one location without interference. However, the exact spacing required when the transmitters are not colocated depends on the protocol, the data rate selected, the distances and the electromagnetic environment where the equipment is used.[6]

The attenuation by relative channel adds to that due to distance and the effects of obstacles. Per the standards, transmitters on the same channel must take turns transmitting if they can detect each other 3 dB above the noise floor (the thermal noise floor is around -101 dBm for 20 MHz channels).[8] On the other hand, transmitters will ignore transmitters on other channels if the attenuated signal strength from them is below a threshold P_th which, for non-Wi-Fi 6 systems, is between -76 and -80 dBm.[6] While there can be interference (bit errors) at a receiver, this is usually small if the received signal is more than 20 dB above the attenuated signal strength from transmitters on the other channels.[6] The overall effect is that transmitters on adjacent channels with considerable overlap will often interfere with each other. In general, using every fourth or fifth channel, leaving three or four channels clear between used channels, causes much less interference than sharing channels, and narrower spacing can still be used at greater distances.[9][6]

Many Zigbee/IEEE 802.15.4-based wireless data networks operate in the 2.4–2.4835 GHz band, and so are subject to interference from other devices operating in that same band. The standard defines 16 channels, numbered 11–26, each 2 MHz wide and spaced 5 MHz apart; the center frequency F0 of channel 11 is 2.405 GHz. The DSSS scheme is used to spread out the spectrum (from a data rate of 250 kbit/s) and reduce interference.[7] To avoid interference from IEEE 802.11 networks, an IEEE 802.15.4 network can be configured to only use channels 15, 20, 25, and 26, avoiding the frequencies used by the commonly used IEEE 802.11 channels 1, 6, and 11. The exact channel selection depends on the locally popular 802.11 channels; for example, in a place that uses channels 1, 7, and 13, the preference would be for channels 15, 16, 21, and 22, as in the sketch below.
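A minimal Python sketch of that channel arithmetic. It treats each channel as an idealized rectangular block of spectrum (roughly plus or minus 11 MHz around an 802.11 center frequency and plus or minus 1 MHz around an 802.15.4 center frequency) and ignores spectral masks and real-world attenuation:

    # 2.4 GHz center frequencies: 802.11 channel c (c = 1..13) sits at
    # 2407 + 5*c MHz; 802.15.4 channel k (k = 11..26) sits at
    # 2405 + 5*(k - 11) MHz.
    def wifi_band(c: int) -> tuple[int, int]:
        f = 2407 + 5 * c
        return f - 11, f + 11

    def zigbee_band(k: int) -> tuple[int, int]:
        f = 2405 + 5 * (k - 11)
        return f - 1, f + 1

    def clear_zigbee_channels(wifi_channels: list[int]) -> list[int]:
        """802.15.4 channels that overlap none of the given 802.11 bands."""
        clear = []
        for k in range(11, 27):
            zlo, zhi = zigbee_band(k)
            if all(zhi <= lo or zlo >= hi
                   for lo, hi in map(wifi_band, wifi_channels)):
                clear.append(k)
        return clear

    print(clear_zigbee_channels([1, 6, 11]))  # [15, 20, 25, 26]
    print(clear_zigbee_channels([1, 7, 13]))  # [15, 16, 21, 22]

Both outputs match the channel recommendations quoted above.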
Channel coexistence is possible provided 8 meters of spacing between the 802.11 access point and the 802.15.4 device.[7] Some wireless peripherals like keyboards and mice use the 2.4 GHz band with a proprietary protocol. Amateur radiooperators in the US are able to operate from 2300 to 2450 MHz, except for 2310 to 2390 MHz. The frequencies between 2400 and 2410 MHz are reserved in the United States for amateur satellite communications and 2410 to 2450 MHz for broadband activities. Amateur radio operators are permitted up to 1500wattsof power and 22 MHz of bandwidth in the 2400 MHz band. Various portions of the 2400 MHz band are available to amateur radio operators inmany other countries.[10][11][12] Most domesticmicrowave ovensoperate by emitting a very high power signal in the 2.4 GHz band. Older devices have poor shielding,[13]and often emit a very "dirty" signal over the entire 2.4 GHz band.[a] This can cause considerable difficulties toWi-Fiand video[15]transmission, resulting in reduced range or complete blocking of the signal. TheIEEE802.11committee that developed the Wi-Fi specification conducted an extensive investigation into the interference potential of microwave ovens. A typical microwave oven uses a self-oscillatingvacuum power tubecalled amagnetronand a high voltagepower supplywith ahalf wave rectifier(often withvoltage doubling) and no DCfiltering. This produces an RFpulse trainwith aduty cyclebelow 50% as the tube is completely off for half of everyAC mainscycle: 8.33 ms in 60 Hz countries and 10 ms in 50 Hz countries. This property gave rise to a Wi-Fi "microwave oven interference robustness" mode that segments larger data frames into fragments each small enough to fit into the oven's "off" periods. The 802.11 committee also found that although the instantaneous frequency of a microwave oven magnetron varies widely over each half AC cycle with the instantaneous supply voltage, at any instant it is relativelycoherent, i.e., it occupies only a narrow bandwidth.[16]The802.11a/gsignal is inherently robust against such interference because it usesOFDMwitherror correctioninformation interleaved across the carriers; as long as only a few carriers are wiped out by strong narrow band interference, the information in them can be regenerated by the error correcting code from the carriers that do get through. Somebaby monitorsuse the 2.4 GHz band. Some transmit only audio but others also provide video. Wireless microphonesoperate as transmitters. Some digital wireless microphones use the 2.4 GHz band (e.g. AKG model DPT 70). Wireless speakersoperate as receivers. The transmitter is apreamplifierthat may be integrated in another device. Some wireless speakers use the 2.4 GHz band, with a proprietary protocol. They may be subject to dropouts caused by interference from other devices. Video senderstypically operate using anFMcarrierto carry a video signal from one room to another (for example,satellite TVorclosed-circuit television). These devices typically operate continuously but have low (10 mW) transmit power. However, some devices, especially wireless cameras, operate with (often unauthorized) high power levels, and havehigh-gain antennas.[citation needed] Amateur radiooperators can transmit two-wayamateur television(and voice) in the 2.4 GHz band—and all ISM frequencies above 902 MHz—with maximum power of 1500 watts in the US if the transmission mode does not includespread spectrumtechniques.[17][18]Other power levels apply per regions. 
In the UK, the maximum power level for a full licence is 400 watts.[19] In other countries, maximum power levels for non-spread-spectrum emissions are set by local legislation.[citation needed]

Although the transmitter of some video cameras appears to be fixed on one frequency, several models have been found to be frequency agile, and can have their frequency changed by disassembling the product and moving solder links or DIP switches inside the camera. These devices are prone to interference from other 2.4 GHz devices, because an analog video signal shows up interference very easily. A carrier-to-noise ratio of some 20 dB is required to give a "clean" picture. Continuous transmissions interfere with these, causing "patterning" on the picture, sometimes a dark or light shift, or complete blocking of the signal. Non-continuous transmissions, such as Wi-Fi, cause horizontal noise bars to appear on the screen, and can cause "popping" or "clicking" to be heard in the audio.

Video senders are a big problem for Wi-Fi networks: unlike intermittent Wi-Fi, they operate continuously and are typically only 10 MHz in bandwidth. This produces a very intense signal as viewed on a spectrum analyser, and completely obliterates over half a channel. The result, typically in a Wireless Internet service provider-type environment, is that clients (who cannot hear the video sender due to the "hidden node" effect) can hear the Wi-Fi without any issues, but the receiver on the WISP's access point is completely obliterated by the video sender, and so is extremely deaf. Furthermore, video senders are not easily interfered with by Wi-Fi, since their receiver and transmitter are typically located very close together, so the capture effect is very strong. Wi-Fi also has a very wide spectrum, so typically only 30% of the peak power of the Wi-Fi actually affects the video sender; and Wi-Fi does not transmit continuously, so it interferes only intermittently with the video sender. A combination of these factors (the low power output of the Wi-Fi compared to the video sender, the typically much shorter distance between the video sender and its receiver, and the FM capture effect) means that a video sender may cause problems to Wi-Fi over a wide area, while the Wi-Fi unit causes few problems to the video sender.[citation needed]

Many video senders on the market in the UK advertise a 100 mW equivalent isotropically radiated power (EIRP). However, the UK market only permits a 10 mW EIRP limit. These devices cause far more interference across a far wider area, due to their excessive power. Furthermore, UK video senders are required to operate across a 20 MHz bandwidth (not to be confused with 20 MHz deviation). This wider-bandwidth requirement means that some foreign imported video senders are not legal, since they operate with a 15 MHz bandwidth or lower, which produces a higher spectral power density and increases interference. Furthermore, most other countries permit 100 mW EIRP for video senders, meaning that many video senders in the UK have excessive power outputs.[citation needed]

Many radio-controlled drones, model aircraft, model boats and toys use the 2.4 GHz band. These radio systems can reach up to 500 meters in radio-controlled cars and over 2.5 kilometres (1.6 mi) in drones and airplanes. Some garage door openers use the 2.4 GHz band. Certain car manufacturers use the 2.4 GHz frequency for their car alarm internal movement sensors.
These devices transmit on 2.45 GHz (between channels 8 and 9) at a strength of 500 mW. Because of channel overlap, this causes problems for channels 6 and 11, which are commonly used default channels for Wi-Fi connections. Because the signal is transmitted as a continuous tone, it causes particular problems for Wi-Fi traffic. This can be clearly seen with spectrum analysers. These devices, due to their short range and high power, are typically not susceptible to interference from other devices on the 2.4 GHz band.[citation needed]

Some radars use the 2.4 GHz band. Some 'smart' power meters use the 2.4 GHz band.[citation needed] Some wireless power transmission systems use the 2.4 GHz band.[citation needed] USB 3.0 devices and cables, if not shielded properly, may introduce noise to the 2.4 GHz band.[20]

Normally, interference is not too hard to find. Inexpensive products are coming onto the market which act as spectrum analyzers and connect to a laptop over a standard USB interface, so the interference source can be found fairly easily with a little work, a directional antenna, and some driving around. Where Wi-Fi can be avoided, it is better to use Ethernet or perhaps power-line communication (PLC), though power surges may travel through any conductive cable. A general strategy for Wi-Fi is to use only the 5 GHz and 6 GHz bands for devices that support them, and to switch off the 2.4 GHz radios in the access points once this band is no longer needed.

Often, solving interference is as simple as changing the channel of the offending device; this technique is considered part of the installation process. Where the channel of one system, such as a wireless access point, cannot be changed, and it is being interfered with by something such as a video sender, the owner of the video sender may change the channel it is using. Another cure is to offer an alternative product to the owner free of charge. Typically this would be a wired camera (which normally has far better performance than a wireless camera anyway), a cable to replace the video sender, or an alternative video sender which has been hard-wired to an alternative channel, with no means of changing it back to the offending frequency. Yet another cure is to move from 2.4 GHz to another frequency which lacks the vulnerability to interference inherent at that frequency, for example the 5 GHz frequency of 802.11a/n. If a device using a proprietary protocol is causing or suffering interference, replacing it with another one using a different communication scheme (proprietary or standard) might solve the problem.

In extreme cases, where the interference is either deliberate or all attempts to get rid of the offending device have proved futile, it may be possible to change the parameters of the network. Changing collinear antennas for high-gain directional dishes normally works very well, since the narrow beam from a high-gain dish will not physically "see" the interference. Often sector antennae have sharp "nulls" in their vertical pattern, so changing the tilt angle of sector antennas, with a spectrum analyzer connected to monitor the strength of the interference, can place the offending device within the null of the sector. High-gain antennas on the transmitter end can "overpower" the interference, although their use may cause the effective radiated power (ERP) of the signal to become too high, and so may not be legal. Interference caused by a Wi-Fi network to its neighbors can be reduced by adding more base stations to that network.
Every Wi-Fi standard provides for automatic adjustment of the data rate to channel conditions; poor links (usually those spanning greater distances) automatically operate at lower speeds. Deploying additional base stations around the coverage area of a network, particularly in existing areas of poor or no coverage, reduces the average distance between a wireless device and its nearest access point and increases the average speed. The same amount of data takes less time to send, reducing channel occupancy and giving more idle time to neighboring networks, improving the performance of all networks concerned. However, there is a maximum number of base stations that can be added, after which they disrupt the network more than they help: any additional capacity is then sapped by control traffic.[21]

The alternative of increasing coverage by adding an RF power amplifier to a single base station can bring similar improvements to a wireless network. The additional power offered by a linear amplifier will increase the signal-to-noise ratio at the client device, increasing the data rates used and reducing time spent transmitting data. The improved link quality will also reduce the number of retransmissions due to packet loss, further reducing channel occupancy. However, care must be taken to use a highly linear amplifier in order to avoid adding excessive noise to the signal.

All of the base stations in a wireless network should be set to the same SSID (which must be unique among all other networks within range) and plugged into the same logical Ethernet segment (one or more hubs or switches directly connected without IP routers). Wireless clients then automatically select the strongest access point from all those with the specified SSID, handing off from one to another as their relative signal strengths change. On many hardware and software implementations, this handoff can result in a short disruption in data transmission while the client and the new base station establish a connection. This potential disruption should be factored in when designing a network for low-latency services such as VoIP.
https://en.wikipedia.org/wiki/Electromagnetic_interference_at_2.4_GHz
Insteon is a proprietary home automation (domotics) system that enables light switches, lights, leak sensors, remote controls, motion sensors, and other electrically powered devices to interoperate through power lines, radio frequency (RF) communications, or both.[2][3] It employs a dual-mesh networking topology[4] in which all devices are peers and each device independently transmits, receives, confirms and repeats messages.[5] Like other home automation systems, it has been associated with the Internet of things.[6]

Every message received by an Insteon-compatible device undergoes error detection and correction and is then retransmitted to improve reliability. All devices retransmit the same message simultaneously, so that message transmissions are synchronous to the powerline frequency, thus preserving the integrity of the message while strengthening the signal on the powerline and reducing RF dead zones. Insteon powerline messaging uses phase-shift keying. Insteon RF messaging uses frequency-shift keying.

Insteon is an integrated dual-mesh (formerly referred to as "dual-band") network that combines wireless radio frequency (RF) and a building's existing electrical wiring,[7] in which all devices are peers and each device independently transmits, receives, and repeats messages.[8][9] The electrical wiring becomes a backup transmission medium in the event of RF/wireless interference. Conversely, RF/wireless becomes a backup transmission medium in the event of powerline interference. As a peer-to-peer network, devices do not require network supervision, thus allowing optional operation without central controllers and routing tables. Insteon devices can function without a central controller. Additionally, they may be managed by a central controller to implement functions such as control via smartphones and tablets, control scheduling, event handling, and problem reporting via email or text messaging.
Insteon initially produced over 200 products using its technology, including LED bulbs, wall switches, wall keypads, sensors, plug in modules and embedded devices, along with central controllers for system management.[10]In June 2019, it was reported that Insteon was reducing the number of products it sold to focus on less commoditized connected products like smart lighting and electrical controls.[11] Insteon marketed two different central controllers: its own brand, called the Insteon Hub, and a newer HomeKit-enabled Insteon Hub Pro designed for Apple HomeKit compatibility.[12]In 2012, the company introduced the first network-controlled LED light bulb.[13] The Hub Pro was later discontinued, according to a note on Insteon's web site.[14] Older Insteon chip sets manufactured by Smartlabs can transmit, receive, and respond to (but not repeat)X10power line messages, thus enabling X10 networks to interoperate with Insteon.[15][16] In 2014, Insteon released apps forWindows 8andWindows Phone, as part of an agreement withMicrosoftto sell its kits atMicrosoft Storelocations.[17]The Windows Phone also featured voice control viaCortana.[18][19] In 2015, voice control was added via compatibility with Amazon Echo.[20]That same year,Logitechannounced that the remote for the Harmony Hub (asmart home hub) would support Insteon devices when deployed with an Insteon Hub.[21]Also in 2015, Insteon announced an initiative to integrate the Google-ownedNestlearning thermostat with the Insteon Hub.[22] Insteon was one of two launch partners forApple's HomeKitplatform, with the HomeKit-enabled Insteon Hub Pro.[23]In 2015, Insteon announced support for theApple Watch, allowing watch owners to control their home with an Insteon Hub.[24] Insteon-based products were launched in 2005 by Smartlabs,[1]the company which holds the trademark forInsteon.[25]A Smartlabs subsidiary, also namedInsteon, was created to market the technology.[26]CEO Joe Dada had previously founded Smarthome in 1992,[27]a home automation product catalog company, and operator of the Smarthome.com e-commerce site. In the late 1990s, Dada acquired two product engineering firms which undertook extensive product development efforts to create networking technology based on both power-line and RF communications. In 2004, the company filed for patent protection for the resultant technology,[28]called Insteon, and it was released in 2005. In 2012, the company released the first network-controlled light bulb using Insteon-enabled technology, and at that point Dada spun Insteon off from Smarthome.[27][29] In 2017, SmartLabs and the Insteon trademark were acquired by Richmond Capital Partners.[30] The company produced over 200 products featuring the technology.[27] In a community statement[31]published on the Insteon.com website, Smartlabs has revealed that it had been looking for a parent company to purchase and continue developing the Insteon ecosystem following supply-chain issues during the COVID-19 pandemic. This sale failed to materialize in March 2022 and subsequently a financial services firm had been tasked with optimizing the assets of the company. In mid-April 2022, the company appeared to have abruptly shut down.[32]In June 2022, a group of Insteon users acquired the company and its assets to rebuild the business.[33]In October 2022, Insteon services were brought back by the new owners.[34]
https://en.wikipedia.org/wiki/INSTEON
A low-power, wide-area network (LPWAN or LPWA network) is a type of wireless telecommunication wide area network designed to allow long-range communication at a low bit rate between IoT devices, such as sensors operated on a battery. Low power, low bit rate, and intended use distinguish this type of network from a wireless WAN that is designed to connect users or businesses and carry more data, using more power. The LPWAN data rate ranges from 0.3 kbit/s to 50 kbit/s per channel. A LPWAN may be used to create a private wireless sensor network, but may also be a service or infrastructure offered by a third party, allowing the owners of sensors to deploy them in the field without investing in gateway technology. Competing standards and vendors in the LPWAN space include Ultra Narrowband (UNB), a modulation technology used for LPWAN by various companies.[2]
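As a rough illustration of what these bit rates mean in practice, the Python sketch below computes the raw transmission time of a small payload at the quoted rates, ignoring all protocol overhead such as preambles, headers and acknowledgements; the 50-byte payload is a made-up example:

    # Raw transmission time of a payload at the LPWAN rates quoted above.
    def airtime_seconds(payload_bytes: int, bitrate_bps: float) -> float:
        return payload_bytes * 8 / bitrate_bps

    for rate in (300, 5_000, 50_000):
        print(f"{rate / 1000:g} kbit/s -> {airtime_seconds(50, rate):.3f} s")
    # 0.3 kbit/s -> 1.333 s, 5 kbit/s -> 0.080 s, 50 kbit/s -> 0.008 s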
https://en.wikipedia.org/wiki/LPWAN
TheneuRFonproject (named for a combination of "neuron" and "RF") was a research program begun in 1999 atMotorolaLabs to develop ad hoc wireless networking forwireless sensor networkapplications.[1]The biological analogy was that, while individual neurons were not very useful, in a large network they became very powerful; the same was thought to hold true for simple, low power wireless devices. Much of the technology developed in the neuRFon program was placed in theIEEE 802.15.4standard and in theZigbeespecification; examples are the 2.4GHzphysical layerof theIEEE 802.15.4standard and significant portions of the Zigbee multi-hoproutingprotocol.
https://en.wikipedia.org/wiki/NeuRFon
Sigfox 0G technology is a global Low-Power Wide-Area (LPWA) networking protocol, launched in 2010[1] and adopted by more than 70 Sigfox 0G Network Operators globally. This wireless network was designed to securely connect low-power objects, such as electricity meters, that emit small amounts of data at low cost. Sigfox was based in Labège near Toulouse, France, and once had over 375 employees in Madrid, San Francisco, Sydney and Paris.[2][3] The former Sigfox entity had raised more than $300 million from investors that included Salesforce, Intel, Samsung, NTT, SK Telecom, and the energy groups Total and Air Liquide. In November 2016 Sigfox was valued at around €600 million. In January 2022 it filed for bankruptcy.[4] In April 2022 the Singapore-based IoT company UnaBiz acquired the Sigfox 0G technology and its French network operations for a reported €25 million ($27m).[5] As of December 2024, the Sigfox 0G network managed by UnaBiz supports over 14 million active connected devices worldwide.[6]

Sigfox employs differential binary phase-shift keying (DBPSK) and Gaussian frequency-shift keying (GFSK) over the short-range device band of 868 MHz in Europe and the industrial, scientific and medical radio band of 902 MHz in the US. It utilizes a wide-reaching signal that passes freely through solid objects, called "Ultra Narrowband", and requires little energy, being termed a "low-power wide-area network" (LPWAN). The network is based on a one-hop star topology.[7] The signal can also be used to easily cover large areas and to reach underground objects.[8] As of December 2024, the Sigfox 0G global network covers a total of 5.8 million square kilometers in 75 countries, reaching 1.3 billion people.[9] The Sigfox 0G technology is supported by a number of firms in the LPWAN industry such as NXP Semiconductors, Holtek, ST Microelectronics, Semtech and Silicon Labs. The ISM radio bands support limited bidirectional communication. The existing standard for Sigfox communications supports up to 140 uplink messages a day, each of which can carry a payload of 12 octets at a data rate of up to 100 bits per second (600 bit/s in some regions).[10][11]

Upon acquisition, UnaBiz released the Sigfox device library code for connected objects to the public and IoT development community, to drive technology interoperability and the unification of LPWANs in the IoT industry.[12] The developer community can now visit the 0G technology's GitHub page and Build portal to access the new device library codes and related documentation.
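Those limits imply very little airtime per device per day. A back-of-the-envelope Python sketch, counting payload bits only; real Sigfox frames add preamble and header overhead, and uplinks may be repeated for reliability, so actual airtime is higher:

    # Payload airtime implied by the quoted limits: 140 uplinks per day,
    # 12 octets each, at 100 bit/s.
    payload_bits = 12 * 8                      # 96 bits per message
    seconds_per_message = payload_bits / 100   # 0.96 s
    daily_airtime = 140 * seconds_per_message  # 134.4 s
    print(f"{seconds_per_message:.2f} s per message, "
          f"{daily_airtime:.1f} s per day at most")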
https://en.wikipedia.org/wiki/Sigfox
Ultra-wideband impulse radio ranging (or UWB-IR ranging) is a wireless positioning technology based on the IEEE 802.15.4z standard,[1] a wireless communication protocol introduced by IEEE for systems operating in unlicensed spectrum and equipped with extremely large-bandwidth transceivers. UWB enables very accurate ranging[2] (on the order of centimeters) without introducing significant interference with narrowband systems. To achieve these stringent requirements, UWB-IR systems exploit their available bandwidth[3] (which exceeds 500 MHz for systems compliant with the IEEE 802.15.4z protocol), which guarantees very accurate timing (and thus ranging) and robustness against multipath, especially in indoor environments.[4] The available bandwidth also enables UWB systems to spread the signal power over a large spectrum[5] (this technology is thus called spread spectrum[6]), avoiding narrowband interference.[7][8][9]

UWB-IR relies on the low-power transmission of specific sequences of short-duration pulses. The transmit power is limited according to FCC regulations, in order to reduce interference and power consumption. The standard supports several operating bands. The primary time division in UWB systems is structured in frames. Each frame is the concatenation of two sequences: the synchronization header (SHR) preamble and the PPDU. The further time subdivisions of the preamble and the PPDU are organized in different ways. For localization purposes, only the preamble is employed, since it is specifically designed to allow accurate synchronization at the receiver side. The SHR sequence is itself the concatenation of two subsequences: the SYNC field and the start-of-frame delimiter (SFD).

The transmitted SHR waveform (baseband equivalent) can be modeled as

$x(t) = \sum_{k,n} c_{kn} \, p(t - nLT_c - kN_{\mathrm{cps}}T_c)$

where $c_{kn}$ are the code symbols, $p(t)$ is the pulse shape, $T_c$ is the chip duration, $L$ is the spreading factor, and $N_{\mathrm{cps}}$ is the number of chips per symbol. The received SHR waveform can instead be described as

$y(t) = \sum_{k,n,\ell} \alpha_\ell \, c_{kn} \, p(t - nLT_c - kN_{\mathrm{cps}}T_c - \tau_\ell) + w(t)$

where $\alpha_\ell$ and $\tau_\ell$ are the amplitude and propagation delay of the $\ell$-th path and $w(t)$ is additive noise.

In order to associate the propagation delay with a distance, there must exist a LoS path between transmitter and receiver or, alternatively, a detailed map of the environment has to be known in order to perform localization based on the reflected rays. In the presence of multipath, the large bandwidth is of paramount importance to distinguish all the replicas, which would otherwise significantly overlap at the receiver side, especially in indoor environments. The propagation delay can be estimated through several algorithms, usually based on finding the peak of the cross-correlation between the received signal and the transmitted SHR waveform. Commonly used algorithms are maximum correlation and maximum likelihood.[10][11]

There are two methods to estimate the mutual distance between the transceivers.[12][13][14] The first one is based on the time of arrival (TOA) and is called one-way ranging. It requires a priori synchronization between the anchors and consists in estimating the delay and computing the range as $\hat{r} = c \cdot \hat{\tau}$, where $\hat{\tau}$ is the estimated delay of the LoS path. The second method is based on the round-trip time (RTT) and is called two-way ranging.
It consists in the following procedure: the first device transmits a packet and records its departure time; the second device replies after a known response delay $T$; the first device then measures the total round-trip delay $\hat{\tau}$ between transmission and reception of the reply. In this second case the distance between the two anchors can be computed as

$\hat{r} = \tfrac{1}{2} \, c \, (\hat{\tau} - T)$

Also in this case $\hat{\tau}$ refers to the estimated delay of the LoS path. Performing ranging through UWB presents several advantages, most notably centimeter-level accuracy, robustness to multipath, and low interference with narrowband systems. However, there are also some disadvantages related to UWB systems, such as the strict limits on transmit power and the correspondingly short operating range.
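A minimal Python sketch of both steps, delay estimation by cross-correlation and two-way ranging, assuming an idealized sampled baseband signal; the template, sampling rate and delays are made-up examples rather than 802.15.4z parameters:

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def estimate_delay(rx: np.ndarray, template: np.ndarray, fs: float) -> float:
        """Delay (s) at the peak of the cross-correlation between the
        received signal and the known transmitted waveform."""
        corr = np.correlate(rx, template, mode="full")
        lag = int(np.argmax(np.abs(corr))) - (len(template) - 1)
        return lag / fs

    def two_way_range_m(rtt_s: float, reply_delay_s: float) -> float:
        """r = c * (tau - T) / 2, as in the formula above."""
        return 0.5 * C * (rtt_s - reply_delay_s)

    # Toy check: a template delayed by 100 samples at 1 GHz is 100 ns,
    # i.e. roughly 30 m of one-way propagation.
    fs = 1e9
    template = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
    rx = np.concatenate([np.zeros(100), template])
    rx = rx + 0.01 * np.random.default_rng(0).standard_normal(rx.size)
    tau = estimate_delay(rx, template, fs)        # ~1.0e-7 s
    print(two_way_range_m(2 * tau + 1e-6, 1e-6))  # ~30 m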
https://en.wikipedia.org/wiki/UWB_ranging
Aircrack-ng is a network software suite consisting of a detector, packet sniffer, WEP and WPA/WPA2-PSK cracker, and analysis tool for 802.11 wireless LANs. It works with any wireless network interface controller whose driver supports raw monitoring mode and can sniff 802.11a, 802.11b and 802.11g traffic. Packages are released for Linux and Windows.[2] Aircrack-ng is a fork of the original Aircrack project. It can be found as a preinstalled tool in many security-focused Linux distributions such as Kali Linux or Parrot Security OS,[3] which share common attributes, as they are developed under the same project (Debian).[4]

Aircrack was originally developed by French security researcher Christophe Devine.[5] Its main goal was to recover 802.11 wireless network WEP keys using an implementation of the Fluhrer, Mantin and Shamir (FMS) attack alongside attacks shared by a hacker named KoreK.[6][7][8] Aircrack was forked by Thomas D'Otreppe in February 2006 and released as Aircrack-ng (Aircrack Next Generation).[9]

Wired Equivalent Privacy was the first security algorithm to be released, with the intention of providing data confidentiality comparable to that of a traditional wired network.[10] It was introduced in 1997 as part of the IEEE 802.11 technical standard and based on the RC4 cipher and the CRC-32 checksum algorithm for integrity.[11] Due to U.S. restrictions on the export of cryptographic algorithms, WEP was effectively limited to 64-bit encryption.[12] Of this, 40 bits were allocated to the key and 24 bits to the initialization vector (IV), to form the RC4 key. After the restrictions were lifted, versions of WEP with stronger encryption were released, using 128 bits: 104 bits for the key size and 24 bits for the initialization vector, known as WEP2.[13][14]

The initialization vector works as a seed, which is prepended to the key. Via the key-scheduling algorithm (KSA), the seed is used to initialize the RC4 cipher's state. The output of RC4's pseudo random generation algorithm (PRGA) is then XORed with the plaintext to produce the ciphertext.[15] The IV is constrained to 24 bits, which means that it can take at most 16,777,216 (2^24) values, regardless of the key size.[16] Since the IV values will eventually be reused and collide (given enough packets and time), WEP is vulnerable to statistical attacks.[17] William Arbaugh notes that a 50% chance of a collision exists after 4823 packets.[18]

In 2003, the Wi-Fi Alliance announced that WEP had been superseded by Wi-Fi Protected Access (WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP and WEP2 had been deprecated.[19]

Wi-Fi Protected Access (WPA) was designed to be implemented through firmware updates rather than requiring dedicated hardware.[20] While still using RC4 at its core, it introduced significant improvements over its predecessor. WPA included two modes: WPA-PSK (WPA Personal) and WPA Enterprise. WPA-PSK (Wi-Fi Protected Access Pre-Shared Key), also known as WPA Personal, used a variant of the Temporal Key Integrity Protocol (TKIP) encryption protocol, which improved security in several ways. TKIP allocated 48 bits to the IV, compared to the 24 bits of WEP, so the maximum number of IV values is 281,474,976,710,656 (2^48).[22] In WPA-PSK, each packet was individually encrypted using the IV information, the MAC address, and the pre-shared key as inputs.
The RC4 cipher was used to encrypt the packet content with the derived encryption key.[22] Additionally, WPA introduced WPA Enterprise, which provided enhanced security for enterprise-level networks. WPA Enterprise employed a more robust authentication mechanism known as Extensible Authentication Protocol (EAP). This mode required the use of an Authentication Server (AS) such as RADIUS (Remote Authentication Dial-In User Service) to validate user credentials and grant access to the network. In 2015, the Wi-Fi Alliance recommended in a technical note that network administrators should discourage the use of WPA and that vendors should remove support for it and rely instead on the newer WPA2 standard.[24]

WPA2 (Wi-Fi Protected Access 2) was developed as an upgrade to the original WPA standard, was ratified in 2004, and became mandatory for Wi-Fi certified products in 2006.[25] Like WPA, WPA2 provides two modes: WPA2-PSK (WPA2 Personal) and WPA2 Enterprise.[26] Unlike WPA, WPA2-PSK uses the more secure Advanced Encryption Standard (AES) in CCM mode (Counter-Mode-CBC-MAC Protocol) instead of TKIP.[21] AES provides stronger authentication and encryption and is less vulnerable to attacks.[27][28] A backward-compatible version, called WPA/WPA2 (Personal), still made use of TKIP.[29] WPA2-PSK also replaces the message integrity code Michael with CCMP.[21]

In 1995, before the WEP standard was available, computer scientist David Wagner of Princeton University discussed a potential vulnerability in RC4.[15] In March 2000, a presentation by Dan Simon, Bernard Aboba, and Tim Moore of Microsoft provided a summary of 802.11 vulnerabilities. They noted that denial-of-service deauthentication attacks are possible because the messages are unauthenticated and unencrypted (later implemented by the aireplay-ng tool).[30] In addition, they wrote that because some implementations of WEP derive the key from a password, dictionary attacks are easier than pure brute force.[31][17] In May 2001, William A. Arbaugh of the University of Maryland presented his inductive chosen-plaintext attack against WEP, with the conclusion that the protocol is vulnerable to packet forgery.[18] In July 2001, Borisov et al. published a comprehensive paper on the status of WEP and its various vulnerabilities.[17] In August 2001, in the paper Weaknesses in the Key Scheduling Algorithm of RC4, authors Scott Fluhrer, Itsik Mantin, and Adi Shamir performed a cryptanalysis of the KSA, citing Wagner among others. They stated that they had not conducted an attack against WEP, and therefore could not claim that WEP was vulnerable.[32] However, other researchers implemented the attack and were able to demonstrate the protocol's insecurity.[33][13]

In 2004, a hacker using the pseudonym KoreK posted a series of attacks on the NetStumbler.org forum, which were incorporated into the original aircrack 1.2 by Christophe Devine.[34][35] That same year, aircrack began supporting replay attacks against WEP, which use ARP requests to generate more IVs and make key recovery easier.[36] Later that year, KoreK released the Chopchop attack, an active packet injector for WEP.[37] The name of the attack derives from how it works: a packet is intercepted, part of it is "chopped" off, and a modified version is sent to the Access Point, which will drop it if it is not valid.
By repeatedly trying multiple values, the message can gradually be decrypted.[38][39][40] The Chopchop attack was later improved by independent researchers.[41] In 2005, security researcher Andrea Bittau presented the paper The Fragmentation Attack in Practice. The attack of the same name exploits the fact that WEP splits data into smaller fragments, which are reassembled by the receiver. Taking advantage of the fact that at least part of the plaintext of some packets may be known, and that the fragments may share the same IV, data can be injected at will, flooding the network to statistically increase the chances of recovering the key.[15] In April 2007 a team at the Darmstadt University of Technology in Germany presented a new attack, named "PTW" (from the researchers' names Pyshkin, Tews and Weinmann). It decreased the number of initialization vectors (IVs) needed to decrypt a WEP key and has been included in the aircrack-ng suite since the 0.9 release.[42][43]

The first known attack on WPA was described by Martin Beck and Erik Tews in November 2008. They described an attack against TKIP in the paper Practical Attacks Against WEP and WPA. The proof of concept resulted in the creation of tkiptun-ng.[47] In 2009, their attack was improved and demonstrated by a research group from Norway.[50]

The aircrack-ng software suite includes the tools described below. aircrack-ng supports cracking WEP (FMS, PTW, KoreK and dictionary attacks), WPA/WPA2 and WPA2 keys (using dictionary attacks).[51] While it doesn't support direct attacks on WPA3 (introduced in 2018), it has been used successfully in combination with a downgrade attack.[52]

airbase-ng incorporates techniques for attacking clients, instead of Access Points. Some of its features include an implementation of the Caffe Latte attack (developed by security researcher Vivek Ramachandran)[53] and the Hirte attack (developed by Martin Beck).[54] The WEP Hirte attack is a method of creating an Access Point with the same SSID as the network to be exploited (similar to an evil twin attack).[55] If a client (that was previously connected to the victim's access point) is configured to automatically reconnect, it will try the rogue AP. At this point, ARP packets are sent in the process of obtaining a local IP address, and airbase-ng can collect IVs that can later be used by aircrack-ng to recover the key.[56]

aireplay-ng is an injector and frame replay tool.[51][57] Deauthentication attacks are supported.[30] Deauthentication refers to a feature of IEEE 802.11 which is described as a "sanctioned technique to inform a rogue station that they have been disconnected from the network".[58] Since this management frame doesn't need to be encrypted and can be generated knowing only the client's MAC address, aireplay-ng can force a client to disconnect and capture the handshake (or perform a Denial of service attack).
In addition, a client deauthentication and subsequent reconnection will reveal a hidden SSID.[30] Other features include the ability to perform fake authentication, ARP request replay, the fragmentation attack, and the Caffe Latte and Chopchop attacks.[59]

airmon-ng can place supported wireless cards in monitor mode.[51] Monitor mode refers to a provision in the IEEE 802.11 standard for auditing and design purposes,[60] in which a wireless card can capture packets in air range.[61] airmon-ng is also able to detect programs that could interfere with proper operation and kill them.[citation needed]

airodump-ng is a packet sniffer.[51] It can store information in various formats, making it compatible with software other than the aircrack-ng suite. It supports channel-hopping.[62]

airserv-ng is a wireless card server, which allows multiple wireless programs to use a card independently.[63]

A virtual tunnel interface creator; its main uses are monitoring traffic as an intrusion detection system and injecting arbitrary traffic into a network.[64]

A tool to automate WEP cracking and the logging of WPA handshakes.

easside-ng is an automated tool which attempts connection to a WEP Access Point without knowing the encryption key. It uses the fragmentation attack and a remote server (which can be hosted with the tool buddy-ng) in an attempt to recover an encrypted packet, exploiting the AP, which will decrypt it for the attacker.[65]

tkiptun-ng is a WPA/TKIP attack tool developed by Martin Beck.

wesside-ng is a proof of concept based on the tool wesside, originally written by Andrea Bittau to demonstrate his fragmentation attack. It is a tool designed to automate the process of recovering a WEP key.[15]

airdecap-ng decrypts WEP- or WPA-encrypted capture files with a known key.[36] It was formerly known as airunwep and 802ether.[35]

airdecloak-ng can remove WEP cloaked frames from pcap files. Cloaking refers to a technique used by wireless intrusion prevention systems (which rely on WEP encryption) to inject packets encrypted with random keys into the air, in an attempt to make cracking more difficult.[66]

airolib-ng can create a database of pre-computed hash tables by computing the Pairwise Master Keys (PMK) used during the 4-way handshaking process.[67] In WPA and WPA2, the PMKs are derived from the password selected by the user, the SSID name, its length, the number of hashing iterations, and the key length.[68][6] During the 4-way handshaking process, the PMK is used, among other parameters, to generate a Pairwise Transient Key (PTK), which is used to encrypt data between the client and Access Point.[69][70] The hash tables can be reused, provided the SSID is the same.[71] Pre-computed tables for the most common SSIDs are available online.[72]

A tool that performs operations on a directory to search for pcap files and filter out relevant data.

buddy-ng is a tool used in conjunction with the tool easside-ng, running on a remote computer. It is the receiving end that allows a packet decrypted by the access point to be captured.[65]

ivstools can extract initialization vectors from a capture file (.cap).

kstats is a tool for displaying the Fluhrer, Mantin and Shamir attack algorithm votes[note 1] for an IVS dump with a given WEP key.

makeivs-ng is a testing tool used to generate an IVS file with a given WEP key.

packetforge-ng can create and modify packets for injection.
It supports packets such asarp requests,UDP,ICMPand custom packets.[73]It was originally written by Martin Beck.[74] wpaclean reduces the contents of the capture file (generated by airodump-ng) by keeping only what is related to the 4-wayhandshakeand a beacon. The former refers to a cryptographic process that establishes encryptionwithout publicly revealing the key.[75]Meanwhile, thebeacon frameis sent by the Access Point to announce its presence and other information to nearby clients.[76][77] airventriloquist-ng is a tool that can perform injection on encrypted packets.
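The PMK derivation that airolib-ng precomputes (described above) is standard PBKDF2. A minimal Python sketch using only the standard library, with the well-known WPA/WPA2 parameters of 4096 HMAC-SHA1 iterations and a 256-bit output; the passphrase and SSID are made-up examples:

    import hashlib

    # WPA/WPA2 Pairwise Master Key: PBKDF2-HMAC-SHA1 over the
    # passphrase, salted with the SSID, 4096 iterations, 32 bytes.
    def derive_pmk(passphrase: str, ssid: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                                   ssid.encode(), 4096, dklen=32)

    pmk = derive_pmk("correct horse battery staple", "ExampleSSID")
    print(pmk.hex())

Because the SSID is the salt, a precomputed table is reusable only for networks sharing that SSID, which is why tables for the most common SSIDs circulate online.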
https://en.wikipedia.org/wiki/Aircrack-ng
In electrical engineering, electromagnetic shielding is the practice of reducing or redirecting the electromagnetic field (EMF) in a space with barriers made of conductive or magnetic materials. It is typically applied to enclosures, for isolating electrical devices from their surroundings, and to cables, to isolate wires from the environment through which the cable runs (see Shielded cable). Electromagnetic shielding that blocks radio frequency (RF) electromagnetic radiation is also known as RF shielding. EMF shielding serves to minimize electromagnetic interference. The shielding can reduce the coupling of radio waves, electromagnetic fields, and electrostatic fields. A conductive enclosure used to block electrostatic fields is also known as a Faraday cage. The amount of reduction depends very much upon the material used, its thickness, the size of the shielded volume, the frequency of the fields of interest, and the size, shape and orientation of holes in the shield relative to the incident electromagnetic field.

Typical materials used for electromagnetic shielding include a thin layer of metal, sheet metal, metal screen, and metal foam. Common sheet metals for shielding include copper, brass, nickel, silver, steel, and tin. Shielding effectiveness, that is, how well a shield reflects or absorbs/suppresses electromagnetic radiation, is affected by the physical properties of the metal. These may include conductivity, solderability, permeability, thickness, and weight. A metal's properties are an important consideration in material selection. For example, electrically dominant waves are reflected by highly conductive metals like copper, silver, and brass, while magnetically dominant waves are absorbed/suppressed by a less conductive metal such as steel or stainless steel.[2] Further, any holes in the shield or mesh must be significantly smaller than the wavelength of the radiation that is being kept out, or the enclosure will not effectively approximate an unbroken conducting surface.

Another commonly used shielding method, especially with electronic goods housed in plastic enclosures, is to coat the inside of the enclosure with a metallic ink or similar material. The ink consists of a carrier material loaded with a suitable metal, typically copper or nickel, in the form of very small particulates. It is sprayed onto the enclosure and, once dry, produces a continuous conductive layer of metal, which can be electrically connected to the chassis ground of the equipment, thus providing effective shielding.

Electromagnetic shielding is the process of lowering the electromagnetic field in an area by barricading it with conductive or magnetic material. Copper is used for radio frequency (RF) shielding because it absorbs radio and other electromagnetic waves. Properly designed and constructed RF shielding enclosures satisfy most RF shielding needs, from computer and electrical switching rooms to hospital CAT-scan and MRI facilities.[3][4] EMI (electromagnetic interference) shielding is of great research interest, and several new types of nanocomposites made of ferrites, polymers, and 2D materials are being developed to obtain more efficient RF/microwave-absorbing materials (MAMs).[5] EMI shielding is often achieved by electroless plating of copper, as most popular plastics are non-conductive, or by special conductive paint.[1] One example is a shielded cable, which has electromagnetic shielding in the form of a wire mesh surrounding an inner core conductor.
The shielding impedes the escape of any signal from the core conductor, and also prevents signals from being added to the core conductor. Some cables have two separate coaxial screens, one connected at both ends, the other at one end only, to maximize shielding of both electromagnetic and electrostatic fields.

The door of a microwave oven has a screen built into the window. From the perspective of microwaves (with wavelengths of 12 cm), this screen completes a Faraday cage formed by the oven's metal housing. Visible light, with wavelengths ranging between 400 nm and 700 nm, passes easily through the screen holes.

RF shielding is also used to prevent access to data stored on RFID chips embedded in various devices, such as biometric passports.[6] NATO specifies electromagnetic shielding for computers and keyboards to prevent passive monitoring of keyboard emissions that would allow passwords to be captured; consumer keyboards do not offer this protection, primarily because of the prohibitive cost.[7] RF shielding is also used to protect medical and laboratory equipment against interfering signals, including AM, FM, TV, emergency services, dispatch, pagers, ESMR, cellular, and PCS. It can also be used to protect the equipment at AM, FM or TV broadcast facilities. Another example of the practical use of electromagnetic shielding is in defense applications. As technology improves, so does the susceptibility to various types of nefarious electromagnetic interference. Encasing a cable inside a grounded conductive barrier can mitigate these risks.

Electromagnetic radiation consists of coupled electric and magnetic fields. The electric field produces forces on the charge carriers (i.e., electrons) within the conductor. As soon as an electric field is applied to the surface of an ideal conductor, it induces a current that causes displacement of charge inside the conductor, which cancels the applied field inside, at which point the current stops. Similarly, varying magnetic fields generate eddy currents that act to cancel the applied magnetic field. (The conductor does not respond to static magnetic fields unless the conductor is moving relative to the field.) The result is that electromagnetic radiation is reflected from the surface of the conductor: internal fields stay inside, and external fields stay outside.

Several factors serve to limit the shielding capability of real RF shields. One is that, due to the electrical resistance of the conductor, the excited field does not completely cancel the incident field. Also, most conductors exhibit a ferromagnetic response to low-frequency magnetic fields,[citation needed] so that such fields are not fully attenuated by the conductor. Any holes in the shield force current to flow around them, so that fields passing through the holes do not excite opposing electromagnetic fields. These effects reduce the field-reflecting capability of the shield. In the case of high-frequency electromagnetic radiation, the above-mentioned adjustments take a non-negligible amount of time, yet any such radiation energy, as far as it is not reflected, is absorbed by the skin (unless it is extremely thin), so in this case there is no electromagnetic field inside either. This is one aspect of a greater phenomenon called the skin effect. A measure of the depth to which radiation can penetrate the shield is the so-called skin depth.
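The skin depth mentioned above has a simple closed form for a good conductor. A minimal Python sketch; the copper conductivity is a standard textbook figure:

    import math

    def skin_depth_m(freq_hz: float, sigma_s_per_m: float,
                     mu_r: float = 1.0) -> float:
        """Classical skin depth: delta = 1 / sqrt(pi * f * mu * sigma)."""
        mu = mu_r * 4e-7 * math.pi  # absolute permeability
        return 1.0 / math.sqrt(math.pi * freq_hz * mu * sigma_s_per_m)

    # Copper (sigma ~ 5.8e7 S/m) at 2.45 GHz: about 1.3 micrometres,
    # which is why even thin metal layers shield microwaves effectively.
    print(skin_depth_m(2.45e9, 5.8e7))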
Equipment sometimes requires isolation from external magnetic fields.[8] For static or slowly varying magnetic fields (below about 100 kHz), the Faraday shielding described above is ineffective. In these cases, shields made of high-magnetic-permeability metal alloys can be used, such as sheets of permalloy and mu-metal,[9][10] or ferromagnetic metal coatings with a nanocrystalline grain structure.[11] These materials do not block the magnetic field, as electric shielding does, but rather draw the field into themselves, providing a path for the magnetic field lines around the shielded volume. The best shape for magnetic shields is thus a closed container surrounding the shielded volume. The effectiveness of this type of shielding depends on the material's permeability, which generally drops off at very low magnetic field strengths and at high field strengths where the material becomes saturated. Therefore, to achieve low residual fields, magnetic shields often consist of several enclosures, one inside the other, each of which successively reduces the field inside it. Entry holes within shielding surfaces may degrade their performance significantly.

Because of the above limitations of passive shielding, an alternative used with static or low-frequency fields is active shielding, in which a field created by electromagnets cancels the ambient field within a volume.[12] Solenoids and Helmholtz coils are types of coils that can be used for this purpose, as well as more complex wire patterns designed using methods adapted from those used in coil design for magnetic resonance imaging. Active shields may also be designed to account for the electromagnetic coupling with passive shields,[13][14][15][16][17] an approach referred to as hybrid shielding,[18] so that there is broadband shielding from the passive shield and additional cancellation of specific components using the active system. Additionally, superconducting materials can expel magnetic fields via the Meissner effect.
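To make the active-shielding idea concrete: the on-axis field at the centre of a Helmholtz pair is B = (4/5)^(3/2)·μ₀nI/R, so the current needed to null a given ambient field can be estimated directly. The following is a minimal sketch under assumed values; the 100-turn, 0.5 m coil pair and the 50 µT ambient field (roughly Earth-field magnitude) are illustrative choices, not taken from the article, and a practical active shield would use a feedback loop with a magnetometer rather than a fixed current.

import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability (H/m)

def helmholtz_center_field(n_turns, current_a, radius_m):
    """On-axis field at the centre of a Helmholtz pair:
    B = (4/5)**1.5 * mu0 * n * I / R."""
    return (4 / 5) ** 1.5 * MU_0 * n_turns * current_a / radius_m

def cancelling_current(b_ambient_t, n_turns, radius_m):
    """Current that produces an equal-and-opposite field at the centre."""
    return b_ambient_t * radius_m / ((4 / 5) ** 1.5 * MU_0 * n_turns)

# Assumed example: null a ~50 uT ambient field with a hypothetical
# 100-turn pair of 0.5 m radius.
I = cancelling_current(50e-6, n_turns=100, radius_m=0.5)
print(f"required current ~ {I * 1000:.0f} mA")  # ~ 278 mA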
Suppose that we have a spherical shell of a (linear and isotropic) diamagnetic material with relative permeability $\mu_{\text{r}}$, with inner radius $a$ and outer radius $b$. We then put this object in a constant magnetic field:

$$\mathbf{H}_0 = H_0 \hat{\mathbf{z}} = H_0 \cos\theta\, \hat{\mathbf{r}} - H_0 \sin\theta\, \hat{\boldsymbol{\theta}}$$

Since there are no currents in this problem except for possible bound currents on the boundaries of the diamagnetic material, we can define a magnetic scalar potential that satisfies Laplace's equation:

$$\mathbf{H} = -\nabla \Phi_M, \qquad \nabla^2 \Phi_M = 0$$

where $\mathbf{B} = \mu_{\text{r}} \mathbf{H}$.

In this particular problem there is azimuthal symmetry, so we can write down the solution to Laplace's equation in spherical coordinates:

$$\Phi_M = \sum_{\ell=0}^{\infty} \left( A_\ell r^\ell + \frac{B_\ell}{r^{\ell+1}} \right) P_\ell(\cos\theta)$$

After matching the boundary conditions

$$\left(\mathbf{H}_2 - \mathbf{H}_1\right) \times \hat{\mathbf{n}} = 0, \qquad \left(\mathbf{B}_2 - \mathbf{B}_1\right) \cdot \hat{\mathbf{n}} = 0$$

at the boundaries (where $\hat{\mathbf{n}}$ is a unit vector normal to the surface, pointing from side 1 to side 2), we find that the magnetic field inside the cavity in the spherical shell is

$$\mathbf{H}_{\text{in}} = \eta \mathbf{H}_0$$

where $\eta$ is an attenuation coefficient that depends on the thickness of the diamagnetic material and its magnetic permeability:

$$\eta = \frac{9\mu_{\text{r}}}{\left(2\mu_{\text{r}}+1\right)\left(\mu_{\text{r}}+2\right) - 2\left(\frac{a}{b}\right)^3 \left(\mu_{\text{r}}-1\right)^2}$$

This coefficient describes the effectiveness of this material in shielding the external magnetic field from the cavity that it surrounds. Notice that this coefficient appropriately goes to 1 (no shielding) in the limit $\mu_{\text{r}} \to 1$, and goes to 0 (perfect shielding) in the limit $\mu_{\text{r}} \to \infty$. When $\mu_{\text{r}} \gg 1$, the attenuation coefficient takes on the simpler form

$$\eta = \frac{9}{2\left(1 - \frac{a^3}{b^3}\right)\mu_{\text{r}}}$$

which shows that the magnetic field decreases like $\mu_{\text{r}}^{-1}$.[19]
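As a quick numerical sanity check on the formula, $\eta$ and its high-permeability approximation can be evaluated side by side. The sketch below is illustrative only: the relative permeability (μ_r ≈ 20,000, a rough mu-metal-like value) and the shell radii are assumed for the example, not taken from the article.

def attenuation(mu_r, a, b):
    """Attenuation eta for a spherical shell with inner radius a, outer radius b."""
    num = 9 * mu_r
    den = (2 * mu_r + 1) * (mu_r + 2) - 2 * (a / b) ** 3 * (mu_r - 1) ** 2
    return num / den

# Assumed example: a mu-metal-like shell (mu_r ~ 20,000),
# 10 cm inner radius, 11 cm outer radius.
mu_r, a, b = 20_000, 0.10, 0.11
print(f"exact eta      : {attenuation(mu_r, a, b):.2e}")            # ~ 9.0e-04
print(f"high-mu approx : {9 / (2 * (1 - (a / b) ** 3) * mu_r):.2e}")  # ~ 9.1e-04

Both expressions agree to about a percent here, i.e., a 1 cm thick shell of this material reduces the interior field roughly a thousandfold, consistent with the $\mu_{\text{r}}^{-1}$ scaling noted above.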
https://en.wikipedia.org/wiki/Electromagnetic_shielding
Kismet is a network detector, packet sniffer, and intrusion detection system for 802.11 wireless LANs. Kismet will work with any wireless card which supports raw monitoring mode, and can sniff 802.11a, 802.11b, 802.11g, and 802.11n traffic. The program runs under Linux, FreeBSD, NetBSD, OpenBSD, and macOS. The client can also run on Microsoft Windows, although, aside from external drones (see below), only one wireless hardware device is supported as a packet source. Distributed under the GNU General Public License,[2] Kismet is free software.

Kismet differs from other wireless network detectors in working passively. Namely, without sending any loggable packets, it is able to detect the presence of both wireless access points and wireless clients, and to associate them with each other. It is also the most widely used and up-to-date open-source wireless monitoring tool.[citation needed]

Kismet also includes basic wireless IDS features, such as detecting active wireless sniffing programs (including NetStumbler) as well as a number of wireless network attacks. Kismet features the ability to log all sniffed packets and save them in a tcpdump/Wireshark- or Airsnort-compatible file format. Kismet can also capture "Per-Packet Information" headers. Kismet also features the ability to detect default or "not configured" networks, probe requests, and the level of wireless encryption used on a given access point.

In order to find as many networks as possible, Kismet supports channel hopping. This means that it constantly changes from channel to channel non-sequentially, in a user-defined sequence whose default leaves big gaps between successive channels (for example, 1-6-11-2-7-12-3-8-13-4-9-14-5-10); a minimal sketch of generating such a sequence appears at the end of this entry. The advantage of this method is that it captures more packets, because adjacent channels overlap. Kismet also supports logging of the geographical coordinates of the network if input from a GPS receiver is additionally available.

Kismet has three separate parts. A drone can be used to collect packets and then pass them on to a server for interpretation. A server can be used either in conjunction with a drone or on its own, interpreting packet data, extrapolating wireless information, and organizing it. The client communicates with the server and displays the information the server collects. With the update of Kismet to -ng, Kismet now supports a wide variety of scanning plugins, including DECT, Bluetooth, and others.

Kismet is used in a number of commercial and open-source projects. It is distributed with Kali Linux.[3] It is used for wireless reconnaissance,[4] and can be used with other packages to build an inexpensive wireless intrusion detection system.[5] It has been used in a number of peer-reviewed studies, such as "Detecting Rogue Access Points using Kismet".[6]
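The default hop pattern quoted above is simply a constant stride through the channel list, chosen so that consecutive hops land on widely separated frequencies. Below is a minimal sketch of the pattern, assuming 14 channels and a stride of 5 (coprime to 14, so every channel is visited once per cycle); this is an illustration of the idea, not Kismet's actual implementation.

def hop_sequence(num_channels=14, stride=5, start=1):
    """Yield a non-sequential channel-hopping order.

    With stride coprime to num_channels, every channel is visited
    exactly once per cycle, and successive hops are far apart in
    frequency, reducing bleed-over between adjacent channels.
    """
    ch = start
    for _ in range(num_channels):
        yield ch
        ch = (ch - 1 + stride) % num_channels + 1

print("-".join(str(c) for c in hop_sequence()))
# prints: 1-6-11-2-7-12-3-8-13-4-9-14-5-10

Running the sketch reproduces exactly the default sequence given in the article.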
https://en.wikipedia.org/wiki/Kismet_(software)
KRACK ("Key Reinstallation Attack") is a replay attack (a type of exploitable flaw) on the Wi-Fi Protected Access protocol that secures Wi-Fi connections. It was discovered in 2016[1] by the Belgian researchers Mathy Vanhoef and Frank Piessens of the University of Leuven.[2] Vanhoef's research group published details of the attack in October 2017.[3] By repeatedly resetting the nonce transmitted in the third step of the WPA2 handshake, an attacker can gradually match encrypted packets seen before and learn the full keychain used to encrypt the traffic.

The weakness is exhibited in the Wi-Fi standard itself, not in errors made by individual products or implementations of a sound standard. Therefore, any correct implementation of WPA2 is likely to be vulnerable.[4] The vulnerability affects all major software platforms, including Microsoft Windows, macOS, iOS, Android, Linux, OpenBSD, and others.[3]

The widely used open-source implementation wpa_supplicant, utilized by Linux and Android, was especially susceptible, as it can be manipulated into installing an all-zeros encryption key, effectively nullifying WPA2 protection in a man-in-the-middle attack.[5][6] Version 2.7 fixed this vulnerability.[7]

The security protocol protecting many Wi-Fi devices can essentially be bypassed, potentially allowing an attacker to intercept[8] sent and received data. The attack targets the four-way handshake used to establish a nonce (a kind of "shared secret") in the WPA2 protocol. The WPA2 standard anticipates occasional Wi-Fi disconnections and allows reconnection using the same value for the third handshake message (for quick reconnection and continuity). Because the standard does not require a different key to be used in this type of reconnection, which could be needed at any time, a replay attack is possible. An attacker can repeatedly re-send the third handshake message of another device's communication to manipulate or reset the WPA2 encryption key.[9] Each reset causes data to be encrypted using the same values, so blocks with the same content can be seen and matched, allowing the attacker to work backwards and identify parts of the keychain used to encrypt that block of data. Repeated resets gradually expose more of the keychain until eventually the whole key is known, and the attacker can read the target's entire traffic on that connection.

According to US-CERT: "US-CERT has become aware of several key management vulnerabilities in the 4-way handshake of the Wi-Fi Protected Access II (WPA2) security protocol. The impact of exploiting these vulnerabilities includes decryption, packet replay, TCP connection hijacking, HTTP content injection, and others. Note that as protocol-level issues, most or all correct implementations of the standard will be affected.
The CERT/CC and the reporting researcher KU Leuven, will be publicly disclosing these vulnerabilities on 16 October 2017."[10]

The paper describing the vulnerability is available online[11] and was formally presented at the ACM Conference on Computer and Communications Security on 1 November 2017.[5] US-CERT is tracking this vulnerability, listed as VU#228519, across multiple platforms.[12] The following CVE identifiers relate to the KRACK vulnerability: CVE-2017-13077, CVE-2017-13078, CVE-2017-13079, CVE-2017-13080, CVE-2017-13081, CVE-2017-13082, CVE-2017-13084, CVE-2017-13086, CVE-2017-13087, and CVE-2017-13088.[5]

Some WPA2 users may counter the attack by updating Wi-Fi client and access point device software, if they have devices for which vendor patches are available.[13] However, vendors may delay in offering a patch, or may not provide patches at all in the case of many older devices.[13][1] Patches are available for different devices to protect against KRACK, starting at these versions:

In order to mitigate risk on vulnerable clients, some WPA2-enabled Wi-Fi access points have configuration options that can disable EAPOL-Key[clarification needed] frame re-transmission during key installation. Attackers then cannot trigger re-transmissions by delaying frames, which denies them the means to carry out the attack, provided TDLS is not enabled.[24] One disadvantage of this method is that, with poor connectivity, key reinstallation failure may cause failure of the Wi-Fi link.

In October 2018, reports emerged that the KRACK vulnerability was still exploitable in spite of vendor patches, through a variety of workarounds for the techniques used by vendors to close off the original attack.[25]
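The core failure mode described above, reuse of a keystream when a nonce is reset, can be illustrated with a toy stream cipher. This is a minimal sketch of the underlying cryptographic principle only, not the actual KRACK attack code or the WPA2 cipher suite: the keystream here is just a repeatable pseudorandom byte stream expanded with SHA-256, and the packet contents are made up.

import hashlib
import os

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    """Toy keystream: expand (key, nonce) deterministically with SHA-256.
    Stands in for the per-packet keystream a real cipher derives."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(32)
p1 = b"user=alice; send 100 to bob.."
p2 = b"user=mallory; balance query.."

# A key reinstallation resets the nonce, so both packets end up
# encrypted under the SAME keystream.
c1 = xor(p1, keystream(key, nonce=1, length=len(p1)))
c2 = xor(p2, keystream(key, nonce=1, length=len(p2)))

# The eavesdropper never learns the key, yet c1 XOR c2 equals
# p1 XOR p2 -- known plaintext in one packet reveals the other.
leak = xor(c1, c2)
print(xor(leak, p1))  # b'user=mallory; balance query..'

This is why nonce reuse is fatal for stream ciphers in general, and why the ability to force a nonce reset through the handshake is enough to begin recovering traffic without ever attacking the key itself.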
https://en.wikipedia.org/wiki/KRACK
A list of software created and maintained by people other than the manufacturer of the product. The extent of support for (and testing on) particular hardware varies from project to project. Notable custom-firmware projects for wireless routers: many of these will run on various brands such as Linksys, Asus, Netgear, etc. Also listed are software distributions for routers with more than 5 GB of storage and more than 1 GB of RAM.
https://en.wikipedia.org/wiki/List_of_router_firmware_projects
Network encryption cracking is the breaching of network encryption schemes (e.g., WEP, WPA, ...), usually through the use of specialized encryption-cracking software. It may be done through a range of attacks (active and passive), including injecting traffic, decrypting traffic, and dictionary-based attacks.

As mentioned above, several types of attacks are possible. More precisely, they are:

Injecting traffic means inserting forged encrypted messages into the network. It may be done if either the key is known (to generate new messages), or if the key is not known and only an encrypted message and the corresponding plaintext message have been gathered, through comparison of the two. Programs able to do the latter are Aireplay and WepWedgie.

Decryption often requires two tools: one for gathering packets and another for analysing the packets and determining the key. Gathering packets may be done through tools such as WireShark or Prismdump, and cracking may be done through tools such as WEPCrack, AirSnort, AirCrack, and WEPLab. When gathering packets, a great number of them is often required to perform cracking; depending on the attack used, 5-16 million frames may be required. The attack commands themselves, however, are surprisingly simple. The first command entered into WEPCrack generates a log file (ivfile.log) from a capture obtained by WireShark or Prismdump; a capture with at least 5 million frames is required. The second command asks WEPCrack to determine the key from the log file.[1]

Aircrack is another program that is even simpler to use, as no commands need to be entered; instead, the user is asked to type in some parameters and click some buttons. First, airodump is started to gather the packets; for this, a channel and a MAC filter are requested, though the user does not need to know them (0 and p may be entered, respectively). Then AirCrack is started, the file just created by airodump is opened, a 0 is entered, and the program determines the key.

AirSnort is a software program, released in August 2001, that passively collects traffic on an IEEE 802.11b network.[2] After enough packets have been collected, the program can then compute the key for the wireless network. As the software makes use of a brute-force attack, however, cracking the encryption can take between a few hours and several days, depending on the activity on the network.[3]
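The dictionary-based attacks mentioned above work against WPA/WPA2-PSK because the pairwise master key is derived deterministically from the passphrase and SSID using PBKDF2 (HMAC-SHA1, 4096 iterations, 256-bit output, per IEEE 802.11i), so candidate passphrases can be tested offline once a handshake has been captured. Below is a minimal sketch of the key-derivation step only; the SSID and word list are made-up examples, and a real attack would additionally need the captured four-way handshake to verify each candidate key.

import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    """WPA/WPA2-PSK pairwise master key per IEEE 802.11i:
    PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

# Offline dictionary attack sketch: derive a PMK per candidate and
# (in a real attack) check it against the captured 4-way handshake.
ssid = "ExampleNet"  # made-up network name
for word in ["password", "letmein", "correct horse"]:
    pmk = wpa_pmk(word, ssid)
    print(f"{word!r:18} -> {pmk.hex()[:16]}...")

The 4096 PBKDF2 iterations are a deliberate slowdown, but they only multiply the attacker's cost per guess; a weak passphrase still falls to a modest word list.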
https://en.wikipedia.org/wiki/Network_encryption_cracking
The Payment Card Industry Data Security Standard (PCI DSS) is an information security standard used to handle credit cards from major card brands. The standard is administered by the Payment Card Industry Security Standards Council, and its use is mandated by the card brands. It was created to better control cardholder data and reduce credit card fraud. Validation of compliance is performed annually or quarterly, with a method suited to the volume of transactions.[1]

The major card brands had five different security programs. The intentions of each were roughly similar: to create an additional level of protection for card issuers by ensuring that merchants meet minimum levels of security when they store, process, and transmit cardholder data. To address interoperability problems among the existing standards, the combined effort by the principal credit-card organizations resulted in the release of version 1.0 of PCI DSS in December 2004.[citation needed] PCI DSS has been implemented and followed worldwide.

The Payment Card Industry Security Standards Council (PCI SSC) was then formed, and these companies aligned their policies to create the PCI DSS.[2] MasterCard, American Express, Visa, JCB International, and Discover Financial Services established the PCI SSC in September 2006 as an administrative and governing entity which mandates the evolution and development of the PCI DSS.[3] Independent private organizations can participate in PCI development after they register. Each participating organization joins a SIG (Special Interest Group) and contributes to activities mandated by the group. The following versions of the PCI DSS have been made available:[4]

The PCI DSS has twelve requirements for compliance, organized into six related groups known as control objectives.[7] Each PCI DSS version has divided these six requirement groups differently, but the twelve requirements have not changed since the inception of the standard. Each requirement and sub-requirement is divided into three sections. In version 4.0.1 of the PCI DSS, the twelve requirements are:[8]

The PCI SSC (Payment Card Industry Security Standards Council) has released supplemental information to clarify requirements, which includes:

Companies subject to PCI DSS standards must be PCI-compliant; how they prove and report their compliance is based on their annual number of transactions and how the transactions are processed. An acquirer or payment brand may manually place an organization into a reporting level at its discretion.[11] Merchant levels are:

Each card issuer maintains a table of compliance levels and a table for service providers.[12][13]

Compliance validation involves the evaluation and confirmation that the security controls and procedures have been implemented according to the PCI DSS. Validation occurs through an annual assessment, either by an external entity or by self-assessment.[14]

A Report on Compliance (ROC) is conducted by a PCI Qualified Security Assessor (QSA) and is intended to provide independent validation of an entity's compliance with the PCI DSS standard. A completed ROC results in two documents: a ROC Reporting Template populated with a detailed explanation of the testing completed, and an Attestation of Compliance (AOC) documenting that a ROC has been completed and stating the overall conclusion of the ROC.

The PCI DSS Self-Assessment Questionnaire (SAQ) is a validation tool intended for small to medium-sized merchants and service providers to assess their own PCI DSS compliance status.
There are multiple types of SAQ, each with a different length depending on the entity type and payment model used. Each SAQ question has a yes-or-no answer, and any "no" response requires the entity to indicate its future implementation. As with ROCs, an Attestation of Compliance (AOC) based on the SAQ is also completed.

The PCI Security Standards Council maintains a program to certify companies and individuals to perform assessment activities. A Qualified Security Assessor (QSA) is an individual certified by the PCI Security Standards Council to validate another entity's PCI DSS compliance. QSAs must be employed and sponsored by a QSA Company, which also must be certified by the PCI Security Standards Council.[15][16]

An Internal Security Assessor (ISA) is an individual who has earned a certificate from the PCI Security Standards Council for their sponsoring organization and can conduct PCI self-assessments for that organization. The ISA program was designed to help Level 2 merchants meet Mastercard compliance validation requirements.[17] ISA certification enables an individual to conduct an appraisal of their organization and to propose security solutions and controls for PCI DSS compliance. ISAs are in charge of cooperation and participation with QSAs.[14]

Although the PCI DSS must be implemented by all entities which process, store, or transmit cardholder data, formal validation of PCI DSS compliance is not mandatory for all entities. Visa and Mastercard require merchants and service providers to be validated according to the PCI DSS; Visa also offers a Technology Innovation Program (TIP), an alternative program which allows qualified merchants to discontinue the annual PCI DSS validation assessment. Merchants are eligible if they take alternative precautions against fraud, such as the use of EMV or point-to-point encryption.

Issuing banks are not required to undergo PCI DSS validation, although they must secure sensitive data in a PCI DSS-compliant manner. Acquiring banks must comply with PCI DSS and have their compliance validated with an audit. In a security breach, any compromised entity which was not PCI DSS-compliant at the time of the breach may be subject to additional penalties (such as fines) from card brands or acquiring banks.

Compliance with PCI DSS is not required by federal law in the United States, but the laws of some states refer to PCI DSS directly or make equivalent provisions. Legal scholars Edward Morse and Vasant Raval have said that by enshrining PCI DSS compliance in legislation, card networks reallocated the cost of fraud from card issuers to merchants.[18] In 2007, Minnesota enacted a law prohibiting the retention of some types of payment-card data more than 48 hours after authorization of a transaction.[19][20] Nevada incorporated the standard into state law two years later, requiring compliance by merchants doing business in that state with the current PCI DSS and shielding compliant entities from liability. The Nevada law also allows merchants to avoid liability by meeting other approved security standards.[21][18] In 2010, Washington also incorporated the standard into state law. Unlike Nevada's law, entities are not required to be PCI DSS-compliant; however, compliant entities are shielded from liability in the event of a data breach.[22][18]

Visa and Mastercard impose fines for non-compliance.
Stephen and Theodora "Cissy" McComb, owners of Cisero's Ristorante and Nightclub in Park City, Utah, were fined for a breach for which two forensics firms could not find evidence:

The McCombs assert that the PCI system is less a system for securing customer card data than a system for raking in profits for the card companies via fines and penalties. Visa and MasterCard impose fines on merchants even when there is no fraud loss at all, simply because the fines are "profitable to them," the McCombs say.[23]

Michael Jones, CIO of Michaels, testified before a U.S. Congressional subcommittee about the PCI DSS:

[The PCI DSS requirements] are very expensive to implement, confusing to comply with, and ultimately subjective, both in their interpretation and in their enforcement. It is often stated that there are only twelve "Requirements" for PCI compliance. In fact there are over 220 sub-requirements; some of which can place an incredible burden on a retailer and many of which are subject to interpretation.[24]

The PCI DSS may compel businesses to pay more attention to IT security, even if minimum standards are not enough to eradicate security problems. Bruce Schneier spoke in favor of the standard:

Regulation—SOX, HIPAA, GLBA, the credit-card industry's PCI, the various disclosure laws, the European Data Protection Act, whatever—has been the best stick the industry has found to beat companies over the head with. And it works. Regulation forces companies to take security more seriously, and sells more products and services.[25]

PCI Council general manager Bob Russo responded to objections by the National Retail Federation:

[PCI is a structured] blend ... [of] specificity and high-level concepts [that allows] stakeholders the opportunity and flexibility to work with Qualified Security Assessors (QSAs) to determine appropriate security controls within their environment that meet the intent of the PCI standards.[26]

Visa chief enterprise risk officer Ellen Richey said in 2018, "No compromised entity has yet been found to be in compliance with PCI DSS at the time of a breach".[27] However, a 2008 breach of Heartland Payment Systems (validated as PCI DSS-compliant) resulted in the compromising of one hundred million card numbers. Around that time, Hannaford Brothers and TJX Companies (also validated as PCI DSS-compliant) were similarly breached as a result of the allegedly coordinated efforts of Albert Gonzalez and two unnamed Russian hackers.[28]

Assessments examine the compliance of merchants and service providers with the PCI DSS at a specific point in time, frequently using sampling to allow compliance to be demonstrated with representative systems and processes. It is the responsibility of the merchant and service provider to achieve, demonstrate, and maintain compliance throughout the annual validation-and-assessment cycle across all systems and processes. A breakdown in merchant and service-provider compliance with the written standard may have been responsible for the breaches; Hannaford Brothers received its PCI DSS compliance validation one day after it had been made aware of a two-month-long compromise of its internal systems.

Compliance validation is required only for Level 1 to 3 merchants and may be optional for Level 4, depending on the card brand and acquirer.
According to Visa's compliance validation details for merchants, Level 4 merchant compliance-validation requirements ("Merchants processing less than 20,000 Visa e-commerce transactions annually and all other merchants processing up to 1 million Visa transactions annually") are set by the acquirer. Over 80 percent of payment-card compromises between 2005 and 2007 affected Level 4 merchants, who handled 32 percent of all such transactions.[citation needed]
https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard
For computer network security, stealth wallpaper is a material designed to prevent an indoor Wi-Fi network from extending or "leaking" to the outside of a building, where malicious persons may attempt to eavesdrop on or attack the network. While it is simple to block all electronic signals from passing through a building by covering the interior with metal, stealth wallpaper accomplishes the more difficult task of blocking Wi-Fi signals while still allowing cellphone signals to pass through.

The first stealth wallpaper was designed by UK defense contractor BAE Systems.[1] In 2012, The Register reported that a commercial wallpaper had been developed by the Grenoble Institute of Technology and the Centre Technique du Papier, with sale planned for 2013. This wallpaper blocks three selected Wi-Fi frequencies. Nevertheless, it allows GSM and 4G signals to pass through, so cell phone use remains unaffected by the wallpaper.[2]
https://en.wikipedia.org/wiki/Stealth_wallpaper
TEMPEST is a codename (not an acronym) under the U.S. National Security Agency specification, and a NATO certification,[1][2] referring to spying on information systems through leaking emanations, including unintentional radio or electrical signals, sounds, and vibrations.[3][4] TEMPEST covers both methods to spy upon others and methods to shield equipment against such spying. The protection efforts are also known as emission security (EMSEC), which is a subset of communications security (COMSEC).[5] The reception methods fall under the umbrella of radio-frequency MASINT.

The NSA methods for spying on computer emissions are classified, but some of the protection standards have been released by either the NSA or the Department of Defense.[6] Protecting equipment from spying is done with distance, shielding, filtering, and masking.[7] The TEMPEST standards mandate elements such as equipment distance from walls, amount of shielding in buildings and equipment, distance separating wires carrying classified vs. unclassified materials,[6] filters on cables, and even distance and shielding between wires or equipment and building pipes. Noise can also protect information by masking the actual data.[7]

While much of TEMPEST is about leaking electromagnetic emanations, it also encompasses sounds and mechanical vibrations.[6] For example, it is possible to log a user's keystrokes using the motion sensor inside smartphones.[8] Compromising emissions are defined as unintentional intelligence-bearing signals which, if intercepted and analyzed (side-channel attack), may disclose the information transmitted, received, handled, or otherwise processed by any information-processing equipment.[9]

During World War II, the Bell System supplied the U.S. military with the 131-B2 mixer device, which encrypted teleprinter signals by XOR'ing them with key material from one-time tapes (the SIGTOT system) or, earlier, a rotor-based key generator called SIGCUM. It used electromechanical relays in its operation. Later, Bell informed the Signal Corps that they were able to detect electromagnetic spikes at a distance from the mixer and recover the plain text. Meeting skepticism over whether the phenomenon they had discovered in the laboratory could really be dangerous, they demonstrated their ability to recover plain text from a Signal Corps crypto center on Varick Street in Lower Manhattan. Now alarmed, the Signal Corps asked Bell to investigate further. Bell identified three problem areas: radiated signals, signals conducted on wires extending from the facility, and magnetic fields. As possible solutions, they suggested shielding, filtering, and masking.

Bell developed a modified mixer, the 131-A1, with shielding and filtering, but it proved difficult to maintain and too expensive to deploy. Instead, relevant commanders were warned of the problem and advised to control a 100-foot (30 m) diameter zone around their communications centers to prevent covert interception, and things were left at that. Then in 1951, the CIA rediscovered the problem with the 131-B2 mixer and found they could recover plain text off the line carrying the encrypted signal from a quarter mile away. Filters for signal and power lines were developed, and the recommended control-perimeter radius was extended to 200 feet (61 m), based more on what commanders could be expected to accomplish than on any technical criteria. A long process of evaluating systems and developing possible solutions followed.
Other compromising effects were discovered, such as fluctuations in the power line as rotors stepped. The question of exploiting the noise of electromechanical encryption systems had been raised in the late 1940s, but was now re-evaluated as a possible threat. Acoustical emanations could reveal plain text, but only if the pick-up device was close to the source. Nevertheless, even mediocre microphones would do. Soundproofing the room made the problem worse by removing reflections and providing a cleaner signal to the recorder.

In 1956, the Naval Research Laboratory developed a better mixer that operated at much lower voltages and currents and therefore radiated far less. It was incorporated in newer NSA encryption systems. However, many users needed the higher signal levels to drive teleprinters at greater distances or where multiple teleprinters were connected, so the newer encryption devices included the option to switch the signal back up to the higher strength. The NSA began developing techniques and specifications for isolating sensitive communications pathways through filtering, shielding, grounding, and physical separation: separating the lines that carried sensitive plain text from those intended to carry only non-sensitive data, the latter often extending outside of the secure environment. This separation effort became known as the Red/Black Concept. A 1958 joint policy called NAG-1 set radiation standards for equipment and installations based on a 50 ft (15 m) limit of control. It also specified the classification levels of various aspects of the TEMPEST problem. The policy was adopted by Canada and the UK the next year. Six organizations (the Navy, Army, Air Force, NSA, CIA, and the State Department) were to provide the bulk of the effort for its implementation.

Difficulties quickly emerged. Computerization was becoming important to processing intelligence data, and computers and their peripherals had to be evaluated; many of them evidenced vulnerabilities. The Friden Flexowriter, a popular I/O typewriter at the time, proved to be among the strongest emitters, readable at distances of up to 3,200 ft (0.98 km) in field tests. The U.S. Communications Security Board (USCSB) produced a Flexowriter policy that banned its use overseas for classified information and limited its use within the U.S. to the Confidential level, and then only within a 400 ft (120 m) security zone, but users found the policy onerous and impractical. Later, the NSA found similar problems with the introduction of cathode-ray-tube (CRT) displays, which were also powerful radiators.

There was a multiyear process of moving from policy recommendations to more strictly enforced TEMPEST rules. The resulting Directive 5200.19, coordinated with 22 separate agencies, was signed by Secretary of Defense Robert McNamara in December 1964, but still took months to fully implement. The NSA's formal implementation took effect in June 1966.

Meanwhile, the problem of acoustic emanations became more critical with the discovery of some 900 microphones in U.S. installations overseas, most behind the Iron Curtain. The response was to build room-within-a-room enclosures, some transparent, nicknamed "fish bowls". Other units[clarification needed] were fully shielded[clarification needed] to contain electronic emanations, but were unpopular with the personnel who were supposed to work inside; they called the enclosures "meat lockers" and sometimes just left their doors open.
Nonetheless, they were installed in critical locations, such as the embassy in Moscow, where two were installed: one for State Department use and one for military attachés. A unit installed at the NSA for its key-generation equipment cost $134,000.

TEMPEST standards continued to evolve in the 1970s and later, with newer testing methods and more nuanced guidelines that took account of the risks in specific locations and situations.[10]: Vol I, Ch. 10  During the 1980s, security needs were often met with resistance. According to the NSA's David G. Boak, "Some of what we still hear today in our own circles, when rigorous technical standards are whittled down in the interest of money and time, are frighteningly reminiscent of the arrogant Third Reich with their Enigma cryptomachine." (ibid., p. 19)

Many specifics of the TEMPEST standards are classified, but some elements are public. Current United States and NATO TEMPEST standards define three levels of protection requirements:[11] Additional standards include:

The NSA and Department of Defense have declassified some TEMPEST elements after Freedom of Information Act requests, but the documents black out many key values and descriptions. The declassified version of the TEMPEST test standard is heavily redacted, with emanation limits and test procedures blacked out.[citation needed][12] A redacted version of the introductory TEMPEST handbook NACSIM 5000 was publicly released in December 2000. Additionally, the current NATO standard SDIP-27 (known before 2006 as AMSG 720B, AMSG 788A, and AMSG 784) is still classified.

Despite this, some declassified documents give information on the shielding required by TEMPEST standards. For example, Military Handbook 1195 includes a chart showing electromagnetic shielding requirements at different frequencies. A declassified NSA specification for shielded enclosures offers similar shielding values, requiring "a minimum of 100 dB insertion loss from 1 kHz to 10 GHz."[13] Since many of the current requirements are still classified, there are no publicly available correlations between this 100 dB shielding requirement and the newer zone-based shielding standards. In addition, many separation-distance requirements and other elements are provided by the declassified NSA red/black installation guidance, NSTISSAM TEMPEST/2-95.[14]

The information-security agencies of several NATO countries publish lists of accredited testing labs and of equipment that has passed these tests. The United States Army also has a TEMPEST testing facility, as part of the U.S. Army Electronic Proving Ground, at Fort Huachuca, Arizona. Similar lists and facilities exist in other NATO countries.

TEMPEST certification must apply to entire systems, not just to individual components, since connecting a single unshielded component (such as a cable or device) to an otherwise secure system could dramatically alter the system's RF characteristics. TEMPEST standards require "RED/BLACK separation", i.e., maintaining distance or installing shielding between circuits and equipment used to handle plaintext classified or sensitive information that is not encrypted (RED) and secured circuits and equipment (BLACK), the latter including those carrying encrypted signals. Manufacture of TEMPEST-approved equipment must be done under careful quality control to ensure that additional units are built exactly the same as the units that were tested.
Changing even a single wire can invalidate the tests.[citation needed]

One aspect of TEMPEST testing that distinguishes it from limits on spurious emissions (e.g., FCC Part 15) is a requirement of absolutely minimal correlation between radiated energy or detectable emissions and any plaintext data being processed.

In 1985, Wim van Eck published the first unclassified technical analysis of the security risks of emanations from computer monitors. This paper caused some consternation in the security community, which had previously believed that such monitoring was a highly sophisticated attack available only to governments; Van Eck successfully eavesdropped on a real system, at a range of hundreds of metres, using just $15 worth of equipment plus a television set. As a consequence of this research, such emanations are sometimes called "Van Eck radiation", and the eavesdropping technique Van Eck phreaking, although government researchers were already aware of the danger: Bell Labs had noted this vulnerability in secure teleprinter communications during World War II and was able to recover 75% of the plaintext being processed in a secure facility from a distance of 80 feet (24 metres).[19] Additionally, the NSA published Tempest Fundamentals, NSA-82-89, NACSIM 5000, National Security Agency (classified), on February 1, 1982. The Van Eck technique was also successfully demonstrated to non-TEMPEST personnel in Korea during the Korean War in the 1950s.[20]

Markus Kuhn has discovered several low-cost techniques for reducing the chances that emanations from computer displays can be monitored remotely.[21] With CRT displays and analog video cables, filtering out high-frequency components from fonts before rendering them on a computer screen will attenuate the energy at which text characters are broadcast.[22][23] With modern flat-panel displays, the high-speed digital serial interface (DVI) cables from the graphics controller are a main source of compromising emanations. Adding random noise to the least significant bits of pixel values may render the emanations from flat-panel displays unintelligible to eavesdroppers, but it is not a secure method. Since DVI uses a bit-code scheme that tries to transport a balanced signal of 0 bits and 1 bits, there may not be much difference between two pixel colors that differ greatly in color or intensity, and the emanations can differ drastically even if only the last bit of a pixel's color is changed. The signal received by the eavesdropper also depends on the frequency at which the emanations are detected: the signal can be received on many frequencies at once, and each frequency's signal differs in contrast and brightness related to a certain color on the screen. Usually, the technique of smothering the RED signal with noise is not effective unless the power of the noise is sufficient to drive the eavesdropper's receiver into saturation, overwhelming the receiver input.

LED indicators on computer equipment can be a source of compromising optical emanations.[24] One such technique involves the monitoring of the lights on a dial-up modem. Almost all modems flash an LED to show activity, and it is common for the flashes to be taken directly from the data line. As such, a fast optical system can easily see the changes in the flickers from the data being transmitted down the wire.
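The font-filtering countermeasure Kuhn described amounts to low-pass filtering glyph bitmaps, so that the sharp black-to-white transitions that radiate most strongly are softened. Below is a minimal sketch of the idea, using a Gaussian blur as the low-pass filter and a 2-D FFT to compare high-frequency content; the 8x8 glyph is a made-up bitmap, the sigma and the "low-frequency" cutoff are arbitrary choices, and real soft-TEMPEST fonts are designed with far more care than a plain blur.

import numpy as np
from scipy.ndimage import gaussian_filter

# Toy 8x8 glyph bitmap (a crude "T"): 1.0 = ink, 0.0 = background.
glyph = np.zeros((8, 8))
glyph[1, 1:7] = 1.0    # top bar
glyph[2:7, 3:5] = 1.0  # stem

# Low-pass filter the glyph: the blur removes the high-frequency edge
# components that dominate the radiated signal, while leaving the
# character legible on screen.
filtered = gaussian_filter(glyph, sigma=0.7)

def highfreq_energy(img):
    """Sum of spectral energy outside a crude low-frequency centre block."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    mask = np.ones(img.shape, dtype=bool)
    mask[2:6, 2:6] = False  # keep-out: the low-frequency centre
    return float(np.sum(np.abs(spec[mask]) ** 2))

print("sharp glyph   :", round(highfreq_energy(glyph), 1))
print("filtered glyph:", round(highfreq_energy(filtered), 1))

The filtered glyph carries markedly less energy away from the spectrum's centre, which is the component an eavesdropping receiver tuned to the video signal's harmonics would otherwise exploit.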
Recent research[25] has shown that it is possible to detect the radiation corresponding to a keypress event not only from wireless (radio) keyboards, but also from traditional wired keyboards, and even from laptop keyboards. (The PS/2 keyboard, for example, contains a microprocessor which radiates some amount of radio-frequency energy when responding to keypresses.)

From the 1970s onward, Soviet bugging of US Embassy IBM Selectric typewriters allowed the keypress-derived mechanical motion of bails, with attached magnets, to be detected by implanted magnetometers and converted via hidden electronics into a digital radio-frequency signal. Each eight-character transmission provided Soviet access to sensitive documents, as they were being typed, at US facilities in Moscow and Leningrad.[26]

In 2014, researchers introduced "AirHopper", a bifurcated attack pattern showing the feasibility of data exfiltration from an isolated computer to a nearby mobile phone using FM frequency signals.[27]

In 2015, "BitWhisper", a covert signaling channel between air-gapped computers using thermal manipulations, was introduced. "BitWhisper" supports bidirectional communication and requires no additional dedicated peripheral hardware.[28] Later in 2015, researchers introduced GSMem, a method for exfiltrating data from air-gapped computers over cellular frequencies. The transmission, generated by a standard internal bus, renders the computer into a small cellular transmitter antenna.[29] In February 2018, research was published describing how low-frequency magnetic fields can be used to exfiltrate sensitive data from Faraday-caged, air-gapped computers, using malware code-named "ODINI" that controls the low-frequency magnetic fields emitted from infected computers by regulating the load of CPU cores.[30]

In 2018, a class of side-channel attack was introduced at ACM and Black Hat by Eurecom researchers: "Screaming Channels".[31] This kind of attack targets mixed-signal chips (containing an analog and a digital circuit on the same silicon die) with a radio transmitter. A consequence of this architecture, often found in connected objects, is that the digital part of the chip leaks some metadata about its computations into the analog part, and this leak ends up encoded in the noise of the radio transmission. Using signal-processing techniques, researchers were able to extract the cryptographic keys used during communication and decrypt the content. The authors believe this attack class has already been known for many years to governmental intelligence agencies.
https://en.wikipedia.org/wiki/Tempest_(codename)