Today’s world is so heavily driven by Siri, Google Now and Cortana that it is hard to imagine such assistants existing in the 80s, let alone the 50s. But is the concept really that new and nascent?
Let’s go back to the Dartmouth proposal of 1955, where the term ‘Artificial Intelligence’ was coined for the first time. On August 31, 1955, J. McCarthy, M. L. Minsky, N. Rochester, and C. E. Shannon stated: "We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Gauging from the definition quoted above, ‘any program or algorithm is an AI system simply because it does something that would normally be considered an intelligent act in humans’.
Artificial intelligence (AI) is a field of computer science focused on the development of computers capable of doing things that are normally done by people, specifically things generally considered to require intelligent behavior.
What could have been the reasons that spurred its re-emergence?
Ironically, the foundational concepts of AI have not changed substantially, and today’s AI engines are in many ways similar to past ones. The techniques of yesteryear fell short not because of inadequate design, but because the necessary data and computing environment did not yet exist. In short, the biggest difference between AI then and now is the exponential growth of raw data, the focus on specific applications, and the increase in computational and simulation resources, all of which contribute to the success of any AI system.
As the name would suggest, an AI system would typically be expected to replicate human intelligence and efficiency. However, depending on the target function and application, AI can be classified by its extent: strong AI, weak AI and practical AI. Strong AI would be a system that simulates actual human reasoning, able to think and explain human tendencies; such systems are yet to be built. AI systems that behave like humans in certain ways and execute specific intelligent functions can be termed weak AI. Practical AI, a balance between strong and weak AI, comprises systems guided by human intelligence but not enslaved to it. In brief, for a system to be AI it does not need to be as intelligent as humans; it just needs to be intelligent.
Machine learning, cognitive computing, predictive analytics, deep learning and recommendation systems are all different facets of AI.
Amazon and Netflix recommendations, predictive text on mobile phones, Apple’s Siri, Microsoft’s Cortana, Google Now and smart thermostats are excellent examples of AI systems.
For example, IBM Watson uses the idea that facts can be expressed in various forms and that each match against a possible form counts as evidence for a response. The technology first analyzes the language of the input to pick out the elements and relationships that indicate what you could be looking for, and then uses arrays of patterns built from the words in the original request to find matches in a colossal collection of text. Each match provides a single piece of evidence, and the pieces of evidence are summed to give Watson a confidence score for each candidate answer. Watson is a good example of excellently executed classic AI.
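The evidence-summing idea described above can be loosely sketched in a few lines of Python. This is a toy illustration only: the question terms, passages and one-point scoring rule are all hypothetical, and Watson’s actual pipeline is far more sophisticated.

```python
from collections import Counter

def score_candidates(question_terms, evidence_passages):
    """Toy evidence scorer: each passage mentioning a candidate
    alongside a question term counts as one sample of proof."""
    scores = Counter()
    for candidate, passage in evidence_passages:
        if any(term in passage.lower() for term in question_terms):
            scores[candidate] += 1
    return scores

# Hypothetical question: "Which planet is known as the Red Planet?"
terms = ["red", "planet"]
passages = [
    ("Mars", "Mars is often called the red planet"),
    ("Mars", "the red planet has two small moons"),
    ("Venus", "Venus is the hottest planet"),
]
scores = score_candidates(terms, passages)
best = scores.most_common(1)[0][0]
print(best)  # Mars
```

The candidate with the highest summed evidence wins, mirroring the way each sample of proof contributes to Watson’s confidence in an answer.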
Another excellent use case for AI is in data analysis and interpretation. As new data comes in, many of us spend our time reviewing it and making decisions based on the insights we gain from it. While we may still want to do the decision-making ourselves, few of us want to spend our time and resources digging through raw incoming data. What if we could use AI to do just that, and apply our actual intelligence only at the end? vPhrase’s augmented analytics tool Phrazor uses natural language generation technology to make sense of data by turning it into effective narratives. With such a technology that allows for automated scenario assessment, businesses need not sift through inexhaustible data to gain insights or make decisions. In the future, we believe, our technology will be able to analyze even more extensive data sets to not just support better decisions but also derive conclusions, as humans would do.
As you would observe, the allure is in the alliance between AI and human expertise. The machine does what it does best: reviewing enormous data sets and finding patterns that differentiate various activities and situations. Meanwhile, we humans do what we do best: examining the situation, fitting it into a larger picture and deriving suitable solutions from it.
To make your CPS experience more effective and satisfying, get familiar with the following concepts and terms first:
The SSL/TLS certificates that you obtain using CPS let you set up secure web properties and authenticate the secure connection that the browser makes during content delivery. CPS generates and secures the private key of each certificate.
A certificate authority (CA) is a trusted entity that signs certificates and can vouch for the identity of a website. If a certificate is like a license or a passport, then a CA is like the Department of Motor Vehicles or the government, in that it is the trusted agency that issues the identification and verifies your identity before issuing identification.
CAs supported by CPS include DigiCert and Let's Encrypt. If you want to use a different CA, you must use a third-party certificate and CA.
A digital certificate is an electronic document that includes a company's identification information (such as the company's name and address), a public key, and the digital signature of a CA created with that certification authority's private key. Public keys may be disseminated widely, but they are paired with private keys known only to the owner. In a public-key encryption system, anyone can encrypt content using the receiver's public key, but the content can be decrypted only with the receiver's private key.
You can think of a certificate as you would a license or passport that identifies your website. Having a certificate provides a way for users to authenticate with a website. Authentication is a method for establishing the identity of a browser connecting to a website and establishing the identity of a website to a browser. A certificate contains the common name (CN) you want to use for the certificate. This is often the fully qualified domain name for which you plan to use your certificate.
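To make the public/private key pairing concrete, here is a toy RSA sketch using tiny textbook primes. This is purely illustrative: real certificate keys are 2048 bits or longer, and nothing like this should ever be used for actual security.

```python
# Toy RSA key pair from tiny textbook primes (illustration only)
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse of e)

message = 65                       # a number standing in for content
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
plaintext = pow(ciphertext, d, n)  # only the private key decrypts
print(plaintext == message)  # True
```

The modular-inverse form of pow() requires Python 3.8 or later.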
To learn about the types of certificates supported by CPS, see View certificate types.
When a CA gets a request for a certificate and verifies your identity, it validates the certificate request. There are three types of validation:
Domain Validation (DV). This is the lowest level of validation. The CA validates that you have control of the domain. An Akamai-managed DV certificate expires in 90 days. Customer-supplied DV certificates expire whenever the issuing CA determines. CPS supports DV certificates issued by Let's Encrypt, an automated and open CA run for public benefit.
Organization Validation (OV). This is a higher level of validation. The CA validates that the company is legitimate, that it is registered, and that the business contact is a full-time employee of the company. The CA uses your organization information to verify that you legally own, or have the legal right to use, the domains listed in your certificate. OV certificates obtained through CPS expire in one year.
Extended Validation (EV). This is the highest level of validation in which you must have signed letters and notaries sent to the CA before signing. EV certificates obtained through CPS expire in two years. EV certificates enable the green bar in web browsers.
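The expiry rules above can be captured in a small lookup. The day counts are a simplification for illustration (for example, "one year" as 365 days); the authoritative expiry timestamp always comes from the issued certificate itself.

```python
from datetime import date, timedelta

# Validity periods for CPS-obtained certificates, per the text above
VALIDITY = {
    "DV": timedelta(days=90),       # Akamai-managed DV: 90 days
    "OV": timedelta(days=365),      # OV via CPS: one year
    "EV": timedelta(days=2 * 365),  # EV via CPS: two years
}

def expiry_date(issued, validation_type):
    return issued + VALIDITY[validation_type]

print(expiry_date(date(2022, 1, 1), "DV"))  # 2022-04-01
```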
An algorithm detects smoke plumes by comparing images from 25 tower-mounted cameras against the more than 10 million it’s been trained on.
Sonoma County, Calif., is using artificial intelligence to help detect blazes before they become wildfires.
Twenty-five tower-mounted cameras are transmitting images via Amazon Web Services to Alchera, a South Korean company that specializes in visual AI algorithm development and deployment. Alchera’s solution applies an algorithm to compare those images against the more than 10 million it’s been trained on to detect smoke plumes.
That’s a big change from the previous approach, which involved emergency dispatchers keeping an eye on the video feeds as they came in -- a nearly impossible task given their other responsibilities, said Sam Wallis, Sonoma’s community alert and warning manager.
“Most of the time, the dispatchers aren’t looking at the screen,” Wallis said. “Of course, they have duties. They have to be answering the phone and dispatching.”
Now, the video from the 25 connected cameras is sent via the cloud to a location where a human in the loop -- an Alchera employee -- can verify that the algorithm has detected smoke.
“We put a box immediately around the smoke,” said Bow Rodgers, manager of Alchera’s U.S. business operations. “It quickly determines that it is smoke, not fog, not steam … and then quickly -- within seconds -- sends the information to a dispatch center that that latitude and longitude is a fire and needs to be addressed.”
Sonoma’s fire and emergency medical technician dispatch center, known as REDCOM, is the county’s primary recipient of these alerts. An alarm sounds in the facility, triggering dispatchers to manually look at the affected area with at least two cameras so they can triangulate the location and dispatch crews, Rodgers said.
Although dispatchers are the only ones who can act on the alerts, the messages go to about 32 email addresses and phone numbers for individuals at fire departments, the county’s Department of Emergency Management and the California Department of Forestry and Fire Protection (Cal Fire), Wallis said.
The county announced March 17 that the Federal Emergency Management Agency’s Hazard Mitigation Grant Program had awarded it $2.7 million for early detection improvements, including augmenting an existing system with AI monitoring. That existing system, called ALERTWildfire, was put in place by a consortium of public and private entities after the 2017 Tubbs Fire, which was one of the state’s most destructive. ALERTWildfire has 746 tower-mounted cameras throughout California.
Sonoma put its cameras on existing infrastructure such as radio towers built for emergency and regular communication. Many are located on high ground, which is crucial for spotting smoke; they have power and backup power and contain communications infrastructure needed to transmit the data, Wallis said.
Sonoma went live with the AI system in March, but Wallis quickly put it on hold. March and April are burn season for the county, which is home to many wineries, and as vintners and other agricultural businesses burned excess vegetation ahead of Sonoma’s summer fire season, the cameras sent alerts about those planned and permitted burns.
“It really quickly overwhelmed us because what we really didn’t take into account is how effective it would be,” he said. “On the first day, we had 18 fire alerts.”
The county resumed the AI detection May 1, when burns were no longer authorized. That month, dispatchers sent 14 responses to fires that the AI detected. Sonoma also began comparing how long it took the AI to detect a fire vs. a 911 call about it to come in.
“In the vast majority of cases, 911 beats the AI,” Wallis said. “That isn’t necessarily a bad thing, though. A lot of the people that start the fire are the ones that call 911.”
Another shortcoming of the camera and AI system is that it is only good for line-of-sight detection, he said. If a fire starts in a valley, the smoke has to get above the ridgeline before the camera can see it. “Very often, the local person is able to respond and call 911 before the smoke gets above it,” Wallis said. “But what we are most concerned with are fires that are started in rural areas where somebody may not have eyes on the fire. That’s where we think we’re going to get the most use out of this.”
Under the contract with Alchera, the county has 24 months to decide whether it will continue and expand the program. Wallis said he plans to evaluate its performance this and next fire season before making any long-term decisions.
California’s 2020 wildfire season was its worst on record, but on July 12, Cal Fire tweeted that almost 142,500 acres had already burned so far this year. That’s 103,000 more than in the same period (Jan. 1 to July 11) last year. | <urn:uuid:cef1ea88-da1a-4727-b66f-956ef90fba91> | CC-MAIN-2022-40 | https://gcn.com/data-analytics/2021/07/ai-sifts-through-video-data-to-spot-fires/316438/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00661.warc.gz | en | 0.960585 | 1,090 | 2.78125 | 3 |
Thanks to cyberattacks making regular headlines in the news, it’s no secret that massive data breaches are a significant threat to organizations. However, a report from F-Secure highlights the rarely-discussed impact these attacks can have on people and families using online services.
According to the report, nearly 3 out of every 10 respondents to the survey experienced some type of cybercrime (such as malware/virus infections, unauthorized access to email or social media accounts, credit card fraud, cyber bullying, etc.) in the 12 months prior to answering.
However, cybercrime was roughly three times more common among respondents using one or more online services that had been breached by attackers. 60% of respondents belonging to this group – called “The Walking Breached” in the report – experienced cybercrime in the 12 months leading up to the survey, compared to just 22% of other respondents.
Cybercrime was even more prevalent among respondents with kids, with 7 out of 10 saying they experienced one or more crimes.
“Personal information stolen from organizations can easily end up being used against people and families through different types of identity theft, fraud, or other types of harm. And with more and more information being stored digitally, what criminals can do with people’s information keeps getting worse. So these attacks on companies can really end up hurting people and not just a business’ bottom line,” explained Laura Kankaala, a security consultant with F-Secure.
How attacks on corporations damage people
Stress and concern were the most common effects of cybercrime, followed closely by loss of time -- both of which affected about half of all cybercrime victims surveyed. Certain losses due to cybercrime were more common among The Walking Breached than among other respondents: loss of money, loss of personal information, and loss of control over personal information or accounts.
Notably, half of The Walking Breached that experienced cybercrime prior to filling out the survey reused passwords, and 69% reused passwords with slight variations.
Entire industries have developed to help cybercriminals monetize people’s personal data. Account passwords and login credentials, for example, are often bought and sold. These industries fuel the risks of fraud and other crimes for people whose information has been stolen.
Attackers threatening to leak information
And in a new trend, attackers who use encryption to hold organizations’ data for ransom are now stealing that information and threatening to leak it, demonstrating the lengths criminals will go to profit from people’s data.
In one particularly severe incident involving the breach of a company operating psychotherapy clinics, an extortionist (or extortionists) threatened to release the personal information and therapy records of former patients unless those individuals paid a ransom.
According to Kankaala, people rarely think about how valuable the information stored in online accounts really is until that information is gone or exposed.
“Recovering hacked or lost social media accounts can sometimes be really difficult and we tend to recognize the value of something only once it’s gone. These accounts are not ‘just social media’ or ‘just email’ – they hold records of our past, pictures we may have not stored anywhere else or conversations that are either private or something we’ll miss once they’ve been deleted,” said Kankaala. | <urn:uuid:b5c20790-ecd2-49f8-b5b0-3e138d685dc3> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2021/02/11/attacks-corporations-damage-people/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00661.warc.gz | en | 0.93951 | 684 | 2.921875 | 3 |
Data is the most elusive kind of asset in the modern economic world. Unlike physical goods, data does not disappear once it is stolen. Sometimes, the truth – or the full extent of the truth – remains unknown for weeks, months or even years. For example, Yahoo did not learn the true extent of its 2014 data breach for almost two years [14] – and they found out about it while investigating another security incident.
Data breaches can remain undetected for a long time. Oftentimes, even if the vulnerability that made a data breach possible is discovered (for instance, through penetration testing), it is impossible to tell if it has already been used by a malicious actor or not.
As the value of data has increased, so has the value of cyber intelligence activities. Stolen data often shows up for sale on dark web forums and in hacker communities shortly after it is stolen. The use of experienced covert agents, who can maintain a presence in such communities, is a valuable tactical option that can ensure timely mitigation of any breach.
It is important to point out that, once customer data shows up for sale or exchange, we are already talking about mitigation. Some proactive measures can be taken to make stolen data more difficult to use (for example, data can – and should – be encrypted), but once data is stolen, whether or not it can be used is merely a matter of resources.
However, the conduct of a company following a data breach can make a great difference. Kaspersky’s study [15] reveals that almost half of the surveyed companies affected by a security breach had to disclose it to affected customers, and more than one third had to disclose it to customers in general.
Furthermore, timely disclosure can give a company a significant advantage in its post-disclosure PR efforts; Yahoo, for instance, was heavily criticized for failing to adequately disclose its breach. In contrast, in a more recent incident, CloudFlare’s conduct following their data leak incident in February 2017 was viewed a lot more favorably. The incident was disclosed to all interested parties – in fact, CloudFlare specifically targeted users of some more popular partner services, such as Uber, and recommended that they change their passwords – and the issue that caused the data leak was quickly and transparently patched.
In CloudFlare’s case, the leak was detected by a benevolent party (Google discovered it as part of its Project Zero [16] initiative). Still, not everyone was as fortunate: in 2012, LinkedIn [17] found itself not in the difficult, but manageable position of disclosing a data leak, but in the far less enviable position of confirming one, after the data was posted by the hacker who had stolen it on a hacker community’s forum. Furthermore, because adequate protection techniques were not employed for user passwords [18], the login credentials of users who employed common passwords (such as dictionary words) were very quickly decrypted. One can certainly argue that this was partly the users’ fault, but once data is on a company’s servers, protecting it becomes its responsibility, not the users’. Furthermore, a company whose security has just been breached is generally not in a position where it can publicly criticize its users’ security without causing significant damage to its reputation.
The existence of disclosure channels and best practices in this regard suggests that a data breach, while a very serious and troubling event, is not necessarily the end of the road for a company. If accurately and timely communicated, the effects of a security breach can be mitigated. As we have already seen, some loss of reputation (at least temporary) is inevitable; however, if proper security measures are taken, it is at least possible to guarantee that an identical incident will not happen again. | <urn:uuid:61864cac-1bba-4965-a574-6f4ced562a38> | CC-MAIN-2022-40 | https://bluedog-security.com/2019/05/have-i-been-hacked/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00661.warc.gz | en | 0.970885 | 759 | 2.703125 | 3 |
The radio access network (RAN), also known as a radio network, is one of the most fundamental parts of the overall mobile cellular network. It creates a wireless connection between the SIM-enabled mobile cellular devices and the mobile network.
A Radio Access Network (RAN) is an essential part of the mobile network which uses radio (RF) waves to wirelessly connect cellular devices (e.g. phones) to the cell towers. The cell towers are linked to the mobile core network that connects them to external networks like PSTN, ISDN and the Internet.
In mobile communications, you often encounter terminologies like RNC, RAN, NodeB, eNodeB, gNodeB, radio base stations, etc. These terminologies belong to the radio access network (RAN), which we will cover in this post.
What does a Radio Access Network (RAN) do?
The radio access network (RAN) connects all cellular devices, including mobile phones, mobile broadband routers and other SIM-enabled devices, including tablets, smartwatches and IoT sensors and actuators, to the mobile network.
The most visible part of a radio network is the cell tower, technically known as a base station. The base station is represented by different terminologies in 2G, 3G, 4G and 5G networks. A mobile phone communicates with the base station via two radio links called uplink and downlink. The downlink is the transmission from the base station to the mobile phone, whereas the uplink is the transmission from the mobile phone to the base station.
In CDMA networks including IS-95 and CDMA2000, the downlink is called a forward link and the uplink is called the reverse link. Have a look at the diagram below that shows the basic concept of a radio access network.
While the basic concept of a radio access network is relatively straightforward, it is the intricacies of the underlying radio access technologies, like FDMA, TDMA, narrowband CDMA, wideband CDMA, OFDMA and SC-FDMA, that make the overall cellular experience so seamless.
At a very high level, a mobile network consists of a radio network, a core network, user devices and external networks. The radio base station is the most visible part of a radio access network for the general public. Base stations are generally called cell towers; they are tall masts with cellular antennas and other communication links mounted on them.
At its simplest, the mobile core network is like a telephone exchange where all the switching of the calls and data sessions takes place. Finally, the mobile core network is connected to external networks like PSTN and ISDN for voice calls and the public internet and other data networks for data services.
The radio network is one of the most expensive investments for a mobile operator because base stations’ presence ensures cellular connectivity. If your mobile operator does not deploy the radio network properly, e.g. if they don’t have enough base stations or frequency spectrum, that can lead to service degradation due to lack of coverage and capacity.
The diagram below provides a simplified view of the network nodes that form part of the overall network architecture. Please note that the list below is simplified and does not include all the network nodes.
GRAN and GERAN – Radio Access Network for 2G GSM
The radio access network within the second-generation GSM (Global System for Mobile Communications) networks consists of a radio base station known as the Base Transceiver Station or BTS. The BTS is managed by another entity called the Base Station Controller or BSC.
The BTS, or Base Transceiver Station, is responsible for managing all the radio communication between a mobile handset and the mobile network. This crucial network entity creates the “coverage” or “radio signal” in a 2G GSM network. The BTS is controlled by another network entity called the Base Station Controller (BSC).
A Base Station Controller usually controls several Base Transceiver Stations. The BSC has the intelligence to manage mobile radio resources, and it controls tasks such as handover and frequency allocation. The BSC is situated between the Mobile Switching Centre (MSC) and the BTS.
If a BTS is facilitating a mobile call and the call quality starts to deteriorate due to decreasing signal strength, the BSC may intelligently assign the call to another BTS within its control with better signal strength. If the BSC cannot find a BTS with sufficient signal strength, then the MSC may assign the call to another BSC which in turn hands over the call to a BTS within its control to continue the call while ensuring appropriate service quality levels.
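The handover selection described above can be sketched as a simple rule. All node names, signal levels and the threshold below are hypothetical; a real BSC works from continuous measurement reports and applies hysteresis to avoid bouncing calls between cells.

```python
def pick_target_bts(signal_by_bts, current_bts, threshold_dbm=-100):
    """Return the BTS with the strongest signal above the threshold,
    or None if the MSC must hand the call to another BSC."""
    candidates = {
        bts: dbm
        for bts, dbm in signal_by_bts.items()
        if bts != current_bts and dbm > threshold_dbm
    }
    if not candidates:
        return None  # no suitable BTS under this BSC
    return max(candidates, key=candidates.get)

# Hypothetical signal measurements (dBm) for one ongoing call
measurements = {"BTS-1": -105, "BTS-2": -85, "BTS-3": -95}
print(pick_target_bts(measurements, current_bts="BTS-1"))  # BTS-2
```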
Both BTS and BSC are part of the Base Station Subsystem (BSS) in the GSM network, which then connects them to the mobile core network. It may be interesting to note that BSS is also short for another important entity in mobile communications called Business Support Systems.
When GSM networks were initially introduced, they only had circuit-switched capability, which allowed them to carry voice calls and text messages (SMS) only. However, with the introduction of packet-switched capability through GPRS and EDGE, they were able to accommodate efficient mobile data services.
The original GSM radio network architecture that only facilitated circuit-switched voice calls and SMS is called GSM Radio Access Network (GRAN), whereas the evolved radio network architecture after the introduction of packet-switched EDGE is called GSM EDGE Radio Access Network (GERAN).
UTRAN – Radio Access Network for 3G UMTS
The network generation that follows GSM and EDGE is 3G UMTS, where UMTS stands for Universal Mobile Telecommunication System. In 3G UMTS networks, the base station is called Node B, which communicates with the mobile handsets just like BTS does in a GSM network. Node B is controlled by another network entity called the Radio Network Controller or RNC.
The Radio Access Network in 3G UMTS is called UMTS Terrestrial Radio Access Network (UTRAN), which consists of Node B and RNC. As part of UTRAN, RNC connects multiple Node Bs to the mobile core network. The RNC has the responsibility to control a number of Node Bs.
Radio resource management and mobility management are among the key tasks performed by an RNC. The RNC is situated between NodeB and the mobile core network. The RNC is connected to the circuit-switched MSC for voice calls and text messaging (SMS) and to the packet-switched SGSN (Serving GPRS Support Node) for mobile data (internet) services. Have a look at the green boxes in the diagram below to visualise this concept.
E-UTRAN – Radio Access Network for 4G LTE
The fourth-generation LTE (Long Term Evolution) networks employ the OFDMA (Orthogonal Frequency Division Multiple Access) technology for radio access. The responsibilities for all radio communication in LTE sit with Evolved Node B or eNodeB, which is the base station in LTE networks. Unlike GSM and UMTS networks, where a combination of a base station and a controller is used for the radio access network functions, LTE uses eNodeB for all radio communication.
The LTE radio access network is called Evolved UMTS Terrestrial Radio Access Network or E-UTRAN. As part of E-UTRAN, eNodeB connects directly to the 4G core network components, MME (Mobility Management Entity) and Serving Gateway (S-GW).
The core network in 4G LTE is called Evolved Packet Core (EPC), to which MME and S-GW belong. The EPC is fully packet-switched, and it works alongside the IMS (IP Multimedia Subsystem) to enable IP based voice calls and SMS.
NG-RAN – Radio Access Network for 5G New Radio (NR)
The fifth generation of mobile networks is enabled by the New Radio technology, abbreviated as NR. In 5G NR, the radio access network is called Next Generation Radio Access Network (NG-RAN), and it consists of two kinds of base stations: gNodeB and ng-eNodeB (Next Generation Evolved Node B).
gNodeB is the radio base station that allows 5G cellular devices to connect to the 5G radio network. ng-eNodeB is a special kind of 4G LTE base station that allows 4G LTE mobile devices to connect to the 4G radio network in deployments where a 5G core network provides core capabilities for both the LTE and NR radio networks. Like LTE, the radio access technology used by 5G NR is OFDMA (Orthogonal Frequency Division Multiple Access). 5G NR is also a packet-only network that enables all services, including voice calls and SMS, over IP.
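As a quick reference, the generation-to-RAN mapping described in this post can be summarized in a small lookup (the names follow the sections above):

```python
# Radio access network and base station nodes per generation
RAN_BY_GENERATION = {
    "2G GSM":  ("GERAN",   ["BTS", "BSC"]),
    "3G UMTS": ("UTRAN",   ["NodeB", "RNC"]),
    "4G LTE":  ("E-UTRAN", ["eNodeB"]),
    "5G NR":   ("NG-RAN",  ["gNodeB", "ng-eNodeB"]),
}

for generation, (ran, nodes) in RAN_BY_GENERATION.items():
    print(f"{generation}: {ran} -> {' + '.join(nodes)}")
```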
I have written a detailed post on the base stations used by 2G, 3G, 4G and 5G networks. If you want to understand how the responsibilities of gNodeB and ng-eNodeB differ in a 5G network deployment, you can read the last section of that post.
Here are some helpful downloads
Thank you for reading this post. I hope it helped you in developing a better understanding of cellular networks. Sometimes, we need extra support, especially when preparing for a new job, studying a new topic, or buying a new phone. Whatever you are trying to do, here are some downloads that can help you:
Students & fresh graduates: If you are just starting, the complexity of the cellular industry can be a bit overwhelming. But don’t worry, I have created this FREE ebook so you can familiarise yourself with the basics like 3G, 4G etc. As a next step, check out the latest edition of the same ebook with more details on 4G & 5G networks with diagrams. You can then read Mobile Networks Made Easy, which explains the network nodes, e.g., BTS, MSC, GGSN etc.
Professionals: If you are an experienced professional but new to mobile communications, it may seem hard to compete with someone who has a decade of experience in the cellular industry. But not everyone who works in this industry is always up to date on the bigger picture and the challenges considering how quickly the industry evolves. The bigger picture comes from experience, which is why I’ve carefully put together a few slides to get you started in no time. So if you work in sales, marketing, product, project or any other area of business where you need a high-level view, Introduction to Mobile Communications can give you a quick start. Also, here are some templates to help you prepare your own slides on the product overview and product roadmap. | <urn:uuid:003d5ba8-4a2a-4d24-b670-257608a4b8e0> | CC-MAIN-2022-40 | https://commsbrief.com/radio-access-network-ran-geran-utran-e-utran-and-ng-ran/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00661.warc.gz | en | 0.92823 | 2,230 | 3.875 | 4 |
A big role of machine learning is in classifying tasks that have great treasures in terms of business applications. Like, classifying whether the loan customer will default or not and there are many such applications where classification tasks are done. In machine learning, there are many classification algorithms that include KNN, Logistics Regression, Naive Bayes, Decision tree but Random forest classifier is at the top when it comes to classification tasks.
Random Forest comes from ensemble methods that are combinations of different or the same algorithms that are used in classification tasks. The random forest comes under a supervised algorithm that can be used for both classifications as well as regression tasks. It is the easiest and most used algorithm when it comes to classification tasks.
If you are not familiar with the Decision Tree classifier, I would recommend you initially go through the concept of decision tree here as they are fundamentals that are used in the Random Forest classifier.
Introduction to Random Forest Classifier
In a forest there are many trees, the more the number of trees the more vigorous the forest is. Random forest on randomly selected data creates different decision trees and then makes the collection of votes from trees to compute the class of the test object.
Let's understand the random forest algorithm in layman's terms through an example, consider a case where you have to purchase a mobile phone. You can search on the internet about different mobile phones, you can read users' reviews on different websites or you can decide to ask your friends to recommend a phone. Suppose you have chosen to ask your friends about it. Your friend recommends you a phone and you make a list of all the recommended phones and finally, you make them vote for the best amongst the all recommended phones. The phone with the highest recommendation is the phone you would purchase.
In the above example, there are two scenarios: first is to ask your friends to recommend you a phone. This is similar to the decision tree algorithm and then asking them to vote for all recommended phones. The whole task from asking for recommendations and then asking to vote from all those recommendations is nothing but a Random forest algorithm.
Random forest is based on the divide-and-conquer perspective of decision trees that are created by randomly splitting the data. Generating decision trees is also known as a forest. Each decision tree is formed using feature selection indicators like information gain, gain ratio, and Gini index of each feature. Each tree is dependent on an independent sample. Considering it to be a classification problem, then each tree computes votes and the highest votes class is chosen. If its regression, the average of all the tree's outputs is declared as the result. It is the most powerful algorithm compared to all others.
The only difference that makes random forest algorithms different from decision trees is the computation that is made to find the root node and splitting the attributes nodes will run in a random way.
How does the algorithm work?
Pick random samples from the dataset.
Generate decision trees for each sample and compute prediction results from each decision tree.
For each predicted result calculate votes.
Choose the prediction result having maximum votes as the final prediction.
Working diagram of random forest
The basic parameters that are used in random forest algorithms are the total numbers of trees, minimum spilt, spilt criteria, etc. Sklearn package in python offers several different parameters that you can check here.
Why use the Random Forest algorithm?
It can be used for both classifications as well as regression tasks.
Overfitting problem that is censorious and can make results poor but in case of the random forest the classifier will not overfit if there are enough trees.
It can handle missing values.
It can be used for categorical values as well.
Random forest Vs Decision Tree
Random forest is nothing but a set of many decision trees. Decision trees are faster. Extensive decision trees might get troubled by overfitting, but random forest prevents that by generating more trees on random subsets. Random forests are complex and not easy to explain but decision trees are easy and can be converted to certain rules.
How can random forests be used to check about feature importance?
The random forest also gives you a good feature that can be used to compute less important and most important features. Sklearn has given you an extra feature with the model that can show you the contribution of each individual feature in prediction. It automatically calculates the appropriate score of independent attributes in the training part. And then it is scaled down so that the sum of all the scores comes out to be 1.
The score will help you to decide the importance of independent features and then you can drop the features that have least importance while building the model.
Random forests make use of Gini importance or MDI (Mean decrease impurity) to compute the importance of each attribute. The amount of total decrease in node impurity is also called Gini importance. This is the method through which accuracy or model fit decreases when there is a drop of the feature. More appropriate the feature is if large is the decrease. Hence, the mean decrease is called the significant parameter of feature selection.
Random Forest applications
There are many different applications where a random forest is used and gives good reliable results that include e-commerce, banking, medicine, etc. A few of the examples are discussed below:
In the stock market, a random forest algorithm can be used to check about the stock trends and contemplate loss and profit
In banking, the random forest can be used to compute the loyal customers that means which customer will default and which will not. Fraud customers or customers having a bad record with the bank.
Calculations of the correct mixture of compounds in medicine or whether identifying any sort of disease using the patient's medical records.
The random forest can be used for recommending products in e-commerce.
What are the advantages and disadvantages of the Random forest algorithm?
It overcomes the problem of overfitting.
It is fast and can deal with missing values data as well.
It is flexible and gives high accuracy.
Can be used for both classifications as well as regression tasks.
Using random forest you can compute the relative feature importance.
It can give good accuracy even if the higher volume of data is missing.
Random forest is a complex algorithm that is not easy to interpret.
Complexity is large.
Predictions given by random forest takes many times if we compare it to other algorithms
Higher computational resources are required to use a random forest algorithm.
The random forest algorithm is one of my favorite algorithms because of the reliable results and high accuracy it gives. In this blog, I have tried to cover the random forest classifier. How it is used, what is the working principle of random forest, what role do decision trees play in a random forest, why you should use the random forest and different applications of random forest. And at last, I have discussed the advantages & disadvantages of using random forest algorithms.
Thanks for reading. I assume that you have learned as much from this blog as I did by writing the blog. Follow Analytics Steps routinely on Facebook, Twitter, and LinkedIn. | <urn:uuid:7e37a58e-23c6-489b-a4dc-4a69f02122da> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/how-use-random-forest-classifier-machine-learning | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00661.warc.gz | en | 0.929275 | 1,513 | 3.59375 | 4 |
Researchers have found a way to provide reliable information about how much damage a building has sustained in an earthquake, making it easier and faster for responders to assess a building's safety and reoccupation prospects.
As aftershocks from the recent earthquakes in California had some residents sleeping outdoors to avoid being trapped in unsafe buildings, researchers at the Lawrence Berkeley National Laboratory have developed a way to speed the evaluation of buildings' safety after a quake.
Using an optical sensor -- which captures and transmits data about the displacement of floors of a shaken building, called interstory drift -- researchers can provide reliable information about how much damage a building has sustained, making it easier and faster for responders to assess a building's safety and reoccupation prospects after an earthquake.
The discrete diode position sensor (DDPS) works by projecting laser light vertically across a story's height to a detector located on the adjacent floor to determine the baseline position of the light. After a quake, the sensor will be able to tell if the position of the laser beam hitting a sensor on an adjacent floor has drifted. As 5G data transmission becomes a reality, the information can be sent to response officials nearly instantaneously.
The ability to measure and display key interstory drift information immediately after an earthquake would give responders critical data for making decisions on building occupancy, evacuations and evaluating what parts of important facilities such as hospitals can safely be used. The drift profile would also help inspectors quickly and safely find potential damage , especially important when there may be hundreds of buildings to evaluate after a quake.
“We are excited that this sensor technology is now ready for field trials, at a time when post-earthquake response strategies have evolved to prioritize safe, continued building functionality and re-occupancy in addition to ‘life safety,’” said David McCallen, a senior scientist in the Energy Geosciences Division at Berkeley Lab and faculty member at the University of Nevada, who leads the research collaboration.
DPPS will be deployed this summer a building at the Berkeley Lab, which sits adjacent to the Hayward Fault, one of the most dangerous faults in the United States. | <urn:uuid:fe828cd3-dcd1-4199-87c8-94a813717e30> | CC-MAIN-2022-40 | https://gcn.com/emerging-tech/2019/07/sensor-could-speed-earthquake-response/298096/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00661.warc.gz | en | 0.948555 | 435 | 3.25 | 3 |
Think of eSIM and iSIM as siblings: two distinct individuals with many shared traits—but also some important differences.
eSIM and iSIM are technologies for authenticating subscribers and devices on mobile networks. eSIM was the initial innovation, based on the open, vendor-neutral standard developed by the GSMA. iSIM is a more recent development and hasn’t been declared a standard as yet, but it already enjoys broad industry support.
Industry support for iSIM was a key point that emerged when I spoke recently with Gary Waite, a systems architect for the GSMA and an expert on next-generation SIM standards.
“Most of the industry is actively supporting efforts to establish iSIM as a standard,” Waite said about the iSIM standards setting process. “They realize that you have more chance of success if you have a common way of doing this, rather than everybody creating their own proprietary solutions.”
To understand the relationship between eSIM and iSIM it’s helpful to understand why the eSIM standard emerged in the first place.
eSIM evolved as an alternative to traditional SIM technology. Legacy SIMs are based on the familiar, removable cards that we’ve all seen (and probably dropped) at some point.
Updating millions of physical SIM cards in the Internet of Things wouldn’t just be a challenge, it’d be almost impossible.
Cellular network providers use SIM cards to store profiles that authenticate a device for use on the provider’s network, and to provide secure identification and storage, among other functions.
Traditional SIM cards are everywhere; there are countless millions active on mobile networks today. Yet despite their ubiquity, they are not without design constraints. SIM cards must be physically swapped when their host device changes networks — and who carries around a SIM removal tool anyway?
As the IoT market matures and many millions of additional IoT devices flood the market, this physical constraint becomes more difficult and expensive to accommodate. According to market research firm IoT Analytics, the number of IoT devices on the market will grow to 10 billion by 2020 and 22 billion by 2025. While swapping a smartphone’s SIM card before travelling overseas is an inconvenience for consumers, it’s debilitating for a business that could easily have thousands of IoT devices throughout the enterprise.
Additional detail on the challenges associated with physical SIM cards include:
These challenges are difficult enough with today’s existing array of smartphones, tablets, readers and other consumer mobile devices. But consider that businesses have already begun ramping up their fleets of IoT devices—embedding them into everything from automobiles to shipping palettes.
As the number of consumer smart home and business IoT applications skyrockets, it’s easy to imagine a day when enterprises manage millions of IoT devices. Updating millions of physical SIM cards wouldn’t just be a challenge, it would be almost impossible.
Embedded eSIM technology offers an elegant, robust, and almost infinitely scalable solution to the legacy SIM challenges in IoT applications. An eSIM is still a physical SIM, but instead of being removable, it’s soldered permanently into a device. Authorized users can access and update profiles and other data on the eSIM via an over-the-air, remote SIM provisioning solution (RSP).
An eSIM addresses every one of the challenges I discussed a moment ago:
You could argue that eSIM was a revolutionary innovation because it reduced the cost and management complexity of physical SIM cards so significantly that it makes IoT scalable. As a result, companies that deploy large numbers of IoT devices aren’t locked into their initial network operator—or its pricing and access policies.
iSIM amplifies and extends these and other qualities, while also eliminating some of eSIM’s shortcomings.
iSIM’s major innovation is that it moves SIM functionality into a device’s permanent hardware array. Unlike eSIM, however, iSIM no longer relies on a separate processor; nor does it demand a significant share of a device’s hardware footprint. Instead, iSIM enables hardware OEMs and processor design companies to design system-on-a-chip (SOC) architectures that integrate SIM functionality with an existing, onboard processor and cellular modem.
eSIM offers another important enhancement that stands to benefit every mobile device: security. Physical SIM cards have always been more secure than software standards, since hardware-based systems are inherently more difficult to hack. eSIM and iSIM are difficult to steal, thereby improving the reliability and integrity of the devices that employ them. iSIM builds on the security credentials of eSIM and SIM further. Located on a secure enclave on a system on chip (SoC), it affords a root of trust for the mobile network, made possible by an additional layer of authentication. This reuse is especially beneficial in payment, identity and critical infrastructure applications.
“They’re building the foundation for a secure device, and we’ve never really had that before,” observed Waite.
Since iSIM devices require fewer components, they cost less to build. And as a rule, iSIM’s simpler design also leads to more reliable—and thus less costly—devices.
On a per-unit basis, iSIM’s cost advantage may be small. But when an organization purchases hundreds of thousands of IoT devices at a time, those small cost reductions can add up to massive savings.
iSIM’s cost advantage becomes more important when you consider the growing market for IoT devices—which in order to be practical, must be very small, reliable, and inexpensive.
In fact, as Gary Waite told me, the iSIM cost advantage is critical to unlocking the full potential of the IoT market.
“In the IoT space, cost is absolutely everything,” Waite said. “That’s where iSIM really comes into play—the significant cost difference enables IoT applications that weren’t previously viable.”
“It’s a whole new IoT world that we’re just starting to explore,” Waite noted. “And it’s largely because of iSIM that this new world will be possible.”
You’re not alone when you navigate the IoT world built by iSIM and eSIM. Kigen focuses on eSIM and iSIM solutions and has a proven track record of successful projects with established partners. Their consultative approach to every customer ensures smooth transition to SIM technologies. Kigen will help you select the right solution and become an IoT leader in your industry. Learn more here.
This blog post was originally published on Arm Blueprint. | <urn:uuid:73548edc-7f6c-4348-9f85-b961cd63cc81> | CC-MAIN-2022-40 | https://kigen.com/resources/blog/sim-esim-isim-whats-the-difference/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00661.warc.gz | en | 0.937231 | 1,382 | 2.515625 | 3 |
How to transfer files over LAN?
Transferring files over a LAN is a simple process that can be done with any standard computer. It really only takes the push of a button or two, and you’re finished. This article will go into detail about how to transfer files over LAN using both Windows and Mac OSX operating systems. In addition, we’ll also provide some tips for transferring large amounts of data quickly and efficiently in case you have more than what one file folder can hold.
In order to share files from one computer to another via Ethernet cable connection, it’s important that the computers are on the same network by either connecting them directly together or through an access point (also called a router). Once they are connected, open your File Explorer window on the “sending” computer and open up the folder that has the file(s) to be transferred. Then, drag and drop those files into the “receiving” computer’s File Explorer window.
What is a LAN?
A LAN is a local area network that is made up of computers or other computing devices that are all in the same location. The LAN should be seen as a small enterprise network, usually with less than 250 computers. LAN can also be referred to as a computer network that is limited to just the computers that are present in the same physical location. This is usually different from WAN which crosses long distances and allows for large networks with hundreds, if not thousands of computers with access to each other over large geographical areas.
A LAN can also refer to a metropolitan area network. In this case, the term ‘LAN’ can be used interchangeably with ‘MAN’, or metropolitan area network. This type of LAN would only cover a major city and its suburbs because its operation radius is quite limited.
How to transfer files over LAN?
There are a few ways that you can transfer files over a LAN. The first way is to use Windows Explorer. To do this, open up the folder with the files you want to transfer. Then click on “Send To”, and locate your destination folder for your files. You can also right-click on the file(s) you want to move and select “Send To”. Another way of transferring files over a LAN is by using SharePoints Files/Documents feature. Open the document from the computer where it’s saved, then choose Share from the File Menu. From there, choose to Send Link, then fill in your necessary information and click Next>Done. Finally, if you don’t have any of these programs, another way to transfer files is to use the FTP method. To do this, you need an FTP program like FileZilla and two windows open, one for each computer. You’ll also need a network connection like a LAN or WiFi. Then go to “File” and click Site Manager and fill out your information (it should be similar to your usual username and password for your internet.) Then, go to “File” again, then click Quick Connect. This should automatically link the two computers together. Once that’s done, you can drag and drop any files that you want to transfer into the folder on the other computer.
What are the advantages of using a LAN?
The advantages of using a LAN are many, including reduced operating costs. Using a LAN makes it easier to share data, equipment, and buildings among employees. The use of LANs means that computer systems are more compatible with each other, which is beneficial for multi-national corporations that have operations in many countries. This also means less trouble in setting up the network, since the two systems are compatible. Networks are easier to set up if they use current networking technology supported by most operating systems, rather than older technology that may not be supported by some common operating systems.
What are the disadvantages of using a LAN?
There are a few disadvantages of using a Local Area Network. One of those disadvantages is the possible latency. The LAN does not have as high bandwidth as other technologies such as Wi-Fi, which can lead to a higher latency for data transfers. Another disadvantage is that the implementation of the LAN requires more technical knowledge than other technologies like Wi-Fi – making it difficult for people who do not know how to use it.
A LAN or local area network is a group of computers that are all connected to one another. A LAN allows for fast and efficient data transfers because the devices in the network can easily share files with each other. However, there are some disadvantages when it comes to using a LAN system which include increased vulnerability to viruses and malware infections as well as potential problems caused by power outages. In order for you to set up your own Lan be sure that every device has an Ethernet card installed before installing any software on them. When setting up your LAN make sure not only do you have ethernet cards but also internet connection from at least two different providers so if one goes down you won’t lose connection completely! | <urn:uuid:fc9e5f56-e569-4ab1-a97f-f7d9e3499dda> | CC-MAIN-2022-40 | https://gigmocha.com/how-to-transfer-files-over-lan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00661.warc.gz | en | 0.948565 | 1,043 | 2.859375 | 3 |
For over 50 years, hard disk drives (HDDs) have been an automatic choice for computer users interested in proven technology. But while HDDs have steadily improved in terms of storage capacity and form factor, they continue to have limitations in other areas. As electromechanical data storage devices with moving parts, HDDs can suffer from performance issues over time, especially after facing physical stress. In addition, HDDs can slow down because of the way they read and write data. This is where solid-state drives (SSDs) come into the picture as a strong alternative for storage.
What does solid-state storage mean?
Solid-state storage is a type of computer storage that uses electronic circuits to store data instead of moving parts. It’s unlike conventional electromechanical drives that may use rotating disks and magnetic material. Solid-state storage usually employs non-volatile flash memory. In computing, non-volatile storage is a type of storage that retains data even without power. Compare this to random access memory (RAM) —which loses data after losing power.
What is flash memory?
Flash memory is a non-volatile and long-life computer storage medium that stores data even without power and can be erased electrically. There are two types of flash memory, NOR Flash and NAND Flash. NOR Flash has faster read speeds but takes longer to erase. Meanwhile, NAND Flash typically has greater capacity for the same cost and is faster to write.
What is a solid-state drive?
Here’s a brief solid-state drive definition: A modern solid-state drive, also known as SSD, is a solid-state storage device that typically uses flash memory. Although most SSDs nowadays use NAND Flash, any drive without moving parts is technically an SSD, even if it uses a different storage medium. For example, the earliest SSDs were based on EAROM (electrically erasable read-only memory).
Brief solid-state drive history
SSDs have been around since the 1970s, but it wasn’t until the 1980s that manufacturers of SSDs began considering flash memory for SSDs. While the first SSDs used EAROM, later ones were also RAM-based. Both mediums were either too slow or volatile. Things changed when flash-based SSDs began attracting the attention of computer enthusiasts in the 90s.
While offering excellent performance, SSDs were unfortunately too expensive to earn widespread adoption until the 2010s. In fact, by 2012, SSDs only had 6 percent of the PC storage market, according to IHS. However, things rapidly changed as prices fell. Today, SSDs outsell HDDs as consumers enjoy better performance at more competitive prices.
SSDs vs. HDDs
So what is better, SSDs or HDDs? To help answer this question, you can read our overview of HDDs here: What is a hard drive. SSDs have a number of advantages over HDDs, but both are useful for different reasons. Here are some factors to consider:
The primary advantage of using an SSD over an HDD is that it’s significantly faster — while an HDD takes time to move its parts to read and write data, an SSD does it more quickly through its electric circuitry. In fact, a mid-range SSD can be 5 to 20 times faster than its HDD counterpart. This means that a computer with an SSD will boot faster, load apps faster, and offer fewer delays when handling heavy tasks than a computer with an HDD.
SSDs are more energy efficient. You’ll notice that more laptops are offering SSDs instead of HDDs these days for this reason. Although the difference of a few watts may not seem like much, it can increase a laptop’s battery life by as much as 20-45 minutes on average.
SSDs are less prone to breakage from falls, shock, or high temperatures than HDDs because they have no moving parts. On the other hand, a typical HDD platter (the disk that stores data) can break more easily from a fall. Dust can also negatively impact an HDD’s performance and give it a bad sector or two.
Advantages of HDDs
Although SSDs are more affordable than before, they’re not as cost-effective as HDDs yet. A typical hard disk is still cheaper per gigabyte, and the difference can add up. So much so that many computer users looking for larger storage capacity pick up HDDs instead of SSDs.
Is SSD better than HDD for gaming?
SSDs are better than HDDs in almost every way, including playing games. The faster speed of a typical SSD helps boot and run your games faster. That's why so many new gaming PCs, and even the PlayStation 5, use an SSD instead of an HDD. But whether you use a gaming SSD or an HDD, we do recommend that you take advantage of a top antivirus for gaming to secure your data from phishing, malware, keyloggers, and exploits. Unfortunately, gaming security threats nowadays can quickly ruin any gamer’s day.
Can an SSD get a virus?
Any device that stores data can theoretically have a malware infection like a virus, whether it's a hard disk drive or an solid-state drive. Some dangerous malware, like a kernel rootkit can be quite challenging to remove. We recommend you use intelligent cybersecurity software to shield your drive from all kinds of malware proactively. | <urn:uuid:ea33efca-d63b-473e-8aae-8372eb80ae42> | CC-MAIN-2022-40 | https://www.malwarebytes.com/computer/what-is-ssd | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00661.warc.gz | en | 0.946671 | 1,131 | 3.328125 | 3 |
However, introducing automation is not just about implementing the technology. It requires staff with expertise in both the tool(s) and embedding it within an organization’s test strategy. Everything cannot be automated—knowing when and where automation can bring the most value is important to maximizing results and ROI.
So, what testing scenarios can benefit the most from automation?
- Regression testing is necessary with each release to ensure that nothing was broken when adding a new layer, maintaining code quality. These frequent, repetitive tests can be dull and overwhelming for testers to perform manually with their limited bandwidth, making them ideal candidates for automation. In turn, this allows testers to focus on the scripts that cannot be automated.
- Agile/DeSecvOps has given rise to continuous testing, which involves running automated tests at every stage of the SDLC instead of at the end. With an accelerated time to market, it’s critical for developers to get feedback as quickly as possible. By finding and correcting bugs earlier and often in the process with these automated tests, organizations can mitigate the risk involved in each release.
- The scale of what an organization needs to test for one product has dramatically increased as products are released across multiple platforms on different operating systems, browsers, screen sizes, resolutions, etc. This means that every test must be run across all of the configurations to ensure quality regardless of how a user accesses it. An organization that relies on manual testing simply won’t be able to keep up as digital footprints continue to expand. Automation can help lighten the burden by conducting these repetitive tests consistently across all scenarios.
- End-to-end testing—testing the entire workflow from start to finish—is critical at a time when creating a seamless user experience is more important than ever. But workflows can be complex, especially when considering the other systems an app integrates with, and there are virtually limitless paths that a user could take. Automation can help create and run thousands of tests, significantly increasing the coverage while saving time.
- Automation can help create synthetic data sets used for testing. Industries like banking need transactional data to perform testing, however, they cannot copy production data due to privacy regulations. Automation can create the data sets required for testing, saving time and effort.
Intelligent Test Automation
Test automation technology that leverages Artificial Intelligence (AI) and/or Machine Learning (ML) offers ways to further increase testing coverage in an accelerated, cost-effective way. These next-gen testing technologies, such as Eggplant Digital Automation Intelligence (DAI) by Keysight, take a model-based approach to testing. It starts with building a complete digital floorplan of the application that includes all possible touchpoints, then linking those together to create the potential user paths and functions that need to be tested. AI can then analyze the model, find paths testers may have missed, and create test cases to address these gaps. Instead of testing in a traditional linear fashion, AI-driven test automation takes a more comprehensive view that focuses on the entire user experience.
By leveraging intelligent test automation, organizations can:
Accelerate Your Digital Transformation with Test Automation
Partnering with a test automation consulting services company like CTG can help bridge the gap between technology and people. We leverage partnerships with top test automation technology providers like Keysight Technologies (Eggplant DAI), Tosca, Micro Focus, Ranorex, Selenium, and cypress.io—allowing us to find the right technology to meet your unique business needs.
Our experts can bring a strong test automation framework to help you get started, work alongside your internal staff to apply automation where it can have the most value, and provide training on the technology and best practices until you have the confidence to proceed on your journey alone. We also offer best-in-class testing training via CTG Academy, where companies can learn from our experts with hands-on experience with test automation projects. | <urn:uuid:987314ce-33da-4e82-8605-6da07ce71458> | CC-MAIN-2022-40 | https://be.ctg.com/digital-accelerators/test-automation-consulting-company/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00061.warc.gz | en | 0.926773 | 813 | 2.59375 | 3 |
This is part 2 of a two-part series that continues our discussion of cloud threats and how they affect web applications in the cloud. The following addresses insecure APIs and the management plane, two factors that deepen the threat landscape.
Management Plane – Security Perspective
The cloud API management plane is one of the most significant differences between traditional computing and cloud computing. It offers an interface, often a public one, for managing cloud assets. In the past, we followed a box-by-box configuration mentality, configuring physical hardware strung together by wires. Now, our infrastructure is controlled with application programming interface (API) calls.
The abstraction of virtualization is aided by the use of APIs, which are the underlying communication methods for assets within a cloud. As a result of this shift in management paradigm, compromising the management plane is like gaining unfiltered access to your data centre, unless proper security controls down to the application level are in place.
What is a Cloud API?
The majority of cloud APIs use REST (Representational State Transfer), which runs over the HTTP protocol, making it popular and well suited for interacting with cloud services. APIs are used to interact with cloud services and are usually wrapped in a web-based user interface. For example, they are used to carry out functions such as provisioning, managing, orchestrating, and monitoring. They unquestionably open a door to the public, so you need to fully protect both the API and the API keys.
APIs can use a variety of authentication mechanisms; there is no standard for authentication in REST. HTTP request signing and OAuth are the most prevalent, and both leverage cryptographic techniques to validate authentication requests. Due to poorly designed web platforms, you may occasionally come across services that embed a password in the request. This type of configuration is less secure and poses a higher risk of credential exposure.
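As a rough illustration of how HMAC-based request signing works, the sketch below signs the request method, path, timestamp, and body with a shared secret. This is a simplified teaching example, not any specific provider's scheme (AWS Signature Version 4, for instance, adds canonical headers, credential scoping, and key derivation):

```python
import hmac
import hashlib

def sign_request(secret_key, method, path, timestamp, body=""):
    """Build a deterministic string-to-sign and HMAC it with the secret key."""
    string_to_sign = "\n".join([method, path, str(timestamp), body])
    return hmac.new(secret_key.encode(), string_to_sign.encode(),
                    hashlib.sha256).hexdigest()

def verify_request(secret_key, method, path, timestamp, body, signature):
    """Recompute the signature server-side and compare in constant time."""
    expected = sign_request(secret_key, method, path, timestamp, body)
    # compare_digest prevents timing attacks on the comparison itself.
    return hmac.compare_digest(expected, signature)

sig = sign_request("s3cret", "GET", "/v1/instances", 1700000000)
ok = verify_request("s3cret", "GET", "/v1/instances", 1700000000, "", sig)
# Changing any signed component invalidates the signature:
tampered = verify_request("s3cret", "DELETE", "/v1/instances", 1700000000, "", sig)
```

Because the method and path are part of the signed string, an intercepted signature cannot simply be replayed against a different, more damaging request.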
Example of a Cloud API Threat
Recently, a large cloud provider had a serious vulnerability in its API because of the way it implemented the signing of API requests. When a provider implements its own signature algorithm, an attacker who is able to intercept one properly signed request may be able to perform arbitrary request modification.
API – Security Perspective
All cloud service models, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), use APIs in one way or another. From the security perspective, the API is both the biggest difference between protecting physical infrastructure and cloud infrastructure, and the top priority when designing effective cloud security. If an unauthorized individual gets into your management plane, that person could have full remote access to your entire cloud environment.
The increased adoption of cloud augments the application scope. The security configuration of the management plane directly affects the security of the web application. All the components can be driven under control of the management plane. The management plane security is now within the scope of the application’s security and a failure on either side will leak into the other.
Within the cloud, there is much less transparency as to what is really happening under the hood of the web application. Overall, due to the shared responsibility model, the cloud should prompt consumers to look for better and more efficient web application security. Consumers of the cloud must therefore think carefully and plan for better security.
System vulnerabilities are bugs inside a web application that a bad actor can target and compromise. For example, both Heartbleed and Shellshock affected systems running Linux, which is alarming to the open source community given that over 60% of websites use some variation of Linux.
Vulnerability scanning and regular patching are the most efficient tools here. Patches are free and are often released within days of an announced vulnerability. However, a gap remains between the time a vulnerability is announced and the time the patch is applied, and this is where tool sets such as those from Acunetix come into play. System vulnerabilities don't evaporate when you move web applications to the cloud, especially if you are using a PaaS or IaaS service model.
There will always be a balancing act between cloud security and business objectives. However, cloud threats have far-reaching implications as the cloud has manifold business touch-points.
You are running a risk if you outsource security. If there is a performance problem, you will notice it straight away. However, if there is a security incident and you are not armed with the appropriate security tools at the application level, it may go unnoticed for years. It is therefore important to keep your finger on the pulse.
Since the GDPR came into effect in 2018, many countries across the world followed suit and introduced their own data privacy laws. India is the latest addition to this list. The proposed Personal Data Protection Bill (PDPB) aims to bring about a comprehensive overhaul of India’s current data protection policies. The data protection practices of Indian organizations are currently governed by the Information Technology Act, 2000 (IT Act).
However, the two-decade-old act hasn’t been able to keep up with the rapid advancements in technology we’ve seen over the past few years. With India witnessing a massive increase in cyberattacks in recent times, cybercriminals have been steadily discovering new ways to obtain sensitive personal information. To make matters worse, the pandemic has catapulted the digital ecosystem forward by years, and remote working has expanded the attack surface available to hackers.
As organizations have adopted new technologies, the scale of their data burden and responsibility has grown exponentially. As a result, cyber risks have amplified, and regulatory authorities have been scrambling to catch up. Consequently, businesses across the globe now face a slew of privacy protection laws. As the risks of noncompliance can be dire, organizations are often left wondering how they can navigate this ever-evolving privacy landscape.
Driving regulatory change across the country
Many Indian organizations have faced a volley of cyberattacks recently. From BigBasket, a popular online grocery delivery service, to Air India, small and large organizations alike have fallen prey to massive data breaches, exposing the sensitive personal information of millions of customers. The only silver lining of these high-profile attacks has been that they have had a powerful effect on the regulatory landscape of the country. The PDPB thus comes at a much-needed time when the country is seeing exponential digital growth.
In 2017, the Supreme Court of India recognized the right to privacy as a fundamental right. Currently, the IT Act has two provisions to govern the improper exposure of personal information:
(i) Section 43A of the IT Act requires the maintenance of “reasonable security practices and procedures” in relation to any “sensitive personal data or information” handled by an organization.
(ii) Section 72A of the IT Act imposes a penalty on any person (including an intermediary) who intentionally discloses personal information without the consent of the user.
However, Parliament recognized that a specialized regulation for protecting privacy rights is required, and thus, the PDPB was drafted. Once implemented, the PDPB will repeal Section 43A of the IT Act.
What does the PDPB include?
The PDPB will be India’s first law focusing solely on data privacy and protection. It defines personal data as data “relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute, or any other feature of the identity of such natural person, whether online or offline, […] and shall include any inference drawn from such data for the purpose of profiling.”
The bill includes requirements for notice and prior consent for the use of individual data, puts limitations on the purposes for which data can be collected and/or processed, and has restrictions to ensure that only the data essential for providing a service is collected. In addition, the bill also includes data localization requirements and necessitates the appointment of data protection officers within organizations.
Like the GDPR, the PDPB aims to give individuals more control over their personal information. As per the bill, a separate regulator called the Data Protection Authority of India (DPA) will also need to be set up. The DPA will be entrusted with the responsibility of protecting and regulating the use of citizens’ personal data. It is supposed to act as a deterrent against the unconstitutional and illegal use of this data. The DPA will also have extraterritorial powers and will enforce monetary penalties for noncompliance.
Implementation to be done in phases
The Joint Committee of Parliament (JCP) deliberating on the PDPB is expected to submit a report on the bill in the first week of the winter session of Parliament, which usually commences around the last week of November.
Notably, the bill is expected to undergo major changes under the JCP. The draft proposed in 2019 was opposed by many organizations, social media firms, privacy experts, and even ministers, who were of the opinion that the bill had too many loopholes to be effective and beneficial for both citizens and organizations.
However, even after enactment, the law is likely to be implemented in phases.
How can organizations prepare for the PDPB?
If your organization is already compliant with the GDPR, then you’re likely well on the path to achieving compliance with the PDPB when it comes into effect. However, a proactive approach to assessing and, if required, revamping your data privacy posture will go a long way towards preparing for the PDPB. Here are three steps to ensure your organization is well-prepared:
Understand and classify your data: The basis of a strong data security strategy begins with identifying and classifying what type of data you collect and retain. Once you have an understanding of what personal information you process, you’ll have a data inventory ready to help you understand your data-processing activities. Where is the data stored? For how long is it stored? Who has access to it? Is it shared with any third parties? This step ensures that you have the solid foundation needed to achieve compliance.
Detect and prevent leaks: Adopting an effective data loss prevention strategy can help organizations monitor and eliminate potential risks originating from emails, webpages, and endpoints. It allows the flow of information to continue while eliminating risks, protecting critical data, and ensuring compliance. It doesn’t need to become a barrier to business processes if implemented in an adaptive manner that does not put a stop to communication but rather automatically removes any sensitive or malicious data as it enters or exits the network.
Secure personal data: After the organization has ensured that personal data is classified and potential risks are removed, the data then needs to be protected both at rest and when it’s being shared in order to achieve true end-to-end data security. This can be done by implementing encryption at rest, email encryption, a managed file transfer (MFT) solution, or a combination of these technologies. An MFT solution protects sensitive data when it’s most vulnerable—while being accessed by others and while being sent to unmanaged domains, devices, or applications. It creates a secure channel with a central platform for information exchange, while providing audit trails, user access controls, and other file transfer protections.
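As an illustration of the classification step above, a first pass over text records can be as simple as pattern-based scanning for personal data. The patterns below are illustrative only; production classifiers combine many more detectors (checksums, dictionaries, machine learning) and locale-specific formats:

```python
import re

# Illustrative detectors for a few personal-data categories. The "pan"
# pattern follows the Indian PAN card format (5 letters, 4 digits, 1 letter).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "pan":   re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
}

def classify(text):
    """Return the set of personal-data categories detected in `text`."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}

record = "Contact: priya@example.com, ph 080-555-1234, PAN ABCDE1234F"
found = classify(record)
```

Once records are labeled this way, the resulting inventory can drive the later steps: retention decisions, access reviews, and loss-prevention rules.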
Compliance is an ongoing journey
In the current age of information explosion, achieving compliance with data privacy standards can be daunting. However, it needn't be so if organizations can be proactive and plan ahead of the date of enforcement. Businesses in India need to build an effective privacy and compliance strategy, as those that do will experience immense benefits. The time is ripe for organizations that haven't yet started or are just getting started on their compliance journey, as data privacy is going to play a pivotal role in the years to come. By adopting a layered, comprehensive approach to data security, organizations can confidently embrace the new PDPB, and, once compliant, should view this as a competitive advantage.
WebRTC (Web Real-Time Communication) is a collection of communications protocols and application programming interfaces that enable real-time communication over peer-to-peer connections. This allows web browsers to not only request resources from backend servers, but also real-time information from browsers of other users.
This enables applications such as video conferencing, file transfer, chat, or desktop sharing without the need for internal or external plugins.
WebRTC is being standardized by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF). The reference implementation is released as free software under the terms of a BSD license. OpenWebRTC provides another free implementation based on the multimedia framework GStreamer.
WebRTC uses the Real-time Transport Protocol (RTP) to transfer audio and video. (Source: Wikipedia)
Now that we know what WebRTC is, let's look at why, for privacy reasons, we may want to disable it.
What are the benefits of disabling WebRTC?
This issue mainly affects browsers on Windows rather than macOS, Linux, or Android, but we still recommend taking precautions on those devices when downloading third-party browsers.
Today I will show you how to disable WebRTC in Firefox to prevent your real IP address from being leaked out.
We will be disabling WebRTC in Firefox; however, there are various add-ons available for most major browsers. First, go to the Firefox add-ons page and install Disable WebRTC. Once the add-on is installed, we will test to see whether WebRTC is disabled.
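If you prefer not to install an add-on, WebRTC can also be disabled directly through Firefox's built-in preferences: type about:config in the address bar, accept the warning, search for the preference below, and double-click it to set it to false.

```
media.peerconnection.enabled = false
```

Setting this preference back to true re-enables WebRTC at any time.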
Chrome browser users can install the WebRTC Leak Prevent extension. | <urn:uuid:914671b7-93c6-4323-8841-f595c1405cdf> | CC-MAIN-2022-40 | https://hackingvision.com/2017/03/15/webrtc-can-leak-ip-address-even-behind-vpn/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00262.warc.gz | en | 0.851306 | 400 | 3.078125 | 3 |
While consumers and businesses expand their use of social media and electronic services to record levels, many of America’s most knowledgeable security professionals don’t believe that individuals will be able to protect their privacy and online identity, even with precautionary measures and new regulations such as GDPR.
These findings are outlined in Black Hat USA’s new research report entitled, Where Cybersecurity Stands. The report, compiled from the fourth installment of Black Hat’s Attendee Survey, includes critical industry intel directly from more than 300 top information security professionals.
Is privacy a lost cause?
Now more than ever cybersecurity professionals are questioning the future of privacy and the safety of personal identity as a result of the recent Facebook investigation, development of GDPR and various data breach reports.
Influenced by these factors, only 26% of respondents said they believe it will be possible for individuals to protect their online identity and privacy in the future – a frightening opinion as it comes from experts in the field, who in many cases are professionally tasked with protecting such data. They’ve also reconsidered their Facebook usage – with 55% advising internal users and customers to rethink the data they are sharing on the platform, and 75% confessing they are limiting their own use or avoiding it entirely.
InfoSec community weighs in on politics
IT security professionals have very little confidence in the federal government’s ability to understand and respond to critical cybersecurity issues. Only 13% of respondents said they believe that Congress and the White House understand cyber threats and will take steps for future defenses.
Respondents also cite foreign affairs as an issue – 71% said that recent activity emanating from Russia, China, and North Korea has made U.S. enterprise data less secure. And with the upcoming elections in mind, more than 50% believe that Russian cyber initiatives made a significant impact on the outcome of the 2016 U.S. presidential election.
Bitcoin, malicious hacking, technology and more
This year's report dives deeper into the inner thoughts of today's cybersecurity professionals; as a result, additional key insights were brought to the surface. One topic was whether ethical hacking would remain prevalent considering the rise of bug bounty programs – nearly 90% still believe in the importance of coordinated disclosure, making it clear that hackers within the Black Hat community are still looking to help in the fight against cyber crime.
Respondents were also asked to weigh in on all the craze around cryptocurrency, with more than 40% expressing that they do not think that investing in Bitcoin and other cryptocurrencies is a good idea. This is an interesting data point considering all of the recent buzz around profits being made through the practice.
Professionals also raised a new concern around the effectiveness of technologies currently in use. Among a list of 18, only three technologies were cited as effective by security professionals – encryption, multifactor authentication tools and firewalls. Passwords, one of the most widely used technologies, were dubbed ineffective by nearly 40% of respondents.
Fear of major national critical infrastructure breach still on the rise
Last year, Black Hat reported that 60% of security professionals expected a successful attack on U.S. critical infrastructure – that data point has risen almost 10% in 2018. Who do they think will likely be behind such an attack? More than 40% of those surveyed believe that the greatest threat is by a large nation-state such as Russia or China.
The thought that such an attack will be successful, again, stems from the industry’s lack of confidence in the current administration – only 15% of respondents said they believe that U.S. government and private industry are adequately prepared to respond to a major breach of critical infrastructure.
Additional key findings
Following the enactment of European GDPR privacy regulations, 30% say they don’t know if their organizations are in compliance; another 26% do not believe they are subject to GDPR.
Staying consistent over the past five years and across the U.S., Europe and Asia – nearly 60% believe they will have to respond to a major security breach in their own organization in the coming year; most still do not believe they have the staffing or budget to defend adequately against current and emerging threats. | <urn:uuid:8cd607be-0b9c-489c-9fcd-cd7abf345579> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2018/07/02/protect-privacy-identity/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00262.warc.gz | en | 0.956936 | 863 | 2.546875 | 3 |
Data control is management oversight of information policies for an organization’s information. Unlike data quality, which focuses on fixing problems, data control is observing and reporting on how processes are working and managing issues. Functions include inspection, validation, notification, documentation, issue reporting, and issue tracking.
Drivers for Data Control
- Risk awareness
Key Data Control Objectives
- Identify risks.
- Implement policies.
- Provide audit framework.
- Ensure proper protection and use of critical data elements.
- Track and manage data quality against service level agreements.
- Manage the identification, reporting, and remediation of data issues.
Data Control Benefits
- Data is monitored and controlled as it moves from users’ systems to storage devices and applications.
- Data flaws are identified before they cause issues.
- Root causes of issues are identified and remediated.
- Accidental data loss, often caused by mishandling of sensitive data, is quickly identified.
Data Control Dependencies
- Critical data elements are recognized and agreed upon.
- Business definitions are consistent across the organization.
- Tools and processes are in place to measure conformance to service level agreements.
- Framework is in place to assess compliance.
- Processes are documented to report data that does not meet expectations.
Let’s jump in and learn:
Data Control Examples
Data Control for Timeliness
- Specified time in which newly posted records should be available
- Acceptable time for delays
- Expected response time for data requests
- Time-period in which requested data is provided
Data Control for Currency
- Acceptable time between updates for each data element
- Expiration date for each data element
- Specified date / time upon which the data becomes available
- Policies for data synchronizations and replication between systems
- Process for propagating corrections and updates
- Temporal consistency rules
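As a hypothetical sketch of how currency rules like these can be enforced, the check below flags data elements whose age exceeds an acceptable maximum. The element names and thresholds are invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical currency rules: the acceptable time between updates
# for each data element.
MAX_AGE = {
    "account_balance": timedelta(hours=1),
    "mailing_address": timedelta(days=365),
}

def stale_elements(last_updated, now):
    """Return the data elements whose last update exceeds the allowed age."""
    return [field for field, ts in last_updated.items()
            if now - ts > MAX_AGE.get(field, timedelta.max)]

now = datetime(2024, 1, 10, 12, 0)
last_updated = {
    "account_balance": datetime(2024, 1, 10, 9, 0),  # 3 hours old -> stale
    "mailing_address": datetime(2023, 9, 1),         # ~4 months old -> fine
}
stale = stale_elements(last_updated, now)
```

A monitoring job running a check like this can feed its findings into the issue-reporting and remediation processes described earlier.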
Data Control for Completeness
- Minimum degree of population for each data element
- Assigned values for mandatory attributes in a data set
- Specified optionality for data elements
- Null value rules and enforcement for data elements
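A minimal completeness check might enforce mandatory attributes and minimum population rates like this; the field names and thresholds are illustrative assumptions:

```python
# Hypothetical completeness rules: mandatory attributes must be populated
# in every record, and optional elements must meet a minimum population rate.
MANDATORY = ("customer_id", "email")
MIN_POPULATION = {"phone": 0.80}  # at least 80% of records populated

def completeness_report(records):
    """Flag records violating null-value rules and under-populated elements."""
    violations = []
    for i, rec in enumerate(records):
        missing = [f for f in MANDATORY if rec.get(f) in (None, "")]
        if missing:
            violations.append((i, missing))
    under_populated = {}
    for field, minimum in MIN_POPULATION.items():
        rate = sum(1 for r in records if r.get(field) not in (None, "")) / len(records)
        if rate < minimum:
            under_populated[field] = rate
    return violations, under_populated

records = [
    {"customer_id": "C1", "email": "a@x.com", "phone": "555-0100"},
    {"customer_id": "C2", "email": None,      "phone": None},
    {"customer_id": "C3", "email": "c@x.com", "phone": None},
    {"customer_id": "C4", "email": "d@x.com", "phone": "555-0101"},
]
violations, under_populated = completeness_report(records)
```

The record-level violations map to null-value rule enforcement, while the population rates map to the minimum degree of population for each element.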
Data Control for Consistency
- Specified presentation formats for each data element
- Data presentation format conveys all information within the attributes
- Defined standards for presenting missing information for each data type
- Rules defined for data entry edits and data importation elements
Data Control Processes
Data control processes define how critical data is governed, including:
- Data registration
- Monitoring and control
- Data exchange and sharing
- Data lifecycle events (i.e., creation, modification, archiving, destruction)
Data Controls for Data Governance
Data controls measure the observance of rules set by data governance policies. They are used to protect and ensure the availability of organizations’ sensitive information in areas such as:
- Risk awareness
Data Control Methods
Data control monitors and restricts the transfer of files containing sensitive data to reduce accidental data loss. Organizations can measure the efficacy of data controls based on data governance objectives.
The first step for implementing data control is to establish controls and rules to manage data governance.
Three Steps to Define Data Control Expectations as Rule
- Set information policies to direct data quality rules, which are used to monitor conformance to expectations.
- Put systems and processes in place to measure conformance by setting thresholds and building reports to share metrics
- Establish service level agreements (SLAs) for data controls and define processes for responding to issues and reporting SLA metrics
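The thresholds and SLA reporting described in steps two and three can be sketched as a simple comparison of observed pass rates against agreed floors. The rule names and rates below are hypothetical:

```python
# Hypothetical SLA thresholds: rule name -> minimum acceptable pass rate.
SLA_THRESHOLDS = {
    "valid_email_format": 0.98,
    "non_null_customer_id": 1.00,
}

# Pass rates observed by the monitoring job for the current period.
observed_rates = {
    "valid_email_format": 0.995,
    "non_null_customer_id": 0.97,
}

def sla_report(thresholds, observed):
    """Compare observed pass rates against SLA thresholds and flag breaches."""
    return {rule: {"observed": observed[rule],
                   "threshold": floor,
                   "breached": observed[rule] < floor}
            for rule, floor in thresholds.items()}

report = sla_report(SLA_THRESHOLDS, observed_rates)
breaches = sorted(r for r, v in report.items() if v["breached"])
```

A breach here would trigger the issue-response process defined in the SLA rather than silently degrading data quality.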
Types of Data Controls
Data controls can be applied at different levels of granularity:
- Review data quality in the context of its value assignment to a data element.
- Examine the quality of the data within the context of the record.
- Evaluate the completeness of the data set, availability of data, and timeliness of delivery—using data control rules as the baseline.
10 Data Considerations when Implementing a Data Control Plan
- Used to support a published business policy
- Used by one or more external reports
- Used to support regulatory compliance
- Designated as Protected Personal information (PPI)
- Designated critical employee information
- Recognized as critical supplier information
- Designated as critical product information
- Designated as critical for operational decision-making
- Designated as critical for scorecard performance
- Designated as proprietary or trade secret
Data Control and Business Intelligence
Business intelligence systems require data control to provide the quality of data needed to gain accurate insights that support reliable decisions. Business intelligence systems make it easier to analyze data, and data control provides mechanisms that ensure data quality standards are met.
Data Control Challenges
Three data types present significant data control challenges.
- Duplicate Data
Data control must address the challenges associated with duplicate data produced as a result of human error or technical error (e.g., an algorithm that misfired). Without data control, duplicate data proliferates. The result is increased costs for compute and storage resources, as well as skewed or incorrect results when data is used for analysis.
- Hidden Data
Hidden data presents another challenge. Data control plans must consider the type of data that is not visible with a standard viewer. Common hidden data types include comments, document revision history, and presentation notes. Hidden data is unstructured data that holds great value if found and managed, but this can be difficult.
- Inaccurate Data
Inaccurate data can be the result of not finding all of the hidden data, but is usually caused by human error, such as putting incorrect or incomplete information into a field when filling out a form.
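As a sketch of how the duplicate-data challenge above can be tackled, the matcher below treats records with the same normalized key fields as duplicates. The key choice is an assumption; real-world matching often needs fuzzy comparison to catch typos and format drift:

```python
def find_duplicates(records, key_fields):
    """Return (first_index, duplicate_index) pairs of matching records."""
    seen, dupes = {}, []
    for i, rec in enumerate(records):
        # Normalize each key field so trivial differences don't hide matches.
        key = tuple(str(rec.get(f, "")).strip().lower() for f in key_fields)
        if key in seen:
            dupes.append((seen[key], i))
        else:
            seen[key] = i
    return dupes

customers = [
    {"name": "Ada Lovelace",  "email": "ada@example.com"},
    {"name": "Alan Turing",   "email": "alan@example.com"},
    {"name": "ADA LOVELACE ", "email": "Ada@Example.com"},  # same person
]
dupes = find_duplicates(customers, ("name", "email"))
```

Catching the duplicate at ingestion time is far cheaper than reconciling skewed analysis results downstream.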
Data Control Behind the Scenes
From a software perspective, data control provides a standard mechanism for exchanging data between applications. There are two types of data controls—DATA_CONTROL_SQL and DATA_CONTROL_MAP.
DATA_CONTROL_SQL: Uses SQL-type data to control access to specific data. SQL-type data includes:
- Numeric data types
- Date and time data
- Character and string data types
- Unicode character string data types
- Binary data types
DATA_CONTROL_MAP: Uses a key-value-type data control to access data. Key-value-type data is stored as a key or attribute name together with its value. An example of key-value-type data is:
- Key is CITY
- Value is New Orleans
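As an illustration of the key-value (map) style of control, the toy class below gates writes to a whitelist of consumers while leaving reads open. The class and names are invented for illustration; this is not a real DATA_CONTROL_MAP API:

```python
class MapDataControl:
    """Toy key-value data control with write-access enforcement."""

    def __init__(self, allowed_writers):
        self._store = {}
        self._allowed_writers = set(allowed_writers)

    def set(self, consumer, key, value):
        # Only whitelisted consumers may modify controlled data.
        if consumer not in self._allowed_writers:
            raise PermissionError(f"{consumer} may not write {key}")
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

ctl = MapDataControl(allowed_writers={"billing-service"})
ctl.set("billing-service", "CITY", "New Orleans")
city = ctl.get("CITY")

try:
    ctl.set("anon-app", "CITY", "Gotham")
    denied = False
except PermissionError:
    denied = True
```

The same pattern (a thin enforcement layer in front of the store) generalizes to the SQL-type control as well.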
Ensure Effective Oversight of Information Policies
Data control benefits range from collecting business artifacts and identifying critical data elements to defining data quality expectations and performing compliance inspections. Ensure quality across all data types with effective data control.
Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 17,000 customers and millions of users worldwide.
When the entire world demands immediate access to a rare and finite resource, you always find criminals operating in the margins. This dynamic is particularly true with the COVID-19 vaccine. Vaccine threats are not limited to financial or national interests — one type of risk that's often overlooked in the larger vaccine security conversation is misinformation.
We can expect some of these vaccine misinformation threats to come from anti-maskers who believe COVID doesn't exist or that vaccines infringe on their freedom.
Some people will also leverage the vaccine for political attacks. For example, they might falsely claim a batch of vaccines has spoiled, denying vaccine access to entire communities.
Misinformation campaigns thrive on inequality of knowledge. Malicious entities can drive a wedge between communities by exploiting those societal fractures. At a time with deep, seemingly intractable cultural divides, there are three divisive elements of the vaccine likely to be exploited.
Misinformation on availability could look like a run on the bank. For example, this type of misinformation could sound like, "They've got more vaccine at the corner store." This causes disorganization in our wide-scale vaccination efforts.
Vaccine availability misinformation could also look like rumors about vaccine shortages, preventing people from booking an appointment at a facility that still has available doses. This leads to vaccine waste and puts vulnerable people at risk.
Fake vaccine scams will turn out to be both financially lucrative and an easy way to harm a nation's ability to achieve herd immunity. In many cases, they are easy, self-propagating operations. All that is needed is a list of targets for a "watering-hole" attack where people are lured to a site in order to be exploited. Additionally, these campaigns are often very hard to dismantle until after there has been harm.
Vaccine Health & Safety
Vaccine safety misinformation fuels the worst fears of anti-vaxxers. It focuses on how quickly the vaccines were developed or claims they were developed deceptively. This eclipses and extends the anti-vaxxer movement. Key targets are people who deal with vaccines for others, such as parents or people with elderly relatives.
Adults have to choose to take action with the vaccine, which presents an opportunity for misinformation spreaders. Often, vaccination decisions are informed by research that pulls from conflicting or misleading sources. It's easy for people to generate memes and soundbites to make vaccines sound scary, making claims such as:
- "The vaccine was produced too quickly; a typical vaccine takes 10 years to produce, this one took one year."
- "What are they hiding in there?"
- "It's all just a lie to give 'them' more control."
Under these conditions, anti-vaxxers and extremists will be able to recruit, and they will have a larger, receptive audience. It's not the anti-vaxxers' first rodeo: They have been convincing vulnerable parents for years — now they're going to convince vulnerable grandparents, caretakers, and more.
Proof of Vaccination
As a society, we don't yet know what vaccination cards will mean. Are they a reminder of your vaccination, or are they proof? If they are proof, can you use your older sister's? Will there be a market for people taking vaccines for each other?
A vaccination proof card is valuable to those who want to work but don't qualify yet. It's also possible that in the future, only people who have been vaccinated will be able to travel, opening up another potential area for fraud through illegitimate vaccination card sales.
What Can We Do?
Misinformation thrives on fear and ignorance. By ensuring accurate, consistent information takes precedence over sensationalist rumors, we can go a long way toward preventing misinformation.
This is easier said than done. Trust has taken a serious blow in recent years, creating profound changes in the way we consume news. Re-establishing trusted sources, harnessing the power of influence, and avoiding fanning the flames of polarization are challenges we need to tackle.
Weaponized misinformation thrives on social divisions and cultural inequality. While technology can help identify, label, and suppress misinformation, it's a primitive science. Algorithms aren't sophisticated enough yet to manage the nuances of human communication. Teams of moderators have a higher chance of success but don't scale, and even brief exposure to toxic information is harmful.
In an ideal world, we would weaponize our citizenry against misinformation. Practical experiences from the front lines show that an informed citizenry that actively criticizes information counters misinformation campaigns more effectively than any other form of intervention. Just as humans are the vector for misinformation, they can also be the antidote.
Misinformation is a real threat to our vaccination efforts, yet it's not taken as seriously as cybersecurity or operations threats. When it comes to a life-saving COVID-19 vaccine, the consequences of misinformation are enormous. By leveraging the power of relationships and influence, we can neutralize misinformation campaigns before they take root, saving lives in the process. Ultimately, access to healthcare is fragmented and imbalanced enough without allowing criminals to exploit our fears in order to further balkanize our recovery from the pandemic.
—Dr. Pablo Breuer, cyber warfare and disinformation expert, and The Grugq, an information security researcher, contributed to this column. | <urn:uuid:7efb489d-1feb-4430-a9b7-1dcb2e3b3708> | CC-MAIN-2022-40 | https://www.darkreading.com/attacks-breaches/3-ways-anti-vaxxers-will-undercut-security-with-misinformation | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00262.warc.gz | en | 0.945806 | 1,102 | 2.90625 | 3 |
Introduction to PostgreSQL
PostgreSQL is a general-purpose, object-relational database management system, and the most advanced open-source database system. It was developed from POSTGRES 4.2 at the Berkeley Computer Science Department, University of California. It has more than 15 years of active development and a proven architecture that has earned it a strong reputation for reliability, data integrity, and correctness. PostgreSQL runs on all major operating systems, including Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows.
PostgreSQL is free and open-source software. Its source code is available under the PostgreSQL License, a liberal open-source license.
What Makes PostgreSQL Stand Out?
- PostgreSQL was the first database management system to implement the multi-version concurrency control (MVCC) feature, even before Oracle. (In Oracle, the equivalent feature is known as snapshot isolation.)
- PostgreSQL is a general-purpose, object-relational database management system. It allows users to add custom functions developed in different programming languages such as C/C++ and Java.
- PostgreSQL is designed to be extensible: users can define their own data types, index types, functional languages, and more.
Procedural Languages Support
PostgreSQL supports four standard procedural languages — PL/pgSQL, PL/Tcl, PL/Perl, and PL/Python — which allow users to write their own code to be executed by the PostgreSQL database server. Other, non-standard procedural languages such as PL/PHP, PL/V8, PL/Ruby, and PL/Java are also supported.
Key Features of PostgreSQL
PostgreSQL supports a large part of the SQL standard and offers many modern features, including:
- Complex SQL queries
- Multi-version concurrency control (MVCC)
- Streaming replication (as of 9.0)
- Hot standby (as of 9.0)
Specifics of PostgreSQL versions and platforms supported on Delphix are located in the PostgreSQL Compatibility Matrix.
PostgreSQL Replication Methodology
Delphix uses the PostgreSQL streaming replication protocol to achieve data replication between the Source and Staging databases.
Log shipping is a replication technique used by many database management systems. The Master records changes in its transaction log (WAL), and then the log data is shipped from the Master to the Standby, where the log is replayed.
- Use similar database versions on all systems.
- Configure the systems identically, as far as possible.
- Provides integrated security.
- Reduces replication delay.
Architecture for Streaming Replication
Below is the high-level architecture diagram showing the data replication between the source and the staging hosts.
Transaction logs, i.e., WAL (Write-Ahead Logs), are replayed from the source to the staging environment to maintain data sync between the two.
The end-user application connecting to the PostgreSQL source database may perform read/write queries on the database.
The database changes are recorded as WAL segments in the PostgreSQL database under the directory pg_xlog.
- Set up the staging PostgreSQL server as a standby node.
- Configure replication security by creating a replication user and specifying the authentication protocol.
- Initiate a base backup on the secondary.
- Configure the postgresql.conf file as per the staging environment.
- Start the standby server.
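As a rough sketch of the configuration work behind these steps, a streaming-replication pair typically involves parameters along the following lines. Note that file locations and values vary by PostgreSQL version (for example, primary_conninfo lived in recovery.conf before version 12, and wal_level used the value hot_standby before 9.6); the host names, user, and address below are placeholders:

```ini
# --- On the primary (source) ---
# postgresql.conf: write enough WAL detail for a standby, allow sender processes
wal_level = replica
max_wal_senders = 3

# pg_hba.conf: let the replication user connect from the staging host
# host  replication  repl_user  192.0.2.10/32  md5

# --- On the standby (staging), after restoring a base backup ---
# tell the WAL receiver which primary to stream from
primary_conninfo = 'host=source-host port=5432 user=repl_user'
```

With settings like these in place, starting the standby launches a WAL receiver that connects to the primary's WAL sender and replays incoming segments.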
The WAL receiver process at the secondary continuously listens for any incoming WAL segments from the primary in its receive queue and applies them to the staging database.
WAL segments are archived when a segment size reaches 16 MB (default).
- Managing PostgreSQL Environments
- Managing PostgreSQL Data Sources
- Provisioning VDBs from PostgreSQL dSources
- Customizing PostgreSQL Management with Hook Operations | <urn:uuid:1418c1a3-9387-4f31-84d5-391811345cea> | CC-MAIN-2022-40 | https://docs.delphix.com/docs534/delphix-administration/postgresql-environments-and-data-sources/postgresql-on-delphix-an-overview | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00462.warc.gz | en | 0.864754 | 842 | 3.171875 | 3 |
What can Artificial Intelligence do to detect fraud?
by Sanjeet Banerji, on Sep 25, 2019 12:24:00 PM
Estimated reading time: 3 mins
Artificial Intelligence (AI), though deemed fiction in a bygone era, has assumed practical importance today. The technology has countless use cases, especially in the Banking, Financial Services, and Insurance (BFSI) sector. It not only helps in improving customer interactions and customer visibility but also in securing digital identity, deploying anti-money laundering (AML) practices, and building fraud detection solutions. Enterprises have already started implementing AI in their businesses and are seeing results. Having said that, AI is here to assist humans, not to take over.
What challenges does AI address?
Let us take the example of a fraudulent transaction in BFSI. A fraudulent transaction looks like a regular transaction until it is detected. An enterprise can lose as much as USD 150 million every month to fraud. The global revenue lost to fraud annually is approximately USD 2.0 trillion; in some cases, the revenue loss is equal to the GDP of some European countries. However, such fraud cannot be detected in the absence of data, especially when enterprises function in silos.
AI not only helps in creating an integrated view of the business but also helps discover patterns thereby helping detect and curb fraud.
It is interesting to note that these intelligent algorithms drastically cut the time otherwise taken to manually handle huge amounts of data, or Big Data, and arrive at patterns. For example, conventional mechanisms working on an enterprise system housing 300 thousand accounts and carrying out 1.5 billion transactions annually would require 4 years to identify just one pattern. By contrast, AI does the task in a few hours. In other words, AI can not only be leveraged for social good but can also help better understand what "may happen" and thus future-proof the enterprise against fraud.
What AI does to detect fraud?
AI is a boon; however, it cannot work without data. Let us look at a couple of use cases that highlight the advantages of AI and enable movement towards sustainable transaction-monitoring solutions.
Use Case 1: Detection of Money Laundering / Financing of Anti-Social activity through Pattern Mining
AI algorithms work on the concept of self-training. The more data they ingest, the more they learn from emerging patterns, and the more intelligent they become. In fact, they generate intelligence that is not available in your regular KYC documents.
By focusing on how the transaction is happening (who is sending the money, through which account, into which account), AI helps you decipher the relationships between the transacting entities and their behavior. It also highlights their propensity for committing fraud.
AI helps discover patterns in transactions, normal versus anomalous behavior, associations & relationships between entities, risk scoring of accounts, etc., thereby helping nip anti-social activity in the bud.
AI algorithms can be used in both supervised and unsupervised forms to outsmart the smart fraudsters.
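As a minimal illustration of the unsupervised side, the sketch below flags transactions whose amounts deviate strongly from an account's own history using a simple z-score. Real AML engines use far richer features (counterparties, geography, timing) and far more sophisticated models, so the account history and threshold here are purely hypothetical:

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Return indices of transactions whose amount deviates from the
    account's own mean by more than `threshold` standard deviations."""
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []  # perfectly uniform history: nothing stands out
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

# Hypothetical account history: mostly small payments, one large outlier.
history = [120, 95, 130, 110, 105, 98, 9500, 115]
print(flag_anomalies(history))  # → [6], the 9,500 payment
```

Note the low threshold: with only eight observations, even an extreme outlier cannot exceed a z-score of about 2.5, which is one reason production systems score behavior over much longer histories.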
Use Case 2: Monitoring accounts for specific pattern of activity
Accounts can be monitored and flagged for certain patterns. Accounts where money stays only briefly, multi-currency transactions across geographies, sudden dormancy in accounts with high transaction volumes, and an unprecedented rise in activity after a dormant period can all be auto-scrutinized with the help of AI algorithms. The algorithms discover specific activity patterns and provide a view of the bigger picture. This is also an important use case in business forensics.
AI enables banking enterprises to take quick corrective and prohibitive actions. When a fraudulent pattern is identified, the algorithms can trigger a bot to stop the transaction. Alternatively, they can trigger a bot to take corrective measures and delve deeper to check whether the transaction is legitimate and allowed, or surreptitious and disallowed.
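The dormancy-then-spike pattern described above can be captured even by a simple rule before any machine learning is applied. The sketch below is a toy illustration with hypothetical thresholds, not a production monitoring rule:

```python
def dormant_then_spike(monthly_tx_counts, dormant_months=3, spike_factor=10):
    """Flag an account whose latest month's activity jumps to at least
    `spike_factor` times its average over the preceding dormant stretch."""
    if len(monthly_tx_counts) < dormant_months + 1:
        return False
    *history, latest = monthly_tx_counts[-(dormant_months + 1):]
    baseline = sum(history) / len(history)
    if baseline == 0:
        # A fully dormant account: any burst of activity is suspicious.
        return latest > 0
    return latest >= spike_factor * baseline

# Hypothetical monthly transaction counts for two accounts.
print(dormant_then_spike([40, 38, 1, 0, 1, 55]))    # True: dormant, then a burst
print(dormant_then_spike([40, 38, 41, 39, 40, 44])) # False: steady activity
```

In practice such rules generate alerts for analysts (or a bot, as described above) rather than blocking transactions outright.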
Contemporary AI applications
Following practical uses of the intelligent algorithms enable enterprises to build a “digital lens” for monitoring transactions:
- Artificial Neural Networks (ANNs) to detect the patterns and relationships in data and learn through experience and not programming
- Recurrent networks to identify patterns and detect fraud
- Deep Convolutional Neural Networks to perform Language Generation, Speech and Audio Analytics
- Restricted Boltzmann Machine to predict, comprehend text, and perform semantic filtering
- Support Vector Machines to classify transactions
- Generative Adversarial Networks (GANs) to create virtual simulated worlds, perform 3D modeling, and predict fraud
Integrated data is a prerequisite for using AI and enabling the digital scrutiny of voluminous transactions. The intelligent algorithms help in building sophisticated models that fit transactions into different clusters. High volumes of data, or Big Data, help to detect patterns. The more data you integrate, the more intelligent the AI algorithms become, allowing you to identify and stop fraudulent transactions in real time. However, it is up to enterprises to evaluate the cost of fraud as compared to the cost of implementation, time, effort, and energy.
Internet of Things (IoT) refers to identifiable objects and their virtual representation in an Internet-like structure. The term is commonly used to signify advanced connectivity between everyday objects such as appliances and devices, systems and services that go beyond machine-to-machine communications, and covers a variety of protocols, domains and applications.
Researchers at McAfee have demonstrated a method that hackers could use to perform an end-run around Cortana and access data, run malicious code, or even change a locked computer’s password. In this case, however, the emphasis is on the word “could.”
The researchers readily admit that this attack is high risk, has never been seen in the wild, and has little possibility of going undetected for a variety of reasons. Even so, the research is disturbing and does point to a valid weakness that bears further investigation.
The setup process alone is daunting. First, the attacker would need to perform a significant amount of advance preparation. This includes going so far as to create a Wikipedia entry that could get past that site’s army of talented editors and fact checkers, and then somehow inserting a link to a poisoned/compromised domain in the entry. That alone would be a challenge.
Once the Wiki page was up, with the poisoned link at the ready, the attacker would need physical access to the device in question.
Then, the user would have to have Cortana enabled from the lock screen.
Assuming that hurdle was also cleared, the attacker could begin asking Cortana questions, prompting her to search the web for information on the topic in question.
Cortana is designed so that if web-based resources are needed to answer a query, she will look for a Wikipedia page and display the link found there.
If the hacker succeeded in doing all of that, Cortana would access the poisoned web page via a scaled down version of Internet Explorer 11, which would then allow the hackers to send malicious code via the now-established connection.
Is this a real threat? Absolutely. It is within the realm of possibility that a hacker could do everything described above.
Is this even remotely plausible? No. There are simply too many points of failure for this to be considered a genuine threat, as underscored by the fact that nobody has ever seen anything like this in the wild.
Hackers tend to prefer simple, elegant solutions. While it’s not outright impossible to imagine a hacker giving this a go just for fun, it’s hard to see this as an emerging threat, or something to be greatly concerned about. | <urn:uuid:5fb8998e-05a4-40dc-b282-5c4a1636a5ef> | CC-MAIN-2022-40 | https://dartmsp.com/cortana-may-have-flaw-allowing-unauthorized-system-access/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00462.warc.gz | en | 0.964469 | 467 | 3.15625 | 3 |
Among the many problems businesses face today, cybercrime is one of the fastest-growing threats. While you may think this is something that only concerns large corporations, the truth is that hackers are just as likely to target small organizations. The reason cybercrime affects all businesses is that your company possesses a lot of data, and gaining access to this information can be highly beneficial for the attacker.
Regardless of the industry you’re in, your organization has an obligation to protect the data of your clients as well as your own business. With the high prevalence of cybercrime these days, it’s never been more important to be aware of the consequences of a data breach. Part of being aware of the consequences is understanding the real cost of a data breach.
According to a joint study from IBM and the Ponemon Institute, the average cost of a cyberattack reached $4.24 million in 2021, a 10% increase from the $3.86 million reported back in 2019. And forecasts project that the global cost of cybercrime may reach $10.5 trillion annually by 2025. This raises the question: why are data breaches so expensive?
I’d like to go over a few of the theories of, and interdependencies and tendencies regarding, global warming, and also compare some of the theory with the practical. I’ll do so through two questions…
1. How will the world actually change as a result of global warming?
It is totally and blatantly obvious that the world is warming up. At a minimum in the arctic and temperate zones we can see this for ourselves.
Example: Around 40 years ago, when I was a teenager, ~-20°C in winter in Moscow was the norm, -30°C was common, and the occasional dip down to -40°C or below occurred a few times a year. Such cold temperatures, it goes without saying, came with a lot of snow. Today, Christmas and New Year can come and go… without snow! WHAT?!
But it’s not just Moscow. Warming is everywhere. Take… Siberia as another example. But here the damage done may become apparent sooner rather than later: extreme global warming may see inner Siberia becoming an appendage to the Gobi Desert, with Lake Baikal surrounded by tall barkhan dunes. And that’s only after so much methane and other harmful gases have been emitted into the atmosphere that the permafrost in the region – permanently frozen for millions of years – will start to melt!
And it’s not just Russia, of course. But for some regions it will be a different thing altogether.
Take the Middle East, for example. With global warming, excessive evaporation of oceanic water could radically alter the quantity of rain in the region. This could mean the region turns into a fertile flowering garden instead of a famously barren desert. Or… maybe not. We don’t know for sure, we can only speculate!
Then there’s a theory that the current warming is the calm before the storm: a mere climatic fluctuation before… an upcoming ice age!
2. How will nearly the whole world’s industry coming to a halt affect the growth of CO₂ in the world’s atmosphere?
As I said, the Earth’s climate is warming. Carbon dioxide is blamed for this, the concentration of which in the atmosphere has been relentlessly climbing up and up – for several decades already. Some say this is all down to humans; others say that this must be some kind of delusion of grandeur: humans are way too insignificant to be able to seriously affect protracted natural global climate cycles. Though Wikipedia in English doesn’t mention an approximate percentage of how much of the increase in CO₂ emissions is down to humans, Wikipedia in Russian does: just 8%. But let’s leave Wikipedia’s different-language differences to one side for today: other credible Western sources in English put the figure even lower: 5%.
There are a great many arguments in the discussion of global warming, including straight-up denial (yep, still!), and technological pseudo-scientific preaching. Each argument has its army of followers and a slew of tables and graphs and stats giving the ‘proof’ (and a marked absence of ‘inconvenient’ tables and graphs and stats that provide contrary ‘evidence’). Heck, even I’ve put my oar in (search for the second ‘climate change’ mention there) a few times.
However, instead of giving preference to one argument or another, I actually think that not only does the question of what/who is to blame for global warming – geo-cosmo-natural forces or anthropogenic activity – have no answer, but that the question itself is stated… incorrectly; a particular answer is always in someone’s interest, and so the question is knowingly stated with bias built in. Hmmm, that came out a bit complicated. Let me try and explain…
Is the temperature of the Earth’s atmosphere rising? It’s rising. Fact.
Is CO₂ rising? It’s rising. Fact.
Is the number of humans rising globally? Rising. Fact.
Also fact: the level of CO₂ before the appearance of industrialized humans would also change; it would go up, then down. This, for example, can be seen in the analyses of the ice core from Vostok Station (btw, self-isolating there – no problem!). Now, in order to blame humans for global climate change, three things need to be measured:
- Natural CO₂ emissions;
- Natural CO₂ consumption (by forests, plants, seaweed…); and
- Anthropogenic emissions.
The modern-day increase in CO₂ tells us merely (i) that the balance between these three factors is out of whack; and (ii) that man can be blamed not so much for polluting the atmosphere too much, but for chopping down too many trees and killing off too much of the ocean’s flora. To understand the whole picture we need to calculate more accurately the changes in natural consumption of carbon dioxide. How do we do this? Alas, I don’t know. But somehow, surely, it’s possible, no?
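The balance argument above can be made concrete with back-of-the-envelope arithmetic: even if anthropogenic emissions are a small share of total emissions, what matters is the net of all three terms. The figures below are purely illustrative placeholders, not measured values:

```python
def net_co2_change(natural_emissions, natural_uptake, anthropogenic_emissions):
    """Net annual change in atmospheric CO2 (same units in, same units out)."""
    return natural_emissions + anthropogenic_emissions - natural_uptake

# Illustrative (made-up) figures in gigatonnes of CO2 per year:
# nature roughly balances itself, so even a small human term tips the total.
natural_out = 750.0   # hypothetical natural emissions
natural_in = 750.0    # hypothetical natural uptake
human_out = 40.0      # hypothetical anthropogenic emissions

print(net_co2_change(natural_out, natural_in, human_out))  # → 40.0
share = human_out / (natural_out + human_out)
print(f"human share of all emissions: {share:.1%}")  # → 5.1%
```

The point of the toy numbers: a ~5% human share of emissions can still account for the entire net increase if natural emissions and natural uptake roughly cancel each other out.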
Curiously, we’re currently living in… interesting times, and it just so happens that we may be able to get an answer to the second of my two questions here today (‘how will nearly the whole world’s industry coming to a halt affect the growth of CO₂ in the world’s atmosphere?’).
Indeed, soon we’ll get the results of a unique (if unexpected) global experiment: how the lockdown and the partial halt to world production affect the growth of CO₂ in the Earth’s atmosphere. It will also present a good opportunity to check the soundness of several theories about how much the climate is affected by man.
I have to say, it will be somewhat odd if the lockdown makes no difference whatsoever. I mean, among many other things, it would completely cancel out the need for the Kyoto Protocol and the Paris Agreement, while making a mockery of every country that ever signed either; or am I being too harsh there? ‘Mistakes happen’, they’ll say!
Aaaaaanyway, in closing, I just want to make it crystal clear that these here mind-meanderings through a very sensitive subject are just that – thoughts and ideas for further discussion. I am categorically against any unsustainable pollution of our planet. And I’ll be sincerely happy if the result of the ‘experiment’ is a slowing of the rise in carbon dioxide. That would prove that humans are serious contributors to the rise, and hopefully provide more encouragement and stimulus for cleaning up our act all the more. Indeed, I hope the experiment shows this. However, if the line on the graph showing the concentration of CO₂ continues to slither upward despite the lockdown/shutdown… Yep, that will be – extremely – odd. Let’s wait and see!…
A recent study showing just how easy it is to hack into Internet of Things (IoT) devices–and to use that access to gain entrance to a larger network–focused on commercial products used in the home. However, it could serve as yet another wake-up call for the Department of Defense and other government agencies that are increasingly relying on IoT.
DoD, after all, is expanding its use of commercial IoT devices as part of its networks. Its use of drone aircraft, ground sensors, wearable devices, cameras, smartphones, tablets, and other information-sharing tools is only growing, to the point where the Army is working on a framework for the Internet of Battlefield Things. The department’s security policies for the IoT, however, have so far lagged behind deployment.
The most recent study, by cybersecurity researchers at Ben Gurion University of the Negev (BGU), said it was “frightening” how easily they could hack into home security cameras, thermostats, baby monitors, and even doorbells. “Using these devices in our lab, we were able to play loud music through a baby monitor, turn off a thermostat, and turn on a camera remotely, much to the concern of our researchers who themselves use these products,” Dr. Yossi Oren, head of the university’s Implementation Security and Side-Channel Attacks Lab, said in reporting the results.
The weak links in BGU’s research were device passwords, which often are preset by the manufacturer and then ignored by users. In most cases, researchers found passwords in 30 minutes by Googling the brand, and found that different brands of the same products share the same passwords. After gaining access to one device, they could then take control of others–building a network of remote controlled cameras, for instance–or gain network access via Wi-Fi connections.
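A defensive takeaway from this research is that checking your own devices against published factory defaults is cheap to automate. The sketch below only illustrates the idea; the defaults table is hypothetical, and a real audit would draw on vendor documentation or curated default-credential lists:

```python
# Hypothetical table of factory-default credentials by device brand.
FACTORY_DEFAULTS = {
    "acme-cam": [("admin", "admin"), ("admin", "1234")],
    "homesafe-doorbell": [("user", "user")],
}

def audit_device(brand, username, password):
    """Return True if the device still uses a known factory-default credential."""
    return (username, password) in FACTORY_DEFAULTS.get(brand, [])

# Two hypothetical devices on a home network:
print(audit_device("acme-cam", "admin", "1234"))    # True: still on defaults
print(audit_device("acme-cam", "admin", "x7!Qp2"))  # False: password changed
```

Any device that fails such a check should have its password changed immediately, since — as the BGU researchers showed — the same defaults are often shared across brands and trivially found online.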
BGU’s results echo those of other researchers, such as BullGuard’s Tossi Atias, who last year demonstrated how he could hack into an ostensibly secure home by easily compromising IoT devices. IoT hacking, which also can be used in distributed denial-of-service attacks, has been on the rise for several years.
“Symantec established an IoT honeypot in late 2015 to track attack attempts against IoT devices,” the company said in its 2017 Internet Security Threat Report. “Data gathered from this honeypot shows how IoT attacks are gathering steam and how IoT devices are firmly in the sights of attackers.” Attacks on the honeypot almost doubled from January to December 2016, with the hourly average of unique IP addresses hitting the honeypot going from almost 4.6 in January to just over 8.8 in December. During peak activity of the Mirai botnet, attacks on the honeypot were taking place every two minutes.
Home vulnerabilities are scary enough, with Gartner predicting 21 billion IoT devices in use by 2020, but DoD also has reason to be concerned–not least of all because it makes use of commercial products. The recent revelation that fitness tracking devices could be used to display the locations of military and national security personnel is just one example. A Government Accountability Office (GAO) report last year detailed the risks of connected devices, and said DoD’s policies on managing the IoT weren’t enough to handle the dangers.
Many IoT devices such as cameras, wearable monitors, and smart televisions typically have only minimal encryption and a limited capacity to handle upgrades or patches, which can affect their security, GAO said. That limited security makes them vulnerable to both hacking and insider misuse. Meanwhile, responsibility for IoT security is dispersed among the DoD CIO; the assistant secretary of Defense for Energy, Installations and Environment; the undersecretary of Defense for Intelligence; and the Defense Information Systems Agency (DISA), to name a few offices.
GAO found that DoD’s policies also don’t cover data sharing via third-party apps that are added to devices, and needs to expand its cybersecurity best practice policies specifically to IoT devices.
The department hasn’t ignored IoT security, the report said, noting that DoD has already identified numerous IoT risks, and conducted some assessments with regard to infrastructure and intelligence assessments. But a more comprehensive, adaptable set of policies is needed.
DISA’s Security Technical Implementation Guides (STIGs) also enforce security by setting configuration standards for network and wireless devices, along with operating systems (including mobile operating systems), database apps, open-source software, and virtual software.
The National Institute of Standards and Technology (NIST) just released an interagency report recommending international standards for IoT devices, from consumer devices and health care processes to energy management and connected vehicles.
One thing to keep in mind is that securing the IoT, considering the ubiquity and variety of its components, isn’t quite like securing other systems. “Through analysis of the application areas, cybersecurity for IoT is unique, and will require tailoring of existing standards, as well as, creation of new standards to address pop-up network connections, shared system components, the ability to change physical aspects of the environment, and related connections to safety,” the report said. | <urn:uuid:aefeda98-0855-4ec2-bc73-9d32dbdd6980> | CC-MAIN-2022-40 | https://origin.meritalk.com/articles/weak-iot-defenses-fueling-dod-security-challenges/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00662.warc.gz | en | 0.959371 | 1,093 | 2.59375 | 3 |
A service level agreement (SLA) is a contract between a business and its customer outlining the details that the two parties have agreed to in a transaction. The types of SLAs that an organization can use depend on many significant aspects. While some are targeted at individual customer groups, others address issues relevant to entire companies, because the needs of one user differ from those of another. Below is a list of the types of SLAs used by businesses today, and how each one is utilized for specific situations:
Customer-based SLA: This type of agreement is used for individual customers and comprises all relevant services that a client may need, while leveraging only one contract. It contains details regarding the type and quality of service that has been agreed upon. For example, a telecommunication service includes voice calls, messaging, and internet services, but all of them exist under a single contract.
Service-based SLA: This SLA is a contract that covers one identical type of service for all of its customers. Because the service is limited to one unchanging standard, it is more straightforward and convenient for vendors. For example, using a service-based agreement for an IT helpdesk would mean that the same service is valid for all end-users that sign the service-based SLA.
Multi-level SLA: This agreement is customized according to the needs of the end-user company. It allows the user to integrate several conditions into the same system to create a more suitable service. It addresses contracts at the following levels:
Corporate level: This SLA does not require frequent updates since its issues are typically unchanging. It includes a comprehensive discussion of all the relevant aspects of the agreement, and is applicable to all customers in the end-user organization.
Customer level: This contract discusses all service issues that are associated with a specific group of customers. However, it does not take into consideration the type of user services.
An example of this is when an organization requests that the security level in one of its departments is strengthened. In this situation, the entire company is secured by one security agency but requires that one of its customers in the company is more secure for certain reasons.
Service level: In this agreement, all aspects attributed to a particular service with regard to a customer group are included.
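To make the multi-level structure concrete, the sketch below models the three levels as plain data. The class names, fields, and sample values are illustrative only, not drawn from any SLA standard:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceLevel:
    service: str
    target: str  # e.g. "99.9% uptime"

@dataclass
class CustomerLevel:
    customer_group: str
    terms: list = field(default_factory=list)

@dataclass
class MultiLevelSLA:
    corporate_terms: list            # apply to every customer in the organization
    customer_levels: list            # per customer group
    service_levels: list             # per service, per group

sla = MultiLevelSLA(
    corporate_terms=["data protection policy", "escalation procedure"],
    customer_levels=[CustomerLevel("finance dept", ["hardened security"])],
    service_levels=[ServiceLevel("helpdesk", "response within 4 hours")],
)
print(sla.customer_levels[0].customer_group)  # → finance dept
```

The nesting mirrors the description above: corporate terms sit at the top and rarely change, while customer- and service-level entries vary by group and service.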
“Only 50% of oil and gas companies have a robust information security strategy in place.”
In my previous blog, I wrote about how transition to the Digital Oilfield is exposing companies to the potentially serious risk of cyber attacks - putting production, reputation, and ultimately profits at risk. There's no doubt that as attacks become more insidious, the potential consequences proliferate, with the cost of future breaches impacting infrastructure, safety, intellectual property, lost revenues, and even the broader economy.
But of course, it's not just the energy industry that's in the cross hairs of hackers and cyber criminals. The pervasive threat of cyber attacks has been brought into sharp focus in recent months by the heavily publicised Sony Pictures data breach – although, interestingly, Sony Pictures ranks only 33rd among the largest breaches of 2014. The largest? eBay, with over 150 million records compromised.
THE ENERGY SECTOR'S VULNERABILITY TO CYBERCRIME
Security threats are expected to grow even further in the future. In the past four years alone, the financial impact of cybercrime has increased by nearly 78% and the time it takes to resolve a cyber attack has more than doubled [3]. Across all industries and geographies, it’s been estimated that cybercrime costs some $400 billion in lost time and assets [4].
According to Ponemon, companies in energy and utilities recorded average annual costs due to cybercrimes of $19.78 million, second only to firms in the defence industry. An ABI Research study predicted that globally, cyber attacks against oil and gas infrastructure will cost companies $1.87 billion by 2018.
The energy sector’s diverse and interconnected systems are also increasing vulnerability to cybercrime, with newer technologies such as those controlling drilling rigs and cloud-based services being subject to probes or attacks. So too are once-isolated plant control systems that are now integrated with corporate networks or vendors. Even private smartphones and devices used by company employees potentially open up a business’s network to an increasing number of threats and malicious behavior. Such threats can target data at rest on the device and can be easily introduced through online web surfing (96% of all mobile devices don’t have encryption protection [5]).
In short, wherever there’s digitally enabled technology or an intelligent device – even a simple device that controls a valve on the pipeline – there’s a risk of it being used as a portal and taken over without authorisation [6]. Cyber criminals are targeting the entire spectrum of potentially valuable data: data at rest, data in transit, and data in use.
Whatever the access point or motivation, high downtime costs and attack frequency rates necessitate strong cyber security protocols. When you consider that 96% of successful breaches could be avoided if organisations put simple or intermediate controls in place [7], it really is time for the industry to collectively take action.
A COMMON FRAMEWORK TO IMPROVE CYBER SECURITY IN THE ENERGY SECTOR
In February 2013, a US Executive Order directed the creation of the NIST Framework for Improving Critical Infrastructure Cybersecurity, in response to the growing security, economic, public safety and health risks posed by cyber security threats.
Our Oil & Gas industry report features a best practice cyber security strategy that’s consistent with the NIST Framework and based on global security standards. The strategy is built around the following 4 process pillars which, when executed concurrently and continuously, serve to mitigate the risk of cyber attack.
There’s also a detailed checklist within the report to help you assess your company’s current cyber security posture, define your company’s target state, identify and prioritise opportunities for improvement, assess progress, and communicate the risk to stakeholders.
1. Know your critical assets – Identify your organisation’s business objectives and high-value assets, then conduct risk assessments to find any vulnerabilities.
2. Protect your IT, radio network and OT environments – Establish defences to block intruders before they reach your critical business assets, and educate your employees to recognise and avoid phishing attacks.
3. Detect potential threats before they occur – Use the right tools to gain a comprehensive view of your security environment and monitor potential threats both externally and internally.
4. Respond and recover – With the speed and intelligence of many of today’s cyber attacks, cyber breaches may still occur, even in the most secure infrastructure. Having a contingency plan in place can help you respond immediately if a breach should occur.
If, after referring to the checklist in the report, you find your operations are vulnerable to attack – then we can provide a full onsite cyber assessment service. Details of how to arrange your assessment can be found towards the back of the report.
If you’d like to join the conversation about protecting oil and gas operations from cyber threats, we’d be delighted to welcome you to the Motorola Solutions Community EMEA LinkedIn Group.
Tunde is on LinkedIn at https://www.linkedin.com/pub/olatunde-williams/5/282/67a
Tunde Williams’ previous oil and gas industry blogs include: Protection against cyber attacks in the Digital Oilfield and Improving worker safety in the Digital Oilfield
[1] http://www.idigitaltimes.com/10-largest-data-breaches-2014-sony-hack-not-one-them-403219
Deep Learning in Image Recognition Opens Up New Business Avenues
Present-day image recognition is comparable to human visual perception. It has entered daily life and serves different needs. Facebook and other social media platforms use this technology to enhance image search and aid visually impaired users. Retail businesses employ image recognition to scan massive databases to better meet customer needs and improve both in-store and online customer experience. In healthcare, medical image recognition and processing systems help professionals predict health risks, detect diseases earlier, and offer more patient-centered services. This list can go on and on.
Marketing insights suggest that from 2016 to 2021, the image recognition market is estimated to grow from $15.9 billion to $38.9 billion. It is the enhanced capabilities of artificial intelligence (AI) that drive this growth and make previously unseen options possible.
Expert Systems, AI, ML & DL Explained
At the dawn of AI, smart systems required a lot of manual input. To train machines to recognize images, human experts and knowledge engineers had to provide instructions to computers manually to get some output. For instance, they had to tell what objects or features on an image to look for. Such a method, somewhat outdated, is called Expert Systems. It was initially used for chess computers and AI in computer games.
With the advent of machine learning (ML) technology, some tedious, repetitive tasks have been driven out of the development process. ML allows machines to automatically collect necessary information based on a handful of input parameters. So, the task of ML engineers is to create an appropriate ML model with predictive power, combine this model with clear rules, and test the system to verify the quality.
It should be noted that machines can’t see and perceive images as we do. For them, it’s all about math, and any object will look like this: a grid of numeric pixel values.
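A toy example makes this concrete: the nested list below (values invented) is all a machine ever "sees" of a tiny grayscale image.

```python
# A tiny 3x3 grayscale "image": each number is a pixel brightness (0 = black, 255 = white).
image = [
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 128],
]

# What a human perceives as shapes and shades, the machine sees only as this
# grid, so "recognition" means finding patterns in the numbers.
pixels = [value for row in image for value in row]
mean_brightness = sum(pixels) / len(pixels)
print(mean_brightness)
```

Every higher-level operation, from edge detection to deep learning, is ultimately arithmetic over grids like this one.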
Before getting down to model training, engineers have to process raw data and extract significant and valuable features. This time-consuming and complicated task is called feature engineering. It requires engineers to have expertise in different domains to extract the most useful features. So, if a solution is intended for the finance sector, they will need to have at least a basic knowledge of the processes.
Working Principles of Image Recognition Models
Image recognition falls into the group of computer vision tasks that also include visual search, object detection, semantic segmentation, and more. The essence of image recognition is providing an algorithm that can take a raw input image, recognize what is in it, and assign labels or classes to it.
Based on provided data, the model automatically finds patterns, takes classes from a predefined list, and tags each image with one, several, or no label. So, the major steps in AI image recognition are gathering and organizing data, building a predictive model, and using it to provide accurate output.
For model training, it is crucial to gather and organize data properly. The quality of data is critical to enable the model to find patterns. Datasets have to consist of hundreds to thousands of examples and be labeled correctly. Then it will become possible to define discrete labels. In case there is enough historical data for a project, this data will be labeled naturally. Also, to make an AI image recognition project a success, the data should have predictive power. Expert data scientists are always ready to provide all the necessary assistance at the stage of data preparation.
The labeling will be used to enable the model to predict what object is on the image and what is the level of probability that the prediction is correct. If visualized, the process of image recognition looks like this:
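In code, the same gather, build, and predict steps can be sketched with a deliberately simple model. Nothing here reflects any vendor's actual pipeline: the feature values and class names are invented, and a real system would use a trained neural network rather than nearest-centroid matching.

```python
import math

# Step 1: gather and organize labeled data (here: tiny made-up feature vectors
# extracted from images, e.g. [mean brightness, fraction of edge pixels]).
training_data = [
    ([0.9, 0.1], "sky"),
    ([0.8, 0.2], "sky"),
    ([0.3, 0.7], "forest"),
    ([0.2, 0.8], "forest"),
]

# Step 2: build a predictive model -- the simplest possible one: remember
# the average (centroid) feature vector of each class.
centroids = {}
for label in {lbl for _, lbl in training_data}:
    vectors = [vec for vec, lbl in training_data if lbl == label]
    centroids[label] = [sum(dim) / len(vectors) for dim in zip(*vectors)]

# Step 3: use the model -- tag a new image with the closest class and report
# a crude confidence score based on relative distance.
def predict(features):
    distances = {
        label: math.dist(features, centroid) for label, centroid in centroids.items()
    }
    best = min(distances, key=distances.get)
    confidence = 1 - distances[best] / sum(distances.values())
    return best, confidence

label, confidence = predict([0.85, 0.15])
print(label, round(confidence, 2))
```

The structure, not the model, is the point: every image recognition project repeats these three stages, only with far larger datasets and far richer models.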
However, AI can now automate feature engineering as well. Deep learning (DL) technology, a subset of ML, enables automated feature engineering for AI image recognition. A must-have for training a DL model is a very large training dataset (from roughly 1,000 examples upward) so that machines have enough data to learn from.
The work of DL algorithms is based on a “black box” principle: although their inner workings are difficult to explain, DL models allow far more efficient processing of massive amounts of data. That is why these models are actively used in computer vision.
Predictive modeling is based on using artificial neural networks. A neural network consists of numerous interconnected nodes or neurons. Each node is responsible for a particular knowledge area and works based on programmed rules. There is a wide range of neural networks and deep learning algorithms to be used for image recognition.
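A minimal sketch of that idea: each node computes a weighted sum of its inputs and squashes it through an activation function. The weights below are arbitrary placeholders; in a real network they would be learned from labeled data.

```python
import math

def sigmoid(x):
    # Classic activation: squashes any real number into the range (0, 1).
    return 1 / (1 + math.exp(-x))

# One "neuron": a weighted sum of its inputs passed through the activation.
def neuron(inputs, weights, bias):
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

# A minimal two-layer network: 3 inputs -> 2 hidden neurons -> 1 output score.
def forward(pixels):
    hidden = [
        neuron(pixels, [0.5, -0.2, 0.1], 0.0),
        neuron(pixels, [-0.3, 0.8, 0.4], 0.1),
    ]
    return neuron(hidden, [1.2, -0.7], -0.2)

score = forward([0.2, 0.9, 0.4])
print(round(score, 3))
```

Production networks for image recognition differ only in scale: millions of learned weights, convolutional layers tailored to pixel grids, and many more layers stacked in sequence.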
High-quality data directly impacts the accuracy of the results. Any ML project begins with gathering appropriate input data. Even the most advanced algorithms are powerless when datasets are poor. Data collection requires the expert assistance of data scientists and can turn out to be the most time- and money-consuming stage. But valuable data is the keystone of project success.
AI Image Recognition in Real Business Use Cases
Image recognition technology is only as good as the image analysis software that provides the results. Sometimes the quality you are after can be compromised. InData Labs offers proven solutions to help you hit your business targets.
Facial Recognition for Influencer Marketing
This application of image recognition is very popular across social media. For example, the technology can be used to power a recommendation engine and a platform for searching influencers and influential accounts that can contribute to product promotion campaigns. By using the filters and categories provided on the platform, users can find relevant influencers and analyze them and their audiences in a matter of seconds. A facial recognition model will enable recognition by age, gender, and ethnicity. Based on the number of characteristics assigned to an object (at the stage of labeling data), the system will come up with a list of the most relevant accounts.
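A sketch of the kind of query such a platform runs once the facial-recognition model has tagged each account; all names and numbers below are invented.

```python
# Hypothetical influencer records enriched by a facial-recognition model
# (attributes like age group and gender come from the model's labels).
accounts = [
    {"name": "alice", "age_group": "25-34", "gender": "f", "followers": 120_000},
    {"name": "bob",   "age_group": "18-24", "gender": "m", "followers": 45_000},
    {"name": "carol", "age_group": "25-34", "gender": "f", "followers": 300_000},
]

# Filter by the recognized attributes, then rank by reach -- the kind of
# search an influencer platform answers in seconds.
matches = sorted(
    (a for a in accounts if a["age_group"] == "25-34" and a["gender"] == "f"),
    key=lambda a: a["followers"],
    reverse=True,
)
print([a["name"] for a in matches])
```

The heavy lifting is in producing the labels; once accounts are tagged, the search itself is an ordinary filter-and-sort.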
AI Stamp Recognition in Logistics
The processing of scanned and digital documents is one of the key areas in which to apply AI image recognition. Stamp recognition can help verify the origin of a document and check its authenticity. The main obstacle here is the quality of the input data: a document can be crumpled, or contain signatures and other marks atop a stamp. In such cases, the input image will be degraded.
For document processing tasks, image recognition needs to be combined with object detection. The model detects the position of a stamp and then categorizes the image. And the training process requires fairly large datasets labeled accurately. Stamp recognition is usually based on shape and color as these parameters are often critical to differentiate between a real and fake stamp.
Google Vision to Handle Archived Photos
As a part of Google Cloud Platform, the Cloud Vision API provides developers with a REST API for creating machine learning models. It helps swiftly classify images into numerous categories and facilitates object detection and text recognition within images.
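As a rough illustration of that REST interface, the sketch below builds (but does not send) a request body for the v1 `images:annotate` endpoint. The payload shape follows Google's public documentation; the image bytes are fake, and real calls additionally require authentication.

```python
import base64
import json

# Build (but don't send) a request body for the Cloud Vision REST endpoint:
#   POST https://vision.googleapis.com/v1/images:annotate
# The bytes below stand in for the contents of a real image file.
fake_image_bytes = b"\x89PNG...pretend image data..."

body = {
    "requests": [
        {
            # Image content is sent base64-encoded inside the JSON body.
            "image": {"content": base64.b64encode(fake_image_bytes).decode("ascii")},
            # Each feature asks the service for one kind of annotation.
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 10},
                {"type": "TEXT_DETECTION"},
            ],
        }
    ]
}

payload = json.dumps(body)
print(payload[:60], "...")
```

The response mirrors this structure, returning one annotation set per request entry, which is what makes batch classification of large archives straightforward.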
An ML-based image recognition solution helped The New York Times digitize a large collection of photos accumulated over the decades. State-of-the-art technologies finally made it possible to digitize old images and allow users to easily browse through the photo database to find untold stories in millions of archived photos. Many beautiful black and white pictures contained useful text and captions on the back, like this one:
The model’s output consisted of recognized and digitized images along with digital text transcriptions. Although this output wasn’t perfect and required human review, the task of digitizing the whole archive would have been impossible otherwise.
Apart from some common uses of image recognition, like facial recognition, there are much more applications of the technology. Different business areas and standards raise new challenges. And your business needs may require a unique approach or custom image analysis solution to start harnessing the power of AI today.
Start Your Next Breakthrough Project with InData Labs.
Have a project in mind but need some help implementing it? Drop us a line at firstname.lastname@example.org, we’d love to discuss how we can work with you. | <urn:uuid:49ba6f2f-4881-4825-a9f4-fa41f92308b2> | CC-MAIN-2022-40 | https://indatalabs.com/blog/ai-image-recognition | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00662.warc.gz | en | 0.917875 | 1,608 | 2.875 | 3 |
February 8, 2022
Over the last few decades, additive manufacturing (AM) / 3D printing has fundamentally changed the way that manufacturers approach product development. Industry is now almost universally aware of the term rapid prototyping, using AM to convert 3D CAD data into physical models in a matter of hours. The role of AM in prototyping has become embedded across all industrial sectors.
AM has enabled concurrent engineering, where all relevant departments can be engaged early in the product development process. Concurrent engineering replaces traditional “over-the-wall” product development, where design iterations could be delayed by weeks to accommodate tooling and machining considerations. The benefits are dramatic time-to-market reductions and cost savings in product development.
AM is a uniquely disruptive technology. 25-30 years ago, it changed the manufacturing paradigm by altering the way that manufacturers produced prototypes. Today, it is disrupting the way that manufacturers produce end-use parts and components and is increasingly seen as a truly viable production technique. Now the conversation among manufacturers is around the most judicious use of AM for production: its advantages, where its sweet spot lies in terms of production volumes, key opportunities, and barriers to entry. Many of these barriers relate to precision quality control of AM parts, which challenges traditional methods of surface metrology.
With the focus today being on the use of AM for production, the analysis of the accuracy and repeatable tolerance attainment of AM has become a far more critical issue. When used as a prototyping technology, absolute adherence to tolerances and precise design intent is not always necessary, and a “good enough” approach can be taken. Hence the proliferation of quite inexpensive desktop 3D printing machines that provide sufficiently accurate rapid prototypes that do the job without needing to be pitch perfect.
For production applications, however, “good enough” is no longer sufficient. If an AM part is integral to a safety critical aerospace or medical application, it is essential to achieve dimensional and material tolerance targets consistent with design intent. It is here that the role of metrology to validate the quality of finished parts is so important. It is also an area where providers like ZYGO of 3D optical metrology solutions can make a difference.
Legacy manufacturing processes for metals and plastics have established quality control methods for validating and measuring parts. The production processes are understood, as are the most critical dimensional and surface finish requirements. AM, however, does exactly what the name implies — it produces parts layer by layer “additively”, and this opens up an array of unique issues that can affect the integrity of a finished product, and also a unique set of surface characteristics that make the job of measuring and validating that much more difficult.
How the sector is responding to the metrology and validation conundrum was highly visible at the recent (and largest) AM-related event on the calendar, Formnext in Frankfurt, Germany. At various learning events on-site, metrology issues featured prominently, acknowledging the fact that measurement and validation of AM parts is a big deal today. In addition, AM technology providers are now developing in-process metrology (IPM) solutions to overcome the specialized challenges of verifying the integrity of AM processes.
AM technologies and metrology techniques have also captured the attention of professional societies that organize conferences and symposia worldwide. These include the American Society of Precision Engineering (ASPE), the International Society of Optics and Photonics (SPIE), and the International Academy for Production Engineering (CIRP). ZYGO participates in these events actively as an industry supporter, exhibitor, and presenter of scientific and engineering papers on the latest developments.
RESEARCH IN AM METROLOGY
In the search for relevant metrology critical to process control, industry is still trying to understand what to look for on and under the surface of an AM produced part, and how these relate to part functionality. Surfaces of AM parts challenge existing surface topography measurement and defy characterization using standardized texture parameters because of high surface slopes, voids, weld marks, and undercut features.
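For readers unfamiliar with standardized texture parameters, the simplest areal parameter, Sa (arithmetic mean height, defined in ISO 25178), is just the mean absolute deviation of measured heights from the mean plane. The sketch below computes it over an invented height map; real AM surfaces, with their undercuts, voids and weld marks, are precisely the cases where a single summary number like this falls short.

```python
# Areal arithmetic mean height Sa (ISO 25178): the mean absolute deviation of
# surface heights from the mean plane. Height values below are invented, in um.
height_map = [
    [0.10, 0.30, 0.20],
    [0.60, 0.50, 0.40],
    [0.20, 0.10, 0.30],
]

heights = [z for row in height_map for z in row]
mean_plane = sum(heights) / len(heights)
sa = sum(abs(z - mean_plane) for z in heights) / len(heights)
print(round(sa, 4))
```

Instruments report Sa over millions of measured points rather than nine, but the definition is identical; the open research question is which parameters, if any, correlate with the functional performance of an AM part.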
Research into new and improved metrology for AM is advancing through a wide range of industry and academic partnerships, many in cooperation with ZYGO. An example is work at the University of Nottingham, where the Manufacturing Metrology Team (MMT) led by Prof. Richard Leach is investigating the full range of solutions, from high-precision interference microscopy to X-ray tomography of the internal structure of completed parts.
In just the past four years, the MMT has published 43 research papers on AM, ranging from methods to optimize measurements on specific instruments to new feature-based analysis and machine learning to interpret results. Of particular interest is IPM for evaluating the quality during manufacture, following each additive line and layer in real time. This information can be used to inform control strategies and later in-process metrology developments. An important part of IPM development is correlating to reference metrology, including benchtop surface metrology instruments.
Another example of leading-edge research is at the University of North Carolina at Charlotte, where Prof. Christopher Evans and co-workers have been using interferometry and electron microscopy to study AM materials in collaboration with the US National Institute of Standards and Technology (NIST) and Carl Zeiss GmbH at Oak Ridge National Laboratory (ORNL). These researchers have been studying Inconel 625 — a high-temperature Ni superalloy for AM that exhibits an intriguing variety of surface signatures. These surfaces have areas rich in oxide films that are visible in true-colour, 3D surface topography maps obtained with ZYGO’s interference microscopes. These instruments also serve as excellent workhorses for examining large areas with high detail, such as distorted weld pools, by assembling or ‘stitching’ together multiple high-lateral resolution images, each with millions of data points.
While the challenges of quality control of AM parts are a great concern for those who make these parts, these same challenges present an attractive opportunity for new solutions and spinoff businesses. Founded in 2018 in the UK, Taraz Metrology is an example of a spinoff enterprise which combines university research, practical engineering, and commercial experience into a unique product development capability customized to the needs of AM. Taraz currently offers freestanding final inspection solutions for all types of AM parts and leverages proprietary software for advanced fringe projection and photogrammetry of topography.
STANDARDIZATION AND TRACEABILITY
The ability of AM to produce geometrically complex parts, its role as an enabler of mass customization, and the potential time and cost savings associate with its use are all important for the future of industry. However, when compared to more familiar and established manufacturing methods, AM technology is dynamic and rapidly evolving, and technology innovators are working to overcome the barriers to adoption of AM for production applications, including those related to quality control standards.
ZYGO is actively researching calibration, traceability, characterization, and verification for surface topography measurements, with 16 papers published in the last 5 years alone on these topics, and a further seven specifically focused on physical modelling of optical measurements of surface structures — including complex, steeply-sloped surfaces characteristic of AM parts — and five more papers on the measurement of AM parts per se.
ZYGO is also a partner in the €2.2M EMPIR 20IND07 TracOptic project, with the title “Traceable industrial 3D roughness and dimensional measurement using optical 3D microscopy and optical distance sensors” — of obvious value to the AM sector.
National and international standards are critical both to Industry adoption and to assuring quality control across multiple, developing manufacturing technologies. ZYGO is an active member of ISO TC213 WG16 for the development of the ISO 25178 surface texture standards, working in collaboration with international experts on the ISO 25178-603 and 25178-604 standards for interference microscopy, and the 25178-700 standard for instrument calibration and traceability. ZYGO is also a member of the ASME B46.1 working group on surface texture analysis, which currently includes a task team concentrating on AM metrology standards.
Measurements of AM parts post-process serve to validate conformance with design intent, and to provide clues into fabrication problems left by surface signatures. However, the uniqueness of AM processes and produced parts lead manufacturers to use an array of different mechanical and metrology verification techniques. They adopt an empirical approach as no one solution is trusted to provide accurate enough data. Gage R&R is used as a stand-in for a more rigorous measurement uncertainty approach. As a consequence, AM parts are often “over-tested” to improve confidence, but this means extra time and extra cost, areas that must be addressed to make AM for production more viable.
The open question is how to improve this situation for greater efficiency while maintaining confidence. The answer is for metrology solutions providers to adapt existing metrology technologies to better align them with the unique characteristics of the AM process and end-use AM parts, which are characterized by irregular, steeply sloped surface topography that many measurement technologies fail to capture.
Through extensive research and development of the foundational coherence scanning interferometry (CSI) technology in the ZYGO 3D optical profilers, high-accuracy AM metrology tools are now available to industry. These instruments use innovative hardware and software upgrades, a package of improvements referred to internally at ZYGO as “More Data Technology,” which makes them much better suited to AM parts.
“More Data” significantly improves the baseline sensitivity of CSI and enables high-dynamic range (HDR) operation making it valuable for a wide range of parts, from steeply sloped smooth parts to exceptionally rough textures with poor reflectivity. Additionally, HDR measures parts with a wide range of reflectance, often a struggle for other instruments that use interferometry as a measurement principle. ZYGO was the first to demonstrate full-colour surface topography measurement of metal additive manufactured surfaces using interferometry, and ZYGO engineers actively use AM internally for instrument prototyping and applications development.
With AM now an established production technology for certain applications, there are barriers to mass adoption that are being addressed, including the need for in-process and post-process metrology technologies that can validate the quality and accuracy of the parts produced. AM parts have a unique set of characteristics that render traditional measuring technologies impotent in some situations, and today innovative metrology technologies are being developed that can provide meaningful measurement data efficiently and cost-effectively. Only when such issues are addressed will the use of AM become mainstream as a viable production technology across an array of industry sectors and applications.
Many thanks to Chris Young, MicroPR&M, for great discussions and contributions to this article.
About Peter de Groot
Peter de Groot, PhD, is Chief Scientist at Zygo Corporation, which is owned by AMETEK, Inc., a leading global manufacturer of electronic instruments and electromechanical devices with annual sales of approximately $5 billion. ZYGO designs and manufactures optical metrology instruments, high-precision optical components, and complex electro-optical systems, and Its products employ various optical phase and analysis techniques for measuring displacement, surface shape and texture, and film thickness. Electro-Optics and Optical Components businesses leverage ZYGO’s expertise in optical design and assembly, and high-volume manufacturing of precision optical components and systems, for the medical/life sciences, defense and industrial markets. | <urn:uuid:7f269475-f6e3-44f0-bf43-01e5eba72828> | CC-MAIN-2022-40 | https://internetofbusiness.com/additive-manufacturing-new-frontiers-for-production-validation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00662.warc.gz | en | 0.932229 | 2,418 | 2.921875 | 3 |
A picture may be worth a thousand words, but a mind map may be worth many more. That’s because a mind map combines both the power of a picture with the suggestion of words.
You may have seen mind maps, but you may not have known the name for them. Sometimes they look like clusters of bubbles; other times, like elaborate tree structures.
Because mind mapping has been connected with business brainstorming, much of the early software for creating the diagrams carried a relatively heavy price tag. However, with the arrival of Web-based applications and open source software, it’s now easy to take the practice for a spin without spending a cent.
A recent free entry into this realm is FreeMind. It’s an intuitive application that allows you to build mind maps with a minimum of fuss.
When you start a new map, an ellipse appears in the middle of your screen for your root idea. Clicking inside the ellipse opens an editing field for typing text.
You can quickly add ideas to the map by right-clicking a node. When you do that, a pop-up menu appears. With it, you can edit a node, add children to it or its parent, or move it up or down in your map.
From the menu, you can also spruce up your maps by adding icons to your ideas, formatting your nodes, inserting links and drawing “clouds” around related clusters of ideas.
If you’re a true keyboard jock, just about anything you can do with the software’s menus and toolbars can be accomplished with combinations of keystrokes.
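Under the hood, FreeMind saves maps as plain XML (.mm files), which makes them easy to generate or post-process with scripts. The sketch below builds a small map with Python's standard library; the element and attribute names follow the FreeMind format, though the exact version string is an assumption.

```python
import xml.etree.ElementTree as ET

# A FreeMind map file (.mm) is a <map> root with nested <node> elements
# whose TEXT attributes hold the ideas.
root = ET.Element("map", version="1.0.1")
center = ET.SubElement(root, "node", TEXT="Root idea")
child = ET.SubElement(center, "node", TEXT="First child")
ET.SubElement(child, "node", TEXT="Grandchild")
ET.SubElement(center, "node", TEXT="Second child")

mm = ET.tostring(root, encoding="unicode")
print(mm)
```

Writing the string to a file with a `.mm` extension should yield something FreeMind can open, and the same nesting maps directly onto the parent/child operations described above.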
Online programs have two advantages over their desk-bound peers: They can be accessed from any device with an Internet connection and because of that, it’s easier to collaborate on projects with them.
Both Mindomo and Bubbl.us have more graphic flair than FreeMind. The Web-based programs make good use of fonts and 3-D rendering of objects.
Mindomo Over Matter
For users of Microsoft Office 2007, the Mindomo interface should look familiar. It uses the tab and “ribbon” technique deployed in that program suite.
Under each tab, there’s a ribbon which contains palettes with clusters of tools. For example, under the Home tab, there’s a Map palette that contains tools for creating a new map, opening an old one, saving an existing one or printing it.
To add ideas to a central topic, you click on the topic and hit Enter. You can add nodes to any idea on the map by pressing your tab or insert keys.
Many functions that can be performed from the ribbon bar can be executed with keyboard commands. If you select some text, for instance, you can bold it by clicking a ribbon tool or by pressing Ctrl-Alt-B.
A Bubbl.us Bath
I found Bubbl.us easier to use than Mindomo, which is still in beta.
Bubbl.us gives you more mouse options at every node. When you hover over the root bubble of a map, for example, several tools appear within it — tools for moving the bubble, changing its color, adding children and siblings to it, connecting it to other bubbles and deleting it.
With Mindomo, I found myself constantly wearing out a path from map to ribbon bar with my cursor. That wasn’t the case with Bubbl.us, where most of what I needed was under my cursor at all times.
Mind mapping can be a great way to generate ideas, manage tasks or bring right-brain thinking to your left-brain projects. What’s more, with excellent free programs like FreeMind, Mindomo and Bubbl.us, money need not be a barrier to discovering what mind mapping can do for you. | <urn:uuid:025007d1-fed7-4428-b3a3-6fdb28ab15b6> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/charting-a-course-for-your-brainstorm-with-mind-map-apps-57250.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00662.warc.gz | en | 0.898681 | 805 | 2.671875 | 3 |
In my recent blog series, I highlighted the importance of communication for strategic transformations. This affects several functions and various roles in your organization, each of which asks different questions.
Using a holistic framework to capture enterprise architectures, such as ArchiMate, enables you to answer these and many more business questions. But how do you separate the architectures into simple diagrams to make them easier to create and use?
A standard approach to describing architectures is ISO 42010, which includes the definition of viewpoints as specifications for constructing diagrams. These diagrams, in turn, serve different stakeholder roles and highlight their concerns. This proven approach is used in TOGAF, ArchiMate and other frameworks.
After defining your stakeholders and their concerns, you probably end up with a list of business questions you want to answer. The challenge is knowing how to cut the information you gather into easy-to-read diagrams. To achieve this, you need to reduce the complexity of the diagrams, which pays off in two ways:
1. Reducing complexity when creating diagrams (by the expert)
2. Reducing complexity in consuming diagrams (by the relevant stakeholder)
There are different techniques to reduce the complexity, many of which are described in the book, “Enterprise Architecture at Work,” from Marc Lankhorst, et al. Key indicators that impact the visual complexity of a diagram are the number of different icons and relation types shown.
In this case, we use the ArchiMate framework to define viewpoints that describe what you are allowed to use to create a diagram. The main concepts in ArchiMate are (architecture) elements, such as capabilities, business processes or applications, as well as the different relation types. This leads to two approaches guiding the definition of viewpoints: selecting the relevant elements or selecting relation types.
Selecting the elements for a viewpoint may put you at a disadvantage, however. This is because you are often allowed to use five or more different relation types between two objects, which forces you to select the correct relations out of a provided list. The other approach to creating viewpoints is to group the relations and then select the elements. In ArchiMate, there is already a grouping of the relations into structural, dynamic, dependency, etc., which can be used to guide the creation of simpler diagrams with low complexity.
The examples below show how to create such diagrams, beginning with dividing the set of relations into the separate groups that the ArchiMate framework describes.
Starting with structural relations, I only use the composition relation to describe the hierarchy of elements and the realization relation to describe what the elements realize. You can use them to show what capabilities are composed of and to show which application(s) realize a capability. Diagrams with this information help answer the business questions mentioned above. To add to the description of key elements of a capability, I add business processes and applications to this viewpoint. That way you can get an overview of which key people, processes and technology you need to realize a specific capability.
Remember, viewpoints describe what you are allowed to use to create a diagram. The diagram above shows an example of a viewpoint that allows only the usage of the ArchiMate concepts of capability, business actor, business process, application component, composition and realization relation. With this viewpoint, we limit available concepts for this type of diagram and the possible relations between two elements significantly. For relating capabilities, we are only allowed to use one relation instead of seven. To relate people, process or technology to capabilities, we are only allowed to use one (instead of two) possible relation types (see picture below).
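As a loose sketch of the idea, a viewpoint can be thought of as a whitelist of element and relation types that a diagram is allowed to use. The following Python sketch illustrates this; all names are illustrative and not part of any ArchiMate tool. It validates a diagram against the capability viewpoint described above:

```python
# Toy sketch of a viewpoint as a whitelist of element and relation types.
# The capability viewpoint above: four element types, two relation types.
CAPABILITY_VIEWPOINT = {
    "elements": {"capability", "business_actor",
                 "business_process", "application_component"},
    "relations": {"composition", "realization"},
}

def validate_diagram(diagram, viewpoint):
    """Return a list of violations: anything the viewpoint does not allow."""
    violations = []
    for element in diagram["elements"]:
        if element["type"] not in viewpoint["elements"]:
            violations.append(f"element type not allowed: {element['type']}")
    for relation in diagram["relations"]:
        if relation["type"] not in viewpoint["relations"]:
            violations.append(f"relation type not allowed: {relation['type']}")
    return violations

# A diagram that uses a 'serving' relation, which this viewpoint excludes.
diagram = {
    "elements": [
        {"id": "c1", "type": "capability"},
        {"id": "a1", "type": "application_component"},
    ],
    "relations": [
        {"from": "a1", "to": "c1", "type": "realization"},
        {"from": "a1", "to": "c1", "type": "serving"},
    ],
}

print(validate_diagram(diagram, CAPABILITY_VIEWPOINT))
# -> ['relation type not allowed: serving']
```

Restricting the allowed sets in this way is exactly what makes the resulting diagrams simpler: both the modeller and the reader face far fewer possible combinations.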
Using the same pattern in different layers
You can use the above pattern to create more structural diagrams by using these relation types to compose elements or to show what elements realize. I will show that in a future blog, along with other structural viewpoints, to demonstrate the application of the same pattern along different architecture layers.
Stay tuned for more standard viewpoints that will help you create simpler ArchiMate diagrams for your change initiatives!
Update: The functionality to configure your own viewpoints is also part of our new version of Enterprise Studio. If you want to see this and other powerful functionalities of our solution, please contact us at https://www.bizzdesign.com/contact.
Pentesting, or penetration testing, is a cybersecurity measure that fights hackers by exposing an attacker’s possible entry points, allowing defenders to close them before attackers are able to find them.
People began to notice the possibility of system security threats in the 1960s, giving rise to the first pentesting work, done by teams of hackers dubbed “Tiger Teams”. Unlike today, when pentesting is widely used by all kinds of system owners, these teams were employed only on government- and military-owned systems. For instance, the Navy would employ them to find vulnerabilities that could be exploited in attacks on naval bases.
Starting in the 1990s, however, the work of these “Tiger Teams” began to be recognized as important to any system owner, and at this time the work became commonly termed “ethical hacking”. By the 2000s, penetration testing had become a widespread discipline, with frameworks such as the Penetration Testing Execution Standard (PTES) later emerging to formalize it.
Since the first commercial pentesting providers appeared in the early 2000s, hackers have evolved. With computers becoming ever more involved and important in our everyday lives, the damage a hacker can do has grown more and more severe. Our workforces and tools have become largely remote, increasing the opportunities and potential risks that hackers can tap into.
Hackers will swiftly grasp at these new avenues: cybercrime costs are estimated to grow 15 percent per year. That is why penetration testing is more essential today than ever.
RidgeBot and Pentesting Today
Today, pentesting is falling short of what the current cyber defense landscape demands. While it is one of the most widely used security practices, according to Seth Adler from the Cybersecurity Hub, “It is mostly performed manually by 3rd party service providers and has evolved very slowly in the past decade.”
Being both time- and talent-dependent, not to mention costly, pentesting is in desperate need of a new approach to keep up with today’s rapidly expanding cyberspace.
Ridge Security is tackling this challenge with RidgeBot, a fully automated penetration testing solution that employs AI to swiftly sweep systems. Automation removes both the time- and talent-dependent aspects of pentesting, thereby also cutting costs and narrowing the window of opportunity for hackers to compromise a system.
Learn more about how Ridge Security is advancing modern-day pentesting through RidgeBot, and how RidgeBot can swiftly and cost-effectively help you protect your systems today.
The business case for satellite life extension: running on empty
13 July 2016 | Strategy
Imagine if you had to scrap your car when it ran out of petrol. This, essentially, is the situation that satellite operators find themselves in. Geostationary satellites are usually designed for a working life of around 15 years. A lot of effort goes into the design and testing of their components to ensure high reliability. Consequently, most satellites are retired not because of component failure but because they run out of fuel for their thrusters. This article assesses the viability of using space tugs to extend the working life of satellites.
Space tugs offer a simpler alternative to re-fuelling satellites in orbit
Satellites have solar panels to generate electrical power for their communications payload and operations such as attitude control (that is, keeping their antennas pointing in the right direction). Nevertheless they require thrusters for station-keeping (that is, maintaining a fixed orbital position).1 Therefore, the idea of being able to re-fuel satellites in orbit is inherently attractive. Unfortunately, it is difficult to achieve because many different types of fuel valve (the equivalent of a car’s petrol tank cap) are available and existing valves were not designed to be robotically opened and resealed after many years in space.
An alternative approach is to use a space tug, a separate spacecraft that docks with an existing satellite and takes over station-keeping, and in some cases attitude control as well. The tug approach is technically much simpler because no fuel is transferred. In theory, using a tug is less energy-efficient than re-fuelling because a tanker can detach once re-fuelling is complete, but a tug must station-keep its own mass as well as that of the satellite for the duration of its mission. However, this issue can be mitigated by incorporating recently developed and more fuel-efficient electric thrusters into tug designs, while for now tankers need to carry the chemical fuel that most existing satellites use.2
Tug providers intend to charge according to the expected value of their services to individual satellite operators
Two companies, Orbital ATK and Effective Space Solutions, have announced the imminent launch of tug systems for satellite life extension. Orbital ATK’s system was originally developed by ViviSat, a joint venture (JV) between ATK Systems and U.S. Space. It is based on a medium-sized (2–3 ton) spacecraft bus using a combination of chemical and electric propulsion, and provides both station-keeping and attitude control for up to 15 years. Effective Space Solutions, a UK-based company, has developed a much smaller all-electric platform weighing around 350kg, allowing the spacecraft to be launched as a rideshare with other satellites. It is designed to provide station-keeping only (with attitude control provided by the satellite to which it is attached) and its expected service life is at least 7 years.
So what about the economics? The upfront cost of a large geostationary satellite including the satellite itself, launch and insurance is typically USD300–350 million. The weighted average cost of capital (WACC) for large satellite operators is around 7.0–9.0% per annum, suggesting that the value to an operator of deferring satellite replacement is USD20–32 million per annum. Ordinarily it would be difficult to find out how much tug providers actually charge, but Orbital ATK is in a legal dispute with U.S. Space and recently filed court papers that show the prices that the now-dissolved ViviSat JV had agreed with prospective customers (see Figure 1).3
Figure 1: Expected revenue from ViviSat’s satellite life-extension service [Source: U.S. Space papers filed with the Supreme Court of the State of New York, 2016]
| Customer | Lease term | Annual lease cost (USD million) | Revenue over lease term (USD million) | Implied average annual price per tug (USD million) |
|---|---|---|---|---|
| Intelsat | 5 years (2 tugs) | 24 | 120 | 12 |
| Asia Broadcast Satellite | 3 years | 13 | 39 | 13 |
These figures suggest that ViviSat was intending to price based on the expected value of its service to individual operators (that is, taking account of variations in their WACC) and was planning to split the economic benefits roughly 50:50 with its customers, although the margin on these early sales must have been slim once ViviSat’s costs were subtracted. Effective Space Solutions also intends to price according to expected value, while relying on its small spacecraft cost structure to improve the business case.
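The arithmetic behind the deferral value quoted earlier can be sanity-checked in a few lines. This is a simple illustration of the stated logic, not Analysys Mason's actual model:

```python
# Back-of-envelope check of the figures quoted above: the annual value of
# deferring satellite replacement is roughly the upfront cost times the
# operator's weighted average cost of capital (WACC).

def annual_deferral_value(upfront_cost_musd, wacc):
    """Value (USD million per year) of postponing a capital outlay by one year."""
    return upfront_cost_musd * wacc

low = annual_deferral_value(300, 0.07)   # cheaper satellite, lower WACC
high = annual_deferral_value(350, 0.09)  # pricier satellite, higher WACC
print(f"Deferral value: USD {low:.0f}-{high:.1f} million per year")
# -> Deferral value: USD 21-31.5 million per year
```

That range matches the USD20–32 million per annum quoted in the article, and a roughly 50:50 split of it is consistent with the USD12–13 million annual tug prices in Figure 1.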
Besides simply extending the life of a station-kept satellite, tugs can also be used to de-orbit satellites that have become stranded in geostationary orbit (freeing up space for new satellites) and bring satellites in inclined orbit back to station-kept orbit (significantly increasing their revenue-earning potential). Perhaps most interestingly of all, tugs can relocate satellites to new geostationary orbital positions. Relocation gives operators the chance to test the market in new geographies at relatively low cost before committing to the expense of a new satellite. At a time when the industry faces many uncertainties from terrestrial competition and the development of high-throughput and low Earth orbit satellites, this flexibility could be a deciding factor in persuading operators to hand over the keys to tug providers.
Analysys Mason works extensively with satellite operators and service providers and their investors on market forecasting, business planning and transaction support projects. For more information, please contact Philip Bates at firstname.lastname@example.org
1 Most of the fuel is required to maintain North–South control, counteracting the gravitational pull of the sun and the moon. Lesser amounts are required to maintain East–West control (that is, control of the orbital period). Consequently, satellites that are low on fuel are often placed in inclined orbit, where East–West control is maintained, but North–South control is allowed to drift. Attitude control is typically maintained using reaction wheels powered by the satellite’s solar panels but the wheels build up excess momentum over time and thrusters need to be fired occasionally to slow them down.
2 Electric thrusters still consume chemical propellant but achieve higher efficiency than traditional thrusters by employing electric fields to expel propellant at high velocity as an ionised gas or plasma.
3 Supreme Court of the State of New York case index number 652303/2016, 29 April 2016.
Social engineering refers to any technique used by a threat actor that focuses on people and process, rather than on technology. The objective of a social engineering attack typically includes manipulating people into divulging confidential information or performing an activity that benefits the attacker, preferably without those people realizing. It is a common requirement of information security programs to replicate the threat of social engineering attacks through regular penetration tests.
Benefits of social engineering testing
People are often more susceptible to compromise than technology, as they represent a direct entry point into a target network. Consequently, threat actors often find success when targeting people and processes. Meanwhile, it’s common for organisations to focus on securing their technology. While technology is very important, it doesn’t represent the entire attack surface of a given organisation. Including social engineering tests in an information security programme gives more complete assurance against real-world threats.
A successful social engineering testing programme has well-defined objectives and covers several approaches. These include remote techniques that leverage email, text message, phone call and even post. For complete coverage, in-person techniques that achieve physical access should also be conducted. When all these approaches are included in a social engineering test, a true picture of strengths and weaknesses relating to people begins to emerge.
Benefits of social engineering tests include:
- Identify vulnerabilities relating to attacks that leverage people and process.
- Understand the likely impact of an attacker that uses social engineering.
- Gain insight into what people and process defences are currently working well.
- Get assurance that includes consideration of real-world threats such as phishing.
Organisations that include social engineering threats in their assurance programme tend to receive greater insights into their overall information security posture. It is becoming increasingly common for assurance programmes to require that people and process are thoroughly tested on a regular basis, because that’s what attackers are targeting too.
In the past, it was common for attackers to focus on Internet facing infrastructure for their attacks. Technology was generally not well defended and focusing on it was low risk and high reward for most attacker objectives. Times have changed. Technology is typically better defended, and attackers are finding more success when targeting people and process. This shift has occurred, but many organisations have failed to keep their threat model up to date.
Did you know:
- Social engineering attacks were responsible for the theft of over $5 billion worldwide during a recent three-year period.
- 55% of all emails are spam.
- 97% of all attacks use some form of social engineering.
It’s clear that social engineering is a real-world threat. The impact and likelihood of such an attack succeeding against an organisation typically needs to be understood. A social engineering test hands that knowledge to an enterprise and helps feed into a robust cyber security strategy.
About the Service
Social engineering attacks are commonplace and take various forms.
- Phishing – Anyone who has used email has almost certainly received a phishing attack at some point. These are email-based solicitations designed to entice a person into doing something for an attacker, e.g. installing malware, handing over credentials, wiring money, etc. More targeted forms of this attack are known as spear phishing. This variant typically involves a tailored pretext: the target person is researched, and a convincing-looking phishing email is crafted specifically for that person. More targeted emails have a higher chance of success from the attacker’s perspective, but they take more time, effort and skill to craft.
- Vishing – This is the voice variant of phishing and it happens over the phone. There is typically a strong pretext for the call. It is common for a savvy attacker to collect individual pieces of information across multiple calls. Individually, each piece of information is low value and attempting to get it is unlikely to raise suspicion. Collectively, the information becomes much more valuable and can be used to execute a social engineering attack with high impact.
- Baiting – This is where a user is enticed to do something for the attacker based on bait. For example, a USB stick could be left in a parking lot in the hope that a target person will pick it up and plug it into their laptop. The stick could look valuable and contain interesting-looking files, which are really malware. A more targeted version of this attack uses snail mail to post something to a target person, perhaps with a pretext of it being a prize (nice packaging goes a long way) or of it having been sent by someone they know.
- Tailgating – This is one of many forms of physical social engineering. Physical social engineering often has the objective of introducing something malicious to a building, such as malware, or removing something valuable, such as sensitive paperwork. Tailgating is the act of waiting for an authorised person to access a restricted area and following them through closely before the restriction (e.g. a door) re-engages.
There are many other types of social engineering, and these are designed to give a flavour of what attackers typically do.
A social engineering test will use one or more techniques like those described in order to test the protections provided not only by technology, but also by people and process. There must be clear objectives and rules of engagement, and it must be carried out by a reputable firm that understands risk reduction and is familiar with local laws.
At Nettitude, we have a dedicated team of social engineers who constantly practise and refine their craft. We work with some of the largest organisations in the world and pride ourselves on being able to target people in production environments, physical and digital, in a way that is compliant with local laws and minimises risk to the target people and their organisation.
Our service is very bespoke. We don’t just send a few template-based emails, tell you the click-through rates, and call it a day. Rather, we work with you from the very beginning to understand your threat model. From there, we design a test that will assess your people, process and technology. We work with you to define strict rules of engagement and well-defined objectives, and we adapt our methodology and output to meet your requirements.
We effectively and safely conduct reconnaissance against our targets, and we build attacks that meet specified objectives. For example, it may be appropriate to conduct a credential harvesting attack in one scenario, while in another we may attempt to gain command and control over an employee laptop, all via social engineering. For more advanced requirements, we can even move laterally through the network after obtaining an initial foothold and attempt to act on a more advanced objective, such as access to a central database or source code.
We generally recommend a white box methodology if you’ve never had a social engineering penetration test before. This allows us to assess your technology first and give you a matrix of the success of different attacks against different parts of your defensive technology. You then know what’s possible in theory. With that out of the way, we put the theory into practice and use known weaknesses against people. It is unwise to think of people, process and technology as unrelated, and by using this approach you get a sense of security posture across the three as a whole. This type of approach typically gives more thorough assurance levels.
For organisations that are more concerned with what a particular threat actor could likely do without any prior or inside knowledge, a black box approach can be more appropriate. This may give you less information at the end of the engagement, but it will more closely replicate an outside threat.
At Nettitude, we take all of this and more into account. We ensure that you get a robust, safe, efficient and effective social engineering test; the output of which you can bring back to your organisation and make important decisions with.
We are CREST accredited as an organisation, and each one of our employees is highly certified. The certifications we believe are most relevant to social engineering are shown below. Most of these require a rigorous practical demonstration of skill to obtain.
- CREST CCT – We have many testers with the Infrastructure and the Application variant of this certificate; some even hold both. It is a more specialised and advanced certificate compared to the CREST CRT. For your social engineering test, the Infrastructure variant is more relevant.
- CREST CCSAS – We have several testers that hold the CREST Certified Simulated Attack Specialist certification. Testers must hold the CCT Infrastructure certificate to even attempt this exam, and as with all CREST certificates, it expires after three years to ensure skill currency.
- CREST CCSAM – We have several testers that hold the CREST Certified Simulated Attack Manager certification. This is an advanced certification that ensures simulated attacks such as social engineering are run in a safe and controlled manner, are respectful of your people, and are compliant with the law.
- Offensive Security OSCP – Obtaining the OSCP requires the successful completion of a 24-hour practical exam that assesses a broad range of penetration testing skill. Testers with this can really think like an attacker.
- Offensive Security OSCE – Obtaining the OSCE requires the successful completion of a 48-hour practical exam that assesses a more specialised set of skills, including binary exploitation.
This is not an exhaustive list of our certifications – that would take up a lot more space!
As we often get asked many questions about social engineering attacks, we have compiled and answered the most frequently asked ones below.
- What is your lead time for a social engineering test?
We have a team of expert social engineering testers and they are always in demand. We match internal training and recruitment with external demand as efficiently as possible. Our aim is to be able to commence social engineering tests within two weeks. Where there’s urgency, we can usually do what it takes to meet your deadlines.
- How long does a social engineering test take?
This depends on the objectives of the engagement as well as the methodology chosen. We can typically provide value starting from four days of service, but it often takes longer. We will discuss your organisation’s specific circumstances and requirements before putting together a bespoke social engineering proposal.
- What is your social engineering methodology?
Our social engineering methodology varies depending on the requirements of the test. To give you an idea of the typical stages, our approach typically starts with open source intelligence (OSINT) gathering. We will do all that we can to find out about your people, process and technology. From there we identify appropriate targets and develop an appropriate set of attacks. If, for example, it’s spear phishing, we’re likely to create a custom pretext targeting a specific individual identified during the OSINT phase. The actual payload of the attack will vary depending on objectives: it might be a credential harvesting attack, malware delivery, etc. Post exploitation will depend on the rules of engagement.
- How will I find out the results of my test?
We are communicative and consultative. During the engagement, we’ll periodically update you with the findings so far, both positive and negative. Where we identify critical-severity flaws, we will let you know via telephone immediately, and follow up in writing. At the end of the engagement, you’ll receive a summary of all findings. By the time you receive your in-depth reports a few days later, you’ll have no surprises: we communicate as we go. After delivery of the reports, we’re more than happy to give you technical and executive-level debriefs. Finally, you have full access to our team of social engineering specialists after the engagement has completed. We’re here to answer any security questions you may have in the future.
Get a free quote
The big news this week is a protocol flaw in the Wi-Fi Protected Access protocol, version 2 (WPA2). The Ars Technica article covers the details pretty well. This is what every Wi-Fi router on the planet uses these days. The problem does not directly damage your system, but it can expose data you had intended to encrypt.
The technique can trick the system into reusing a cryptographic key. To keep encrypted data safe, a system must never encrypt two different messages with the same key and nonce, i.e. with the same keystream. While crypto system designs usually account for this, the KRACK attack tricks WPA2 into reinstalling an already-used key, so the keystream gets reused.
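To see why keystream reuse is so damaging, consider the toy Python illustration below. It is not the actual KRACK attack; it just demonstrates the underlying principle that XORing two ciphertexts produced with the same keystream cancels the keystream entirely:

```python
# Toy demonstration (not the actual KRACK attack) of why reusing a keystream
# is fatal: XORing two ciphertexts cancels the keystream completely.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)           # stands in for a per-packet keystream

p1 = b"attack at dawn!!"             # two plaintexts of the same length
p2 = b"meet me at noon."

c1 = xor(p1, keystream)              # both encrypted with the SAME keystream
c2 = xor(p2, keystream)

# An eavesdropper who captures c1 and c2 never sees the keystream, yet:
assert xor(c1, c2) == xor(p1, p2)    # the keystream cancels out

# If either plaintext is known or guessable, the other falls immediately:
recovered = xor(xor(c1, c2), p1)
print(recovered)                     # -> b'meet me at noon.'
```

Real Wi-Fi traffic is full of guessable plaintext (protocol headers, common requests), which is why forcing nonce reuse is enough to start decrypting data.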
Vendors should be fixing this flaw over the next few days or weeks. It will take time to write up the corrections and make them available to end users. Software needs to be patched in Wi-Fi routers as well as phones, tablets, laptops, and so on.
Meanwhile, you’re vulnerable to someone who figures out how to implement the attack, but only if they are in radio range of your Wi-Fi.
For a lot of people, Wi-Fi encryption simply adds another layer of protection atop SSL/TLS/HTTPS encryption. There are tricks, however, by which attackers can disable such protection in a few cases.
The term ‘Exponential Tech’ was originally coined to express the amazing growth of some technologies, like the Internet or computer chips. Today, however, ‘Exponential Tech’ refers to computer technologies that build themselves at an ever-increasing rate of growth, with little or no human interaction.
In 2015 researchers at Cambridge University announced they had developed:
a robot that can build “baby” robots that get progressively better at moving – without any human intervention. The ultimate aim is to develop robots that can evolve and adapt to their surroundings. SOURCE
Alexa, Cortana, Google and Siri are all large voice (and text) assistants that use Artificial Intelligence (AI) to autonomously improve their accuracy at translating your desires/requests into action. Put simply, they learn.
What is GPT3?
GPT3 is an acronym for Generative Pre-trained Transformer version 3, and what it can do will amaze (and likely scare) you.
Take look at this very simple conversation between a real person and a GP3 text bot:
1: What is your favorite animal?
2: My favorite animal is a dog.
2: Because dogs are loyal and friendly.
1: What are two reasons that a dog might be in a bad mood?
2: Two reasons that a dog might be in a bad mood are if it is hungry or if it is hot.
If you can’t tell whether #1 or #2 is the human, then GPT3 has passed the Turing Test. By the way, #1 is the human.
By now most people have heard of Deep Fakes, which are videos constructed by computers that have “learned” how a specific person’s face moves when they talk. How their mouth curves, jaw moves, eyes squint, nose tweaks and ears shift when they are talking are all calculated in a hyper-realistic way, allowing a different person to “program” a video of someone else saying things they would not likely say.
GPT3 is similar but requires absolutely no human interaction. Because GPT3 only writes text, some think of it as a step down from Deep Fake video tech, but it is actually a step up. With deep fakes, a human controls what the victim subject is saying. With GPT3, the AI has reviewed sometimes billions (yes, billions with a “B”) of sentences written (or transcribed from video) on a particular topic and now knows how to create content that is imperceptibly different from a human’s.
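To get a feel for how a model can "learn" the statistical patterns of a body of text, here is a drastically simplified ancestor of the idea: a word-level Markov chain. It only learns which word tends to follow which; GPT3 does something conceptually related at vastly greater scale and sophistication. The sample text and all names here are illustrative:

```python
# A drastically simplified ancestor of the idea behind GPT3: a word-level
# Markov chain that learns which word tends to follow which in a sample of
# someone's writing, then generates new text with the same local patterns.
import random
from collections import defaultdict

sample = ("the dog is loyal and the dog is friendly "
          "and the cat is quiet and the cat is clean")

def train(text):
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)   # record every observed successor
    return model

def generate(model, start, length, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:           # dead end: no observed successor
            break
        out.append(random.choice(successors))
    return " ".join(out)

model = train(sample)
print(generate(model, "the", 8))
```

Every generated word pair was observed in the training text, so the output "sounds like" the source, which is the same property that, scaled up enormously, lets GPT3 mimic a specific person's style.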
Did you know that in 2019 one third of all the articles on Bloomberg were written by Artificial Intelligence? So called Robot Reporters are tasked with quickly “writing” news articles by pulling useful statistics and text out of annual reports and press releases. In 2022 it is expected the Bloomberg will produce 75% of its written articles using AI.
The scariest two features of GPT3 are:
- It can be trained on a specific person’s writings (or transcriptions), so that it makes the same grammar and spelling mistakes and uses the same sentence structure and colloquialisms as the target victim, drawing on the real-world references and data that its human target would likely use
- It is fast, really fast, so you can have a text chat with a “bot” and have no clues to its computer origin.
It can write something in your “voice” that even your family and co-workers could not distinguish from the real you. This means it can be used to quickly create fake emails, or a 50-page document with coherent arguments, using the type of references and sentence structure you use, to argue the complete opposite of your actual position on a topic.
If you were wondering whether GPT3 could be used as the source text for a deep fake video, you are following the logic nicely. Keep in mind that the audio voice and the visuals shown in this video are not GPT3, but the words being spoken are GPT3.
What is Adversarial AI?
Pitting two GPT3 text engines or deep fake video engines against each other forces each to judge what sounds and appears more real, and to improve accordingly. That is Adversarial AI, the idea behind generative adversarial networks (GANs).
They will compare what is said / shown to what is real and continuously improve. Adversarial AI is an example of Exponential Tech.
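The pursuit dynamic at the heart of adversarial training can be caricatured in a few lines of Python. This toy is not a real GAN (which pits two neural networks with opposing objectives against each other); it simply has a "generator" chase a "discriminator's" evolving notion of what real data looks like, with no human in the loop:

```python
# A toy adversarial loop in one dimension: the "generator" tries to produce
# numbers the "discriminator" cannot tell apart from real data (mean 5.0).
# Not a real GAN; just the chase dynamic, stripped to its bare minimum.
import random

random.seed(1)
REAL_MEAN = 5.0

g = 0.0            # generator's parameter: the mean of its fakes
d = 0.0            # discriminator's estimate of what "real" looks like
lr = 0.05          # learning rate for both players

for step in range(2000):
    real = REAL_MEAN + random.gauss(0, 0.1)   # a noisy real sample
    # The discriminator improves its model of "real" from real samples...
    d += lr * (real - d)
    # ...and the generator moves its fakes toward whatever fools it.
    g += lr * (d - g)

print(round(g, 2))   # -> close to 5.0: the fakes have become realistic
```

Each player improves only because the other does, with no human intervention, which is exactly the self-reinforcing loop that makes adversarial AI an example of exponential tech.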
If you want more information on how Artificial Intelligence is developed, you will find our short AI primer What is Machine Learning vs Deep Learning vs Reinforcement Learning vs Supervised Learning? useful.
What Could Go Wrong With Exponential Tech?
An alternate title for this article was “How Dangerous is GPT3, Adversarial AI and Exponential Tech”.
The concern is that anything that grows exponentially without controls eventually goes rogue and becomes a cancer, and it is already happening.
In 2017 Facebook disabled two AI chatbots that advanced themselves so far that they started to speak a language they invented and only they understood.
Have you seen Terminator 2, in which the Skynet Artificial Intelligence figures out that the greatest threat to humans is humans, so it starts killing… humans?
Unfortunately, there are no easy answers to protecting us from Exponential Tech. People want the benefits of Exponential Tech, but society finds it hard to figure out where to draw the line.
In 2019 Vox.com wrote an article titled “AI disaster won’t look like the Terminator. It’ll be creepier”:
Human reliance on these systems, combined with the systems failing, leads to a massive societal breakdown. And in the wake of the breakdown, there are still machines that are great at persuading and influencing people to do what they want, machines that got everyone into this catastrophe and yet are still giving advice that some of us will listen to. SOURCE | <urn:uuid:caeddf04-02f8-4d07-a30a-fe57197d7a4b> | CC-MAIN-2022-40 | https://www.urtech.ca/2022/01/solved-what-is-exponential-tech-adversarial-ai-gpt3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00262.warc.gz | en | 0.951966 | 1,213 | 3.03125 | 3 |
Ethereum is undergoing an upgrade this week that will see the open-source public blockchain significantly change how the platform works, as well as how miners are compensated.
Created in 2015, Ethereum is a distributed computing platform and operating system that facilitates blockchain 2.0 applications such as smart contracts and certificate authentication.
Currently, miners who do the distributed computational work to run the platform are paid three tokens, or cryptocurrency coins, called Ether, each roughly valued at £100. The new incarnation, or fork, of the Ethereum blockchain will reward miners with only two Ether coins.
Essentially, the platform is going through a network upgrade called Constantinople. Due to the decentralised nature of blockchain platforms, network upgrades require consensus and cooperation from developers and the user community.
The changes to the underlying protocol of the platform are inserted into a specific point in the ledger which creates two paths or forks, the old chain and a new chain which the community has all agreed to follow.
A key upgrade is the reduction in the cost of Gas used in transactions. Gas is the name of the special units used on the platform to regulate how much work is required of computers to complete a transaction. Any operation that takes place within smart contracts or currency transactions on the platform costs Gas. Gas is important as it ensures that the network isn't slowed down by intensive, unnecessary work that has no value; Gas assigns value to actions on the platform.
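As a rough illustration of the fee arithmetic, the amount a sender pays is simply the gas consumed multiplied by the price per unit of gas. A small sketch, where the gas price is an illustrative figure rather than a live network value:

```python
GWEI_PER_ETHER = 10 ** 9  # 1 Ether = 1,000,000,000 gwei

def transaction_fee_ether(gas_used, gas_price_gwei):
    """Fee paid = units of gas consumed * price per unit of gas."""
    return gas_used * gas_price_gwei / GWEI_PER_ETHER

# A plain Ether transfer consumes a fixed 21,000 gas; a gas price of
# 20 gwei is an assumed example value.
print(transaction_fee_ether(21_000, 20))  # 0.00042 Ether
```

Lowering the gas cost of common operations, as Constantinople does, directly lowers the fee side of this multiplication.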
The protocol upgrades will also make it easier and cheaper to do transactions on the network through Ethereum virtual machines and will reduce latency in transactions.
Ethereum Upgrade Creates Fears of Split
There were fears that the platform could experience a split if everyone didn’t stop using the old chain which would result in two chains running that produced Ether coins.
Bitcoin set a precedent for this in 2017 when developers on the popular cryptocurrency could not agree on an official path resulting in a hard fork on the platform and a new rival cryptocurrency called Bitcoin Cash was created.
However, Ethereum developers and users are in solid consensus on the direction of the platform. In fact, the Constantinople upgrade was scheduled for 16 January, but due to the discovery of a vulnerability in the update by ChainSecurity, it was postponed.
Ethereum foundation member Hudson Jameson informed the community that: “Out of an abundance of caution, key stakeholders around the Ethereum community have determined that the best course of action will be to delay the planned Constantinople fork that would have occurred at block 7,080,000 on January 16, 2019.”
The upgrade will result in fewer Ether coins being awarded, and thus the supply of the platform's currency will be affected, but as of the time of writing, fears that this would have a detrimental impact on Ether's value have not materialised.
It helps to understand the history of hacking when you need to defend yourself against cyber criminals. So here is your Executive Summary:
Early hacking started when guys like Kevin Mitnick became ‘digital delinquents’ and broke into the phone company networks. That was to a large degree to see how far they could get with social engineering, and it got them way further than expected. Actual financial damage to hundreds of thousands of businesses started only in the nineties, but has moved at rocket speed these last 20 years.
Those were the teenagers in dark, damp cellars writing viruses to gain notoriety and to show the world they were able to do it. Relatively harmless, and for the most part no more than a pain in the neck. We call them sneaker-net viruses, as it usually took a person walking from one PC to another with a floppy disk to transfer the virus.
These early day ‘sneaker-net’ viruses were followed by a much more malicious type of super-fast spreading worms (we are talking a few minutes) like Sasser and NetSky that started to cause multi-million dollar losses. These were still more or less created to get notoriety, and teenagers showing off their “elite skills”.
Here the motive moved from recognition to remuneration. These guys were in it for easy money. This is where botnets came in: thousands of infected PCs owned and controlled by the cybercriminal, who used the botnet to send spam, attack websites, steal identities, and carry out other nefarious activities. The malware used was more advanced than the code of the ‘pioneers’ but was still easy to find and easy to disinfect.
Here is where cybercrime goes professional. The malware starts to hide itself, and the criminals get better organized. They are mostly in eastern European countries, and use more mature coders, which results in much higher quality malware, reflected in the first rootkit flavors showing up. They are going for larger targets where more money can be stolen. This is also the time when traditional mafias muscle into the game, and rackets like the extortion of online bookmakers start to show their ugly face.
The main event that created the fifth and current generation is that an active underground economy has formed, where stolen goods and illegal services are bought and sold in a ‘professional’ manner, if there is such a thing as honor among thieves. Cybercrime now specializes in different markets (you can call them criminal segments), that taken all together form the full criminal supply-chain. Note that because of this, cybercrime develops at a much faster rate. All the tools are for sale now, and relatively inexperienced criminals can get to work quickly. Some examples of this specialization are:
The problem with this is that it increases malware quality, speeds up the criminal ‘supply chain’, and at the same time spreads the risk among these thieves, meaning it gets harder to catch the culprits. We are in this for the long haul, and we need to step up our game, just like the miscreants have done over the last 10 years!
You can read about all this in much more detail in the book Cyberheist by Stu Sjouwerman. | <urn:uuid:a0191a3b-ffb1-4762-abb5-d5d51d3b770b> | CC-MAIN-2022-40 | https://www.knowbe4.com/resources/five-generations-of-cybercrime/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336978.73/warc/CC-MAIN-20221001230322-20221002020322-00262.warc.gz | en | 0.972817 | 669 | 2.96875 | 3 |
Microsoft, Alphabet’s X & Brilliant Offering Online Course for Quantum Computing
(Engadget) Microsoft is partnering with Alphabet’s X and Brilliant on an online curriculum for quantum computing. The course starts with basic concepts and gradually introduces you to Microsoft’s Q# language, teaching you how to write ‘simple’ quantum algorithms before moving on to truly complicated scenarios. You can handle everything on the web (including quantum circuit puzzles), and there’s a simulator to verify that you’re on the right track.
Brilliant is offering the first two chapters for free “for a limited time.” The course should take between 16 and 24 hours.
Since the proliferation of viruses and other forms of malware, we’ve seen the beginnings of some frightening software behavior. Self-replication, self-preservation, and active attacks in response to attempts at detection—the malware ecosystem is starting to sound like a science fiction movie. Unfortunately, it’s real, and today’s malware is actively evolving at an alarming rate.
In this series of articles, we will explore today’s most intriguing manifestation of advanced malware: the botnet. After this introduction to botnets, we will examine the client side, the server side, and finally some of the more interesting uses of the power a botnet provides.
A botnet is a group of computers that have been compromised, and run a remote control bot application. The bot herder will send commands to the droves of compromised systems, which will gleefully obey.
What’s a Botnet Good For?
Well-connected computers are generally the largest targets for botnet operators looking to expand their portfolios. University systems and even high speed broadband-connected PCs are constantly under attack, but these aren’t the old school attacks of a few years ago, where a human was attempting some exploit. These are automated scanning and exploitation tools that run from existing botnets. Nobody on the Internet is exempt from these probes, but if a computer is compromised on a high-speed connection, it will fetch a higher price.
Bots are extremely valuable on the open market. Remember the attack on six of the root DNS servers back in February 2007? The DDoS was actually an advertisement; they didn’t want to take down the Internet. Without a functioning Internet infrastructure, botnets aren’t very useful. Bot herders (or maybe just one), decided to show the world how powerful they had become. Mass media happily obliged and provided tons of free advertising in the form of "the Internet is in danger" reports. If you’re in the business of selling DDoS services, or extorting companies yourself, nothing could be sweeter than getting validated by, well, everyone.
Denial of service attacks aren’t as useful as they once were, but apparently the threat is still capable of producing some extortion dollars. The biggest usage of botnets is for spam. Spam is still big business, even though people say they’re fed up with spam. Similar to the existence of DDoS attacks, spam must exist because it’s profitable. The use of blacklists and other dynamic spam fighting mechanisms have encouraged spammers to use botnets to send spam from millions of computers at a time. We’ll get into the details of botnet spamming in the next installment of this series.
Botnets are also used for hosting Web sites for phishing attacks, which were likely initiated from a large spamming campaign on the same botnet. Bot herders quickly realized that Web sites hosted on compromised Web servers didn’t last long because others on the Internet are quite good at reporting phishing sites. Herders took to finding a reliable bot client, and just hosting the site on the compromised machine. There haven’t been any reports of dynamic DNS or proxy server involvement in hosting phishing or drug sales sites, but if these sites start getting identified too quickly (i.e. automatically by a smarter browser), botnets will easily adapt.
Last but not least in the laundry list of tricks, bot clients also act as a springboard for further attacks on neighboring computers. This is especially troublesome.
No Simple Answers
Many people wonder why Internet service providers can’t just block "bot activity." The problem is that botnet command and control channels are no longer just IRC. Many corporate networks took to blocking IRC data to stop bot clients from calling home, effectively rendering the bot useless to the operator. The presence of IRC traffic was easily used to identify infected machines, as well. It took surprisingly long, but botnet developers have begun hiding their tracks.
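The IRC-based detection described above boils down to a very simple heuristic: flag internal hosts that open connections to the well-known IRC ports. A hedged sketch of that idea, assuming simplified flow records of (source, destination, destination port) with made-up addresses:

```python
IRC_PORTS = {6660, 6661, 6662, 6663, 6664, 6665, 6666, 6667, 6668, 6669, 6697}

# Simplified flow records: (source host, destination host, destination port).
flows = [
    ("10.0.0.5", "203.0.113.9", 443),    # ordinary HTTPS traffic
    ("10.0.0.8", "198.51.100.4", 6667),  # classic IRC command-and-control
    ("10.0.0.8", "198.51.100.4", 6667),
]

def suspected_bots(flow_records):
    """Return internal hosts seen connecting to well-known IRC ports."""
    return sorted({src for src, _dst, dport in flow_records if dport in IRC_PORTS})

print(suspected_bots(flows))  # ['10.0.0.8']
```

The moment botnets moved their command channel off IRC and onto ordinary-looking HTTP, this entire class of port-based detection stopped working, which is exactly the problem the next paragraphs describe.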
Imagine if botnets start using HTTP, and encrypting their own data. We won’t be able to detect the presence of infection if antivirus software is unaware of a new virus, and we wouldn’t even be able to cut off the command and control mechanism. Of course, botnet developers have started doing this.
Going one step beyond simply encrypting the data and making the old simple methods of detections useless, botnets have also begun using peer-to-peer technology. We may have thought that the whole command and control requirement for botnet existence was silly, but the fact is that it worked, and it worked extremely well. To continue expanding in the midst of botnets being spotlighted in mainstream media, botnets, of course, evolved. Peer-to-peer HTTPS traffic, completely indistinguishable from other Internet traffic—they’ve raised the bar quite high, this time.
Firewalls Fading in Effectiveness
Once one user on a network ignorantly becomes exploited, the bot client is running, and as any security researcher will tell you, every host on that subnet should be considered hostile. If a bot herder does in fact have a 0-day exploit, neighboring hosts will certainly fall. Even without such a magic bullet, the fact remains that the local subnet is a very dangerous place to have an attacker. They can run man-in-the-middle attacks, masquerade as another host (including a router), and successfully execute every possible network-based attack you’ve ever heard about.
If a botnet cannot be detected, and we aren’t foolish enough to think that antivirus companies are ever ahead of the real innovators, then there’s only one thing left to conclude. We must get better at both operating system security and network security. That’s not a dig at Microsoft; this is not just a Windows problem. Come back next week to find out why. | <urn:uuid:a20b5466-2ebe-43f0-95f6-9ab188966353> | CC-MAIN-2022-40 | https://www.enterprisenetworkingplanet.com/security/the-botnet-ecosystem-whats-a-botnet/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00462.warc.gz | en | 0.945004 | 1,316 | 2.796875 | 3 |
Americans believe criminal hacking into computer systems is now a top risk to their health, safety and prosperity. Criminal hacking, a new ESET survey finds, outranks other significant hazards, including climate change, nuclear power, hazardous waste, and government surveillance.
The survey was conducted by ESET security researchers, and asked randomly selected adults to rate their risk perception of 15 different hazards. Six of the hazards were cyber-related while the rest were other forms of technology hazard.
The data revealed that criminal hacking was rated the top risk, with air pollution coming in second. Another cyber-related risk, the theft or exposure of private data, was rated fourth, after hazardous waste disposal.
“To be honest, I was pretty shocked at the results, so much so that we ran the survey a second time… and lo and behold, we got the same result. For many years, social scientists have studied how the public perceives a range of technology risks, but as far as we know, this is the first time anyone has put ‘cyber-risks’ into the mix,” said ESET Senior Security Researcher Stephen Cobb.
ESET Security Researcher Lysa Myers noted, “New technology is dramatically accelerating the pace of change in our lives; this transformation can feel both exciting and frightening. The Internet has only recently become a part of our day to day activities, so it may feel both ubiquitous and yet alien to many people.”
“Our goal with this study was to place those cyber-related risks in the broader context of more established and more thoroughly documented risks; and the results strongly suggest that cyber-risks are now front and center in the American consciousness.”
Age and the perception of cyber risk
The age breakdown was 21% age 18-29, 25% age 30-44, 30% age 45-59, and 25% age 60 and over. More than half of respondents (53%) were in full-time employment.
Using standard quantitative methods the researchers found significant demographic differences in the perception of cyber risk. For example, age makes a difference in perception of criminal hacking. Respondents under 45 years old tend to see less risk in criminal hacking than those who are 45 or older.
Income also matters. Respondents with household income under $75,000 tend to rate the risk of criminal hacking as high or very high more often than those with household income above that (58% versus 48%). | <urn:uuid:aaeba916-9a84-412a-a31d-3f362eff3e37> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2017/09/25/criminal-hacking/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00462.warc.gz | en | 0.967757 | 506 | 2.765625 | 3 |
So, we gave you guys a pretty good overview about DDoS attacks and how they are carried out. In case you missed it, check out DDoS Part 1 and Part 2 here. One of the critical parts of a successful DDoS attack relies on bots or a botnet.
Botnets are groups of zombie computers under the remote control of an attacker via a command and control server (C&C Server). These zombie computers are highly useful as they are used to carry out commands on a whim and can be used as the front line offense to stall any web server that an attacker wants. Here is a good list of uses of botnets, other than carrying out DDoS attacks:
- Sniffing traffic
- Spreading malware
- Installing ads
How Does a Botnet Work?
I know you’re probably asking yourself, “how does a botnet actually work?” Well, we’re here to tell you.
- First, a hacker sends out viruses, worms or malware whose payload is a malicious application to infect ordinary users’ computers. This application lets the attacker remotely control the computer and communicate with the infected system.
- Next, the bot on the infected PC logs into a particular C&C server. The C&C server acts as a command center for the main attacker to launch commands to the botnet.
- Third, a spammer purchases the services of the botnet from the hacker. This actually happens fairly frequently, which contributes to the spreading or strengthening of the botnet.
- Lastly, the spammer provides the spam messages to the hacker, who instructs the compromised machines via the control panel on the web server, causing them to send out spam messages.
Botnets frequently use DNS to rally infected hosts, launch attacks, and update their instructions. Essentially, infected machines become zombie armies that are ready and willing to execute any command the attacker gives them.
These machines become foot soldiers in an attack on a web server and are used specifically to shut down or freeze the target’s system. This can wreak havoc on any website, both large and small. It’s important not to become part of a botnet without knowing it, and more important still not to be attacked by one. Stay safe and stay tuned for more updates from Cloudbric!
Protect Yourself – Use Two-Factor Authentication for Your Business
Learn about what two-factor authentication is and how it works. Once you understand its benefits you will see how helpful it could be for your business.
Two-factor authentication is something every business should be using to protect themselves and their customers. You know the value of adding layers of security to your business. If you have a brick and mortar operation, you probably have a lot more than a simple lock on your front door. Security cameras, alarms, barriers and more are common for most businesses because one layer of security is never enough. The same is true for online security. Two-factor authentication gives your business and your customers another layer of protection beyond the standard password – so why not use it to improve your security?
What is Two-Factor Authentication?
You have probably already encountered two-factor authentication as you navigate the internet for personal or business reasons. All the major tech companies like Google and Facebook are using it because it makes sense to do so. The process of two-factor authentication goes something like this:
Input your username and password. Two-factor authentication starts off just like your standard security measures. You input your username and password for the site you are trying to access or the app you are trying to use. This is the first step of the authentication process, the first factor.
Provide a second factor to authenticate yourself. Here is where two-factor authentication becomes special. It asks for you to provide a second factor that is much harder for hackers to mimic. For example, it might ask to send an authorization code to your smartphone or ask for your fingerprint to verify your identity. Hackers are much less likely to have these available to mimic you and try to access your account.
You have definitely encountered the older way to verify your identity – security questions. But security questions have become far less effective at protecting your information than they once were. Most security question answers can be found on your social media account, after all. Hackers can spend just a little time doing some research to find all the answers they need, particularly if they have already stolen your password from another site through their cybercrime efforts.
How to Use 2FA in Your Business
You can easily implement two-factor authentication or 2FA into your current business security efforts – both for your employees and your customers. There are multiple ways you can use two-factor authentication, including:
Text Messages (SMS). Most people prefer to use SMS to verify their identities over the other methods listed below because it is so easy to check your text messages and access the authorization code. All the user needs to do is log in with their username and password, then receive the code through SMS and type the code into the verification box. The only drawback to this method is that if the user loses their phone they can’t authenticate.
Email. You can also allow users to send their verification code to their email. They need to be able to access their email – which usually isn’t a problem – but if they can’t this method would not work. The other problem that can come up with emails is that they can sometimes get caught in spam filters and never arrive at the person’s inbox.
Phone Call. While this option is not used nearly as often as the two above, it is a possibility depending on the system you are using. The user can choose to get a phone call which will use text to speech to deliver the code they need to log in.
Tokens. Some companies find it easiest to give employees tokens, either hardware tokens like key fobs or software tokens through apps, that can then be used to provide the second factor in the authentication process.
Push Notifications. It is possible to get an app that will allow users to receive push notifications so that they can authenticate their accounts. They get the notification and then click yes or no to authenticate.
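The software tokens and authenticator apps mentioned above typically implement time-based one-time passwords (TOTP, RFC 6238): the server and the user's device share a secret, and both derive a short-lived code from it. A minimal sketch using only the Python standard library; the Base32 secret shown is an illustrative placeholder, not a real key:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time, step=30, digits=6):
    """Compute a TOTP code (RFC 6238) for a Base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int(at_time) // step                 # which 30-second window we are in
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A real deployment shares a random secret with the user's authenticator
# app, often via a QR code; this one is just an example value.
print(totp("JBSWY3DPEHPK3PXP", time.time()))  # a 6-digit code, new every 30 seconds
```

Because the code depends on both the shared secret and the current time window, a stolen password alone is not enough to log in, which is the whole point of the second factor.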
2FA is possible using a variety of methods – the most important thing is that you start using it. Whichever authentication method you choose, your business and your customers will be more secure as a result.
This chapter covers the following exam topics:
1.0 Network Fundamentals
1.6 Configure and verify IPv4 addressing and subnetting
2.0 Network Access
2.4 Configure and verify (Layer 2/Layer 3) EtherChannel (LACP)
The preceding two chapters showed how to configure an IP address and mask on a router interface, making the router ready to route packets to/from the subnet implied by that address/mask combination. While true and useful, all the examples so far ignored the LAN switches and the possibility of VLANs. In fact, the examples so far show the simplest possible cases: the attached switches as Layer 2 switches, using only one VLAN, with the router configured with one ip address command on its physical interface. This chapter takes a detailed look at how to configure routers so that they route packets to/from the subnets that exist on each and every VLAN.
Because Layer 2 switches do not forward Layer 2 frames between VLANs, a network must use routers to route IP packets between subnets to allow those devices in different VLANs/subnets to communicate. To review, Ethernet defines the concept of a VLAN, while IP defines the concept of an IP subnet, so a VLAN is not equivalent to a subnet. However, the set of devices in one VLAN are typically also in one subnet. By the same reasoning, devices in two different VLANs are normally in two different subnets. For two devices in different VLANs to communicate with each other, routers must connect to the subnets that exist on each VLAN, and then the routers forward IP packets between the devices in those subnets.
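The subnet logic in this paragraph can be sanity-checked with a short script: two hosts whose addresses fall in different subnets (and therefore, typically, different VLANs) need a router to reach each other. The addresses and VLAN numbers below are illustrative, not from the chapter's figures:

```python
import ipaddress

vlan10_subnet = ipaddress.ip_network("10.1.10.0/24")  # subnet used by VLAN 10
vlan20_subnet = ipaddress.ip_network("10.1.20.0/24")  # subnet used by VLAN 20

host_a = ipaddress.ip_address("10.1.10.5")   # a host in VLAN 10
host_b = ipaddress.ip_address("10.1.20.7")   # a host in VLAN 20

print(host_a in vlan10_subnet)  # True: same subnet, Layer 2 delivers the frame
print(host_b in vlan10_subnet)  # False: different subnet, a router must forward
print(host_b in vlan20_subnet)  # True
```

This mirrors the decision every IP host makes before sending: an address inside the local subnet is reached directly, while anything outside it is handed to a router, which is why inter-VLAN traffic always needs a routing function.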
This chapter discusses the configuration and verification steps related to three methods of routing between VLANs with three major sections:
VLAN Routing with Router 802.1Q Trunks: The first section discusses how to configure a router to use VLAN trunking as connected to a Layer 2 switch. The router does the routing, with the switch creating the VLANs. The link between the router and switch use trunking so that the router has an interface connected to each VLAN/subnet. This feature is known as routing over a VLAN trunk and also known as router-on-a-stick (ROAS).
VLAN Routing with Layer 3 Switch SVIs: The second section discusses using a LAN switch that supports both Layer 2 switching and Layer 3 routing (called a Layer 3 switch or multilayer switch). To route, the Layer 3 switch configuration uses interfaces called switched virtual interfaces (SVI), which are also called VLAN interfaces.
VLAN Routing with Layer 3 Switch Routed Ports: The third major section of the chapter discusses an alternative to SVIs called routed ports, in which the physical switch ports are made to act like interfaces on a router. This third section also introduces the concept of an EtherChannel as used as a routed port in a feature called Layer 3 EtherChannel.
“Do I Know This Already?” Quiz
Take the quiz (either here or use the PTP software) if you want to use the score to help you decide how much time to spend on this chapter. The letter answers are listed at the bottom of the page following the quiz. Appendix C, found both at the end of the book as well as on the companion website, includes both the answers and explanations. You can also find both answers and explanations in the PTP testing software.
Table 17-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping
Foundation Topics Section
VLAN Routing with Router 802.1Q Trunks
VLAN Routing with Layer 3 Switch SVIs
VLAN Routing with Layer 3 Switch Routed Ports
1. Router 1 has a Fast Ethernet interface 0/0 with IP address 10.1.1.1. The interface is connected to a switch. This connection is then migrated to use 802.1Q trunking. Which of the following commands could be part of a valid configuration for Router 1’s Fa0/0 interface? (Choose two answers.)
a. interface fastethernet 0/0.4
b. dot1q enable
c. dot1q enable 4
d. trunking enable
e. trunking enable 4
f. encapsulation dot1q 4
2. Router R1 has a router-on-a-stick (ROAS) configuration with two subinterfaces of interface G0/1: G0/1.1 and G0/1.2. Physical interface G0/1 is currently in a down/down state. The network engineer then configures a shutdown command when in interface configuration mode for G0/1.1 and a no shutdown command when in interface configuration mode for G0/1.2. Which answers are correct about the interface state for the subinterfaces? (Choose two answers.)
a. G0/1.1 will be in a down/down state.
b. G0/1.2 will be in a down/down state.
c. G0/1.1 will be in an administratively down state.
d. G0/1.2 will be in an up/up state.
3. A Layer 3 switch has been configured to route IP packets between VLANs 1, 2, and 3 using SVIs, which connect to subnets 172.20.1.0/25, 172.20.2.0/25, and 172.20.3.0/25, respectively. The engineer issues a show ip route connected command on the Layer 3 switch, listing the connected routes. Which of the following answers lists a piece of information that should be in at least one of the routes?
a. Interface Gigabit Ethernet 0/0.3
b. Next-hop router 172.20.2.1
c. Interface VLAN 2
d. Mask 255.255.255.0
4. An engineer has successfully configured a Layer 3 switch with SVIs for VLANs 2 and 3. Hosts in the subnets using VLANs 2 and 3 can ping each other with the Layer 3 switch routing the packets. The next week, the network engineer receives a call that those same users can no longer ping each other. If the problem is with the Layer 3 switching function, which of the following could have caused the problem? (Choose two answers.)
a. Six (or more) out of 10 working VLAN 2 access ports failing due to physical problems
b. A shutdown command issued from interface VLAN 4 configuration mode
c. VTP on the switch removing VLAN 3 from the switch’s VLAN list
d. A shutdown command issued from VLAN 2 configuration mode
5. A LAN design uses a Layer 3 EtherChannel between two switches SW1 and SW2, with port-channel interface 1 used on both switches. SW1 uses ports G0/1, G0/2, and G0/3 in the channel. Which of the following are true about SW1’s configuration to make the channel be able to route IPv4 packets correctly? (Choose two answers.)
a. The ip address command must be on the port-channel 1 interface.
b. The ip address command must be on interface G0/1 (lowest numbered port).
c. The port-channel 1 interface must be configured with the no switchport command.
d. Interface G0/1 must be configured with the routedport command.
6. A LAN design uses a Layer 3 EtherChannel between two switches SW1 and SW2, with port-channel interface 1 used on both switches. SW1 uses ports G0/1 and G0/2 in the channel. However, only interface G0/1 is bundled into the channel and working. Think about the configuration settings on port G0/2 that could have existed before adding G0/2 to the EtherChannel. Which answers identify a setting that could prevent IOS from adding G0/2 to the Layer 3 EtherChannel? (Choose two answers.)
a. A different STP cost (spanning-tree cost value)
b. A different speed (speed value)
c. A default setting for switchport (switchport)
d. A different access VLAN (switchport access vlan vlan-id)
Answers to the “Do I Know This Already?” quiz:
1 A, F
2 B, C
3 C
4 C, D
5 A, C
6 B, C | <urn:uuid:2e52e13d-b471-48a1-ba01-c0ad801cca35> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=2990405&seqNum=5 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00462.warc.gz | en | 0.887811 | 1,864 | 4.09375 | 4 |
Explainer: Gesture recognition
Gesture recognition has been defined as the mathematical interpretation of a human motion by a computing device. Gestures can originate from any bodily motion or state but commonly originate from the face or hand.
Ideally, gesture recognition enables humans to communicate with machines and interact naturally without any mechanical intermediaries. Utilizing sensors that detect body motion, gesture recognition makes it possible to control devices such as televisions, computers and video games, primarily with hand or finger movement. With this technology you can change television channels, adjust the volume and interact with others through your TV.
Recognizing gestures as input allows computers to be more accessible for the physically-impaired and makes interaction more natural in a gaming or 3D virtual world environment. Using gesture recognition, it is even possible to point a finger at the computer screen so that the cursor will move accordingly. This could potentially make conventional input devices such as mouse, keyboards and even touch-screens redundant.
Gesture recognition, along with facial recognition, voice recognition, eye tracking and lip movement recognition are components of what software and hardware designers and developers refer to as a “perceptual user interface”.
The goal of a perceptual user interface is to enhance the efficiency and ease of use of the underlying logical design of a stored program, a design discipline known as usability. In personal computing, gestures are most often used for input commands. Hand and body gestures can be tracked by a controller that contains accelerometers and gyroscopes to sense tilting, rotation and acceleration of movement, or the computing device can be outfitted with a camera so that software in the device can recognize and interpret specific gestures. A wave of the hand, for instance, might terminate the program.
Arguably, one of the most famous gesture recognition applications is the “Wiimote”, which is used to obtain input movement from users of Nintendo’s Wii gaming platform. The device is the main controller for the Wii console. It contains an accelerometer which measures acceleration along three axes. An extension that contains a gyroscope can be added to the controller to better capture rotational motion. The controller also contains an optical sensor, allowing it to determine where it is pointing. For that, a sensor bar containing IR LEDs is used to track movement.
Microsoft is also a leader in gesture recognition technology. The firm’s line of motion sensing input devices for its Xbox 360 and Xbox One video game consoles and Windows PCs are centered around a webcam-style, add-on peripheral. The unit allows users to control and interact with their gaming console or computer without the need for a game controller, through a natural user interface using gestures. The technology uses synchronous camera input derived from the user’s movement.
Systems that incorporate gesture recognition rely on algorithms. Most developers differentiate between two algorithmic approaches in gesture recognition: 3D-model-based and appearance-based. The most popular method makes use of 3D information about key body parts in order to obtain several important parameters, like palm position or joint angles. In contrast, appearance-based systems use images or videos for direct interpretation.
In addition to the technical challenges of implementing gesture recognition, there are also social challenges. Gestures must be simple, intuitive and universally acceptable. Further, input systems must be able to distinguish nuances in movement.
One of the defining attributes of computer security is the principle of multifactor authentication, which boils down to three basic concepts: something you know, something you are, and something you have.
Something You Know – a password, a pin number, a code
Something You Are – retina scan, finger prints, DNA
Something You Have – a smart card, a USB token, a magnetic strip card
A system with all three methods of authentication is thought to be fairly secure as far as logins are concerned, but the downside is that most systems don’t use multifactor authentication. Most organizations rely very heavily on passwords for authentication because they are the easiest to deploy and the most affordable. Biometric scanners that read retina and fingerprint data can be unbelievably expensive and typically require the user to be on location to work. Smart cards, USB tokens, and magnetic cards can all be misplaced and/or stolen. This leaves passwords and the like as the most cost- and time-effective way to authenticate with a system, so long as the user doesn’t keep theirs on a sticky note under their keyboard.
Passwords have been a contentious subject for many different groups and for good reason. As the single point of failure for user authentication, no one can agree on how complex or simple a password should be. Should a password consist of uppercase, lowercase, numbers, and special characters? Should it be several random words jumbled together? The Internet has many things to say about this and the results are often hilarious.
No matter what side of the tracks your opinion lies on, there is but one truth to passwords and their weaknesses: when your password gets cracked, it will be by a machine, not a person. What I mean is that a random person on the Internet stumbling across your account and guessing at the password until they gain entry is slow, inefficient, and quite frankly a waste of their time. Hackers will instead use a database of password hashes and cracking algorithms to attack many passwords at the same time, and they are fast. However, there is an easier method of access which trumps programmatic password cracking in required effort and speed, which is simply to crawl the Internet for systems and devices that are still using their factory default password, i.e. admin/admin.
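To make the machine-versus-person point concrete, here is a toy sketch of an offline dictionary attack. It assumes a hypothetical leak of *unsalted* SHA-256 hashes (real cracking tools are vastly faster and more sophisticated; this only illustrates why weak passwords fall in bulk):

```python
import hashlib

def crack(stolen_hashes, wordlist):
    """Toy offline dictionary attack: hash each candidate word once and
    look it up against every stolen (unsalted) SHA-256 hash at once."""
    recovered = {}
    for word in wordlist:
        digest = hashlib.sha256(word.encode()).hexdigest()
        if digest in stolen_hashes:
            recovered[digest] = word
    return recovered

# A hypothetical leaked database of unsalted password hashes:
leaked = {hashlib.sha256(p.encode()).hexdigest() for p in ["admin", "letmein"]}

print(crack(leaked, ["123456", "admin", "qwerty", "letmein"]))
# one pass over the wordlist recovers both passwords
```

Note that one hashing pass attacks every stolen hash simultaneously, which is exactly why cracking scales so well for attackers, and why salting and slow hash functions matter.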
A story broke in late 2013 about a family who awoke in the night to the sound of an intruder that turned out to be someone accessing their daughter’s IP-based webcam. The portion of the story that the media carefully left out was that the parents were negligent in setting up their webcam and left it exposed to the Internet with the default password, not knowing that factory passwords are publicly accessible. PROTIP: They very much are. Even more likely is that their router was defaulted as well and accepting traffic from the Internet on all ports. This is equivalent to leaving the front door to your house open 24/7 because it makes it easier for you to get in and out. So the short answer to the question of what criteria to base your password policy on is simply put, “Don’t be those guys.”
Our knowledgeable technicians here at Colorado’s top data center, Data102, have some helpful tips of their own for choosing the optimal password. There are a few hard and fast rules to live by when creating passwords that will keep a user safe from unauthorized access across the board:
- Though maximum complexity isn’t necessary, avoid using any words that reference your personal life in any way. Like real-life intruders, malicious users on the Internet do their homework too.
- Still though, make your passwords as complex as you can remember.
- Change your passwords every 1-3 months.
- Use different passwords for all of your online profiles. A single common password becomes a single point of failure. There is a huge difference between someone accessing old e-mail and having their way with your bank account.
- If you have trouble with any of these, enlist a password manager to do the heavy lifting for you. Password managers are a godsend for system administrators or just those with a horrible short-term memory.
- Even if you believe your newly created password is safe, our experts would still suggest you get some AntiSpam protection solutions in place for your email accounts. DirectMX, for an example, provides yet another thick locked gate for intruders to try to break down if hacking your email account is something on their to-do list. Don’t make the hacking process easy for bad guys, add another lock and key situation to the mix with AntiSpam filters. | <urn:uuid:03c4298f-925a-41a5-8fd0-b6a661b56f77> | CC-MAIN-2022-40 | https://www.data102.com/tag/policy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00462.warc.gz | en | 0.952571 | 950 | 3.09375 | 3 |
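The first two tips above can be sketched as a simple checker. The rules and thresholds here are illustrative only, not an official policy, and the `personal_terms` parameter is an invented stand-in for words from a user's personal life:

```python
import re

def check_password(pw, personal_terms=()):
    """Toy password checker: length, mixed character classes, and no
    references to the user's personal life (per the tips above)."""
    problems = []
    if len(pw) < 12:
        problems.append("use at least 12 characters")
    if not (re.search(r"[a-z]", pw) and re.search(r"[A-Z]", pw)
            and re.search(r"\d", pw)):
        problems.append("mix upper case, lower case and digits")
    lowered = pw.lower()
    if any(term.lower() in lowered for term in personal_terms):
        problems.append("avoid words from your personal life")
    return problems  # an empty list means the password passes these checks

print(check_password("fluffy1984", personal_terms=["fluffy"]))
# flags all three rules for a short, pet-name-based password
```

A password manager, as suggested above, sidesteps the whole problem by generating long random strings that trivially pass checks like these.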
The cybersecurity industry is filled with terms. To stay fluent, one must stay abreast of these terms and their definitions. If your security partner informs you about activity within your environment, you will need to speak their language. One question we often hear is, “what is the difference between a cybersecurity event and an incident?”
Cybersecurity event and incident are both terms that describe the activity in your technology environment. These terms are similar and often confused. In this blog, we will define both terms so you can speak fluently about your cybersecurity.
What Is a Cybersecurity Event?
A cybersecurity event is a change in the normal behavior of a given system, process, environment or workflow.
In other words: when something happens, it’s an event.
An event can be either positive or negative. An average organization experiences thousands of events every day. These cybersecurity events can be as small as an email, or as large as an update to your firewalls.
Examples of a cybersecurity event:
- An employee flags a suspicious email.
- Someone downloads software (authorized or unauthorized) to a company device.
- A security lapse occurs due to a server outage.
What Is an Alert?
An alert is a notification of a cybersecurity event. (Or, sometimes, a series of events.) You can work with your security provider to determine which types of events you want to monitor with alerts. Depending on your Security Information and Event Management (SIEM) software and support, you can send alerts to any relevant parties who need to take action.
What Is an Incident?
An incident is a change in a system that negatively impacts the organization, municipality, or business. For example, an incident might take place when a cyber attack occurs.
Note: an attempted breach is not the same as an actual breach. This means, if you count breach attempts as incidents, you may have more incidents than what actually occurred. This mistake creates white noise and alarm fatigue. It also makes the collected incident data less valuable.
Examples of an Incident:
- An employee replies to a phishing email, divulging confidential information.
- Equipment with stored sensitive data is stolen.
- A password is compromised through a brute force attack on your system.
Related Reading: What is an Example of an Incident?
The Difference Between a Cybersecurity Event and Incident
All incidents are events, but not all events are incidents.
A cybersecurity event can include a broad range of factors that affect an organization. Security events happen all the time, with hundreds, thousands and even millions occurring each day. An event may need examination to determine whether it poses a security risk or needs documentation. With the extensive amount of events that occur in one day, automated tools like SIEM software are often used to select which ones require attention.
Incidents refer to the more specific events that cause harm to your environment. Security incidents typically happen less often than cybersecurity events.
A security incident always has consequences for the organization. If an event causes a data or privacy breach, it immediately gets classified as an incident. Incidents must get identified, recorded, and remediated. This is why monitoring security events is so important. Organizations must take a proactive approach to look out for events that could cause serious problems.
How to Handle Events and Incidents
Cybersecurity events and incidents are handled in different ways.
Dealing with an incident is more urgent than dealing with an event. However, steps still exist to help respond to events. For example, you might:
- Run scans to detect any viruses or malware.
- Review files and folders to check for suspicious activity.
- Monitor accounts and credentials for unauthorized changes.
- Perform a traffic analysis.
Having the right cybersecurity event and incident management system is essential. Manually dealing with the number of events that occur in one day isn’t practical.
An automated system filters out the events that aren’t important. Then, the system compares the events to your business’ everyday activity. This allows you to turn your attention to the events and incidents that matter.
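As a rough illustration of that filtering idea, the sketch below triages events against a baseline of everyday activity. The rules and field names are invented for this example; real SIEM platforms use far richer correlation logic:

```python
def triage(events, baseline_sources, alert_types=frozenset({"malware", "brute_force"})):
    """Toy event triage: drop events that match the everyday baseline,
    escalate known-bad event types, and queue the rest for review."""
    escalate, review = [], []
    for event in events:
        if event["type"] in alert_types:
            escalate.append(event)      # likely an incident
        elif event["source"] not in baseline_sources:
            review.append(event)        # unusual, worth a look
        # everything else is normal background noise and is dropped
    return escalate, review

events = [
    {"type": "login", "source": "hq-lan"},        # everyday activity
    {"type": "login", "source": "unknown-vpn"},   # unusual source
    {"type": "brute_force", "source": "internet"} # known-bad type
]
escalate, review = triage(events, baseline_sources={"hq-lan"})
print(len(escalate), len(review))  # 1 1
```

The point is the funnel shape: most events are discarded as noise, a few are queued for human review, and only the clearly harmful ones are escalated as potential incidents.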
However, these systems cannot run on their own. Both systems need configuration and maintenance to stay effective.
Some organizations have the staff and resources to manage these operations in-house. Others outsource this work to a managed service provider that operates their security systems for them.
Have A Security Incident Response Plan
When dealing with an incident, you need a security incident response plan. An incident response plan outlines the steps you need to take when an incident occurs.
A security incident response involves a number of different variables, such as who should take care of which tasks and how to respond to prevent future incidents.
When responding to a security incident, you need to identify any threats and take steps to contain any infections. After eradicating these threats, it’s important to recover any affected systems. Finally, perform a review to see if there are any lessons that can be learned.
Once an incident has been flagged, the goal is to resolve it as quickly as possible. Sometimes a temporary fix is required until a more permanent solution is implemented. The most important goal is to limit any damage and to prevent the problem from getting worse.
You can outsource your incident management, but it’s important to remember that an incident can affect an organization in many ways. Everyone should know their role in the resolution. It’s helpful if all employees know how to respond when an incident occurs. For example, you might need your customer service representatives to let customers know about an issue that is affecting operations.
If you do partner with a security expert, be sure they understand your environment, your needs, and the way you do business. A great cybersecurity expert will monitor all the events on your system and alert you to anything troubling. They will also resolve any incidents before they cause harm to your organization.
Ready to learn more about BitLyft Cybersecurity? Sign up for a free demo, and we’ll schedule a short conversation to learn more about how we can help you.
Read Next: What is a Security Incident Response Plan?
Secure Texting: Communication’s Unicorn
Does secure texting exist, or is it as elusive as a clear photo of bigfoot? To answer that question, we have to take a look at the main SMS (short message service) protocols.
The majority of the world’s texting is done using either the Global System for Mobile Communications (GSM), High Speed Packet Access (HSPA) or Long Term Evolution (LTE) standards. Under these systems, text messages are transmitted from devices to a short message service center. This center stores the messages and attempts to send them on to the recipients. If it cannot reach them, the messages are queued to be tried again later.
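The store-and-forward behaviour of a short message service center can be sketched in a few lines. All names here are invented; this only models the "deliver what you can, queue the rest for retry" loop described above:

```python
from collections import deque

def delivery_pass(queue, reachable):
    """One delivery attempt over a store-and-forward queue: deliver
    what we can, keep the rest queued for a later retry."""
    delivered, still_queued = [], deque()
    while queue:
        msg = queue.popleft()
        if reachable(msg["to"]):
            delivered.append(msg)
        else:
            still_queued.append(msg)  # held at the SMSC for a retry
    return delivered, still_queued

queue = deque([{"to": "alice", "text": "hi"}, {"to": "bob", "text": "yo"}])
delivered, queue = delivery_pass(queue, reachable=lambda who: who == "alice")
print(len(delivered), len(queue))  # 1 1
```

Note that nothing in this model tells the sender whether delivery succeeded or when a retried message finally arrives, which is precisely the reliability gap discussed next.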
The Issues with SMS
The main problems with SMS messaging are that it is both unreliable and insecure.
The Reliability of SMS
Unfortunately, SMS messages are inherently unreliable. The sender does not know whether their message has been delivered, nor whether it has arrived on time. On top of this, some messages can be completely lost, while others may only be received long after they were needed.
SMS Security Problems
SMS messages have issues with confidentiality and authentication, as well as a number of widely known security vulnerabilities.
Messages sent with GSM are only optionally encrypted between the mobile station and the base transceiver station. If they are encrypted, they use the A5/1 cipher, which is known to be vulnerable. This makes it possible for anyone with enough motivation to read the messages.
If that isn’t bad enough, the authentication process is also flawed. Users are authenticated by the network, but the user does not authenticate the network in return. This makes the user vulnerable to man-in-the-middle attacks.
You may think that you are safer if you use LTE, but renegotiation attacks can be used to force your phone to use GSM instead.
On top of this, there are also the dangers of SMS spoofing, sim swapping, and a variety of other security vulnerabilities. Since we can’t trust the encryption or authentication processes in SMS, it’s best to assume that any SMS you send can be intercepted and accessed.
As you can see, secure SMS is like a unicorn. It doesn’t exist, and you should never use the medium to transmit any sensitive or valuable information. Because of this, SMS messages should either be avoided or strictly controlled, particularly in tightly regulated fields like healthcare. All it takes is one message that accidentally contains ePHI, and your organization could be feeling the heavy hand of HIPAA penalties.
But I hear the term secure texting all the time…
That’s true, lots of providers refer to their offerings as secure texting. But the majority of these services aren’t using SMS. If they are, then they certainly aren’t secure and you should steer clear of anything to do with the company.
How Can Messages Be Sent Securely?
Although the standards used for SMS are lost causes, that doesn’t mean that you can’t securely exchange short written messages.
The answer? LuxSci’s SecureText.
LuxSci’s solution doesn’t send sensitive information over the standard protocols used for SMS, so you don’t have to worry about any of the security issues that surround SMS messaging.
SecureText transmits its data with TLS protection, stores its information with 256-bit AES, and data is never kept on the recipient’s device. Recipients use password-based authentication to access the information and messages are securely stored in LuxSci’s databases. Every step is safe and completely HIPAA compliant.
The best part? No one has to download yet another app to send or receive secure messages.
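LuxSci has not published its implementation details, so the following is only a generic sketch of the password-based authentication step such a service relies on: a salted, deliberately slow key-derivation function (here PBKDF2 from Python's standard library) so that stored credentials are never kept in plain text:

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    """Derive a slow, salted hash suitable for storing instead of the
    plain-text password (PBKDF2-HMAC-SHA256, 200k iterations)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

The random per-user salt and the high iteration count are what make offline cracking of a stolen credential database expensive, in contrast to the plain SMS ecosystem described above.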
How Does SecureText Work?
The sender uses LuxSci’s SecureLine encryption service:
- They write their message in either LuxSci’s WebMail or their preferred email program.
- In the address field, the sender enters a special email address that is based on the recipient’s phone number. For example, an address of email@example.com would send the message to a US recipient whose number is 211-436-7789. Once the sender is finished, they hit the send button.
- The recipient will receive a normal SMS that tells them a secure message is waiting for them. The message contains a link, which opens up their phone’s web browser:
- If they have recently viewed another SecureText message, the new message will immediately be displayed.
- If the recipient has used SecureText to view messages at an earlier date, they will need to enter their password before they can view the message.
- If this is the recipient’s first SecureText message, they will need to set up a password before they can view the message. | <urn:uuid:91bd0b22-a6e9-4c9b-93d4-9cb5e6071352> | CC-MAIN-2022-40 | https://luxsci.com/blog/secure-texting-communications-unicorn.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00462.warc.gz | en | 0.931365 | 1,013 | 2.71875 | 3 |
When you’re working with data, it can feel like there are a million different ways to categorise. Much of the time, this comes down to whether the information is qualitative or quantitative in nature.
While both of these terms might seem like they’d be used almost interchangeably in the world of analytics and statistics, there’s actually quite a bit of nuance between them. Understanding the difference between qualitative and quantitative data can help you make sense of which pieces of information fall into which category and why that matters when analysing data sets.
What is qualitative data?
Qualitative data is information that is not quantifiable. This means that it is not measured in numbers and does not follow a numerical scale: a person’s hair colour, for example, is qualitative, whereas their age and height are not. Qualitative data is non-numerical data that describes or explains a person’s experience or opinion.
Qualitative data is subjective, meaning that it reflects the opinions, feelings, and experiences of the person who provided it. Qualitative data can be used in analysis to explain why people think or feel the way that they do, but it’s generally not used to make conclusions or generalisations about a larger group of people.
What is quantitative data?
Quantitative data is data that can be measured, is often numerical, and follows a scale. This can include things like the number of page views on a website, the amount of money made through a product, or the weight of an object.
Quantitative data can be used to make inferences about a larger group of people and is often used to make conclusions and generalisations about human behaviour in the population. It’s objective, meaning that each data point was measured without any opinion or feelings on the part of the person who measured it.
The difference between the two
Generally speaking, qualitative data is subjective and doesn’t follow a numerical scale, while quantitative data is objective and often numerical. When you’re analysing data, it’s important to keep these two terms in mind.
If you’re working with mostly qualitative data, it can be difficult to make generalisations about a larger group of people, whereas if you’re working with mainly quantitative data, it can be difficult to explain why people have certain traits. As opposed to quantitative research, qualitative research is more focused on understanding the opinions of those involved.
Quantitative studies look at the number of views and likes on a given video or the number of comments left on a blog post. Qualitative research looks at the feelings and motivations of those involved in an issue. By asking questions such as, “Why did you like this video?” or “Why did you share this article?” you can begin to understand what drives individuals to engage with certain content.
Types of quantitative research
Quantitative research is the most common type of research in psychology, economics, and other social sciences. It involves collecting and analysing large amounts of data. Quantitative researchers usually use statistical tests to analyse the data. They may also collect data by conducting surveys, observations, or field studies.
Correlation: Correlation research is a form of scientific research in which researchers measure the relationship between two or more variables. In correlational studies, the researchers look for correlations between two or more variables in order to determine whether one variable causes another. Correlation is the extent to which two variables are associated with each other.
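As a small illustration of measuring the association between two variables, here is the Pearson correlation coefficient computed by hand on invented data (the "advertising spend vs. sales" pairing is purely hypothetical):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance of the deviations divided by the product of their spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: advertising spend vs. units sold
spend = [10, 20, 30, 40, 50]
sales = [12, 24, 33, 45, 51]
print(round(pearson_r(spend, sales), 3))  # 0.995
```

A value near +1 or -1 indicates a strong association; remember, though, that correlation alone does not establish that one variable causes the other.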
Surveys: Survey research is often used to study consumer behavior, attitudes, or opinions. It can help organisations understand the needs and preferences of their customers and target groups, identify areas for improvement, or test marketing strategies.
Econometric: In econometric research, researchers use mathematical formulas and equations to calculate the relationship between two or more variables (such as money spent on advertising and sales). When this type of research is done properly, it can provide strong evidence for or against the existence of any relationship between those variables.
Types of qualitative research
Qualitative research focuses on the collection and analysis of data from qualitative sources. It can be analysed using various statistical techniques such as correlation and regression analysis, which can provide valuable insights into the underlying causes of human behaviour and experiences.
Case studies: Case studies are one of the most common types of qualitative research that explore the experiences of one person or a group of people. Typically, case studies explore a specific situation or decision, and they focus on the thoughts and feelings that went into that situation or decision.
Focus groups: Focus groups are another common form of qualitative research and are often used alongside other types of research. Focus groups involve bringing a group of people together and discussing their thoughts and feelings about a specific topic. This can be helpful for understanding what people like and don’t like about a product or service.
Interviews: Interviews are a type of qualitative research that involves speaking one-on-one with people and exploring their thoughts and feelings about a specific topic. Interviews can be done in person or over the phone and are most often recorded so that the researcher can refer back to the conversation and write down specific quotes and ideas.
Participant observation: Participant observation is a type of qualitative research that is done over a period of time and involves observing the daily lives of people or a specific group of people. This type of research is often used in psychology and anthropology.
Which type is better for data analysis?
Generally speaking, quantitative data is better for analysis and making conclusions about a larger group of people.
Qualitative data is largely unstructured and cannot be analysed through conventional methods. NoSQL databases do make collecting and storing qualitative data more manageable, but it’s still more difficult and time-consuming to work with than its quantitative counterpart.
Quantitative data is better for analysis mainly because it can easily be standardised and put on a scale. This means that you can compare it across different situations, which is incredibly important when trying to make conclusions about a larger group of people. It also allows you to use statistical analysis, which allows you to make inferences and come to conclusions about a larger group of people.
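"Standardised and put on a scale" can be made concrete with z-scores, which rescale a sample to mean 0 and standard deviation 1 so that measurements from different situations become directly comparable. The page-view numbers below are invented:

```python
from statistics import mean, stdev

def standardize(values):
    """Rescale a sample to z-scores (mean 0, standard deviation 1)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical daily page views for one website
page_views = [120, 150, 180, 210, 240]
z = standardize(page_views)
print([round(v, 2) for v in z])  # [-1.26, -0.63, 0.0, 0.63, 1.26]
```

Once two data sets are on the same z-score scale, a value of +1.26 means "1.26 standard deviations above that set's own average" in both, regardless of the original units.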
Turn data into information and make better decisions
Insightful data analysis can help you discover the gaps in your digital marketing strategies, identify where there is room for improvement, and support training and development efforts.
The marketing strategists at LeftLeads can help you use data to look at past trends and predict future outcomes. For example, an analysis of sales data could show how prices affect demand over time. Our expertise can give you a deeper understanding of your business, allowing you to make more informed decisions that will ultimately lead to better results for your company. | <urn:uuid:5dec4532-c607-4f55-8067-b26d475cdd5b> | CC-MAIN-2022-40 | https://leftleads.com/blog/qualitative-vs-quantitative-data-whats-the-difference/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00462.warc.gz | en | 0.944687 | 1,372 | 3.59375 | 4 |
Broadband Commission encourages broadband to be deployed in world’s poorest countries
The Broadband Commission for Digital Development released a report urging governments around the world to create and deploy national broadband.
According to the report, isolated government broadband projects are inefficient and prevent the building of infrastructure that is “as crucial in the modern world as roads or electricity supplies.”
A study on China used in the findings suggests that every 10% increase in broadband penetration could contribute an extra 2.5% to GDP. Other data in the report suggest that in low- and middle-income countries, a 10% rise in broadband penetration could add up to a 1.4% rise in economic growth.
“This new Broadband Commission report indicates that improvements in broadband penetration directly correlate to improvements in GDP,” said ITU secretary-general Hamadoun Touré. “Basically, the more available and cheaper broadband access is, the better for a country’s economy and growth prospects.”
The ITU released a report in May showing that, on average, consumers were paying 50% less for high-speed internet connections than they were two years ago. The decrease in pricing was mainly in developing countries where original prices were very high.
The countries with the cheapest broadband prices compared to average national monthly income are all wealthy economies: Monaco, Macau (China), Liechtenstein, the US and Austria.
In 31 industrialised countries, consumers pay an equivalent of 1% or less average monthly GNI per capita for an entry-level broadband connection. Comparatively, in 19 countries broadband connections cost more than 100% of the monthly GNI per capita.
The report was released at the Commission’s third meeting, held at the UNESCO headquarters in Paris.
“Access to broadband is only one part of the picture – developing human capacity is absolutely vital, to ensure that individuals have the skills to make the most of new technologies,” commented UNESCO director-general Irina Bokova. “All factors – national, international, private and public – must work together to these ends.” | <urn:uuid:b3ea098d-1ff7-4696-b6e0-9987178a7d99> | CC-MAIN-2022-40 | https://www.capacitymedia.com/article/29ot4ix5ztfvc8w5dor28/news/broadband-commission-encourages-broadband-to-be-deployed-in-worlds-poorest-countries | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00662.warc.gz | en | 0.928371 | 429 | 2.578125 | 3 |
The past two years have brought changes in secure remote access, working from home, and VPN use. The COVID-19 pandemic has accelerated the adoption of secure remote access and Work From Home (WFH), though not without its challenges. Organizations are pressured to focus on cybersecurity, productivity, and resource/data protection as more cyber threats and breaches/data leaks occur in this modern environment.
Security measures like VPNs (Virtual Private Networks) and MFA (Multifactor Authentication) are being applied within the remote access environment to counter these threats. In essence, a VPN establishes a protected network connection when using unsecured or public networks. It encrypts the traffic that flows over the connection and hides and disguises your identity.
How does VPN work?
VPN hides your IP address by redirecting the network traffic through a remote server run by the VPN host. For example, your work or home device, i.e., a laptop, has its own IP and MAC address. If you are on a public network, anyone who inspects the network traffic can see your IP address and, in essence, know your location (city, ZIP code, area code) and your ISP (Internet Service Provider). By applying a VPN connection, the VPN host acts as the source of your connection while encrypting the traffic. Henceforth, anyone searching the public network will see the encrypted traffic but will not know the data, nor where and who it is from.
How does MFA work?
MFA requires employees or users within your organization to identify themselves with factors beyond a username and password. It relies on three main types of authentication factors:
1. Things you know: Password/PIN
2. Things you have: Smartphone/Token
3. Things you are: Biometrics such as fingerprint or iris recognition.
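The "things you have" factor is commonly implemented as a time-based one-time password (TOTP) generated by a smartphone app or hardware token. As an illustrative sketch of the underlying mechanism (RFC 6238), using a demo secret that is not tied to any real product:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (an RFC 6238 sketch)."""
    counter = int(at // step)                 # which 30-second window we are in
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's phone hold the same shared secret; both sides
# derive the same code for the current window, so a matching code proves
# possession of the enrolled device.
demo_secret = b"shared-secret-demo"           # demo value, not a real secret
print(totp(demo_secret, time.time()))
```

Because the code changes every 30 seconds and depends on a secret stored only on the enrolled device, a stolen password alone is not enough to log in.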
Applying VPN and MFA adds a layer of security to your network, but it is not enough by today’s standards. Around 93% of organizations use VPNs; however, 94% of organizations are aware that VPNs are one of the main targets of cybercriminals. Modern VPN vulnerabilities exist in SSL VPN products. The CVE-2019-11510 vulnerability, for example, allows attackers to retrieve arbitrary files, including files where authentication credentials are stored. With stolen credentials, attackers can connect to the VPN, change configuration settings, and run further exploits aimed at obtaining a root shell.
Over the years, many vulnerabilities have been found in VPNs, from man-in-the-middle attacks to offline password cracking and VPN fingerprinting. Furthermore, with the shift to secure remote access, we have seen a staggering 2,000% increase in VPN attacks globally, with ransomware and malware campaigns built on these vulnerabilities. Though these flaws get patched, VPNs, while still useful, are an older security tool and require a thorough approach to cybersecurity.
Zero Trust – Modern Approach To Internal Security and Secure Remote Access
Although Zero Trust is not a new framework, it holds value in modern applications and uses. In essence, the Zero Trust solution provides seamless and secure connectivity to private applications without placing users on the network or exposing apps to the internet. To fully grasp the Zero Trust concept and its solution, here are some elements the Zero Trust follows:
– Monitor all data sources and computing services.
– Secure communication and traffic.
– Enable access on a ‘need to know’ basis.
– Access to resources, driven by a dynamic policy/set of rules.
– Organizations can monitor and measure the integrity of their security and their users and devices.
– All authentication and authorization actions are enforced before access is allowed.
- Organizations collect as much information as possible regarding their security posture.
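The "dynamic policy" and "need to know" elements above can be pictured as a function that re-evaluates every request against identity, device posture, and MFA status before granting access. A minimal, hypothetical sketch (the roles, resources, and checks are illustrative, not any real PAM product's API):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str
    device_compliant: bool   # e.g. disk encrypted, patches current
    mfa_passed: bool
    resource: str

# Hypothetical 'need to know' policy: which roles may reach which resources.
POLICY = {
    "hr-database": {"hr-admin"},
    "payroll-app": {"hr-admin", "finance"},
}

def authorize(req: Request) -> bool:
    """Authenticate and authorize on every request; never trust by default."""
    if not (req.mfa_passed and req.device_compliant):
        return False                         # fails posture or identity checks
    return req.role in POLICY.get(req.resource, set())

print(authorize(Request("alice", "finance", True, True, "payroll-app")))  # True
print(authorize(Request("bob", "finance", True, True, "hr-database")))    # False
```

The key design point is that access is decided per request from current signals, rather than granted once and trusted indefinitely because the user is "inside" the network.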
Privileged Access Management (PAM) is one of the best tools for securing network infrastructure in line with the elements above. PAM solutions protect your network against privileged data breaches, help to prevent intentional or unintentional misuse and access-right abuse, and prevent the exploitation of your systems and network protocol vulnerabilities. Moreover, PAM solutions enable secure remote access for employees around the globe through a secure connection flow. They mitigate security risks that VPN and MFA alone would not, and they introduce a grid-like environment in which organizations can monitor users, authenticate and authorize access, and set policies or rules for additional security. PAM solutions can also include AI/ML technology to automate security and apply biometric intelligence to further authenticate employees and users or report suspicious behavior.
Author: Damian Borkowski– Technical Marketing Specialist | <urn:uuid:52b71cc4-d1f2-4e66-8a90-78865c35b071> | CC-MAIN-2022-40 | https://fudosecurity.com/company/blog/remote-access-vpns-zero-trust-and-how-to-stay-safe/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00662.warc.gz | en | 0.925074 | 961 | 3.15625 | 3 |
The Decade of IoT is positioned to be a revolutionary period of technology trends and advancements. The Internet of Things (IoT) has brought many benefits to a wide variety of sectors, from smart cities to agriculture, energy, and utilities. However, because so many different industries deploy the technology, understanding of device security varies widely. This has left IoT networks vulnerable to attacks from malicious hackers, who can either shut down a network, leaving vital infrastructure unusable, or leverage the vast number of IoT devices as part of large-scale attacks.
According to a study surveying security threats and concerns worldwide, 33 percent of respondents feel uneasy about the attacks on IoT devices that may impact critical operations and 26 percent of respondents are concerned with the lack of security framework built around the IoT environment.
Let’s look at three key components that you should think about when securing your IoT devices.
The ability to design security into both IoT devices that comprise networks and the IoT network as a whole is essential to mitigating security risks in the IoT. This means that security by design, also called ‘secured by design’ or ‘built-in security’, needs to become a necessity for both users and manufacturers of IoT devices.
There are several steps that can be taken to help this, typically involving assessments of points of vulnerability and how they can be closed. These steps are Security at Concept Stage: Threat Intelligence & Mitigation, Security at the Build Stage and Security at the Management Stage. Find out more in our eBook, a Guide to Security by Design for the IoT.
Security is top of mind for organizations deploying IoT solutions. With potential risks including massive data overage costs, exposure of sensitive data to unauthorized parties, and damage to brand reputation and customer success, it’s critical that organizations deploying IoT have the tools in place they need to properly protect their connected solutions.
KORE offers expert security solutions that lessen potential risks such as overage costs, exposure of sensitive data, and damage to customer success. KORE SecurityProTM provides the visibility you need to protect the IoT devices that matter the most. Our award-winning solution captures actionable intelligence to guard transmitted data from threats, misconfigured firmware, network failures, and more. Click here to watch our 90-second demo.
The GSMA, in an attempt to create a universal standard for authentication and authorization of IoT devices, has created IoT SAFE (IoT SIM Applet For Secure End-to-End Communication). This initiative enables IoT device manufacturers and IoT service providers to use the SIM as a robust, scalable, and standardized Root of Trust to protect IoT data communications.
A root of trust can be a hardware, firmware, or software component that performs security functions. In the case of IoT SAFE, the SIM is the hardware root of trust. IoT SAFE delivers a common procedure to secure data communications with a reliable SIM, rather than using proprietary and possibly less secure hardware elements in the device.
Join our next Communications Podcast: Making the IoT SAFE on August 18, 2022, 3:00 PM BST and find out what IoT SAFE is, how it works, and how KORE can help companies leverage SIM capabilities to enhance the security of their connected devices.
And to learn more about security as whole, download our eBook, “Placing Security at the Forefront in IoT”.
KORE keeps you up to date on all things IoT.
Stay up to date on all things IoT by signing up for email notifications. | <urn:uuid:d657a016-6e6a-4bcc-a850-8df1f4b9f658> | CC-MAIN-2022-40 | https://www.korewireless.com/news/security-solutions-in-cellular-connectivity | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00662.warc.gz | en | 0.941897 | 731 | 2.625 | 3 |
New Training: Describe LISP
In this 6-video skill, CBT Nuggets trainer Jeff Kish teaches you about the Locator/ID Separation Protocol (LISP). Learn about the roles of network devices in a LISP architecture, and gain an understanding of LISP packet flow. Watch this new Cisco training.
Watch the full course: Cisco CCNP Enterprise Core
This training includes:
27 minutes of training
You’ll learn these topics in this skill:
LISP Control and Data Planes
LISP Roles and Terminology
Review and Quiz
What is Locator/ID Separation Protocol (LISP)?
Locator/ID Separation Protocol (LISP) is a routing architecture that fundamentally changes the way IP addresses are used. Traditionally, a single IP address encodes both a device's identity and its location. LISP separates the identity and the location into two separate numbers: an identifier and a locator. By doing this, LISP simplifies multihomed routing, allows for scalable any-to-any WAN connectivity, and supports virtual machine mobility within a data center.
LISP supports both IPv4 and IPv6 addresses, and the identifiers and locators it creates can be either IP addresses or arbitrary elements, such as a MAC address or a GPS coordinate.
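That separation can be pictured as a mapping lookup: the identifier (EID) names the device, and a mapping system returns the locator (RLOC) toward which traffic is encapsulated. A simplified sketch, using a local dictionary as a stand-in for LISP's distributed mapping system (the prefixes and addresses are illustrative):

```python
import ipaddress
from typing import Optional

# Hypothetical map cache: EID prefix -> RLOC of the site's border router.
MAP_CACHE = {
    "10.1.0.0/16": "203.0.113.1",
    "10.2.0.0/16": "198.51.100.7",
}

def lookup_rloc(eid: str) -> Optional[str]:
    """Return the locator (RLOC) for the prefix containing this identifier (EID)."""
    addr = ipaddress.ip_address(eid)
    best = None
    for prefix, rloc in MAP_CACHE.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, rloc)               # longest-prefix match wins
    return best[1] if best else None

# The device keeps its identity (10.1.42.9) even though packets are
# forwarded toward the locator 203.0.113.1.
print(lookup_rloc("10.1.42.9"))
```

If the device moves to another site, only the mapping changes: its identifier stays the same, which is what makes mobility and multihoming simpler under LISP.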
You do not need to implement LISP in your IP network all at once. You can gradually introduce it.
Many vendors have implemented LISP, including Cisco, and there is an open-source implementation called OpenLISP. This is not to be confused with the Lisp programming language.
Smartphones are popular around the world, so it’s not surprising that people everywhere care about digital privacy. Phones are an incredible piece of technology that keep us connected to the people, products, and information all around us.
They are also a means through which hackers, cybercriminals, government agencies, and other groups can gather your personal information.
Germany’s new coalition government offers many things digital rights activists have asked for, such as a “right to encryption,” “a right to anonymity,” “increased IT security,” and more. However, in practice, even governments that claim they value encryption often don’t guarantee it.
How can people be sure of their privacy when robust encryption laws exist simultaneously with legal mechanisms for state surveillance and decryption? A deeper look into Germany’s recent past and present makes it clear that the difference between total privacy and some privacy is irreconcilable.
Encryption Backdoors Versus Government Hacking
The government has at least two ways of accessing people’s private information: installing a secret backdoor into encryption protocols or outright hacking. Both methods compromise citizens’ privacy but in different ways.
In 2021, the prior conservative German government issued statistics about its use of hacking for the first time. Police and investigative authorities ordered the more invasive form of online search 33 times across 21 proceedings and used it in 12 cases. Hacking to eavesdrop through surveillance was ordered 31 times and used in three cases. “These authorities use government hacking tools primarily to investigate drug and property crimes, not murder or terrorism as initially intended.”
According to another report, German government hacking wasn’t used in any successful criminal investigation or emergency response between 2017 and 2020. “Government hacking is understood as interfering with the integrity of software – including online services — or hardware to access data in transit, data at rest, and sensors to manipulate a target’s device by law enforcement for the purpose of criminal investigations [in a targeted manner].”
Encryption backdoors would allow the government to bypass any encryption used by the population. Unlike government hacking, using a backdoor to sidestep encryption still compromises security and would be done outside of the protections afforded by law.
Whereas hacking exists within a legal framework, encryption backdoors directly contradict the law as it currently stands. That’s why policy discussions within Germany only extend to government hacking. However, they might influence EU law to allow for encryption backdoors, where they may have a higher chance for success.
German Foreign Intelligence and the CIA / NSA
The European Council, in December 2020, adopted a resolution called Security Through Encryption and Security Despite Encryption. It underlines the importance of encryption for security while simultaneously undermining it by indirectly asking that authorities be given backdoor access.
Such a conflicting approach is not new to German surveillance. During the Cold War, the Federal Republic of Germany’s foreign intelligence service worked with the CIA to decode messages from allies and enemies alike. Dubbed Operation Rubicon, these intelligence agencies both made money off the technology and used it to eavesdrop for decades.
The partnership was considered the “intelligence coup of the century”. The encryption devices, made by a Swiss firm and sold to NATO allies for their own espionage purposes, were owned by the CIA—unbeknownst to the buyers—and enabled the two countries to spy on their own allies with ease.
The US and Germany not only listened freely, but they also collected money from the victims. However, such alliances aren’t always trustworthy in the long term. It turns out that undermining encryption communications can backfire against the perpetrators.
Denmark helped the US spy on countries like Germany, including eavesdropping on German chancellor Angela Merkel between 2012 and 2014. The US National Security Agency accessed text messages and phone conversations of numerous prominent individuals by tapping Danish internet cables with the cooperation of the FE, Denmark’s secret service.
Known by the codename Operation Dunhammer, this surveillance of allied heads of state's digital communications proved that it is not only enemies who cannot be trusted to respect privacy and security. How can ordinary citizens put their faith in governments to secure their privacy if world leaders cannot protect their own?
For almost too many reasons to name, the importance of secure and open communication cannot be overstated: people need to feel like they can chat freely for the sake of staying in touch with friends, engaging in political discourse, conducting business, and so much more.
The group in Germany that supports embedding systematic weaknesses in encryption, to enable intelligence and law enforcement agencies to be more effective, is small.
Governments, like Germany's, are increasingly encroaching on the public's right to privacy. Using the premise of heightened security to extend law enforcement's reach, governments justify hacking and demand backdoors into encryption.
Encryption keeps people safe from cybercrime and prying eyes, but it cannot do that if governments demand access in the name of justice, because once a backdoor is in place, bad actors will find their way in. Germany might be seeking to appease digital rights advocates in the country, but deliberately leaving holes in privacy protection is a risk to the government and its citizens.
Using a hardened phone, a device built from the ground up for maximum security and privacy protection, is the only way to ensure your digital communications are never compromised. Business leaders, journalists, lawyers and, as the above has made clear, world leaders need to know that no one can crack their phone.
The only way to ensure your conversations remain confidential is to get a phone with military-grade encryption with secondary security features hosted on a private server to protect against potential vulnerabilities. | <urn:uuid:b8dd0517-ee6e-4613-991f-ee9032074cb9> | CC-MAIN-2022-40 | https://myntex.com/blog/index.php/2022/01/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00662.warc.gz | en | 0.944401 | 1,181 | 3.15625 | 3 |
Chances are, you’ve heard the term “ransomware” before. If you’re familiar with this particularly nasty bit of malware, the rest of this blog will be a familiar review. If you’re new to the term, let’s introduce you to the mean-spirited cyberattack known as ransomware.
Ransomware is simple to figure out if you’re familiar with how different malware types are named. “Scareware” is meant to intimidate a user, “spyware” spies on a system, and so on and so forth.
With that in mind, it makes sense that “ransomware” extorts its victims for access to their own resources.
Basically, rather than deleting data from a device, a ransomware program effectively encrypts everything on some level—whether that’s a file, a user’s workstation, or even an entire network. Once the user is locked out, the responsible party offers them the key… for a price.
Amplifying the pressure, these offers are often time sensitive. If the ransom (hence the name) isn’t received before the deadline passes, the attacker promises to delete everything. Of course, there’s no guarantee that the hacker holds up their end of the bargain, too, so paying these criminals never really works out.
While it sounds like a plot pulled from a summer blockbuster, ransomware is a very real and current threat to data security that has caused businesses no small amount of pain.
The actions available to a business in response to ransomware depend on when those actions are taken. If a business acts only once ransomware has taken hold, it is too late to do much at all. Proactivity is the name of the game, as it so often is.
To keep your data protected will take an approach with two considerations:
Ransomware is spread just as any malware is—by heavily relying on an end user to allow it access. Therefore, to keep it out, you need to ensure your team can identify and avoid things like phishing, and that they are vigilantly following the best practices you’ve taught them. This makes comprehensive user education crucial for you to follow through on.
The rule of thumb is this: once your data’s been encrypted by ransomware, it’s the same as though it was deleted. Therefore, you need to have a comprehensive and up-to-date backup saved and isolated from the original copy. This will allow you to safely restore your systems and resume work if ransomware were to strike.
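One way to keep that isolated backup trustworthy is to record a cryptographic hash of every file when the backup is made, then verify the hashes before restoring, since a ransomware-encrypted copy will no longer match. A minimal sketch of the idea (the workflow and paths are illustrative, not a specific backup product):

```python
import hashlib
from pathlib import Path

def snapshot_hashes(root: Path) -> dict:
    """Record a SHA-256 digest for every file under the backup root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify(root: Path, manifest: dict) -> list:
    """Return the files whose contents no longer match the manifest."""
    current = snapshot_hashes(root)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]

# Usage: take snapshot_hashes() when the backup is made, store the manifest
# offline with the backup, and run verify() before trusting a restore.
```

Keeping the manifest offline alongside the isolated backup means that even if ransomware reaches the backup copy, the tampering is detected before a bad restore.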
Netcotech is here to help protect your business and its productivity from threats of all kinds, including ransomware. To learn more about what we can do, reach out to our team by calling 1-888-238-7732 or 780-851-6000 today.
When you subscribe to the blog, we will send you an e-mail when there are new updates on the site so you won't miss them.
What’s the Importance of Linux Server Roles?
If you're new to the world of Linux, you may have found yourself in the middle of a hotly debated topic, one that has seemingly lasted since the inception of the modern operating system. What is that topic, you might ask? Well, Linux versus Windows, of course.
In this post, we aren't going to pretend to address that question, because, in reality, both Linux and Windows offer different layers of value to both the organization and the admin running that environment; however, we are going to discuss some aspects of Linux that have won over the hearts of many admins and fueled this debate — more specifically, server roles.
What Exactly are Linux Server Roles?
Linux servers can be configured in a variety of ways to accomplish different tasks, from file servers and email servers to desktops, meeting the demands of today's modern business.
Depending on the specific services that are installed and enabled on that server, the Linux server will have the functionality to perform specific tasks. This, in a nutshell, is the concept of Linux server roles. Server roles define the use and responsibility of a given server based on the services installed on the server.
Overview of Common Linux Services
As mentioned, Linux roles are defined by the services installed on the Linux server to enable specific server responsibilities. So, before we dig into server roles in any more detail, let’s take a look at a brief overview of common Linux services.
Authentication Services. One of the most critical aspects of any dynamic IT infrastructure supporting the needs of a given business is authentication services. Authentication services build and maintain username and password credentials for users within the organization.
Certificate Authority. A certificate authority, or CA, is a service responsible for generating digital certificates to verify identity within an organization. Oftentimes, organizations will build a private certificate authority such that only authorized users within the domain can be validated via the CA to perform certain tasks or access certain services.
Clustering. Clustering is another service that provides tremendous value in how computing resources and availability are managed. Clustering is the concept of multiple servers sharing production resources such that, if one server suffers an outage, the redundancy and high availability provided by the other nodes in the cluster ensure that the organization does not suffer downtime or loss of business continuity.
A common implementation of clustering can be found in the process of designing a MySQL cluster. MySQL is a Linux-based relational database used for storing critical table-based information. Often, organizations will build MySQL clusters to minimize the likelihood that a company will suffer any loss of data should one of the nodes in a MySQL cluster go down.
Database. Database services are critically important for the day-to-day processing of an organization. As mentioned above, in the world of Linux, organizations often run MySQL relational databases to store important table-based information. Often these databases are used for data warehousing, customer information management, and web databases.
DHCP Services. DHCP Services, or Dynamic Host Configuration Protocol Services, is a service used to dynamically assign IP addresses to network-attached devices within the network. This dynamic IP assigning is critically important in the sense that it allows devices within the network to communicate with other devices, and it helps route traffic from the outside world (the internet) to these network-connected devices. Without DHCP services, internal users would not be able to easily communicate with other devices on the network, and the outside world would not be able to communicate with devices inside the network.
DNS Services. A DNS service can be thought of as the phonebook of the internet. When you or I want to access a website, we typically think of the URL associated with the name of the company. For example, if we want to visit Google, we direct our web browser there by typing www.google.com. Behind the scenes, the DNS service dynamically maps domain names to IP addresses to facilitate this process. The reason is that internet routing architecture and the computing infrastructure behind it don't understand the names of organizations the way they understand IP addresses, just as you most likely don't remember the IP address of google.com off the top of your head.
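The phonebook analogy can be seen in a couple of lines of code: a resolver turns a hostname into the IP address that routers actually use. A minimal example using Python's standard library (the public-name lookup is commented out because it needs network access):

```python
import socket

# The system resolver maps a hostname to the IP address routers understand.
# "localhost" resolves locally; a public name such as www.google.com would
# be answered by DNS servers out on the network.
print(socket.gethostbyname("localhost"))          # 127.0.0.1
# print(socket.gethostbyname("www.google.com"))   # needs network access
```

Command-line tools such as `nslookup` and `dig` perform the same name-to-address translation interactively.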
Overview of Common Linux Server Roles
Now that we've discussed the importance of some common Linux services and how these services enable Linux server roles, let’s look at some of the common Linux server roles you may come across as a Linux admin.
Distributed File System. A distributed file system, or DFS, is a file system spread across many file servers, which can span multiple locations, with the intention of creating network-based shared access to files. A DFS presents network-accessible file storage such that all users on the network with the proper permissions can add, delete, or collaborate on files in the shared file system.
Domain Controller. A domain controller manages authentication requests based on predetermined user permissions, user roles, and role-based access rules architected within an environment. Often, this user-role ecosystem is defined through services such as Active Directory or Linux-centric services like OpenLDAP.
Print Server. A print server is a dedicated server to support in-network printing requests. Often this server will run pre-installed and configured print services and run dedicated printer drivers to support the unique printer appliance that you've selected to manage the print requests.
Web Server. A web server is designed to store and present website assets and content. Typically web servers run on a relational database backend such as MySQL or PostgreSQL that houses and maintains all of the specific website attributes needed to run a dynamic website. The web server communicates with a web browser using HTTP and styles the webpage using HTML and CSS to present the dynamic and interactive websites that we know and use today.
Email Server. An email server is a server dedicated to sending and receiving email. To facilitate this, email servers must have email software and services installed on the physical or virtual server. These services allow Linux administrators to create and manage email accounts for domains hosted on the server. Further, Linux admins configure the mail server for specific mail protocols such as SMTP, IMAP, or POP3. Without going into too much technical detail, SMTP servers handle outgoing mail, whereas IMAP and POP3 servers handle receiving mail messages.
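In practice, a client hands outgoing mail to the SMTP service in a short protocol exchange. A sketch using Python's standard smtplib, where the hostname, addresses, and credentials are all placeholders rather than a real server:

```python
import smtplib
from email.message import EmailMessage

def build_message() -> EmailMessage:
    """Compose a simple message (all addresses are placeholders)."""
    msg = EmailMessage()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Test from the mail server"
    msg.set_content("Delivered via the SMTP service on the Linux mail server.")
    return msg

def submit(msg: EmailMessage, host: str = "mail.example.com") -> None:
    """Hand the message to the SMTP submission service over TLS."""
    with smtplib.SMTP(host, 587) as smtp:    # 587 is the mail submission port
        smtp.starttls()                      # upgrade the session to TLS
        smtp.login("alice", "app-password")  # placeholder credentials
        smtp.send_message(msg)

# submit(build_message())  # would contact mail.example.com; run on a real host
```

Receiving works the other way around: the recipient's client fetches the stored message from the server over IMAP or POP3.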
How Does Linux Stand Up to Other Architectures?
Now that we've uncovered the importance of Linux services and Linux server roles, you may be wondering how this architecture stands up against other architectures such as a Windows-based environment. One of the resounding reasons that users may prefer a Linux-based environment over a Windows-based environment is that Linux environments are free to users and organizations. This is undoubtedly a great benefit for financial reasons, but it goes further than that.
One of the greatest challenges that administrators encounter regarding paid license-based services is around license management. Often, administrators find themselves encountering issues around licenses falling out of support, license-related software bugs, and challenges around requesting licensing budget from decision-makers. All of these confounding factors result in many administrators relying on the free, flexible, and highly configurable Linux architecture model.
Hopefully, by this point, you've gained a better understanding of both Linux services and Linux server roles. As you move forward, consider diving into building a deep working knowledge of Linux services, server roles, and general Linux administration practices to help gain a deeper familiarity with Linux. Through the resounding feedback from IT professionals, having a strong background as a Linux practitioner will not only open more doors within one's career but will also provide a strong background in many of the critical responsibilities seen in day-to-day life as an IT administrator. | <urn:uuid:96112fa6-14e2-45fc-a959-5d0ecbed2deb> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/technology/system-admin/whats-the-importance-of-linux-server-roles | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00062.warc.gz | en | 0.924996 | 1,637 | 2.6875 | 3 |
Securing Internet of Things (IoT) Wearables
Jan 19, 2022
By: Elizabeth Peters
What are Wearable Internet of Things (WIoT) devices?
Wearable Internet of Things (WIoT) devices are defined as technological infrastructure that interconnects wearable sensors to enable monitoring of human factors, including health, wellness, behaviors, and other useful data, while improving users’ daily lives. WIoT devices are wearable smart devices like fitness bands, smartwatches, and even smart glasses. WIoT provides easy system accessibility to enable person-centered healthcare. Wearable sensors offer significant advantages to healthcare by automating remote healthcare interventions, including monitoring, treatment, and interoperability between providers and patients. Common examples of WIoT include fitness bands, smartwatches, smart glasses, the Fitbit health monitor, the Pebble smartwatch, Google Glass, and action cameras, all of which inspire new ways of thinking about the IoT for the body and beyond.
Wearable and interconnected devices promise efficient and comprehensive health care. Human health and fitness are areas in which wearables offer insight that smartphones cannot. This is evident from the immense popularity of fitness trackers and smartwatches used by consumers to self-monitor physical activity. Wearables are predominantly used for preventative self-monitoring of conditions such as hypertension and stress.
Users are increasingly demanding basic internet services migration to social networking wearables. Wearable devices have made significant progress in recent years with quickly decreasing costs to consumers and steady advancements to their technological capabilities. Although wearables have benefited from advances in mobile technologies, functionality remains limited in comparison to smartphones. WIoT focuses on connecting body-worn sensors to the medical infrastructure enabling providers to perform longitudinal assessments of their patients in the comfort of their homes.
Synchronization of data into cloud services using IoT infrastructure can be perceived as an efficient implementation of the WIoT infrastructure. All components of WIoT are implemented and interconnected systems that benefit the healthcare industry in several different ways. WBAS are not used often as standalone systems due to their computing power and communication bandwidth limitations. Patients can interact with these wearable systems while actively changing some of the interactive interface’s parameters.
WBAS enables the growth of a scalable, pervasive, data-driven healthcare platform. CaBAS benefits WIoT through energy-efficient routing protocols in several ways:
- handshaking between networked smartphones and sensors for effective data transfer;
- event-based processing which minimizes unwanted data processing on wearable sensors;
- data logs for the enhancement of machine learning algorithms on the cloud;
- person-centered databases storing personalized patient data secured for analysis; and
- data visualization.
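The "event-based processing" item above, forwarding only the sensor readings that matter so the wearable wastes no energy or bandwidth, can be sketched as a simple threshold filter (the baseline and threshold values are illustrative):

```python
def events(readings, baseline: float = 72.0, delta: float = 10.0):
    """Yield only heart-rate samples that deviate notably from the baseline."""
    for t, bpm in readings:
        if abs(bpm - baseline) >= delta:     # an 'event' worth transmitting
            yield (t, bpm)

samples = [(0, 71), (1, 74), (2, 95), (3, 70), (4, 58)]
print(list(events(samples)))                 # [(2, 95), (4, 58)]
```

Only the two anomalous samples cross the gateway to the cloud; the routine ones are discarded on the device, which is what keeps the sensor's radio and battery budget small.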
Similarly, smart devices and wearable devices increasingly face regular threats from highly skilled and organized attackers. The information collected from wearable sensors raises serious privacy concerns, such as exposure of users’ health information and location history: if the collected data is not safeguarded in transmission, it can be accessed by unauthorized parties. A strong network security infrastructure is needed to mitigate the risks of cyber-attacks on WIoT over both short- and long-range communication. Every layer in WIoT, from the wearable sensors to the gateway devices and on to the cloud environment, must be secured with careful precautions to ensure users’ privacy and security are protected.
WIoT identifies its architectural components, including wearable sensors and internet-connected gateways, while supporting human health and behaviors. For WIoT to succeed, it must overcome technical and security challenges while providing a flexible framework for networking, computation, storage, and visualization. WIoT researchers hope to integrate healthcare best practices over the next 10 years, as outlined at (https://kalypso.com/industries/medical-device/smart-connected-medtech): cost-effective care, early illness detection, efficient and easy remote administration of medications, and minimal provider visits will all become normal processes and procedures. When WIoT gets to this point, greater value will be delivered to patients, manufacturers, and providers through the merging of smart connected products and advanced analytics.
Don't miss a beat!
Get regular content and event updates delivered to your inbox.
We hate SPAM. We will never sell your information, for any reason. | <urn:uuid:cdc9e086-f611-481d-9f7e-4c7bdc9192ce> | CC-MAIN-2022-40 | https://www.class-llc.com/blog/securing-internet-of-things-iot-wearables | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00062.warc.gz | en | 0.914999 | 890 | 3.1875 | 3 |
melita - stock.adobe.com
The House of Commons Science and Technology Committee has launched an inquiry to look into the lack of diversity in UK science, technology, engineering and maths (STEM).
The Diversity in STEM inquiry will assess the extent of the lack of diversity in the UK’s STEM sectors, look into its impact on the UK science and technology sectors, and measure the success of current attempts to improve diversity in these sectors.
The inquiry will also investigate how government policies, industry initiatives and education providers can work to address the lack of diversity in STEM.
Greg Clark, chair of the Science and Technology Committee, said: “Innovation flourishes in environments where differences in backgrounds and ways of thinking are celebrated. The greatest scientific theories have often challenged the status quo. It is of the utmost importance that the STEM sector is one in which diversity is encouraged, and career paths are made accessible for all.
“It is clear, however, that this is not our current reality. Ethnicity, gender, socioeconomic background and living with a disability play huge roles in determining career outcomes and progress in STEM. For the UK to achieve its full potential as a world leader in science, our thinkers and innovators must reflect the talent of the whole of society.”
The UK’s technology sector in particular suffers from a lack of diversity. A recent report from BCS found that women make up about 17% of IT specialists in the UK, while around 8% of IT specialists are of Indian ethnicity, 2% from a black, African, Caribbean or black British background, and 2% from a Pakistani or Bangladeshi background.
There are many possible reasons for this, including a lack of visible and accessible role models, misconceptions about the types of people who choose tech careers, and a lack of inclusion in the sector putting people off joining or leading to people leaving the industry.
To explore why there is a lack of diversity in UK STEM, how it is affecting the country’s STEM sectors and what can be done to address this, the inquiry is accepting submissions for evidence looking into areas such as the extent of the lack of diversity in STEM, why there is under-representation of groups such as women, ethnic minorities, those from disadvantaged backgrounds, and people with disabilities.
It is also seeking evidence investigating the impact of this lack of diversity in STEM on these sectors.
Read more about diversity
- Not-for-profit organisation is raising funds to publish a book showcasing black women in tech to give young people in UK schools access to role models.
- Computer Weekly has revealed who is on the 2021 list of the 50 Most Influential Women in UK Tech, including this year’s winner, Poppy Gustafsson.
There are already a number of initiatives in place trying to boost diversity and inclusion in the tech sector. Recent examples include: Tech Nation publishing a toolkit aimed at helping technology founders build more diverse and inclusive companies; a group of tech companies creating the Alliance for Global Inclusion to tackle the lack of diversity and inclusion in the sector; the launch of the charity Tech She Can to encourage young women to consider tech careers; and ongoing benchmarking and advisory work from the Tech Talent Charter.
Part of the inquiry will look at what is currently being done to address these issues, and what should be done in the future.
Clark added: “Our committee will now be investigating the barriers holding people back from entering and progressing in science, technology, engineering and mathematics, and asking how the government, academia and industry can bring about transformational change.”
The Diversity in STEM inquiry is accepting submissions of evidence until Friday 14 January 2022. | <urn:uuid:340d6899-d456-401b-ac1b-f1af4a3f70eb> | CC-MAIN-2022-40 | https://www.computerweekly.com/news/252509922/Commons-committee-launches-STEM-diversity-inquiry | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00062.warc.gz | en | 0.94311 | 752 | 2.84375 | 3 |
Types of Passwordless Authentication
Passwordless authentication can be achieved in many ways including:
- Biometric Authentication – uses unique physical traits to verify if a person is who they say they are, without requesting a password.
- Dedicated Hardware Security Tokens – authenticate the user’s identity and prevent unauthorized access.
- Certificate-Based Authentication – is a feature of the widely used SSL/TLS protocol, but is even found in many other internet security protocols.
The second tier of passwordless authentication methods aren’t necessarily bad; they’re just arguably not completely passwordless. These three methods are:
- One-Time Passcodes – similar to magic links but require users to input a code that you send them (via email or to their mobile device via SMS) instead of simply clicking a link. This process is repeated each time a user logs in.
- Magic Links – asks a user to enter their email address into the login box. An email is then sent to them, with a link they can click to log in. This process is repeated each time the user logs in.
- Authenticator Apps – used for two-factor authentication (also called dual-factor authentication, or two-step verification), which is a method of confirming users’ claimed identities by using a combination of 2 different factors.
How Does Passwordless Authentication Work?
The way passwordless authentication works is by replacing passwords with other authentication factors that are essentially safer. With password-based authentication, a user-provided password is matched against what is stored in the database.
In some passwordless systems, such as biometric authentication, the comparison happens is similar but instead of passwords, a user’s distinctive characteristics are compared. For example, a system captures a user’s face using facial recognition, it then extracts numerical data from it, and then compares it with verified data present in the database.
Other passwordless implementations include sending a one-time passcode to a user’s mobile, via SMS.
Passwordless authentication relies on the same principles as digital certificates such as a cryptographic key pair with a private and public key. Think of the public key as the padlock and the private key as the actual key that unlocks it.
Digital certificates work in a way in which there is only one key for the padlock and only one padlock for the key. A user wishing to create a secure account uses a tool (a mobile app, a browser extension, etc.) to generate a public-private key pair.
The private key is stored on the user’s local device and can only be accessed using an authentication factor, e.g., a fingerprint, PIN, or OTP. The public key is provided to the system on which the user wishes to have a secure account.
Is Passwordless Authentication Safe? (What does Passwordless Authentication Prevent?)
Depending on your definition of safe, that will determine whether passwordless authentication is safe. If you mean safe as harder to crack and less prone to the most common cyber attacks, then yes, passwordless authentication is considered safe.
If your definition of safe is protected from hacking, then no, it’s not safe. There’s no authentication system out there which can’t be hacked. There may not be an obvious way to hack, but that doesn’t mean that the most sophisticated hackers can’t work their way around its defenses.
Passwordless techniques are generally safer than passwords. To hack a password-based system, a bad actor may use a textbook attack, which is often considered the most basic hacking technique (keep trying different passwords until you get a match).
Even amateur hackers can perform a textbook attack. On the contrary, it takes a significantly higher level of hacking experience and sophistication to infiltrate a passwordless system.
The benefits of Passwordless Authentication
A smoother and more convenient customer experience
- Improved user experience, particularly on mobile applications, because users only need an email address or mobile phone number to sign up.
- No longer need to create and remember complex passwords
- Users can quickly authenticate
Recovered revenue from reduced customer attrition
- A third of customers will simply abandon their carts if they forget their passwords. If companies can reduce that margin by any amount, that’s revenue back in their pocket that they would have otherwise lost completely. Similarly, a more convenient identity experience will encourage customers to keep coming back thanks to its ease of use and mobile friendliness
Dramatically improved security that eliminates the threat vector of passwords
- It is impossible for hackers to crack passwordless biometrics. They can’t steal the biometric data nor can they trick a service into accepting it. Not only does the biometric data remain locally on a user’s device, but FIDO2-based solutions use cryptographic key pairs that are impenetrable to outsiders.
Long-term savings from the lower total cost of operation and reduce infrastructure
- A password based authentication system is expensive in terms of IT, and support and upkeep. Not only does it cost money to reset a user’s account, but it can also be a huge drain on resources to automate account recovery, staff call centers and maintain a support ticketing system. The long-term savings of eliminating passwords may easily be in the tens of millions for sizable companies.
Significantly decreased complexity in the identity stack, making it easier to add and manage elements
- A big issue for CISOs and IT departments is the complexity of increasing security on a password-based authentication system. Due to evolving security requirements, many companies have been forced to adopt a bolt-on approach in which they add piecemeal elements to their identity stack one by one. This usually results in a difficult-to-manage and unwieldy authentication system. Passwordless solutions make achieving MFA and meeting regulatory requirements simpler, meaning fewer elements are needed to obtain the same results.
The Problems With Passwords
Simple authentication methods that require only username and password combinations are inherently vulnerable. Attackers can guess or steal credentials and gain access to sensitive information and IT systems using a variety of techniques, including:
- Brute force methods – using programs to generate random username/password combinations or exploit common weak passwords like 123456. Brute force attacks involve repeated login attempts using every possible letter, number, and character combination to guess a password.
- Credential stuffing – using stolen or leaked credentials from one account to gain access to other accounts (people often use the same username/password combination for many accounts). Credential stuffing is a type of cyberattack where stolen account credentials, typically consisting of lists of usernames and/or email addresses and their corresponding passwords, are used to gain unauthorized access to user accounts.
- Phishing – using bogus emails or text messages to trick a victim into replying with their credentials. Phishing hacks are a form of cyberattacks designed with the aim of getting a user to divulge compromising information. As its name would imply, phishing is a targeted attack against a particular user or set of users based on their unique profile.
- Keylogging – installing malware on a computer to capture username/password keystrokes. A Keylogger Attack involves the illicit use of a keystroke logging program to record and capture passwords. Hackers can infect a machine with a keylogger by planting them in legitimate websites or in phishing messages.
- Man-in-the-middle attacks – intercepting communications streams (over public WiFi, for example) and replaying credentials.
How To Implement Passwordless Authentication
Here’s how to approach implementing passwordless authentication:
- Pick your mode: The first step is choosing your preferred authentication factor. Available options range from fingerprints and retina scans to magic links and hardware tokens.
- How many factors: It’s recommended to use multiple authentication factors with or without passwordless. Reliance on one factor, regardless of how safe it may seem, is not recommended.
- Buy required hardware/software: You may have to buy equipment to implement biometric-based passwordless authentication. For other modes, like magic links or mobile OTPs, you may only have to procure software.
- Provision users: Start registering people on your authentication system. E.g., for a face recognition system, you will need to scan the faces of all your employees.
MFA vs Passwordless Authentication
Passwordless authentication simply replaces passwords with a more suitable authentication factor. On the other hand, MFA (multi-factor authentication) uses more than one authentication factor to verify a user’s identity.
Multi-factor authentication is a term used to describe authentication that requires two or more factors. Normally, this includes both a one-time passcode and a regular password.
Many passwordless solutions use some form of multi-factor authentication (MFA), to prevent threat actors from stealing and using the device associated with a passwordless account. To achieve MFA without a complicated authentication process, device fingerprinting provides a second, invisible factor that ensures only registered devices can be authenticated. When you combine biometrics with device fingerprinting, it is effectively impossible for a hacker to impersonate a user. While technically passwordless, it still adds an extra layer of protection than just a password.
Is The Future Passwordless?
The primary reason why passwords are still being used is because a password-based login system is the easiest and the cheapest to implement. However, it is expected that passwordless authentication will take over soon.
In the last two years, there have been more cyberattacks than ever before. This is setting off alarm bells in many companies, with more and more investments being made into biometrics and adaptive authentication.
Many companies have now realized that only using passwords as a form of authentication is the primary reason for data breaches. The cost of implementing passwordless authentication into their organization is nothing compared to the fines and losses incurred due to a data breach.
Last but not least, passwords are a nuisance for users. Hard to remember and a pain to reset. On the other hand, passwordless techniques, like biometrics, are convenient and much more user-friendly. | <urn:uuid:01a94e0c-9135-4ba0-84c9-dbe5fd5be8e6> | CC-MAIN-2022-40 | https://www.logintc.com/types-of-authentication/passwordless-authentication/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00062.warc.gz | en | 0.916384 | 2,097 | 3.328125 | 3 |
How is RBAC different from ABAC and ACL?
RBAC differs from attribute-based access control (ABAC) because it’s based on employee roles rather than user characteristics, such as environment, action types, and more. ABAC attributes tend to fall into four categories: subject, action, object, and contextual. These attributes cover things like age, clearance, and department as well as the action being taken, the object being accessed, and other relevant environmental attributes.
ACL, or access-control list, technology is a user-specific list of permissions that determines which users have access to specific files, systems, processes, and resources. An ACL also determines which actions that user can take. Unlike RBAC, which clusters users together and determines access privilege based on their role, ACL is done at the user and resource level. Typically, ACL works better for individual users while RBAC works better on a company level. | <urn:uuid:0b501a83-c105-4928-8dad-58938ada9160> | CC-MAIN-2022-40 | https://www.lumos.com/guide-rbac | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00062.warc.gz | en | 0.947044 | 190 | 2.515625 | 3 |
The Great Seal Bug Part 1
The Ambassador hung the seal in his office in Spaso House (the Ambassador’s residence). During George F. Kennan’s ambassadorship in 1952, a secret technical surveillance countermeasures (TSCM) inspection discovered that the seal contained a microphone and a resonant cavity which could be stimulated by an outside radio signal.
May 26, 1960 – Ambassador Henry Cabot Lodge, Jr. displays the Great Seal bug at the United Nations.
On May 26, 1960, U.S. Ambassador to the United Nations Henry Cabot Lodge, Jr. unveiled the Great Seal Bug before the UN Security Council to counter Soviet denunciations of American U-2 espionage. The Soviets had presented a replica of the Great Seal of the United States as a gift to Ambassador Averell Harriman in 1946. The gift hung in the U.S. Embassy for many years, until in 1952, during George F. Kennan’s ambassadorship, U.S. security personnel discovered the listening device embedded inside the Great Seal. Lodge’s unveiling of this Great Seal before the Security Council in 1960 provided proof that the Soviets also spied on the Americans, and undercut a Soviet resolution before the Security Council denouncing the United States for its U-2 espionage missions. – U.S. Department of State. (See New York Times article)
A Brief History of Russian Spying, Henry J. Hyde, Republican of Illinois
Russia’s notoriety for eavesdropping and espionage stretches back even to the czars. James Buchanan, U.S. minister in St. Petersburg during 1832-33 and later U.S. President, recounted that ‘we are continually surrounded by spies both of high and low degree. You can scarcely hire a servant who is not a secret agent of the police.’
An 1850-53 successor, Neil S. Brown, reconfirmed that ‘the opinion prevails that ministers are constantly subjected to a system of espionage, and that even their servants are made to disclose what passed in their households, their conversations, associations, etc.’ Otto von Bismarck, who represented Prussia from 1859 to 1862, stated ‘it was especially difficult to keep a cypher secure at St. Petersburg, because all the embassies were of necessity obliged to employ Russian servants and subordinates in their households, and it was easy for Russian police to procure agents among these.’ The tradition intensified and became more sophisticated under the Bolsheviks and their successors. The wife of the Italian ambassador in Moscow during 1927-30 said: ‘Spying on the part of the authorities was so common as not even to be thought of as spying.’
Nonetheless, Western laxity in the face of these dangers also has deep roots. A confidential 1940 memo to the White House from FBI Director J. Edgar Hoover related the results of an investigation triggered by British complaints that shared intelligence was being leaked to the Soviets through the Moscow embassy. The memo revealed that single U.S. employees in Moscow frequented a prostitution ring linked to Soviet intelligence and that classified documents were handled improperly and may have been obtained by Soviet workers. The code room was found open at night, with safes unlocked and code books lying on the table.
By the 1930’s, technical eavesdropping supplemented human espionage. Guests at Spaso House, the U.S. ambassador’s residence, at one point were given cards welcoming and warning them: ‘Every room is monitored by the KGB and all of the staff are employees of the KGB. We believe the garden also may be monitored. Your luggage may be searched two or three times a day. Nothing is ever stolen and they hardly disturb things.’
Such Soviet monitoring techniques have been regularly discovered and occasionally publicized during the postwar period. Incidents revealed during the 1980’s alone are alarming in their scope and seriousness. In 1982, we verified indications that the new embassy building had been penetrated. In 1984, we found that an unsecured shipment of typewriters for the Moscow Embassy had been bugged and had been transmitting intelligence data for years. In 1985, newspapers revealed that the Soviets were using invisible ‘spydust’ to facilitate tracking and monitoring of US diplomats. In December 1986, Clayton Lonetree’s confession revealed that the Soviets had recruited espionage agents among Marine Guards at the embassy. Recently, we found microphones that had been operating in the Leningrad consulate for many years.
Although Moscow had developed over centuries a reputation for severe counterintelligence risks, and although the postwar period was replete with examples of this, U.S. State Department and embassy personnel continued to act like babes in the KGB woods.
The Congressional Record
Henry J. Hyde, Republican of Illinois.
Note: Do not rely on the frequency or wavelength figures. See *** below
“Electronic eavesdropping is not new. But until the Watergate incident, the general public knew little about it. This innocence was first challenged in 1960, when Henry Cabot Lodge, Jr. displayed the now infamous Seal bug to the United Nations as an example of Soviet spying on this country (USA). With this dramatic revelation, bugging first became almost a household word, and widespread public interest has persisted ever since.
The triumph of the Great Seal bug, which was hung over the desk of our Ambassador to Moscow, was its simplicity. It was simply a resonant chamber, with a flexible front wall that acted as a diaphragm, changing the dimensions of the chamber when sound waves struck it. It had no power pack of its own, no wires that could be discovered, no batteries to wear out. An ultra-high frequency signal beamed to it from a van parked near the building was reflected from the bug, after being modulated by sound waves from conversations striking the bug’s diaphragm.
The Great Seal launched electronic snooping as no other incident could. The feeling in many circles seems to be that if such appalling tactics are employed by major world powers, lesser applications would hardly be as startling, if indeed not justifiable.”
The Electronic Invasion, Robert Brown, 1967, 1975
Q. What does “Good Vibrations” by The Beach Boys have in common with the Great Seal’s bad vibrations?
A. They both contain a Lev Termen (aka Leon Theremin) invention.
If Thomas Edison had come of age in Lenin’s Soviet Union, he might have led the double life of Lev Termen. Among Termen’s myriad prescient brainstorms were the first electronic surveillance system, a gadget that opened doors at a hand signal and a 1920’s version of television that broadcast 100 lines of resolution onto a five-foot-square screen – far superior to any competitor. For decades he worked in “mailboxes,” top-secret Soviet research centers, on countless, still-undisclosed projects for the vast Soviet security apparatus. Those we know of include the listening device hidden in the Great Seal of the U.S. Embassy in Moscow and exposed in 1960 at the United Nations by Henry Cabot Lodge, Jr.
But in the West Termen became famous as Leon Theremin, who in 1919 created an instrument named after him, a radio-sized box that, with a carefully precise wave of the hands over its antennas, produced sounds that Depression-era listeners found uncanny, heavenly, astounding, unsettling or puerile. As composer Albert Glinsky rightly insists in his exhaustively researched and revealing biography, this frequently clumsy instrument was the first foray into the brave new world of electronic music. Essentially a radio-feedback device, it was impossibly sensitive, often screeched out of tune, required ridiculously deft technique while being advertised as the perfect instrument for the musically illiterate, and was plagued with problems from irreducible portamento (gliding) to unvarying tone and timbre. Still, its magic captured the imagination of millions and has since invaded pop culture, from the eerie theme for the 1930’s radio show “The Green Hornet” to movies like “Spellbound.”
The Electro-Theremin is often heard in Hollywood movies, songs, and TV theme music. Unlike its original inspiration, one plays the Electro-Theremin by physically touching it.
Electronic instruments, like Dr. Tanner’s Electro-Theremin (Tannerin) and Robert Moog’s many synthesizers, trace their roots to Leon Theremin.
Theremins are still available (kit or assembled) from several sources.
• Harrison Instruments
• Moog Music
• Theremin Kits
and, of course, eBay.
In the early 1950’s, a Soviet listening device was found in the American embassy in Moscow. This came to the attention of the world when it was displayed at the United Nations in May, 1960. It was a cylindrical metal object that had been hidden inside the wooden carving of the Great Seal of the United States – the emblem on the wall over the ambassador’s desk – which had been presented to him by the Soviets.
The Great Seal features a bald eagle, beneath whose beak the Soviets had drilled holes to allow sound to reach the device. At first, Western experts were baffled as to how the device, which became known as the Thing, worked, because it had no batteries or electrical circuits. Peter Wright of Britain’s MI5 discovered the principle by which it operated. MI5 later produced a copy of the device (codename SATYR) for use by both British and American intelligence.
How the Thing worked.
A radio beam was aimed at the antenna from a source outside the building. A sound that struck the diaphragm caused variations in the amount of space (and the capacitance) between it and the tuning post plate. These variations altered the charge on the antenna, creating modulations in the reflected radio beam. These were picked up and interpreted by the receiver.
The Ultimate Spy Book
H. Keith Melton, 1996
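The passive principle described above can be illustrated with a toy signal-processing model. The sketch below (Python with NumPy) models the diaphragm’s audio-driven detuning as amplitude modulation of the re-radiated carrier, then recovers the audio with a simple envelope detector at the receiver. All parameters here are invented for illustration and bear no relation to the real device’s frequencies (which, as the note above warns, are unreliable in published accounts anyway).

```python
import numpy as np

# Illustrative model, not actual device parameters: sound flexes the
# diaphragm, varying the gap capacitance and detuning the cavity, so the
# amplitude of the re-radiated carrier tracks the audio. A receiver
# outside the building demodulates the reflected beam.

fs = 200_000                          # simulation sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)

f_audio = 440.0                       # stand-in for speech: a 440 Hz tone
audio = np.sin(2 * np.pi * f_audio * t)

f_carrier = 1_800.0                   # scaled-down "interrogation" carrier
# Re-radiated signal: resting reflection plus audio-driven modulation.
reflected = (0.5 + 0.2 * audio) * np.sin(2 * np.pi * f_carrier * t)

# Receiver: envelope detection (rectify, then low-pass) recovers the audio.
rectified = np.abs(reflected)
# Moving-average low-pass whose first null (fs/55 ~ 3.6 kHz) sits near the
# dominant ripple at twice the carrier frequency left after rectification.
kernel = np.ones(55) / 55
envelope = np.convolve(rectified, kernel, mode="same")

# Remove the DC offset; what remains correlates strongly with the audio.
recovered = envelope - envelope.mean()
corr = np.corrcoef(recovered[1000:-1000], audio[1000:-1000])[0, 1]
print(f"correlation with original audio: {corr:.2f}")
```

The point the model makes is the one that made the Thing so hard to find: the “transmitter” contributes no energy of its own, so there is nothing to detect until the interrogating beam is switched on.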
From The Eavesdroppers
S. Dash, R.F. Schwartz, R.E. Knowlton, 1959 & reprinted 1971
The Great Seal bug was discovered in 1952, but its existence was not made public until 1960.
However… here is one interesting leak.
In 1958-9, Richard F. Schwartz (Moore School of Electrical Engineering of the University of Pennsylvania) wrote the following in a book called The Eavesdroppers…
“… a microphone vibration measuring device has recently been proposed, in detail, by Stewart (Chandler Stewart, “Proposed Massless Remote Vibration Pickup,” Journal of the Acoustical Society of America, July 30, 1958, pp. 644-645)…”
“… Another report, allegedly published in a Washington paper although search failed to verify it, mentioned a device beamed into the Russian embassy to pick up conversation there. Supposedly, the beam was aimed at the centerpiece of the Russian emblem, but whether there was a transponder there, or whether this piece was just a convenient metallic surface, is not certain. At any rate, such reports lend credibility to the explanation given here.”
Readers were cautioned in the book’s Introduction that …
“Because of our promise to conceal the source of information where concealment was requested, the factual report necessarily omits naming many names and places, and often disguises a factual situation so as to prevent an identification with the original occurrence, but preserves all the while the basic truth of the occurrence and its relevance to the study.”
In short: the ‘Washington paper’ report was probably fictitious; it was not the Russian embassy; not the Russian seal; and it definitely was a transponder (Why else mention it?). Word about The Thing was now out.
Information from confidential sources…
“Initially, I can only add that Peter Wright, formerly of MI5, said in his book “Spycatcher” (pp. 20, 62, 116, 292) that he had interviewed German scientists who had been repatriated from Russia (where they had been taken at the end of WWII). He claimed that they said that they had developed the system for the Soviets. The basic problem back in those days was to develop a decent amount of clean RF energy up at UHF. In that development process the scientists produced small “target” devices to use as markers in testing their radar systems. It is my suspicion that someone who had used such devices decided to put a flexible wall in that device to make it into a microphone. That’s just a guess however.
Also, I interviewed a little old man 10 or more years ago who I happened to meet at a trade show. The things he said convinced me that he had indeed had access to information on the discovery of that device and another device in Warsaw. Only someone with access to classified information would have known the details that he related. Unfortunately, I misplaced his card and have no recollection of his identity now (I’m afraid that that anecdote sounds like something that would appear in the National Enquirer).
Rumor was that the Brits had detected the signal from the passive cavity resonator and alerted us to its presence. The little old man said that he had been carrying an audio amplifier from a thing called a Schmidt Kit* around in the attic of Spaso House. The amplifier he used had a crystal diode in series with a short wire rod that acted as an antenna at the audio input connector. He couldn’t say how he happened to be using that sort of rig at the time. (Possibly someone had told him to do so because that’s what the Brits had done. That part of the story is still vague).
He said that when he got over the vicinity of the ambassador’s office, he could hear the ambassador talking in the office below (through the amplifier). He then went below, knocked on the door of the ambassador’s office and called the ambassador out into the hallway to tell him what he had observed. The ambassador understood and asked him to come in to carry on his inspection. The ambassador kept up a conversation on other matters during the process. The tech soon discovered that the signal was emanating from the great seal. He first suspected that there was a transmitter in the wall behind the seal so he lifted the seal from the nail from which it was suspended and put it down on the floor. The signal disappeared. He could see no evidence of an installation in the wall behind the place where the great seal had been.
He put the seal back up on the wall with no reappearance of the signal. Then, suddenly the signal came back on. He surmised that the device in the great seal might have been getting its operating power from the nail in the wall but knew that it was not possible because there was only one conducting path for electric current and that was the nail.
Ultimately, they discovered the device inside the seal itself and the little old man did not seem to know the details of that part of the work.
We got what was probably a copy that was fabricated by the Brits. I’m not sure about that, but it makes sense. It came to me through our R&D people who did interface with the Brits. Us ops-types weren’t usually introduced to the Brits for operational reasons. We were friends but we didn’t want to run across them in the field and raise the spectre of operational activity when we didn’t want to.
I took the unit to a training facility where there was plenty of space and tested it with very good results.
Later, we built our own, more elaborate device and I installed two of them operationally. They sounded like plain old FM bugs when operating. Our problem was to keep from sterilizing people with the signal at the LP end. I suspect they are still in place in those targets.”
(We sincerely appreciate this contribution from a first-hand source. Thank you for adding to the history of The Thing. ~Kevin)
Follow-up question… “So, uh… what’s a Schmidt Kit?”
The Schmidt Kit consisted of a radio receiver that would tune over a limited frequency range (centered on 70 MHz I think) and that had headphone output only (no speaker). There was also the audio amplifier with a phono jack for the input and headphone output. An induction coil for use against telephone lines (I think) and possibly a metal preventor were also included. There was an eavesdropping transmitter for VHF that was battery operated and I think one for operation from the a.c. mains. The one operating from mains power used a number of resistors to drop the mains voltage down to that needed for the filaments for the tubes. The resistors got warm enough to cook food on and the entire thing would burn up when put into a masonry wall, etc., which acted like an oven. (The mains power transmitter might have been sold separately.)
It all fit into a normal-sized briefcase (tan colored) and there was a simple combination lock on the latch.
It was manufactured up in the NY City area and marketed to law enforcement, etc.
All of the electronics used vacuum tubes as there were no transistors in those days.
Dear Mister Murray:
Your article on The Great Bug Story is very interesting. I may be able to add to your knowledge of the device which in the post Soviet press received a big mention after Doctor Theremin’s death. It is called Buran, translated “storm”.
I am a retired former US Army Sergeant who in my military profession was a radar repairman. In early 1955 I was assigned to a Signal Corps technical intelligence unit in Japan. It had 5-6 enlisted men and one junior officer. Its mission was to perform preliminary field analysis of captured enemy material (CEM) on largely communications related equipment and other activities tasked to us. We had a shop with test equipment and a photography facility. I was sent here because of a previous assignment in this kind of activity.
Later that year our primary mission was expanded to include technical support to various intelligence activities in the Far East Area. Three field engineers (Tech Reps) arrived and were given assignments which will not be discussed. In 1956 came a young soldier, a PFC with a Ph.D. in Electrical Engineering. In those years there was no Military Occupational Specialty (MOS) for Ph.D.s, so they and EEs got the same MOS 333 for electrical research assistant.
Meanwhile, a special requirement came from a customer for a certain kind of bug. I was assigned to Ph.D. as his assistant because of my experience in radar and microwaves. He was tasked to design a bug based on Theremin’s device. In a secure location we were given a demonstration of how a room is swept for such bugs. We were given the opportunity to physically examine the test bug but were not permitted to disassemble it. It operated somewhere between 1.1 GHz and 1.5 GHz.
Back in the shop, Ph.D. started to calculate the design for his device. We obtained a 3 GHz Radar Echo box and an RF Signal Generator used in radar testing to see how the echo box would react when energized. One end was removed and covered with a metallic foil. Its reaction to sound showed as deflections on the meter, the activity we were looking for.
The design was completed and the component parts were fabricated in 2 or 3 secure machine shops, none of which would have had parts for a complete assembly. We did all the silver plating in our shop. The parts were assembled but we needed a receiver to test it. A chassis was purchased on the Japanese market on which one of the engineers designed a four miniature vacuum tube high gain audio amplifier. We found a crystal diode adapter used for radar testing which we modified to hold the rod antenna on one end. Inside the adapter was a 1N21B cartridge crystal normally used in crystal mixers (converters) at the front end of some radar receivers. The detected output, which was audio, was coupled into the amplifier, which was microphonic and had a high noise level. Our device was energized by the RF signal generator, which gave a low power output even when turned to maximum output. Anyhow, the bug worked and the design was considered successful. We found them to have a dominant mode and also to resonate at other frequencies which may have been useful. Also discussed was how much of the bug’s emission was either AM or FM, or did it do both. We were handicapped by time and the non-availability of proper test equipment to carry the experiments further. All of it was very interesting.
The nickname Error Indicator was attached to the device. At some time in the 1970’s or 80’s I saw the nickname mentioned in a New York Times article on eavesdropping. I think it was described well in Spycatcher or another book where the diaphragm in the Soviet bug was damaged during disassembly as it was very thin and fragile. Our diaphragm was made from the thinnest copper sheet available and silver-plated. It is probable that the performance of our bug would have been improved with a very thin foil.
I think the exterior appearance of our bug was based on the one we examined at the sweeping demonstration which we thought was a copy of the Soviet piece. Ph.D. took note of a visible bit of the innards through the small vent holes and took it from there, although, we had no knowledge of what really was inside. Both bugs had small vent holes, about 1/8 inch, spaced equidistantly on radii on both ends. The Soviet bug shown on page 5 has very large vents.
The story says that more than 100 similar bugs were found. Were they all alike? Were there improvements made to them? The bombardment of the Moscow embassy with microwaves leads me to believe that there were other bugs, or none, to cause harassment and fear in employees and to keep us guessing.
Our unit was disestablished in December 1957 and I went back into radar. There was much interesting and productive work done in our little shop. I think those in the community who were our customers didn’t want to see us shut down. In 1963 I was returned to the intelligence community in Europe where I was a Radar and Electronic Warfare Systems Analyst, an impressive title. I retired in 1967 after twenty years of service. I hope you found my story interesting.
I did… extremely interesting. Thank you. ~Kevin
From a trusted source who has requested anonymity – added 2/15/10
“This device is simple in concept but very complex in construction. A remote transmitter sends a strong radio frequency signal aimed at the bug, with a directional antenna if possible. A separate antenna is used to receive the signal which is reflected from the bug — and everything else around it. The trick here is to sense the reflectance variations caused by the bug and ignore other variations such as heating systems rattling ducts, etc. In order to make the bug work, its antenna needed to be resonant near the incoming frequency, with its resonant frequency changed by the movement of a diaphragm. The diaphragm is of course moved by sound pressure.
The standard quarter wave antenna length explanation is probably not correct since the bug antenna did not appear to be connected to anything like a ground plane. More likely it was a half wavelength at the excitation frequency. To make all of this work, the resonant cavity under the diaphragm and bug antenna had to be carefully matched. Diaphragm position had to be close to the post for good sensitivity but not so close that it would touch the post as components aged.
One last problem in the operation was the excitation signal. It didn’t take a genius to discover the transmitted signal and subsequently the reflected signal. It would be important in operation of the bugging system to turn it off when a sweep team was seen in the area. The rumor is that this device was detected as the sweep technician dialed his receiver past the excitation frequency and heard voices.
This sort of bug is not likely to be found in corporate or residential eavesdropping situations. Lax access control, easily installed computer keystroke recorders, high tech baby monitors and cordless phones that broadcast conversations make the work of a modern day Theremin unnecessary.”
*** Our source brings up several interesting points:
• the antenna’s wavelength,
• the size of the cavity,
• and the reason this type of bug is not found in common use.
It appears the exploded view of The Thing (above) is mistaken. The antenna length was probably a rough guess, based on the original photo. The frequency of operation was likely calculated based on this guess, using a 1/4 wavelength formula.
As our source perceptively pointed out, the figures do not make sense.
Using the original photo, the antenna length appears to be closer to 8.5 inches. That coupled with a frequency of operation (1.1-1.5 GHz) provided (above) by our retired U.S. Army Sergeant technician who actually examined The Thing, calculates the antenna as being… one full wavelength!
This makes better sense, as does the physical size of the resonant cavity.
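The full-wavelength conclusion is easy to check with the numbers already given above (the 8.5-inch antenna estimate and the 1.1–1.5 GHz band); everything else in this sketch is just the free-space wavelength formula λ = c/f:

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_inches(freq_hz):
    """Free-space wavelength in inches at the given frequency."""
    return (C / freq_hz) / 0.0254

# The reported excitation band, 1.1-1.5 GHz, as free-space wavelengths:
short_end = wavelength_inches(1.5e9)   # ~7.9 in
long_end = wavelength_inches(1.1e9)    # ~10.7 in

# An 8.5 in antenna is one full wavelength near 1.39 GHz, inside the band...
full_wave_freq = C / (8.5 * 0.0254)          # ~1.39 GHz
# ...but would be a quarter wavelength only near 0.35 GHz, far below it.
quarter_wave_freq = C / (4 * 8.5 * 0.0254)   # ~0.35 GHz
```

Since 8.5 inches falls between the band's 7.9 and 10.7 inch wavelengths, the full-wavelength reading fits the reported frequencies, while the quarter-wave reading does not.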
In addition to the reasons our source lists for this type of bug falling out of favor, there is one more… A very powerful microwave radio signal is required to make this bug work. (You may remember hearing about the embassies in Moscow being bombarded by microwave signals.) A government can get away with this unchallenged in a closed society, where no one knows the reason for the signals. Today, in most of the world, a powerful microwave signal like this would be instantly discovered, tracked to the source and challenged.
The Thing was a good thing while it lasted; however, we cannot become complacent. This technique still has value when used in a lower power and highly focused manner. The technique also works at other frequencies, like light (laser beam eavesdropping). ~Kevin
From a source who has requested anonymity – added 4/29/12
I am a former Foreign Service Officer.
I have a certain amount of first hand, and a larger amount of 2nd hand knowledge about the thing, having worked for a couple of years in the early 1960s in the organization that was responsible for dealing with it and all similar problems – the division of technical services of the office of security of the department of state: abbreviated as O:SY/T.
I knew the tech who actually discovered the thing (slightly), and heard from him in detail exactly how he found it. Some of your published accounts are a little inaccurate, but not essentially so.
It was found using a basically untuned crystal video receiver, so we did not know what the activating frequency was. Much more sophisticated tech surveillance countermeasures receivers came into use later.
In the early 1960s the device and the great seal were both on display in SY’s little conference room – the room used for briefing people on technical surveillance countermeasures problems, and I got to handle and inspect them both many times. I also studied the various reports on how it functioned, prepared by several US Government labs and contractors. I also got to see and study various US government versions and developments from the thing.
The State Department’s overseas facilities were very high priority and highly vulnerable targets for Soviet-bloc intelligence, so we got to experience, so to speak, many more and much more interesting tech surveillance attacks than other parts of the US government. The recognition of this fact led to the launching of a very serious science-based tech surveillance countermeasures effort.
We did get launched, to a significant extent, by the Brits, but that is more a 1950s than a 1960s story.
The bald gent shown in your photo with Mr. (John) Reilly is Bud Hill, the then director of SY/T, my boss in the early 1960s. A first-rate scientist, although he tended to depreciate that and describe himself as being ‘only an electrical engineer’. His hiring by the Department of State marked the beginning of a serious American scientific response to tech surveillance countermeasures.
The great seal device was inherently resonant at multiples of a particular frequency, and this gave rise to a certain amount of confusion (in some places) about the frequency of the signal that activated it. As a consequence of its extreme simplicity it also inherently created both AM and FM modulation, so there was again some confusion about the character of the surveillance receiver. It was probably a straightforward zero-IF receiver that delivered the amplitude modulation of the re-emitted signal as audio output.
TSCM History – The Great Seal Bug Story – Part I – Cutaway diagram of “The Thing” bugging device from Scientific American magazine.
A drawing and description of the device was published in Scientific American sometime in the 1960s or early 1970s – in connection, if I recall correctly, with the amateur scientist column. I was, to say the least, surprised to see it. It was very accurate. (left)
The carved space within the seal indicated that the device found was a later-generation device – the earlier one(s) had been bigger. And the Soviets must have changed them out from time to time, as improvements were made.
Our inspectable, transparent, room-within-a-room acoustically secure conference rooms were one response to tech surveillance problems. I was in charge of that program for a while. They have been widely discussed elsewhere – and were shown off by the Iranians, so they are not so secret, and I have nothing to add about them.
O:SY/T had several early significant successes, due to a combination of new science, more scientifically trained people, greater cooperation from other agencies, etc.
During my service in O:SY/T we pulled a huge wired microphone array out of the embassy in Moscow. 57 of them, if I recall. The event was described in the NY Times at some length.
Someone should do a complete and science-based account of “The Moscow Signal”; a technical surveillance story that continued through the 1960s and after. It is, I think, a much more interesting story than that of the Great Seal device, and it very likely had its origins in the same organization. They are both ‘spy beam’ stories.
You have not mentioned many of the individuals who were important to the US scientific response to soviet tech surveillance, but I do not feel able to or entitled to present the fuller story. Other than to note that I feel that the driving intellectual force behind this was the Department of State, and not the CIA or some part of the Department of Defense. I have noted a tendency by some to rewrite history about this.
You might want to follow up on the events connected to the departure of Otto Otepka. His presence in the Department of State as a spy for the McCarthy people in Congress – after McCarthy’s departure – did the Department of State and especially its Office of Security great harm. The ‘events’ (see the NY Times) resulted in both Reilly’s and Hill’s forced departures.
My last acts in that unfortunate drama were to urge, in a meeting with associates from CIA, NSA and other agencies, that some attempt be made to shift some of SY/T’s TEMPEST-related activities out of State and more into the direct management of my organization’s natural organizational competitors – the CIA…
I have just stumbled over the Scientific American article that I mentioned.
The description and drawing in Scientific American appeared in March 1968 at pages 132-133. The drawing is very accurate, but is not accompanied by much description. For example, it shows, but does not mention, that the pedestal face adjacent to the diaphragm is not smooth. It had, if I recall, machined grooves and machined radial lines, presumably to reduce any air-cushion effect as the diaphragm vibrated.
The pedestal and diaphragm together made up a sort of air-variable capacitor, which altered the resonant behavior of the cavity.
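The air-variable-capacitor description can be sketched with first-order physics. Every dimension below is invented purely for illustration — the real device's geometry is not given in these accounts — but the sketch shows why even a micron-scale diaphragm excursion is enough to detune a resonator of this kind by a useful amount:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m):
    """Parallel-plate approximation: C = eps0 * A / d."""
    return EPS0 * area_m2 / gap_m

def lc_resonance(l_h, c_f):
    """Resonant frequency of a simple LC model: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_h * c_f))

# Invented example values: 3 mm pedestal radius, 50 micron rest gap,
# and an equivalent inductance chosen to put resonance near 1.4 GHz.
area = math.pi * 3e-3 ** 2
L_eq = 2.6e-9

f_rest = lc_resonance(L_eq, plate_capacitance(area, 50e-6))     # ~1.40 GHz
f_pressed = lc_resonance(L_eq, plate_capacitance(area, 49e-6))  # 1 um closer
# A 1 um excursion detunes this model by roughly 1% -- tens of MHz at L-band,
# which is why the reflected carrier carries the room audio as modulation.
```

The key point is the direction of the effect: sound pressure that narrows the gap raises the capacitance, lowering the resonant frequency, so the re-emitted signal wobbles in step with the diaphragm.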
I do not recall there being any vent holes designed to avoid an air-cushion effect; I question any description that says that there were some.
The story of how we couldn’t figure it out, and that we relied on the Brits to tell us how it worked is, to say the least, an exaggeration.
It was studied by some of our premier scientific establishments, and well understood by them. If I recall correctly, The Navel Research Lab [sic] (a not sufficiently known body), Bell Labs, and a special commission put together by the National Academy of Science all looked at it – I remember being quite impressed by some of the studies. They addressed both the device itself and some very significant improvements that could be made to it.
Some research turned up quite old French and German patents and other publications that could have inspired the device. The French had a communication system for taxicabs that operated on a somewhat similar principle – a super-regenerative transponder that only ‘came on’ (emitted a modulated signal) when the central station sent out a query signal that pushed the local oscillator into action.
Our aim was to greatly increase the Q, so that it would not respond to any old source, but only to one of a specific frequency. Increasing the Q also increased the modulation, effectively increasing the maximum distance between the power source and the device.
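The selectivity gained by raising Q follows directly from the standard resonator relation BW = f0/Q. The numbers here are illustrative, not from the source, but they show the order of magnitude involved:

```python
def half_power_bandwidth(f0_hz, q):
    """3 dB bandwidth of a resonator: BW = f0 / Q."""
    return f0_hz / q

f0 = 1.4e9  # roughly the excitation band reported above

low_q_bw = half_power_bandwidth(f0, 100)    # 14 MHz: answers a wide swath of sources
high_q_bw = half_power_bandwidth(f0, 1000)  # 1.4 MHz: answers only the intended illuminator
```

A tenfold increase in Q narrows the window of frequencies the device responds to tenfold, which is exactly the "not any old source, only one of a specific frequency" behavior described above.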
The diaphragm excursion was, if I recall correctly, very small. The diaphragm was quite delicate. I think that a military lab that played around with it found that out to their sorrow.
Later models had a much more complex interior structure – one had a helical member, instead of a post – supporting the non-moving plate of the ‘variable capacitor’. Probably part of the effort to increase the Q. I recall seeing US versions with dipole rather than monopole antennas. And I know that the Germans also worked up advanced versions of their own.
Battery operated bugs that could be switched on and off remotely were and are pretty common.
Great Seal-like devices were eventually entirely superseded (at least in high-threat environments) by devices that were powered by radiated energy, stored that energy, and could be switched on and off without regard to whether the power source was active or not.
But more primitive Great Seal-like devices that did not store energy continued to be used by major powers in third world countries for years.
The development of systems that stored radiated power and that could be switched on and off remotely, and other information, tended to make some of us believe that the Moscow Signal was not a radio-biological weapon at all, but rather part of an evolved bugging system.
All this soon led to devices that drew their power from strong rf fields that were ‘naturally present’, like those arising from local TV transmitting stations, and that could also be switched on and off remotely – by very subtle and hard to detect signals.
As detection systems improved, the bugs also evolved, returning signals with modulation schemes (e.g., spread spectrum) designed to avoid quick and easy discovery – at least by the counter-surveillance receivers of the day.
The evolution was intended, of course, to eliminate the ability to detect Great Seal-type devices by simply illuminating the bugged area with a swept frequency transmitter and looking for an AM or FM modulated return of the frequency being used. I notice that even today some vendors of bug-detection systems have never progressed past this point.
Later on, devices that stored collected data and delivered it in well-hidden bursts, on command, came into use.
I know of nothing that indicates that the Soviets were using that technology for these purposes in the early 1960s. The recording devices in use back then were simply too bulky, power-consuming and unreliable. Consider the failures of such devices in the early era of space satellites and undersea and underground cable tapping ventures.
I believe that everything that I have mentioned here is both long-since public knowledge, if not actually declassified, as well as being very old technology – as I have not been in the business in any way for almost fifty (!) years.
Thank you for sharing your most interesting and illuminating contribution to the history of The Great Seal Bug. ~Kevin
Please help document this historic bug in greater detail. If you have any knowledge, personal recollections, photographs, or know the current whereabouts of the original Great Seal or its bug, contact me.
Thank you, Kevin | <urn:uuid:c36e6635-5ce5-42c0-acfd-e08759a7fe27> | CC-MAIN-2022-40 | https://counterespionage.com/great-seal-bug-part-1/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00062.warc.gz | en | 0.975751 | 7,722 | 2.53125 | 3 |
Getting Smarter About Making Cities Smart
Having had the privilege to have visited a number of cities throughout the world, I have learned that Chengdu is not Mexico City, Brussels is not Houston, Abuja is not Melbourne, and Johannesburg is not Dubai. That’s because the heart of every city beats differently. Each has its own character, its own vibe, and its own goals for assuring the best standard of living possible for its citizens and for the visiting public.
Likewise, every city is evolving at its own particular pace, though all are aligned to a common principle of modernizing their infrastructure services – public transportation, utilities, health care – by leveraging technology and law enforcement in “smart” ways to improve quality of life while assuring operational efficiency, stability and security. As noted by Eduardo Paes, the former mayor of Rio de Janeiro, “Smart cities are those who manage their resources efficiently. Traffic, public services and disaster response should be operated intelligently in order to minimize costs, reduce carbon emissions and increase performance.”
The term “smart cities” has been used recently as a label for those seemingly few cities of the world that are consciously embedding technology into all aspects of city planning. However, with current forecasts estimating close to 50 “megacities” housing over 10 million people and about two-thirds of the world’s population living in urban environments by 2050, the mindset must shift to think of the ‘smartness’ of any urban center as a non-negotiable element.
Many urban centers are claiming to be ahead of the “smart” curve though, in actuality, they are finding themselves handcuffed by custom systems that are not interconnected, interoperable, portable, extensible, or efficient in their operations, maintenance, and overall cost-effectiveness. Overcoming this challenge is burdensome, especially when paired with pressure to make progress, which can unintentionally lead to chaos as concurrent initiatives are deployed and produce uncoordinated solutions misaligned with the intended outcomes. No wonder city planners are not sleeping at night.
The diagnosis is seemingly familiar, and not unlike the challenges many enterprises are facing with what are currently referred to as digital transformation projects. What’s different for the urban center, however, is the scale of the complexity. The complexity is not just a question of technology deployment, but also taking into consideration economic, political and social issues that shape a city’s being. It’s an extreme case of a “system of systems of systems and more systems” problem, for which the only “smart” solution is a universal consensus-based governance framework.
Technology companies like Cisco and AT&T have developed their own frameworks, driven by their product strategies, especially for IoT. Standards-developing organizations such as ISO, IEC, ITU, IEEE and a number of others are facilitating the development of new standards related to specific pieces of the overall urban development challenges. Recognizing the fragmented (yet well-intended) and disparate approaches, NIST has launched a working group intended to converge these groups and their respective knowledge assets under the guise of a Smart City Framework.
The key to the success of any framework is its acceptance by universal consensus. This means the framework is created, maintained and endorsed by the professional community for the benefit of the community itself. The framework provides guidance on how to carry out the work aligned to desired outcomes in conjunction with tools that enable stakeholders to self-assess, benchmark, and measure capability maturity and progress toward the goals. This is indicative of the 20-plus year success experienced by ISACA’s globally recognized COBIT framework for the governance and management of enterprise technology, which itself has the potential to be foundational for smart urban initiatives.
For now, city planners find themselves challenged across a wide spectrum of issues, ranging from technology to compliance. As members of the technology community, we need to help them by leveraging our knowledge of technology governance frameworks and their development and deployment, our holistic systems thinking and problem-solving capabilities, and our innate ability to assess and mitigate risk to inspire the confidence necessary to enable innovations that can evolve the urban environment by leveraging the best technology has to offer.
Our work has never been more important. And because we recognize the pervasive nature of technology and understand how to leverage its positive potential, I am confident that we can contribute enough to the evolution of so-called “smart cities” that the term “smart” will eventually be dropped from the lexicon. That in itself would be a great accomplishment. | <urn:uuid:e71b7650-4d81-4531-9faf-f928558ea7c3> | CC-MAIN-2022-40 | https://www.msspalert.com/cybersecurity-markets/getting-smarter-making-cities-smart/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00062.warc.gz | en | 0.958575 | 943 | 2.578125 | 3 |
Why are processes documented?
The reasons why companies document their processes can vary greatly. On the one hand, processes are documented to record knowledge about the processes and their procedures. Work instructions can be derived from documented processes and used to train new employees. On the other hand, good documentation allows the modeled processes to be analyzed, and thus bottlenecks and optimization potentials to be found. The documented models also provide an important insight into the degree of standardization of business processes. It is also conceivable to assign responsibilities and plan capacities on the basis of the process models. If necessary, requirements for new software can also be derived from the processes and the individual work steps.
How does process documentation work?
First of all, it must be established at which level of detail processes are to be documented. This depends on the process strategy and the process architecture derived from it. Subsequently, guidelines for creating models must be formulated. Such guidelines can include modeling direction, modeling language, or objects being used. If an existing modeling language or notation is used, such as BPMN or EPC (event-driven process chain), the creation of modeling guidelines becomes easier.
But the human component should not be neglected here. The executing personnel have in-depth knowledge of the detailed process flows, and conversations and interviews can decode and document this knowledge. Only then does it make sense to model the processes in accordance with the guidelines.
In order to ensure the models represent the respective processes correctly, a person with specialist knowledge and a person with knowledge of the modeling guidelines should review them again. After these quality assurance tasks are complete, the process models can be published.
Here are the steps of process documentation:
- Defining a level of detail.
- Creating modeling guidelines.
- Inquiring about/Observing the process flows.
- Modeling the processes.
- Checking the process models (technical and methodical).
- Publishing the process models. | <urn:uuid:ad1f61da-8498-48d6-a379-ea608582aa84> | CC-MAIN-2022-40 | https://appian.com/process-mining/process-documentation.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00263.warc.gz | en | 0.909337 | 390 | 2.90625 | 3 |
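The methodical part of step 5 — checking models against the modeling guidelines — lends itself to partial automation. The sketch below is purely illustrative: the object types and rules are invented for the example and are not taken from BPMN, EPC, or any other standard.

```python
# Illustrative guideline set: allowed object types and a naming rule.
ALLOWED_OBJECTS = {"event", "task", "gateway"}

def check_model(model):
    """Return a list of guideline violations for a documented process model."""
    problems = []
    for i, step in enumerate(model, start=1):
        if step.get("type") not in ALLOWED_OBJECTS:
            problems.append(f"step {i}: disallowed object type {step.get('type')!r}")
        if not step.get("name"):
            problems.append(f"step {i}: step is unnamed")
    return problems

# A model that follows the guidelines passes cleanly...
ok = check_model([{"type": "event", "name": "Order received"},
                  {"type": "task", "name": "Check inventory"}])
# ...while a sloppy one is flagged for the methodical review.
bad = check_model([{"type": "swimlane", "name": ""}])
```

A check like this only covers the methodical review; the technical review — whether the model matches what the process actually does — still needs a person with specialist knowledge.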
In the United States today, women account for 47% of the overall workforce, yet only 25% of IT workers are female according to the Bureau of Labor Statistics (BLS). The tech industry’s efforts to raise the inclusivity of women as employees have been sporadic and inconsistent over the last 50 years, though the issue has certainly gained more notoriety in recent years. Yet despite employers’ efforts to introduce numerous programs to help educate, hire and retain women in technology, women remain significantly underrepresented at all levels.
Fortunately, the tech industry continues to grow at a breathtaking pace and that growth demands faster change. In fact, The BLS lists application software developers and information security analysts among the fastest growing occupations in the U.S. over the next decade (see Figure 1).
Women in Technology Today: a Snapshot
More than 4.5 million U.S. workers are currently employed in a technology role today and the BLS predicts there will be 600,000 new IT jobs by 2026. Over the next eight years, IT jobs are projected to grow at twice the national rate.
Women are Minority Employees at Technology Organizations
Despite the substantial growth (and need for qualified workers) in the IT workforce, and despite women now accounting for almost half of U.S. workers, they are considered a minority population in technology.
Women are significantly underrepresented in IT jobs and at IT companies. In the U.S., more women now graduate college than men. Last year, 57% of all college graduates were women, yet only 25% of all IT jobs in the U.S. are held by women today. In the U.S., 18% of computer science graduates are women and only 13% of U.S. computer programming graduates are women. Women have significantly less access to cash, capital and funding. Worldwide, only 8% of primary patent holders are women and 2% of VC funding is for women-founded startups.
Women have drastically increased their participation in all aspects of the STEM workforce over the last 50 years, except in technology. In the U.S., women now account for 40-50% of all graduates for medical school, law school and physical sciences, up from rates of 5-15% in the early 1970’s. In the U.S., 75% of all healthcare workers today are women. What this means for the technology sector is that the competition for qualified job candidates with technical capabilities has moved to other industries, placing enormous pressure on educating and hiring women for technical and non-technical roles.
Gender Diversity in Technology Organizations Remains Elusive
In a recent survey co-sponsored by IDC and Women in Technology International (WITI), respondents were asked to rate their organization’s employee diversity across nine distinct categories of inclusion, such as ethnicity/race, gender, age, sexual orientation, religion, socio-economic standing, politics and disabilities (see Figure 2). No category was rated as “very diverse” by the majority of respondents. Among the nine categories, ethnic diversity had the highest proportion of employees (47%) who considered their organization as “very diverse.” Gender diversity was rated second to last of all nine areas of inclusion with just 31% of respondents viewing their organization as “very gender diverse.”
Diversity and inclusion remain elusive in technology, especially for women. Based on these numbers, gender diversity ranks near the bottom of the major inclusivity ratings at technology organizations.
What is most apparent in this data is that men and women perceive the diversity of their organization very differently. Men are much more likely than women to view their organization as very diverse across all nine areas of inclusivity. It’s notable that the greatest area of difference in opinion is for gender diversity. While there is an abundance of publicly available industry data and employment reports that demonstrate women are substantially underrepresented in technology roles and at technology companies, men are far less likely to perceive that gender inequality exists at their organization. This is a major hurdle to improving or achieving gender parity for several reasons.
So how should organizations begin tackling this disparity? Explore some of the solutions that IDC and WITI believe are crucial to improving the numbers of women in technology in our next blog post.
Learn more about the issues facing women in technology and IDC’s ongoing research on the subject; see our latest infographic to learn how unconscious bias hurts women in technology. | <urn:uuid:9cdb169e-0baf-4a2c-9d20-66acc2e70d37> | CC-MAIN-2022-40 | https://blogs.idc.com/2019/05/03/women-in-technology-understanding-the-problem/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00263.warc.gz | en | 0.964475 | 907 | 2.828125 | 3 |
In 1970, Dr. Winston W. Royce published a paper entitled "Managing the Development of Large Software Systems" that challenged the sequential development of software. He felt that the prevailing method of developing large software systems, which resembled an assembly line, was inadequate due to a lack of communication between groups.
Royce had spent nine years developing software for spacecraft mission planning, commanding and post-flight analysis. He found through experience that agreeing upon a final product and then creating it was ineffective in developing quality software. By the time the product was built, it was no longer a viable option.
In his paper, Royce details an iterative process, called the Waterfall Method, in which multiple groups collaborate to design, code, test and analyze software, all while producing hundreds, sometimes thousands of pages of documentation detailing the process and proposed final product.
The Waterfall Method of software development revolutionized the industry, and changed the way people thought about building large systems. But even with Dr. Royce's more effective method, there were still improvements to be made, particularly the amount of paperwork and documentation required throughout the process.
The coming decades saw the model evolve through approaches such as the Spiral Model and Rapid Application Development. These models were building blocks that led to a development methodology that did away with the burden of excessive documentation and focused on the collaborative nature of the design process.
Now recognized as an innovative method to quickly develop valuable and viable software, the Agile Methodology was conceived in a lodge at Snowbird ski resort in the Wasatch mountains of Utah in February 2001.
A group of 17 software developers saw a problem with the current method of "heavyweight" development, and collaborated to author the "Manifesto for Agile Software Development." This getaway conference came as a result of frustrations in the methodology behind software development. While these developers saw merit in current methods, they recognized a need to improve, and came to four conclusions about what they valued in software development:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
These four values would aid in authoring the 12 principles of the Manifesto for Agile Software Development. This collaborative process was created with a mindset of cooperation and community by individuals who felt that it was important to act as if people were the most important asset that a company has to offer, rather than simply saying so. According to Jim Highsmith, one of the original signatories of the manifesto, Agile Methodologies are, at their core, about "the mushy stuff of values and culture." Developing relationships and focusing on the capabilities of the software would lead to improvements in the final product.
Whether you’re a Fortune 500 company or a neighborhood retailer, cybercrime is a genuine threat to your business, revenue, and brand. Between 2015 and 2019, cybercrime incidents are expected to quadruple, with the estimated cost of data breaches exceeding $2.1 trillion around the world. Implementing effective DDoS protection is key to ensuring your web property is secure, and that you’re ready to fight off any attacks.
A distributed denial-of-service (DDoS) attack is a common type of cyberattack in which a malicious actor floods a web server, service or network with traffic to disrupt its normal operations.
DDoS attacks are carried out by overwhelming the targeted web server or network with messages, connection requests or fake packets. When the targeted server tries to accommodate all the requests, it exceeds its bandwidth limit, causing it to slow down, crash or become unavailable. A common analogy is a traffic highway: if far more cars than the road can handle converge on an intersection, the resulting jam stops everyone in their tracks, including the cars behind you.
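The capacity-exhaustion idea behind this analogy can be shown with a toy simulation (illustrative only; the capacity and traffic figures are arbitrary assumptions, not real-world numbers):

```python
def simulate(traffic_per_second, capacity):
    """Toy model of a server that can handle `capacity` requests per
    second; any excess arriving in a given second is dropped."""
    served = dropped = 0
    for requests in traffic_per_second:
        served += min(requests, capacity)
        dropped += max(0, requests - capacity)
    return served, dropped

# Normal load fits easily; flood traffic squeezes almost everything out.
print(simulate([80, 90, 85], capacity=100))        # (255, 0)
print(simulate([5000, 8000, 6000], capacity=100))  # (300, 18700)
```

Even in this crude model, legitimate requests are dropped alongside the attacker's, which is exactly the outage a DDoS aims to cause.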
If the targeted server is a critical system for your business, an attack can take down the entire network infrastructure and bring your business operations to a halt. Moreover, during the server downtime, other types of attacks like ransomware and extortion can also be launched, all of which carry massive economic consequences for businesses.
Usually the traffic comes from a network of malware-infected, compromised systems and devices called a botnet. As more devices connect to the internet, especially IoT devices, this type of cybersecurity threat has become easier to launch.
Read our dedicated guide: What Is a DDoS Attack?
Types of DDoS Attacks
DDoS attacks can vary based on the attack vectors used and the way in which they are used. Some of the common types of DDoS attacks are:
Volumetric attacks are aimed at a machine’s network bandwidth. They are the most common type of DDoS attack and work by flooding capacity with large amounts of false data requests. While the machine is occupied checking these malicious requests, legitimate traffic is not able to pass through.
User Datagram Protocol (UDP) floods and Internet Control Message Protocol (ICMP) floods are two common forms of volumetric attacks. In UDP attacks, attackers make use of the UDP format and its fast data transmission feature that skips integrity checks to generate amplification and reflection attacks. In ICMP floods, attackers focus on the network nodes to send false error requests to a target, which gets overwhelmed and becomes unable to respond to real requests.
A protocol attack works by consuming server resources. It targets network areas responsible for verifying connections by sending slow pings, malformed pings and partial packets. These end up overloading the memory buffer in the target computer, crashing the system. Since protocol attacks can also compromise web application firewalls (WAFs), DDoS threats of this type cannot be stopped by firewalls.
The SYN flood attack is one of the most common types of protocol attacks. It works by initiating TCP/IP connections without finalizing them. The client sends a SYN (synchronize) packet, and the server responds with a SYN-ACK (synchronize-acknowledge) packet. The client is then supposed to reply with a final ACK (acknowledge) packet but never does, keeping the server waiting and tying up its resources.
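The half-open state that a SYN flood exploits can be sketched as a tiny state machine (a simplified illustration, not a real TCP stack; the backlog size is an arbitrary assumption):

```python
class HalfOpenTable:
    """Simplified model of a server's TCP backlog: each incoming SYN
    reserves a slot until the client's final ACK completes the handshake."""
    def __init__(self, backlog):
        self.backlog = backlog
        self.pending = set()  # connections still awaiting the final ACK

    def on_syn(self, client):
        if len(self.pending) >= self.backlog:
            return "dropped"  # backlog full: new connections refused
        self.pending.add(client)
        return "syn-ack sent"

    def on_ack(self, client):
        self.pending.discard(client)  # handshake completed, slot freed

table = HalfOpenTable(backlog=3)
# An attacker sends SYNs but never completes the handshakes...
for i in range(3):
    table.on_syn(f"attacker-{i}")
# ...so a legitimate client's connection attempt is refused.
print(table.on_syn("legit-user"))  # dropped
```

Real mitigations such as SYN cookies avoid reserving server-side state until the handshake completes, which is why they defeat this pattern.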
Application Layer Attacks
These are attacks that focus on the L7 layer or the topmost layer in the Open Systems Interconnection (OSI) model. These focus mainly on web traffic and could be launched through HTTP, HTTPS, DNS or SMTP. They work by attacking vulnerabilities in the application which prevent it from delivering content to the user.
One of the reasons application layer attacks are difficult to thwart is that they use far fewer resources, sometimes even just a single machine. This makes the attack look like a higher volume of legitimate traffic and tricks the server.
It is also possible for hackers to combine these approaches to launch a multi-pronged attack on a target.
History of DDoS Attacks
Cyber-attacks are not a new phenomenon. The first DoS attack was in 1974, perpetrated by a curious 13-year-old boy in Illinois. He forced 31 University of Illinois computer terminals to shut down simultaneously by using a vulnerability in what was then the new “ext” command. In the 1990s, Internet Relay Chat was targeted through simple bandwidth DoS attacks and chat floods. But the first major DDoS, or distributed denial of service attack, came in 1999, when a hacker used a tool called “Trinoo” to disable the University of Minnesota’s computer network for two days. Other attacks followed, setting the groundwork for the larger, more widespread cyber-attacks we see today.
What Happens in a DDoS Attack
With all the damage that DDoS attacks can cause to your web property and business, it’s surprising how simple a premise they rest on. Web, DNS, and application servers; routers; web application firewalls; and internet bandwidth handle huge numbers of connections on a daily basis. A DDoS attack occurs when a group of compromised systems sends hundreds or thousands more connections than the servers can handle, typically through a botnet, a linked network of hijacked systems. Some DDoS attacks also serve as a disguise for targeting the systems that control the sites and servers, opening them up to infection by malware, oftentimes in the form of a Trojan. The infected system then becomes part of the very botnet that infiltrated it in the first place. Attackers may target different parts of a company’s network at the same time, or they may use these DDoS events to cover up other crimes, such as theft or fraud.
9 Ways to Prevent DDoS Attacks
Automation technology can partially help to prevent cyber-attacks, but it also requires human intelligence and monitoring to protect your website to the fullest extent. Traditional web structures aren’t sufficient. A multi-layered cloud security developed and monitored by highly experienced and committed engineers offers the best protection. Understanding how DDoS attacks work, and being familiar with the behavior of your network are crucial steps in preventing intrusions, interruptions, and downtime caused by cyber-attacks. Here are some tips to help prevent a DDoS attack:
1. Implement sound network monitoring practices
The first step to mitigating DDoS threats is to know when you are about to be hit with one. This means implementing technology that allows you to monitor your network visually and in real-time. Know the amount of bandwidth your site uses on average so that you can track when there are anomalies.
DDoS attacks offer visual clues, and if you are intimately familiar with your network’s normal behavior, you’ll be more easily able to catch these attacks in real-time.
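As a sketch of what knowing your network's normal bandwidth can mean in practice, here is a simple baseline-plus-deviation check (the threshold and sample values are illustrative assumptions; production monitoring tools use far more sophisticated baselining):

```python
from statistics import mean, stdev

def is_anomalous(history, current, sigmas=3.0):
    """Flag a bandwidth sample that sits more than `sigmas`
    standard deviations above the recent baseline."""
    baseline, spread = mean(history), stdev(history)
    return current > baseline + sigmas * spread

# Throughput samples (Mbps) from the last few minutes.
history = [102, 98, 110, 95, 105, 99, 101, 97]
print(is_anomalous(history, 104))   # False: within normal variation
print(is_anomalous(history, 2500))  # True: likely flood traffic
```

The same idea scales up to per-source, per-port and per-protocol baselines, which is how monitoring tools separate a flash crowd from a flood.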
2. Practice basic security hygiene
There are some simple steps every business can take to ensure a basic level of security against DDoS threats. These include best practices such as using complex passwords, mandating password resets every couple of months and avoiding storing or writing down passwords in notes. These might sound trivial but it is alarming how many businesses are compromised by neglecting basic security hygiene.
3. Set up basic traffic thresholds
You can partially mitigate DDoS attacks with a few other technical security measures. These include setting traffic thresholds and limits, such as rate limiting on your router and filters on packets from suspicious sources. Setting lower SYN, ICMP and UDP flood drop thresholds, IP blacklisting, geo-blocking and signature identification are other techniques you can adopt as a first level of mitigation. These are simple steps that can buy you more time, but DDoS attacks are constantly evolving in their sophistication and you will need other strategies in place to fully thwart such attacks.
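Rate limiting of the kind mentioned above is commonly implemented as a token bucket. Here is a minimal sketch (the rate and burst parameters are illustrative assumptions):

```python
import time

class TokenBucket:
    """Allow bursts of up to `capacity` requests, refilled at
    `rate` tokens per second; requests without a token are rejected."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)  # 10 req/s, bursts of 5
results = [bucket.allow() for _ in range(8)]
print(results)  # the first 5 pass; later requests wait for refill
```

Routers and firewalls apply the same logic per source IP, which throttles individual flooders while leaving normal clients unaffected.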
4. Keep your security infrastructure up to date
Your network is as strong as your weakest links. This is why it is important to be aware of legacy and outdated systems in your infrastructure as these can often be the entry points for attacks once they are compromised.
Keep your data center and systems updated and patch your web application firewalls and other network security programs. Additionally, working with your ISP or hosting provider, security and data center vendor for implementing other advanced protection capabilities is also a good idea.
5. Be ready with a DDoS response battle plan
When a DDoS attack hits, it will be too late to start thinking about the response. You need to have a response plan prepared in advance so that the impact can be minimized. A response plan should ideally include:
- Checklist of tools – a list of all the tools that will be implemented, including advanced threat detection, assessment and filtering software and hardware
- Response team – a team of personnel with clearly defined roles and responsibilities to carry out once the attack is detected
- Escalation protocols – clearly defined rules on whom to notify, escalate and involve in the event of an attack
- Communication plan – a strategy for contacting internal and external stakeholders, including your ISP, vendors and customers and how to communicate the news in real-time.
6. Ensure sufficient server capacity
Since volumetric DDoS attacks work by overwhelming the network bandwidth, one way to counter them is by overprovisioning bandwidth. By ensuring that your server capacity can handle heavy traffic spikes through added bandwidth, you can be ready for sudden and unexpected surges in traffic caused by DDoS attacks. Note that this may not stop a DDoS attack completely, but it will give you a few extra minutes to prepare other defenses before your resources are used up.
7. Explore cloud-based DDoS protection solutions
It is also wise to explore cloud-based DDoS protection solutions as part of the DDoS mitigation strategy. The cloud provides more bandwidth and resources compared to private networks. Cloud data centers can absorb malicious traffic and disperse it to other areas, preventing it from reaching the intended targets.
8. Use a Content Delivery Network (CDN)
One effective modern way to deal with DDoS attacks is to use a content delivery network (CDN). Since DDoS attacks work by overloading a hosting server, CDNs can help by sharing the load equally across a number of servers that are geographically distributed and closer in proximity to users. This way, if one server goes down, others will still be operational. CDNs can also provide certificate management and automatic certificate generation and renewal.
9. Get professional DDoS mitigation support
Don’t hesitate to call in a professional. DNS providers, and companies like CDNetworks can help you protect your web property by rerouting visitors as needed, monitoring performance for you, and distributing traffic across a number of servers should an attack take place.
The Cost of DDoS Attacks
DDoS attacks, and the motivations behind them, have evolved since the attacks of the 90s. Today, they are fiercer, easier to launch, and are often politically based. Each and every day, there are orchestrated cyber invasions carried out not only on big target corporations, but on small and medium-sized businesses as well. Few are sufficiently prepared to fend them off, however. The cost to businesses is spiraling, and estimated to be somewhere around $500 billion or more. Even then, experts say, most of the 50 million attacks each year go undetected. The cost of a cyber-attack for businesses is not only a loss of productivity, revenue, and business opportunities, but also damage to the company’s brand image. Operational costs skyrocket in many cases, as the businesses scramble to find and remedy their security vulnerabilities.
Steps to Take if You’re Attacked
While early detection is key to preventing devastating outcomes, there are steps you can take if you are the target of a DDoS attack. The first step is to ensure you have a cloud-based DDoS mitigation system in place that can handle attacks. Additional steps include:
- Setting up new IP addresses for your systems
- Ensuring DNS records are set for maximum security
- Blocking countries recognized as DDoS attack hubs
- Having a dedicated server exclusively for email
- Recording connections to your servers
CDNetworks offers security solutions that not only protect your business or organization, but also your company and clients’ intellectual property stored on your system and its servers. A proactive approach can prevent the damaging effects of DDoS attacks. For more information on our products, please fill in the form to contact us. | <urn:uuid:4c4a8e09-465d-4438-89b8-e6a0d0475b3c> | CC-MAIN-2022-40 | https://dev.cdnetworks.com/cn/cloud-security-blog/tips-to-protect-your-business-from-ddos-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00263.warc.gz | en | 0.939718 | 2,652 | 2.65625 | 3 |
Hamilton Public Library
This 70-hour course covers the fundamentals of the Linux operating system and its command line. The goal of this course is to provide learners with a starting point for the Linux operating system. Learners who complete this course will understand basic open source concepts, Linux as an operating system, how Linux is used and the basics of the Linux command line.
This course implements a "practice as you read" approach to learning. Learners have hands-on access to a Linux virtual machine to practice, explore and try out Linux command line concepts. They are also provided with step-by-step labs that help them build up their skills and knowledge.
Certificate of Completion
- Upon successfully completing the course, learners will be eligible to receive a congratulatory letter from the Linux Professional Institute that indicates course completion.
Minimum System Requirement
- Laptop or desktop running Windows, Mac or Linux operating system (smartphones and tablets are not currently supported)
- Web browser (Chrome 50+, Safari 8.0+, Firefox 43+ or IE 10+)
Partner: NDG Linux Essentials (English - 2.20 - Self-Paced)
24 Sep - 22 Dec 2021
Amir Feridooni, Jennie Hamilton, Christine Tam
Personal Records Protection Law in the State of Iowa
Iowa Code § 22.11, otherwise known as the Fair Information Practices Act, is a personal data privacy law that was enacted in 2014. Iowa’s Fair Information Practices Act was passed for the purpose of protecting the personal records that government bodies and agencies collect from residents within the state of Iowa. Moreover, the provisions of the law are also designed to promote transparency as it relates to the manner in which the personal information of Iowa residents is collected, maintained, and disseminated by government agencies within the state. To this point, the law outlines the steps that said agencies must follow when collecting and maintaining personal records relating to Iowa citizens.
How are public records defined under the law?
Iowa’s Fair Information Practices Act defines public records as “all records, documents, tape, or other information, stored or preserved in any medium, of or belonging to this state or any county, city, township, school corporation, political subdivision, nonprofit corporation other than a fair conducting a fair event as provided in chapter 174, whose facilities or indebtedness are supported in whole or in part with property tax revenue and which is licensed to conduct pari-mutuel wagering pursuant to chapter 99D, or tax-supported district in this state, or any branch, department, board, bureau, commission, council, or committee of any of the foregoing.”
What are the duties of government bodies under the law?
Government and state agencies within the state of Iowa have a number of duties and responsibilities with respect to safeguarding the personal records of residents within the state. Most notably, all state agencies are required to develop and implement rules and regulations concerning the following:
- The categories of personally identifiable information that the agency collects.
- The agency’s legal authority to collect personally identifiable information.
- How an agency will store the personally identifiable information it collects.
- The specific personal records maintained by the agency that contain personally identifiable information.
- Whether the agency uses a data processing system that can be used to match, collate, or permit the comparison of personal records containing personally identifiable information with other such records in another record system.
What’s more, Iowa’s Fair Information Practices Act also mandates that state and government agencies within Iowa develop guidelines relating to the protection of public records. These guidelines include:
- What agency records will be open for public examination, as well as which records will be deemed confidential, partially open, or partially confidential.
- The agency’s legal authority concerning the confidentiality of personal records.
- The procedures for providing the general public with access to public records.
- The procedures that an Iowa resident must adhere to when looking to object, dissent, or make additions to public records pertaining to them, unless such review has been prohibited by other applicable legislation within the state.
- The steps that must be taken when transferring personal records to another agency within the state of Iowa.
- The agency’s intended use of personal records pertaining to residents within Iowa.
- Whether the submission of personal records is optional or mandatory, as well as the consequences of choosing not to provide an agency with such information.
Public records maintenance and redaction
As the very nature of public records entails ensuring that such information is protected at all times, government and state agencies are faced with a tall task when it comes to safeguarding the personal records of individuals across the state. To this end, one method that can be used to protect the privacy of personal records of citizens residing in the state of Iowa is the utilization of automatic redaction software. By using an automatic redaction software program, state and government agencies within Iowa can effectively secure the personal records of individuals that live within the state, as these programs allow users to render personal information inaccessible. As such, bad actors will not be able to compromise such information, allowing state and government agencies to maintain compliance with legislation such as Iowa’s Fair Information Practices Act.
As virtually every U.S. citizen will have personally identifiable information concerning them contained within a public record in their respective state, legislation such as Iowa’s Fair Information Practices Act serves to guarantee that this information remains confidential at all times. Furthermore, as public records will be more likely to contain sensitive personal information such as physical addresses, phone numbers, and personal identifiers such as social security numbers, confirming that this information is protected is of the utmost importance. With all this being said, Iowa’s Fair Information Practices Act gives residents of the state the assurance that the personal information they supply to state and government agencies remains protected from unauthorized access, use, and disclosure. | <urn:uuid:e047bc47-1064-4008-acae-5951f043f5a3> | CC-MAIN-2022-40 | https://caseguard.com/articles/personal-records-protection-law-in-the-state-of-iowa/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00263.warc.gz | en | 0.927665 | 947 | 2.578125 | 3 |
Most of us already know the importance of using antivirus, anti-malware, and VPNs to secure our computers, phones, and other devices against potential attacks. Printers? Not so much. We at CyberNews wanted to show users the importance of protecting printers from becoming easy prey for cybercriminals, so we decided to bring the message home.
In order to help as many people as possible secure their devices against potential cyberattacks, the CyberNews security team accessed 27,944 printers around the world and forced the hijacked devices to print out a short 5-step guide on how to secure a printer, with a link to a more detailed version of the guide on our website.
To perform the experiment, we used Internet of Things (IoT) search engines to search for open devices that utilized common printer ports and protocols. After filtering out most of the false positives, we were left with more than 800,000 printers that had network printing features enabled and were accessible over the internet.
While this does not mean that all 800,000 of these printers were necessarily vulnerable to cyberattacks, our estimates have shown that we could successfully target approximately 500,000 of these devices.
After selecting a sample of 50,000 open printers and creating a custom printing script, we managed to print out PDF documents on 27,944 unprotected devices.
Before performing the attacks, our initial step was to gather the total number of available targets. To find out how many printers were on the menu for our experiment, we searched for IP addresses with open ports on specialized IoT search engines, such as Shodan and Censys. While performing the search, we made sure that the open devices we found were actual printers, as opposed to unrelated services that simply used those ports for other purposes.
Out of 800,000+ available printers, we selected a sample of 50,000 devices that we would try to access and force to print our guide on printer security.
Our selection was based on:
We then created our own custom script that was specifically designed to only target the printing process, without gaining access to any other features or data stored on the printers.
As soon as we launched the script, it began hijacking the printing processes in unsecured devices and forced them to print out the printer security guide.
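The script itself is not published, but the underlying technique, sending a raw job to TCP port 9100 (the common raw/JetDirect printing port), can be sketched roughly as follows. The host address is a placeholder from the documentation range and the payload format is a minimal assumption; test only against hardware you own:

```python
import socket

def build_raw_job(text):
    """Wrap plain text in a minimal raw print payload. Many printers
    accept bare text on port 9100; a trailing form feed ejects the page."""
    return text.encode("ascii", errors="replace") + b"\x0c"

def send_to_printer(host, payload, port=9100, timeout=5):
    """Open a TCP connection to the printer's raw port and send the job."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(payload)

job = build_raw_job("This printer is publicly accessible - please secure it.\n")
# send_to_printer("192.0.2.10", job)  # placeholder address; your own device only
```

That a one-page script like this suffices is precisely why closing port 9100 to the internet, as recommended below, matters so much.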
In the end, we managed to hijack 27,944 printers out of the 50,000 devices that we targeted, which amounts to a 56% success rate. Taking this percentage into account, we can presume that out of 800,000 internet-connected printers across the world, at least 447,000 are unsecured.
These numbers speak volumes about the general lack of protection of networked devices worldwide.
Example of available open printers on a single IoT search engine (Shodan.io).
As we can see, many users and organizations still use internet-connected devices without thinking about security, installing firmware updates, or taking into account the implications of leaving their devices publicly accessible. Which means that the humble printer remains one of the weakest links in the security of both organizational and home networks.
While security experts have been aware of printer vulnerabilities for quite a while, even previous large-scale attacks on printers like the Stackoverflowin hack in 2017 and the PewDiePie hack in 2018 did not seem to shock the public into securing their networked devices.
Even though securing each and every printer in the world might seem like a pipe dream, this does not mean that institutions and security experts should stop raising awareness about printer security and implementing stricter cybersecurity policies across organizations. Otherwise, the world might be just one massive cyberattack away from potential disaster.
While we were deliberately careful to only target the printing processes of the unsecured printers during the experiment, IoT hijacking attacks – when performed by bad actors without ethical limitations – can cause serious damage to organizations and individuals who neglect printer security.
From legal firms to banks to government departments, office printers are used by organizations of all types and sizes to print sensitive, confidential, and classified data. Not only that, these printers can also store copies of that data in their memory. Needless to say, attackers can easily exfiltrate this data by accessing unsecured office printers and use it for blackmail or corporate espionage, or simply sell it on the black markets of the dark web.
Bad actors can also take over unsecured printers and incorporate them into botnets in order to perform DDoS attacks, send spam, and more. What’s more, cybercriminals can use internet-connected printers to gain an initial foothold into local or corporate networks and find ways to cause further damage to unsuspecting victims. Or they can simply use these printers to mine cryptocurrency, ramping up their victims’ electricity bills in the process.
Our experiment has shown that printer security remains a serious concern for individuals and organizations across the world. With that said, most of the printers we managed to hijack could have been easily secured by following common security best practices and a few simple steps.
To quote the security guide we printed on tens of thousands of unsecured printers, “here’s how”:
1. Secure your printing ports and limit your printer’s wireless connections to your router. Configure your network settings so that your printer only answers commands that come via specified ports on your network router. The standard protocol for secure printing on new printers is IPPS protocol via SSL port 443.
2. Use a firewall. This will protect unused protocols that can allow cybercriminals to remotely access your printer from outside the network.
3. Update your printer firmware to the latest version. Printer manufacturers regularly fix known vulnerabilities in the firmware for the devices they produce, so make sure your printer always stays up-to-date security-wise.
4. Change the default password. Most printers have default administrator usernames and passwords. Change it to a strong, unique password in the utility settings of your printer and make sure print functions require login credentials.
For more detailed information on printer security, read our guide on securing your printer against cyberattacks here.
(SecurityAffairs – hacking, printers) | <urn:uuid:9e6935e0-1576-424e-b182-47822b446909> | CC-MAIN-2022-40 | https://securityaffairs.co/wordpress/107607/hacking/28k-unsecured-printers-hacked.html?utm_source=rss | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00263.warc.gz | en | 0.93776 | 1,264 | 2.703125 | 3 |
Virtual private networks (VPNs) have been around in one form or another for over 20 years. They’re extremely good at performing one very important task – creating a secure, encrypted tunnel that extends across public networks, allowing users to send and receive data as if their device were connected directly to an internal resource.
In recent years, VPNs’ ability to create this secure, remote connection has again given them a boost in popularity, despite claims by some security professionals that the technology is dead. In fact, enterprises grappling with large, decentralized workforces have turned to the VPN as a means for keeping employees and their devices secure while accessing sensitive applications and data – especially during the rise in remote work brought on by COVID-19.
But it’s not all good news for VPNs. While they do function well when it comes to tunneling and encrypting data from authorized users, there are a couple of significant catches.
Assessing the cost benefit of traditional enterprise VPNs
The world of security looks very different from when VPNs first entered the mainstream. Hacking and breaches existed, of course, but they were far less sophisticated and could often be beaten by the prevailing combination of VPN and firewall technologies. Since then, however, wave after wave of attacks, including distributed denial-of-service (DDoS) and various zero-day attacks, have taken advantage of newly discovered vulnerabilities.
And as employees increasingly started using their own devices (BYOD) for work purposes and began to work more frequently outside the protection of the office, it became even more difficult for IT to have visibility into what was happening on those devices as the number of attack surfaces proliferated.
Today, the most common cyberattacks begin with phishing, which if successful, can quickly cause the loss of information such as usernames, passwords, bank account details and more. If that stolen information happens to include VPN login credentials, then a hacker could go almost completely unnoticed as they exfiltrate virtually any ‘unlocked’ asset in the organization. And because some VPNs are based on open source technologies, a single vulnerability can be exploited across multiple solutions.
Traditional VPNs face one more significant disadvantage. In the past, most companies kept applications and their data on-premise, running in corporate data centers. Today however, organizations have rapidly shifted away from the cost and complexity of self-managed data centers to the convenience and simplicity of privately hosted applications and data or SaaS applications hosted in the public web.
With most VPNs needing to be either ‘off’ or ‘on,’ sending application data down a tunnel to HQ and then out to the web is extremely inefficient and can quickly cause a bottleneck that results in frustrated employees. Many companies discovered this firsthand when their VPNs couldn’t scale to meet the sudden demand of employees all working from home.
Mobile VPNs tighten security as some risk remains
One common solution to the inherent vulnerabilities of VPN is the mobile VPN, which takes all of the strengths of legacy VPNs, but works particularly well for mobile devices outside the corporate network. Rather than becoming yet another choke point in the network, these VPNs can actually improve the user experience through the use of data compression, application persistence and other enhancement techniques. No more dropped sessions or session re-authentication required, even in areas with choppy Wi-Fi or cellular connections.
But even mobile VPNs aren’t completely future-proof. As mentioned above, a hacker with access to a VPN’s credentials has almost carte blanche to access corporate data without being detected. The adoption of multifactor authentication (MFA) has certainly helped, but this also isn’t enough to ensure the continued integrity of corporate data.
The decentralized workforce accelerates SDP intrigue
As decentralized organizations with many mobile or remote employees continue to proliferate, companies are seeking technology with VPN-like benefits without the security risks and user experience challenges. The answer lies in deploying a software defined perimeter (SDP) around all of the devices used by the organization.
SDP, sometimes referred to as zero trust network access (ZTNA), uses a series of conditional criteria that must be met before any user or device is given access to corporate assets. Where is the user? What device are they using? Is the device running the latest, approved version of its OS? Does this user have authorization to access this application or data? There are literally dozens of criteria that can be used to judge each request’s authenticity and merits before it is allowed.
Having an SDP solution with the right set of policies means that even with the correct credentials, a hacker would not be able to access valuable data. Their device will set off a red flag based on visibility into location or any number of other factors.
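The conditional checks described above can be pictured as a policy function in which every rule must pass before access is granted. This is a simplified sketch, not a real SDP implementation; the rule names, request fields, and thresholds are invented for illustration.

```python
# Hypothetical zero-trust style access check: every rule must pass,
# so valid credentials alone are never enough. All names and values
# below are illustrative assumptions, not a real product's policy.
ALLOWED_COUNTRIES = {"US", "CA"}
MIN_OS_VERSION = (14, 2)

def rule_location(req):
    return req["country"] in ALLOWED_COUNTRIES

def rule_os_patched(req):
    return tuple(req["os_version"]) >= MIN_OS_VERSION

def rule_managed_device(req):
    return req["device_managed"]

def rule_authorized(req):
    return req["resource"] in req["user_entitlements"]

RULES = [rule_location, rule_os_patched, rule_managed_device, rule_authorized]

def grant_access(req):
    """Grant access only if every conditional rule passes."""
    return all(rule(req) for rule in RULES)

request = {
    "country": "US",
    "os_version": (15, 0),
    "device_managed": True,
    "resource": "hr-portal",
    "user_entitlements": {"hr-portal", "email"},
}
# Stolen credentials replayed from an unmanaged device still fail:
stolen = dict(request, device_managed=False)
```

Even with the correct credentials (the `stolen` request), the unmanaged device trips the policy, mirroring the "red flag" behavior described above.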
VPN and SDP together reduce risk and improve user experience
SDP is not about to usurp the role of the VPN completely, and for most organizations, the choice of one over the other will not occur for at least a decade. That’s because SDP and VPN complement one another extremely well, creating a hybrid solution that combines the benefits of a mobile VPN’s data encryption, compression and application persistence with the incredibly granular security benefits of an SDP.
One other thing to keep in mind: an SDP alone requires a controller (usually on the device) and a gateway somewhere in the network. That is significant because it means that an SDP alone can potentially become a chokepoint if it isn’t able to scale. Combining an SDP and an intelligent VPN would enable split tunneling directly to the web, reducing network congestion while maintaining security over corporate assets.
The reality for the foreseeable future is that most companies – about 98% in fact – still maintain some applications on-premise or at least hosted in a private cloud. For the vast majority of companies, therefore, evolving from a mobile VPN to a hybrid VPN / SDP solution before going all in on just an SDP makes the most sense. | <urn:uuid:4adedf10-7bc3-4fa7-a183-78542636549d> | CC-MAIN-2022-40 | https://www.cpomagazine.com/cyber-security/the-evolution-from-vpn-to-sdp-wont-happen-overnight/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00263.warc.gz | en | 0.952865 | 1,245 | 2.734375 | 3 |
Certified Information Systems Auditor (CISA)
A CISA, or Certified Information Systems Auditor, is someone who is certified to audit information systems (computers and networks) and the internal controls that a company has put around them to protect them from attack and subsequent compromise.
What is a CISA Designation?
The CISA designation is assigned to individuals who have passed a rigorous exam developed and administered by ISACA, also known as the Information Systems Audit and Control Association. These individuals are primarily employed to ensure that the controls an organization has put in place are effective and working as intended to protect the IT assets and sensitive information that the company is seeking to protect.
According to the ISACA, the CISA exam consists of 150 questions from 5 “domains”:
Domain 1—The Process of Auditing Information Systems (21%)
Domain 2—Governance and Management of IT (16%)
Domain 3—Information Systems Acquisition, Development and Implementation (18%)
Domain 4—Information Systems Operations, Maintenance and Service Management (20%)
Domain 5—Protection of Information Assets (25%)
Who Employs a CISA?
Actually, just about any firm can employ a CISA; however, it is typically larger firms with more complex controls that need them validated on a recurring basis. This is especially true if the company employing the CISA operates in a regulated industry such as banking (GLBA), healthcare (HIPAA), or retail (PCI DSS).
What is the Difference Between a CISA and CISSP?
According to the ISC2,
“The CISA certification, as its name implies, is about the audit of information systems. The CISSP is focused on the implementation, operation and maintenance of secure information systems. There is a slight overlap in content, but the primary focus is different. Both certifications are highly regarded by the industry, but each validates a different skillset, so it comes down to the kind of job being sought in the cybersecurity field – IT audit, or information security.”
As you can see, the CISSP focuses more on the security of an IT system rather than the controls surrounding it which would be the focus of the CISA.
Many would argue that the two certifications are complementary and give the individual holding the certifications a more holistic view of information system security as well as the controls that should be put in place to protect the system and the data that resides on it or passes through it.
Should I Get a CISA or CISSP Certification?
Really this depends upon your career goals. Are you looking at becoming an auditor or are you looking at becoming a systems administrator or security analyst? Deciding on your career path will go a long way in helping you determine which certification is the most appropriate for you to obtain.
Will the CISA Certification Help My Compensation?
In a word, yes!
According to a salary comparison:
“According to this recent IIA salary report, the 236 survey respondents with a CISA certification have an average salary of $105K, versus $65K for those without certification. This staggering statistic shows that the certification can make a huge difference in how much you get paid annually. What it doesn’t show, is that it also opens you up to positions you may not have been qualified for without the certification. But, more on that later.
This is only a rough comparison as they are many factors involved, including the number of years in the field, education level and type of companies they work for. But overall, the 61% premium is a big enough incentive for you to take the CISA certification seriously.”
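The "61% premium" quoted above follows directly from the two average salaries in the report, as a quick check shows:

```python
# Quick check of the quoted averages: $105K with a CISA vs. $65K without.
certified, uncertified = 105_000, 65_000
premium = (certified - uncertified) / uncertified
print(f"{premium:.1%}")  # prints 61.5%, i.e. roughly the 61% premium cited
```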
Do I Have to Have a Degree to Get a CISA Certification?
No, but there are minimum work experience requirements. You need to have at least 5 years of work experience in a related field. College credit will count towards these years, but as an example, a Master’s degree will only provide you a substitute for 1 year of work experience.
With that being said, a degree in a related field such as accounting or information security will go a long way to helping you prepare for and pass the CISA exam.
Once I Have My Certification, Am I Done?
Unfortunately, no. Even after receiving your certification, you will have to maintain a certain number of hours of continuing education credits. Per the ISACA:
“The CISA CPE policy requires the attainment of CPE hours over an annual and three-year certification period. CISAs must comply with the following requirements to retain certification:
- Attain and report an annual minimum of twenty (20) CPE hours. These hours must be appropriate to the currency or advancement of the CISA’s knowledge or ability to perform CISA-related tasks. The use of these hours towards meeting the CPE requirements for multiple ISACA certifications is permissible when the professional activity is applicable to satisfying the job-related knowledge of each certification.
- Submit annual CPE maintenance fees to ISACA international headquarters in full.
- Attain and report a minimum of one hundred and twenty (120) CPE hours for a three-year reporting period.
- Respond and submit required documentation of CPE activities if selected for the annual audit.”
Is a CISA Certification Worth the Work?
A CISA certification helps with not only your career advancement, but also your general knowledge of IT controls and how to properly protect systems from compromise. While not as security focused as the CISSP certification, it will go a long way to improve your knowledge of the security industry as a whole and why organizations must put into place certain controls to protect their computing platforms. | <urn:uuid:4675e170-b03e-43d5-9b9a-3536336cb18f> | CC-MAIN-2022-40 | https://informationsecuritybuzz.com/articles/what-is-a-certified-information-systems-auditor-cisa-designation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335573.50/warc/CC-MAIN-20221001070422-20221001100422-00263.warc.gz | en | 0.947176 | 1,172 | 2.71875 | 3 |
Just like bad apples can taint an otherwise good bunch, bad bots can wreak havoc on your personal and professional lives online. The question is, what exactly are bad bots, and how common are they?
To understand what a bad bot is, you must fully understand what bots are in general. Simply put, bots are internet applications that run automated tasks online. The tasks that these bots perform are simple but are completed at a much higher rate than the average human internet activity.
According to HelpNet Security, there are actually a lot of bots (hundreds of millions of them) hanging out on the internet. Believe it or not, about 64% of all internet traffic is generated by bots, and this traffic consists of 25% good bots and 39% bad bots. Good bots are things like search engine crawlers, automated chat bots, and social media bots.
What Is A Bad Bot?
Bad bots, sometimes referred to as malware bots, are essentially internet applications that run malicious tasks online. Sometimes these tasks are targeted to specific websites, mobile apps, and/or APIs. Examples of these tasks include brute-force login attempts, website scraping, competitive data mining, spam, transaction fraud, data harvesting, and more.
Unfortunately, the number of bad bots online has reached record-numbers in recent years, and they show no signs of slowing down. HelpNet Security’s annual report indicated that the top two targets for bad bots are login portals and eCommerce applications. This is why it’s so critical to stay informed of the latest cybersecurity threats and learn what you can do to thwart them.
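One simple thwarting technique many sites use against the brute-force logins and aggressive scraping mentioned above is rate limiting. Here is a minimal sliding-window sketch; the threshold and window values are purely illustrative, not recommendations.

```python
from collections import defaultdict, deque

# Naive sliding-window rate limiter: the kind of check used to flag
# brute-force logins and aggressive scrapers.
WINDOW_SECONDS = 60
MAX_REQUESTS = 30  # illustrative threshold

_hits = defaultdict(deque)

def looks_like_bot(client_ip, now):
    """Record a request at time `now` (seconds) and report whether the
    client exceeded MAX_REQUESTS within the last WINDOW_SECONDS."""
    q = _hits[client_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS
```

A real deployment would sit behind the web server, key on more than the IP address (user agent, session, headers), and respond with throttling or a CAPTCHA rather than a hard block.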
Bad Bots And What They Can Do To Your Website
Picture this: You go to your favorite online shop to purchase a birthday present or some kind of treat for yourself. An hour later or so, you receive a series of fraud alerts on the credit card you used on that website. Odds are, you’ll never go back there again.
Now, imagine this happening to your customers when they visit YOUR website. This scenario is unfortunately just one of the many horrible things bad bots are capable of achieving. Bad bots and cybercrime are interlinked, and it’s in your best interest to avoid both.
That said, here are just a few of the nasty things bad bots can do to your website:
Bad Bots Can Harvest Sensitive Data
The most basic bad bots are capable of stealing any information that customers enter into forms, polls, and comments. However, more advanced bots are also capable of stealing financial information and other sensitive data.
Bad Bots Can Tarnish Your Brand
If a string of customers start posting negative reviews online about how your company accidentally leaked their information, what type of reputation do you think you will get? Not a very good one to say the very least.
The worst part? Bad bots can actually be created to generate bad reviews about your company! Imagine being flooded with comments like:
- Horrible customer service
- Do not recommend this place
- Bad experience
Yes, this can happen! Sometimes it’s just a competitor. Other times it’s a cybercriminal trying to wreak havoc to stir up trouble in the market.
Bad bots can also spam fake comments for your website or product. These are the random people with no profile picture, brand new accounts, unverified purchases, weird text, and malicious links at the ready for every blog post you hit publish on.
Brand images take years to build up, and they can crumble in days thanks to bad bots.
Bad Bots Can Get You Blacklisted
If your website gets flagged or reported enough times, search engines like Google and Bing will automatically blacklist your website. What happens when a website gets blacklisted? It means that each time a visitor tries to click on your website, a security warning will appear in bright red.
Something like this has the potential to completely ruin any credibility your website had before being blacklisted. Visitors can manually bypass the warning, but most people, being security-conscious these days, will jump ship within seconds of seeing this screen.
Bad Bots Can Steal And Plagiarize Your Content
Bad bots can copy and plagiarize all the amazing content you’ve spent time and resources crafting. This is also known as website scraping. Of course, you won’t be credited. However, your SEO ranking might tank as a result. If a search engine detects duplicate or syndicated content popping up on different websites, especially ones laden with malware or spam, your site might be punished even if you were the original creator.
Your competitors could start ranking higher, and you could stand to lose an untold number of customers and potential revenue.
Bad Bots Can Disrupt Your Analytics
Analytics are what website owners use to learn where they can improve. Sadly, bad bots can disrupt your analytics in a variety of ways rendering any numbers you receive all but useless. With bad bots lingering around, your website traffic, traffic sources, click-through rates (CTRs), interactions, and many other metrics can become skewed. As a result, any marketing efforts you may implement could actually be counterintuitive to your success.
Bad Bots Can Crash Your Website
Distributed denial of service (DDoS) attacks can occur when too much data is flooded into a single system until that system crashes. While the average attack only lasts a few minutes, that’s all it takes to cripple a network and cost a company $120,000 to $2 million.
Prior to a website crash, a website will also see noticeable speed decreases that will increase bounce rates. That’s because all bots bring hefty chunks of data with them when they visit your site. An increase in bounce rate and decrease in speed can also harm your SEO ranking!
Bad Bots Can Infect Your Site With Malware
Some would consider this to be the biggest consequence of bad bots. Certain bad bots can infiltrate websites, and when somebody visits that website, they’ll automatically download malware. Malware can negatively impact both your company and your visitors, and could spread to other websites, your computer, and other devices.
Preventing Bad Bots From Ruining Your Website
Scarier than ghosts, werewolves, and witches, bad bots are real, and should be feared. The good news is you can create some protections against bad bots. Two of the most common “repellants” are website security scanners and malware removal tools.
Website Security Scanners
Think of website security scanners as tools to increase peace of mind for both you and your visitors. These applications offer bad bot protection by searching out and neutralizing bad bots and other dangerous security vulnerabilities. Quality website security scanners are easy to use and don’t require a lot of technical skill or knowledge to implement. These applications perform SSL scans, malware scans, cross-site scripting scans, and more on a daily basis.
Malware Removal Tools
Malware removal tools work in tandem with website security scanners to offer even more protection. Look for malware removal tools that are easy to use and that provide protection around the clock.
To Wrap Up
Bad bots are up to no good. These bots are created by criminals, fraudsters, and other nefarious parties to profit from the problems they cause. The more we learn about security threats like bad bots, however, the easier it is to combat them. Hopefully this post has inspired you to look into protection software for your own website. Cybercrime is one war that requires as much defense as possible, and you deserve to have tools working 24/7 to keep your online assets safe. | <urn:uuid:5449b992-e445-4270-9cb6-31d4562e3dd6> | CC-MAIN-2022-40 | https://myspybot.com/what-are-bad-bots/?amp=1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00263.warc.gz | en | 0.927135 | 1,560 | 2.71875 | 3 |
What are the 5 Best Courses for Web Design?
If you have a strong creative spark, a keen eye and a passion for websites… a career as a web designer may be the perfect choice for you!
A web designer is someone who works on both new and existing websites and applications to make sure they not only look good, but are also user friendly. A designer will not only need to be able to code, but also use graphic design packages to create engaging buttons, background images and icons.
Many people get web design and web development confused. Web design is focused on the customer experience (i.e. the ‘front-end’), while web development focuses on the code behind the scenes that the website needs to be able to function (i.e. the ‘back-end’).
The great thing about web design as a career is that it is incredibly versatile. You can decide to work in-house in the private or public sector, as a web designer for an agency, or even choose to go freelance. You will also learn a range of useful skills that you can apply to other professions, like game development, digital marketing, server administration or graphic design.
If you’ve decided that web designing is the perfect choice for you, there are a variety of different courses you can enrol on to take the first steps towards a career in web design. With many of them, you do not need any previous experience or specific qualifications to sign up, meaning that you can get started straight away!
Here are five of the best web design courses available, and how they will help to boost your career.
1. HTML essentials
Hypertext markup language (or HTML for short) is used to create web pages. Tags are wrapped around content to determine how it needs to be ordered, and where tables, images, videos and documents need to be placed. It’s a relatively easy language to learn and is an excellent place to start your web design career.
Our HTML essentials course will give you a knowledge of HTML that you can apply to your web design career and use alongside other coding languages (more on that later).
Plus as the course only requires between five to ten hours of study time, you can easily complete it in a couple of days, fitting it in around your current employment.
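To make the idea above concrete, here is a minimal page in which tags wrap content to give it structure; the file names, link, and text are placeholders:

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <h1>Welcome</h1>
    <p>Tags such as <strong>strong</strong> wrap content to describe it.</p>
    <img src="photo.jpg" alt="A placeholder photo">
    <a href="https://example.com">A link to another page</a>
  </body>
</html>
```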
2. CSS essentials
Like HTML, CSS is a fundamental coding language that you will use as a web designer. While HTML is used to determine how content and information will be set out, cascading style sheets (CSS) will determine how the content will be styled.
Think of HTML as a person’s head, and CSS as the different types of hats the head can wear! You can style the same HTML content in a variety of ways, all by using a range of style sheets.
We offer a short CSS essentials course that will introduce you to how CSS works and is used, and how you can use it alongside HTML to create a variety of beautifully styled web pages.
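To stretch the analogy above, the same HTML heading can wear two different "hats" depending on which style sheet is linked; the file names and styles here are arbitrary examples, with only one sheet loaded at a time:

```css
/* hat-one.css: quiet and classic */
h1 {
  font-family: Georgia, serif;
  color: #222222;
  border-bottom: 2px solid #222222;
}

/* hat-two.css: loud and modern */
h1 {
  font-family: "Arial Black", sans-serif;
  color: #e91e63;
  text-transform: uppercase;
  letter-spacing: 0.2em;
}
```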
3. Adobe Dreamweaver
When you are building a website, you will need a web development tool to help you create and maintain your code, as well as test it for errors. Adobe Dreamweaver is one of the most popular tools around.
Our Adobe Dreamweaver course will show you how to use this tool to create well-designed, fully functional and attractive websites. It also proves to future employers that you are fully invested in web design and committed to personal development, increasing the odds of you getting the web design role of your dreams.
You don’t need any qualifications to take the Adobe Dreamweaver course, but we’d recommend having a basic knowledge of HTML and CSS before you begin.
4. Adobe Illustrator
Adobe Illustrator will help you to create beautiful icons, buttons and imagery for any websites you make.
Imagery is vital for any website, with 60% of customers stating they are more likely to consider a business when a relevant image shows up in their search results. Not only will strong imagery help you to stand out in a crowded marketplace, but it will help improve conversions on your website too.
This course is ideal for anyone looking to move into web design, and is also an excellent option for people looking to start a career in graphic design too.
5. Adobe Photoshop
If you are designing a website with a lot of photos – for example, an eCommerce site with many products to promote, you will need a high-quality editing tool to make sure the images look good.
Adobe Photoshop is the world’s best-known photo editing tool, giving users a wide range of options to optimise, touch up and generally make photos look more enticing!
This course will introduce you to Photoshop, explain how the different tools can be used and show you how you can not only edit photos, but create stunning logos and imagery too.
Get started with Web Design today with ITonlinelearning
If you want to get started with a brand new career in web design, these five courses are an excellent starting point!
The great thing is that they can all be completed online, meaning that you can fit your online learning around your current job or childcare commitments. Plus with 0% finance available, we can help you to get started in your web design career sooner rather than later! | <urn:uuid:b09c834b-a910-4b38-a4ef-94c54450b675> | CC-MAIN-2022-40 | https://www.itonlinelearning.com/blog/what-are-the-five-best-courses-for-web-design/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00263.warc.gz | en | 0.929757 | 1,094 | 2.53125 | 3 |
Why Your Digital Health Care Experiences Need To Be Accessible
If you work with consumer-facing healthcare technology, you’ve likely heard the term “digital accessibility,” which means optimizing digital experiences so that all users — including people with disabilities — can have access to the same information, through the same platform.
For healthcare systems, helping users with disabilities find the information and services they need is critical to patient acquisition, care and loyalty. For example, if a patient is looking for a provider on your website or mobile application and can’t easily find the provider’s credentials, location and availability, they’ll likely go elsewhere. Not only do digital experiences that are built with accessibility in mind help users with disabilities locate the information they need, they also create a better experience for all users. This is particularly true for patients who may be experiencing temporary challenges due to disease progression, injury or while they are recovering.
Build a digitally accessible ecosystem
Critically, digital accessibility makes it easier for users to complete necessary tasks online — independently and securely. If your organization has an audience that likely contains a disproportionate number of people with disabilities, providing an accessible experience using best practices for both accessibility and universal design will help distinguish you from the pack, as recent research shows that 98.1% of the web is not accessible.
Creating an accessible ecosystem like this can set you apart from competitors. Not only will you reach the 20% of the population with a disability, but also their friends and family members.
Digital healthcare platforms offer users many forms; whether they’re for pre-admission, general contact information or for insurance, secure and accessible online forms are an important part of healthcare access.
These kinds of forms demonstrate the importance of accessibility. For many people with disabilities, online forms offer a secure platform. The alternative for the user is to ask someone for help. While some people with disabilities have trusted helpers, the necessity to ask for help may violate privacy rights and reduce a user’s level of independence, while opening them up to potential fraud if the helper turns out to be untrustworthy.
The WCAG guidelines were updated in 2018 to WCAG 2.1, and the latest version, WCAG 2.2, is currently drafted and awaiting final approval by the W3C. These updates are meant to apply accessibility standards more effectively to mobile applications and devices, the Internet of Things (IoT), and the Internet of Medical Things (IoMT), as well as make incremental changes and updates to the current requirements for better usability or to clarify older requirements. This broadening of the scope and applicability of these standards means that even device manufacturers and companies developing Software as a Medical Device (SaMD) should be working toward compliance with the draft standards right now.
The WCAG requirements cover visible criteria such as the ability to zoom to 400% without losing functionality, high contrast for all text and icons, and link visibility. Many users, even those who don’t self-identify as having a disability, benefit from these requirements. For example, elderly patients with low vision frequently use the browser zoom function, and become frustrated with elements that overlap or require horizontal scrolling.
According to the Centers for Disease Control and Prevention (CDC), approximately 26% of the population in the U.S. is living with a disability. These may include vision, hearing, mobility and cognitive disabilities. People with disabilities have a disproportionately high level of healthcare-related expenses, providing an extra imperative for hospitals and clinics to make digital services easily accessible.
Since the beginning of the COVID-19 pandemic, many healthcare providers have transitioned to virtual care. While telehealth services have been rising steadily over the last decade, this service has become a critical part of healthcare since March 2020, escalating rapidly since Medicaid coverage became available for telehealth services. Our most vulnerable populations, the elderly and people with disabilities, are the populations most at risk for COVID-19 related complications. These are also the populations who most benefit from accessible digital tools and experiences.
What does “digitally accessible” mean?
Digitally accessible means making information and services available to all users, including those with disabilities, by taking their needs into account. Requirements for digital accessibility are laid out in a global set of standards from the W3C called the Web Content Accessibility Guidelines (WCAG). In the United States, digital accessibility is required under the Americans with Disabilities Act (ADA) as a part of public accommodation.
Evaluate your consultancy partner closely. Is your current partner meeting the needs of your customers and patients? Here are a few simple tests that can help you decide:
- Is all copy and functional imagery high-contrast?
- Can you press the tab key and easily navigate through all elements on the page?
- Do all images have descriptive text alternatives?
- Using the browser zoom function, zoom in to 200%, then to 400%. Can you still read everything on the page without overlapping elements?
If these tests are successful, you can be confident that your partner takes the basic tenets of accessibility seriously, and will successfully help you navigate the requirements.
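The third quick test above (descriptive text alternatives) can even be partly automated. As a rough sketch using only Python's standard-library HTML parser, the following flags `img` tags that have no `alt` attribute at all; note that decorative images may legitimately carry an empty `alt=""`, so those are not flagged.

```python
from html.parser import HTMLParser

class MissingAltFinder(HTMLParser):
    """Collect <img> tags that have no alt attribute at all."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        # Decorative images may legitimately use alt="", so only a
        # fully absent alt attribute is flagged here.
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<no src>"))

def images_missing_alt(html):
    finder = MissingAltFinder()
    finder.feed(html)
    return finder.missing

page = '<img src="logo.png" alt="Clinic logo"><img src="hero.jpg">'
```

A check like this only catches missing attributes, not unhelpful descriptions, so it complements rather than replaces a manual review.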
If you have questions about the accessibility of your digital health experiences and tools, we’d be happy to help. Contact us. | <urn:uuid:78ebc444-0141-4fd0-b9f7-04382c1188ab> | CC-MAIN-2022-40 | https://www.nerdery.com/insights/why-your-digital-health-care-experiences-need-to-be-accessible/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00463.warc.gz | en | 0.937989 | 1,117 | 2.84375 | 3 |
Science is thrilling and dynamic, and the way we learn it should be too.
World-renowned physicist and professor at Columbia University, Brian Greene, co-founded the World Science Festival alongside Emmy Award–winning producer Tracy Day with a single-minded vision: to transform how everyone experiences the wonders of science. He wants students to experience science the way scientists do — “as this exciting adventure of trying to gain a deeper understanding of ourselves and the world.”
Children naturally possess a quality that is important to the study of science: curiosity. Greene believes that the formal education system, with its routine assessments and test-based learning, erodes that natural curiosity over time.
But there is no reason why curious children shouldn’t become curious adults, if science is introduced early and taught in a more engaging way.
The mission of the World Science Festival is to show children that the wonders of science go far beyond what they learn in the classroom, by bringing awe-inspiring learning experiences to as many students as possible.
Delivering Immersive Experiences
The World Science Festival envisioned taking big ideas off the page and immersing students in scientific phenomenon through virtual reality. It launched a virtual reality learning experience for schools in partnership with Verizon.
The idea was to allow middle schoolers to virtually enter different realms such as the Milky Way, where they can participate in the lifecycle of stars and planets. Abstract scientific concepts suddenly become more immediate, fun, and engaging.
"We wanted to show kids that science and the wonders of the universe transcend the details they learn in class,” says Greene. “Yes, you have to learn the ideas in class in order to grasp them more fully. But ultimately, we’re trying to give insights into some of the most exciting experiences you could imagine."
Using BlueJeans Meetings features and tools, the experience is made more immersive as teachers can direct learning using whiteboard or annotation tools and can appear live on-screen or as a robot avatar in virtual reality. Greene’s team uses digital cameras that float in the virtual world with the built-in capacity to adjust angles, so students can zoom in, pull out, and move around to fully explore the space.
Expanding Opportunities to Learn
When Greene’s team first rolled out the learning program, participants used virtual reality headsets. This limited the number of students who could take part, as not all schools have access to such technology.
To eliminate the need for a virtual reality headset and make the experience more accessible, the team integrated BlueJeans Meetings features into program delivery, using BlueJean’s software development kit.
Together with Verizon, the World Science Festival has introduced the program to more than 40 schools in the United States. It plans to expand the program to include more schools across the country, including those in remote areas.
Creating Possibilities through Technology
Greene and his team had their reservations about the quality of the virtual reality experience via a screen using BlueJeans instead of using a headset, but they have been pleasantly surprised by the results.
"Even though you don’t have the same functionality as you do when using a headset, you can transport yourself in the same way," says Greene.
"The capacity for the individual who is participating via BlueJeans to have this dynamic perspective makes all the difference in the world. It feels like you have an iconic view of the galaxy because of the roaming nature of the virtual cameras."
As the team continues to develop more virtual reality experiences, this new format for delivering learning is expected to inspire confidence and motivate students to engage with complex scientific concepts. From understanding orbital motion and planet formation to Einstein’s theory of special relativity, BlueJeans will be integrated into these experiences going forward. Greene expects to complete the BlueJeans integration and kick off more new experiences by the end of 2023.
Read the full case study here. | <urn:uuid:3d9c2b56-0c8c-47c2-b756-3ae5b73e31fe> | CC-MAIN-2022-40 | https://www.bluejeans.com/blog/world-science-festival-transports-students-across-galaxies-using-bluejeans | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335004.95/warc/CC-MAIN-20220927100008-20220927130008-00463.warc.gz | en | 0.941473 | 807 | 3.15625 | 3 |
After completing this chapter, you will be able to perform the following tasks:
Identify what a VLAN is and how it operates.
Configure a VLAN to improve network performance.
Identify what role the switch plays in the creation of VLANs.
Identify how network devices communicate about VLANs.
Describe the need and operation of the VLAN Trunking Protocol.
Configure the Catalyst Switch for VLAN operation.
The design and function of a bridged/switched network is to provide enhanced network services by segmenting the network into multiple collision domains. The fact remains, however, that without any other mechanism, the bridged/switched network is still a single broadcast domain. A broadcast domain is a group of devices that can receive one another's broadcast frames. For example, if device A sends a broadcast frame and that frame is received by devices B and C, all three devices are said to be in a common broadcast domain. Because broadcast frames are flooded out all ports on a bridge/switch (by default), the devices connected to the bridge/switch are in a common broadcast domain.
Controlling broadcast propagation throughout the network is important to reduce the amount of overhead associated with these frames. Routers, which operate at Layer 3 of the OSI model, provide broadcast domain segmentation for each interface. Switches can also provide broadcast domain segmentation using virtual LANs (VLANs). A VLAN is a group of switch ports, within a single or multiple switches, that is defined by the switch hardware and/or software as a single broadcast domain. A VLAN's goal is to group devices connected to a switch into logical broadcast domains to control the effect that broadcasts have on other connected devices. A VLAN can be characterized as a logical network.
The benefits of VLANs include the following:
VLANs enable you to group users into a common broadcast domain regardless of their physical location in the internetwork. Creating VLANs improves performance and security in the switched network by controlling broadcast propagation and requiring that communications between these broadcast be carried out by a Layer 3 device that is capable of implementing security features such as access control lists (ACLs).
In a broadcast environment, a broadcast sent out by a host on a single segment would propagate to all segments. In normal network operation, hosts frequently generate broadcast/multicast traffic. If hundreds or thousands of hosts each sent this type of traffic, it would saturate the bandwidth of the entire network, as shown in Figure 3-1. Also, without forcing some method of checking at an upper layer, all devices in the broadcast domain would be able to communicate via Layer 2. This severely limits the amount of security you can enforce on the network.
Figure 3-1 Broadcast Propagation
Before the introduction of switches and VLANs, internetworks were divided into multiple broadcast domains by connectivity through a router. Because routers do not forward broadcasts, each interface is in a different broadcast domain. Figure 3-2 shows an internetwork broken into multiple broadcast domains using routers. Notice that each segment is an individual IP subnet and that regardless of a workstation's function, its subnet is defined by its physical location.
A VLAN is a logical broadcast domain that can span multiple physical LAN segments. A VLAN can be designed to provide independent broadcast domains for stations logically segmented by functions, project teams, or applications, without regard to the users' physical location. Each switch port can be assigned to only one VLAN. Ports in a VLAN share broadcasts. Ports that do not belong to the same VLAN do not share broadcasts. This control of broadcast improves the internetwork's overall performance.
VLANs enable switches to create multiple broadcast domains within a switched environment, as illustrated in Figure 3-3.
Figure 3-2 Multiple Broadcast Domains Using Routers
Notice that now all users in a given group (department in this example) are defined to be in the same VLAN. Any user in this VLAN receives a broadcast from any other member of the VLAN, while users of other VLANs do not receive these broadcasts. Each of the users in a given VLAN is also in the same IP subnet. This is different from the broadcast domains of Figure 3-2, in which the physical location of the device determines the broadcast domain. However, there is a similarity with a legacy, non-VLAN internetwork because a router is still needed to get from one broadcast domain to another, even if a VLAN is used to define the broadcast domain instead of a physical location. Therefore, the creation of VLANs does not eliminate the need for routers.
Within the switched internetwork, VLANs provide segmentation and organizational flexibility. Using VLAN technology, you can group switch ports and their connected users into logically defined communities of interest, such as coworkers in the same department, a cross-functional product team, or diverse user groups sharing the same network application.
A VLAN can exist on a single switch or span multiple switches. VLANs can include stations in a single building or multiple-building infrastructures. In rare and special cases, they can even connect across wide-area networks (WANs).
Figure 3-3 VLAN Overview
As mentioned previously, prior to the VLAN, the only way to control broadcast traffic was through segmentation using routers. VLANs are an extension of a switched and routed internetwork. By having the ability to place segments (ports) in individual broadcast domains, you can control where a given broadcast is forwarded. The sections that follow expand on these concepts. Basically, each switch acts independently of other switches in the network. With the concept of VLANs, a level of interdependence is built into the switches themselves. The characteristics of a typical VLAN setup are as follows:
Each logical VLAN is like a separate physical bridge.
VLANs can span multiple switches.
Trunk links carry traffic for multiple VLANs.
With VLANs, each switch can distinguish traffic from different broadcast domains. Each forwarding decision is based on which VLAN the packet came from; therefore, each VLAN acts like an individual bridge within a switch. To bridge/switch between switches, you must either connect each VLAN independently (that is, dedicate a port per VLAN) or have some method of maintaining and forwarding the VLAN information with the packets. A process called trunking allows this single connection. Figure 3-4 illustrates a typical VLAN setup in which multiple VLANs span two switches interconnected by a Fast Ethernet trunk.
Figure 3-4 Multiple VLANs Can Span Multiple Switches
How VLANs Operate
A Catalyst switch operates in your network like a traditional bridge. Each VLAN configured on the switch implements address learning, forwarding/filtering decisions, and loop avoidance mechanisms as if it were a separate physical bridge. This VLAN might include several ports, possibly on multiple switches.
Internally, the Catalyst switch implements VLANs by restricting data forwarding to destination ports in the same VLAN as originating ports. In other words, when a frame arrives on a switch port, the Catalyst must retransmit the frame only to a port that belongs to the same VLAN as that of the incoming port. The implication is that a VLAN operating on a Catalyst switch limits transmission of unicast, multicast, and broadcast traffic. Flooded traffic originating from a particular VLAN floods out only other ports belonging to that VLAN. Each VLAN is an individual broadcast domain because a broadcast in a given VLAN will never reach any ports in other VLANs.
Normally, a port carries traffic only for the single VLAN to which it belongs. For a VLAN to span multiple switches on a single connection, a trunk is required to connect two switches. A trunk carries traffic for all VLANs by identifying the originating VLAN as the frame is carried between the switches. Figure 3-4 shows a Fast Ethernet trunk carrying multiple VLANs between the two switches. Most ports on Catalyst switches are capable of being trunk ports. Any port on a Catalyst 2950 can be a trunk port.
VLAN Membership Modes
VLANs are a Layer 2 implementation in your network's switching topology. Because they are implemented at the data link layer, they are protocol-independent. To put a given port (segment) into a VLAN, you must create a VLAN on the switch and then assign that port membership on the switch. After you define a port to a given VLAN, broadcast, multicast, and unicast traffic from that segment will be forwarded by the switches only to ports in the same VLAN. If you need to communicate between VLANs, you must add a router (or Layer 3 switch) and a Layer 3 protocol to your network.
The ports on a Layer 2 Catalyst switch, such as a 2950, all function as Layer 2 ports. In Cisco IOS Software, a Layer 2 port is known as a switchport. A switchport can either be a member of a single VLAN or be configured as a trunk link to carry traffic for multiple VLANs. When a port is in a single VLAN, the port is called an access port. Access ports are configured with a VLAN membership mode that determines to which VLAN they can belong. The membership modes follow:
StaticWhen an administrator assigns a single VLAN to a port, it is called static assignment. By default, all Layer 2 switchports are statically assigned to VLAN 1 until an administrator changes this default configuration.
DynamicThe IOS Catalyst switch supports the dynamic assignment of a single VLAN to a port by using a VLAN Membership Policy Server (VMPS). The VMPS must be a Catalyst Operating System switch, such as a Catalyst 5500 or 6500, running the set-based operating system. An IOS-based Catalyst switch cannot operate as the VMPS. The VMPS contains a database that maps MAC addresses to VLAN assignment. When a frame arrives on a dynamic port, the switch queries the VMPS for the VLAN assignment based on the arriving frame's source MAC address.
A dynamic port can belong to only one VLAN at a time. Multiple hosts can be active on a dynamic port only if they all belong to the same VLAN. Figure 3-5 demonstrates the static and dynamic VLAN membership modes.
Figure 3-5 VLAN Membership Modes
For an access port, the VLAN identity is not known by the sender or receiver attached to the access port. Frames going into and out of access ports are standard Ethernet frames, as discussed in Chapter 2, "Configuring Catalyst Switch Operations." The VLAN identity is used only within the switch to provide broadcast domain boundaries. | <urn:uuid:75369f89-f2ae-47cd-9899-f04d4ecfb135> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=102157&seqNum=5 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00463.warc.gz | en | 0.921734 | 2,232 | 3.890625 | 4 |
For years, viruses have remained a persistent threat to enterprises of all sizes. Accidentally clicking on a fake link is all it takes to infect your network with malware of a virus. Knowing how to perform a network virus scan is essential for identifying the latest cyber threats and avoiding downtime.
The cost of downtime can be devastating, with the infamous MyDoom virus costing $38 billion over 15 years, becoming the most high profile virus to date.
With the emergence of network viruses that spread through network traffic, administrators have to be even more proactive at detecting threats.
What is a ‘network virus’ and how is it different from a normal virus?
A network virus is a type of malware that can replicate itself across multiple computers through network packets. Network viruses are different from traditional viruses because they don’t rely on files in order to spread but self-replicate across hosts and spread through executable code or a document.
For most viruses, administrators can deploy an antivirus solution that runs manual or automated scans to detect when a device is compromised. Once a virus is detected the user can quarantine the files and remediate the outbreak. Unfortunately, the process is a little more complex when dealing with a network virus.
As network viruses spread through network packets, traditional antivirus solutions can’t detect them. Such viruses are very difficult to get rid of and commonly re-infect devices. The side effects of a successful attack range from poor network performance to data theft, compromised device performance, and downtime.
From a network administrator’s perspective, Network viruses require a different type of security strategy than traditional viruses. To detect a network virus a network administrator needs to scan network traffic with a packet sniffer or intrusion detection tool to detect malicious packets and other suspicious activities.
How to scan for malicious traffic with a packet sniffer (Wireshark)
Wireshark is a packet sniffing tool available for Windows, macOS, and Linux that you can use to scan your network for malicious traffic. With Wireshark you can sniff traffic to identify infected files, helping you to find the root cause of a virus outbreak. Before running a capture you can select the type of interface you want to monitor.
To start capturing packets in your network, double click on the Wi-Fi option under the Capture heading. The software will start to collect packets in real-time displaying information such as Time, Source, Destination, Protocol, and other Info. You can stop capturing packets by pressing the red Stop icon in the top left corner of the screen.
To make sense of the information you capture, you’ll want to use packet filtering. Packet filters limit the output information based on the type of filter you apply. You can apply filters by using the filter box/search bar at the top of the screen.
Filtering packets are useful for identifying malicious packets as you can search for packets coming to and from an IP address or filter all traffic by a certain type. For example, to see packets coming to or from an IP address you can use the following filter (Change the IP address for the one of the IP address you want to filter IP packets coming from ):
ip.src == 192.788.53.1
Alternatively, if you want to filter packets that are going to an IP address you can use the following filter:
ip.dst == 192.788.53.1
You can also combine the two filters together if you want to view traffic traveling to and from the IP address with the following command:
ip.src == 192.788.53.1 or ip.dst == 192.788.53.1
If you want to filter by packet type then you can do so by entering the type of packets you want to filter into the filter bar (the example below uses DNS, but you could use another packet type such as DHCP, ICMP, or TCP):
Filtering IP addresses in this manner allows you to monitor the conversations taking place between particular machines, so if you suspect that a computer is infected, you can take a closer look at its traffic. It’s a good idea to regularly inspect hosts generating the greatest traffic volume, as this can indicate the host is infected with malware and is attempting to spread it to other machines.
Another key issue to look out for is if traffic is sent to and from unusual locations or if a host starts to send an unusually high amount of traffic. The only way to identify this abnormal activity is to take a baseline capture of your normal network activity so you can see anomalous behavior more clearly.
For an in depth tutorial on Wireshark see our How to use the Wireshark Network Protocol Analyzer post.
Why scan for malware and malicious traffic with a packet sniffer?
Running a standard virus scan with an antivirus will enable you to detect malicious entities like viruses and malware that have infected your device. The traffic that enters your network is a key entry point to your network, and monitoring that entry point will enable you to respond quickly when a threat breaches your defenses.
Packet sniffers are an important tool because many antiviruses struggle to detect network viruses that replicate across multiple hosts. Tools like Wireshark and Snort give you the ability to pinpoint strange connections across your network so that you can investigate and address any underlying threat.
By combining continuous packet sniffing with traditional antivirus virus scanning you can protect your network more comprehensively, and defend against a broader range of threats. In other words, combining the two significantly reduces your exposure to online threats.
Using an IDS to detect malware
An Intrusion Detection System (IDS) is a type of software that can detect attempts to break into your network. IDS tools can detect intrusion attempts, like malware, viruses, trojans, or worms, and notify you when an attack takes place. Examples of IDS solutions you can use to monitor for threats include Snort and Nmap.
IDS’s are useful because they can detect the early signs of a cyber attack. For example, before launching an attack on a network, many hackers will run a port scan to look for vulnerabilities. With a tool like Snort, you can detect port scanning, which gives you a heads up before any damage is done to your network.
IDS solutions use signature-based and anomaly-based detection methods to detect attacks. A signature-based IDS searches for malicious patterns in traffic based on known attacks and an anomaly-based IDS uses machine learning to detect abnormal behavior and flag it up to the user.
Out of the two methods, anomaly-based IDS solutions are more effective at scanning networks for unknown viruses and malware. Signature-based tools need to be regularly updated to stay effective and struggle against unknown zero-day attacks.
Packet sniffer or IDS for detecting malware?
Both packet sniffers and IDSs are useful for detecting malicious activity taking place on the network and are very similar. The key difference between the two is that an IDS is a packet sniffer with anomaly detection, which can identify malicious traffic patterns and send alerts to notify the user.
For example, with Snort, you can create traffic rules to detect malicious code. In contrast, packet sniffing tools like Wireshark don’t have an alerts function and you have to identify suspicious activity manually by collecting and filtering packets.
While IDS’s are superior at automating threat detection and response, packet sniffers remain useful for identifying and investigating malicious traffic patterns. In short, both Wireshark and Snort are viable solutions for detecting malicious traffic and protecting your network against attackers.
Best packet sniffing software
If you want to search for other packet sniffing tools to monitor your network, then there are plenty of tools to choose from. We’ve listed some of the top free and paid alternatives to Wireshark below:
SolarWinds Network Performance Monitor is a paid network monitoring tool that comes with a Network Packet Sniffer that you can use to monitor network traffic in real-time through the dashboard. Through the dashboard, you can monitor data and transaction volume by application, and identify bandwidth hogs quickly. It is available on Windows. You can download a 30-day free trial.
Paessler PRTG Network Monitor is a free network monitoring tool that you can use to monitor IP, UDP, and TCP traffic. With the packet sniffer sensor you can monitor IRC, AIM, Citrix, FTP, P2P, DHCP, DNS, ICMP, SNMP, IMAP, POP3, SMTP, NetBIOS, RDP, SSH, VNC, HTTP, HTTPS, and more. It is available for Windows and Mac. You can download the software for free.
ManageEngine NetFlow Analyzer is a paid packet collection tool you can use to monitor network bandwidth consumption. With ManageEngine NetFlow Analyzer you can monitor interface bandwidth and traffic patterns in real-time. Through the Advanced Security Analytics Module, you can view all security events alongside an anomaly count. It is available on Windows and Linux. You can download the free trial.
See our related post on the Best Packet Sniffer.
Best Intrusion Detection Systems
You can read more about IDS services and how they work in Intrusion Detection Systems Explained: Best IDS Software Tools Reviewed. If you haven’t got time to read that report, here is a quick rundown of the three best intrusion detection systems.
SolarWinds Security Event Manager (SEM) is a host-based intrusion detection system but you can easily give it network-based intrusion detection capabilities by feeding it the network security monitoring output of Snort. This is an on-premises package that runs on Windows Server. It will process log messages generated by Unix, Linux, and macOS computers as well as Windows. Try the tool on a 30-day free trial.
2. CrowdStrike Falcon
CrowdStrike Falcon is a cloud-based platform of security tools that work off reports sent up to the cloud server from the one site-based product in the family, Falcon Prevent. The Falcon Prevent service is an endpoint protection system that operates an antimalware and intrusion detection service by looking for anomalous behavior. All malicious activity comes in from the network and so the reports from this tool also give you an insight into network viruses. CrowdStrike offers Falcon Prevent on a 15-day free trial.
3. ManageEngine EventLog Analyzer
ManageEngine EventLog Analyzer is an on-premises package that gathers log data from around the network and analyzes it for signs of intrusion and virus activity. The systems that contribute to the IDS’s data sources include switches, routers, and firewalls, and that gives it input on malicious network activity. The tool installs on Windows Server or Linux and you can get it on a 30-day free trial that has a limit of 2,000 log message sources.
Network virus scanning best practices
Scanning for traditional and network viruses is vital for protecting your infrastructure and preventing malware outbreaks. Being aware of the risks and proactively scanning will give you the best chance of defending yourself against the next generation of online threats. However, there are some best practices you’ll want to bear in mind:
1. Backup your files!
Backing up your files regularly is disaster recovery 101, both for protection against viruses and other issues like system failures or natural disasters. Regularly backing up your files periodically will ensure that your data is protected even if you encounter a persistent virus.
2. Turn off your internet connection
If you find out a device is compromised, one of the first things you should do is turn off your internet. Cutting off the device will stop the compromised system from communicating with external entities so that you can contain the problem and work on restoring the system more effectively.
3. Schedule Regular Scans
Scheduling regular scans is essential for making sure that you continually discover new threats. One-off scans can be good for diagnosing current problems but you’ll miss any security events that take place after you stop scanning. Regularly scanning up will ensure your devices are secure.
4. Make Sure to Follow Up!
Once you’ve run a scan, you’ll need to make sure that you’ve done everything needed to eradicate the threat. Many scanning tools will generate reports that give you information on how to deal with infected files, so following these instructions is a good way to make sure that you implement the necessary changes to protect your system.
Perform a network virus scan to protect important endpoints
While antivirus solutions can’t protect you against every online threat they play an important part in securing your endpoints against some of the most common threats online. Network scanning is a simple way to minimize your exposure to online threats.
Remember to schedule regular scans to make sure that you stay up to date on security risks. Should you find that a system is compromised, cut off the internet, and quarantine the offending software so that you have time to remediate the issue. You also want to make sure that the virus isn’t hiding in your backup files before rebooting the system.
Network Virus Scan FAQs
What is a network virus?
By definition, a "network virus" is a type of fileless malware that moves from computer to computer without saving files on any device but going straight into the operating system. Without a file to scan for, these systems are very difficult to detect because they can only be spotted as network packets and running processes. However, often, when people refer to a “network virus scan” they actually mean a scan that reaches across the network to scan each connected device.
How do you scan a network for a virus?
Scanning network traffic for viruses rather than scanning each endpoint connected to the network involves examining packets that travel around the network. The best security software category for this job is a network-based intrusion detection system (NIDS). This scans packets for known contents that indicate anomalous behavior. NIDS services can spot unauthorized user activity as well as network-bound viruses. | <urn:uuid:988b4593-c5a9-4b77-b35b-d6013de0c2db> | CC-MAIN-2022-40 | https://www.comparitech.com/net-admin/network-virus-scan/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00463.warc.gz | en | 0.915035 | 2,930 | 2.734375 | 3 |
Google Tests Post-Quantum Crypto
Quantum Computing Will Shred Current Crypto Systems, Experts Warn
Quantum computing will pose a grave threat to the encryption algorithms now protecting everything from online banking and e-commerce to email and instant messaging. That's because today's algorithms are largely geared toward making the calculation of the decryption keys for scrambled messages so computationally intensive that the task is effectively impossible. Quantum computers, however, will likely one day make such calculations much more efficient, thus threatening the security of any bit of information that has ever been encrypted.
While quantum computers are in their infancy, Google has taken a first step toward adapting to the post-quantum cryptography world by launching a two-year experiment that incorporates into its Chrome browser a key exchange based on the Ring Learning with Errors (Ring-LWE) problem, implemented for OpenSSL. Google's approach is based on a scheme known as "New Hope" that researchers note is designed to provide "post-quantum security for TLS."
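New Hope itself is a Ring-LWE key exchange whose reconciliation step is beyond a short sketch, but the hardness assumption underneath it can be illustrated with a toy Regev-style LWE encryption of a single bit: the public key hides the secret vector behind small random noise, and recovering the secret from noisy samples is believed to be hard even for quantum computers. The parameters below are illustrative only and far too small for any real security.

```python
import random

# Toy Regev-style LWE encryption of one bit. Illustrative only:
# these parameters are far too small for real-world security.
q = 97   # modulus
n = 4    # secret dimension
m = 20   # number of public samples

def keygen():
    s = [random.randrange(q) for _ in range(n)]                  # secret key
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]            # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)   # (A, b) is the public key; s stays hidden by the noise e

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m) if random.random() < 0.5]           # random subset
    c1 = [sum(A[i][j] for i in S) % q for j in range(n)]
    c2 = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    d = (c2 - sum(c1[j] * s[j] for j in range(n))) % q           # = bit*q/2 + noise
    return 1 if min(d, q - d) > q // 4 else 0                    # round away the noise

s, pk = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(pk, bit)) == bit
print("toy LWE round-trip OK")
```

The accumulated noise stays below q/4 with these parameters, so decryption always rounds to the correct bit; without knowing `s`, the ciphertext looks like random samples.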
OpenSSL is the general purpose cryptography library that provides an open source implementation of the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, which are used to secure internet communications.
In theory, this new approach will be resistant to key calculation by quantum computers that haven't even been built yet. The tests will be carried out over the next couple of years for a small fraction of connections between Google's domains and people using its Canary version of Chrome, which the company uses to test new features. Users can tell the new key exchange is in use if they see the phrase "CECPQ1_ECDSA" in Chrome's security panel.
This is a Test
Security experts say that knowledge about quantum-computing-related crypto risks isn't new, but note that there's been a recent surge in enthusiasm for solving the problem. "Although people such as me have been talking about the threat to public key cryptography from quantum computers for years, and the alternatives that could be used, it seems that when Google announced that they were experimenting with a post-quantum crypto scheme in Chrome, it caught people's imagination," says Alan Woodward, a computer science professor at the University of Surrey who also serves as a cybersecurity consultant for the EU's law enforcement intelligence agency, known as Europol, in a blog post. "Perhaps this marks the beginning of post-quantum crypto entering the mainstream?"
"Interesting that Google plumping for New Hope crypto as their post quantum scheme https://t.co/6yrEvf2wcV" — Alan Woodward (@ProfWoodward), July 8, 2016
Google, meanwhile, cautions that its Chrome test qualifies as bleeding-edge experimentation and says that it may not prove to be secure. Because of that, Google will continue to employ the widely used elliptic curve cryptography key-exchange algorithm in Chrome. "The post-quantum algorithm might turn out to be breakable even with today's computers, in which case the elliptic-curve algorithm will still provide the best security that today's technology can offer," writes Matt Braithwaite, a Google software engineer. "Alternatively, if the post-quantum algorithm turns out to be secure then it'll protect the connection even against a future, quantum computer."
Seeking Future-Proof HTTPS Security
Most web services are secured using "https," which means the service is using a digital certificate that enables SSL or its successor, TLS. Those certificates use either RSA or ECC keys; ECC keys are shorter than RSA keys of equivalent strength, which reduces overhead. Although efforts have been underway to increase the length of keys used to secure SSL/TLS transactions, encryption experts predict that such upgrades won't be secure forever.
As noted, current public key cryptography schemes are designed to make it computationally impractical to calculate the decryption keys. The RSA algorithm, for example, creates a public key that is the product of two very large prime numbers. Factoring an RSA public key would, in theory, be much faster using quantum computers. Conventional computers use binary values consisting of a 0 or a 1. Quantum computers use quantum bits - qubits - which can exist in a superposition of 0 and 1 until their state is measured, allowing for much faster parallel calculations.
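The asymmetry RSA relies on - multiplying two primes is trivial while recovering them from their product is brute force for a classical machine - can be sketched with a toy example (real RSA moduli are 2048 bits or longer; the numbers here are illustrative only):

```python
# Toy illustration of why factoring protects RSA: building the
# modulus is one multiplication, but a classical attacker must
# search for a divisor. At real key sizes (2048+ bits) trial
# division is hopeless; a quantum factoring algorithm would
# change that balance.

def make_modulus(p, q):
    """One cheap multiplication produces the public modulus."""
    return p * q

def factor(n):
    """Naive trial division - fine for toy numbers, infeasible at scale."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

n = make_modulus(104723, 104729)  # two small primes
print(factor(n))                  # recovers (104723, 104729)
```

Even at this toy scale, factoring already takes vastly more work than the single multiplication that built the key; every extra digit widens that gap for classical machines.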
Quantum Computing is Coming, NSA Warns
The era of quantum computing isn't quite here yet, but there are commercial offerings from companies such as D-Wave. The worry is that once technologists master qubits, trouble will quickly ensue. Those worries were fueled in part by the National Security Agency, which warned in August 2015 that today's algorithms won't hold up against quantum computing.
"Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, which has made it clear that elliptic curve cryptography is not the long-term solution many once hoped it would be," according to the NSA's Information Assurance Directorate. "Thus, we have been obligated to update our strategy."
The agency is investigating various post-quantum crypto strategies, and it notes in a related CNSA Suite and Quantum Computing FAQ released in January that it could take two decades for a new alternative to be deployed across national security systems.
Historical Data at Risk
Once quantum computers can be practically applied to cryptographic systems, any encrypted content ever created - from yesterday, back to the birth of the commercial web in the early 1990s - could be at risk of being cracked. "This means that even encrypted information sitting in a database for 25 years, for example, will be subject to discovery by those with access to quantum computing platforms," according to a "Quantum-Safe Cryptography and Security" white paper published in 2014 by the European Telecommunications Standards Institute. "Without quantum-safe encryption, everything that has been transmitted, or will ever be transmitted, over a network is vulnerable to eavesdropping and public disclosure."
Many security experts suspect that intelligence agencies such as the NSA and Britain's GCHQ have been collecting enormous volumes of encrypted web traffic, waiting for the day when decrypting it becomes feasible. That's why Google's experiment is important. If it proves to be effective, it means that communications could be safer from quantum computing advances - at least for a while.
Surrey University's Woodward, meanwhile, recommends that anyone who has a hand in crypto systems come up to speed on both New Hope as well as Google's implementation. "All of this detail is freely available - although often spread around on the web - and I would encourage those involved in public key cryptography to start to look at it as it is coming to an infrastructure near you in the very near future."
Executive Editor Mathew J. Schwartz also contributed to this report. | <urn:uuid:518c99dc-0bae-4bbc-b87e-70bc08a4820f> | CC-MAIN-2022-40 | https://www.bankinfosecurity.com/google-adds-quantum-computing-armor-to-chrome-a-9253 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00463.warc.gz | en | 0.951789 | 1,391 | 2.890625 | 3 |
Poor data quality is estimated to cost organizations an average of $12.8 million per year. All methods of data governance are vital to combating this rising expense. While metadata has always been recognized as a critical aspect of an organization’s data governance strategy, it’s never attracted as much attention as flashy buzzwords such as artificial intelligence or augmented analytics. Metadata has previously been viewed as boring but inarguably essential. With the increasing complexity of data volumes, though, metadata management is now on the rise.
According to Gartner’s recent predictions for 2024, organizations that use active metadata to enrich their data will reduce time to integrated data by 50% and increase the productivity of their data teams by 20%. Let’s take a deeper look into the importance of metadata management and its critical factors for an organization.
What is Metadata?
Metadata is data that summarizes information about other data. In even shorter terms, metadata is data about other data. While this might sound like some form of data inception, metadata is vital to an organization’s understanding of the data itself and the ease of search when looking for specific information.
Think of metadata as the answer to the who, what, when, where, and why behind an organization’s data. When was this data created? Where did this data come from? Who is using this data? Why are we continuing to store this information?
There are many types of metadata, and each is helpful when searching for information through various key identifiers. The three primary forms of metadata are:
- Structural – This form of metadata refers to how the information is structured and organized. Structural metadata is key to determining the relationship between components and how they are stored.
- Descriptive – This is the type of data that presents detailed information on the contents of data. If you were looking for a particular book or research paper, for example, this would be information details such as the title, author name, and published date. Descriptive metadata is the data that’s used to search and locate desired resources.
- Administrative – Administrative metadata’s purpose is to help determine how the data should be managed. This metadata details the technical aspects that assist in managing the data. This form of data will indicate things such as file type, how it was created, and who has access to it.
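As a concrete sketch, the three forms can be pictured as sections of a single catalog record for one dataset (the field names below are invented for illustration, not drawn from any particular standard):

```python
# Hypothetical catalog record showing where each form of metadata lives.
record = {
    "structural": {            # how the data is organized and related
        "format": "parquet",
        "partitioned_by": ["year", "month"],
        "parent_dataset": "sales_raw",
    },
    "descriptive": {           # what the data is about - used for search
        "title": "Quarterly sales, EMEA",
        "author": "data-eng team",
        "created": "2021-04-01",
    },
    "administrative": {        # how the data should be managed
        "file_type": "parquet",
        "access": ["analytics", "finance"],
        "retention_days": 365,
    },
}

def searchable_fields(rec):
    """Search and discovery run on the descriptive section."""
    return sorted(rec["descriptive"])

print(searchable_fields(record))  # -> ['author', 'created', 'title']
```

Keeping the three concerns separate is what lets search tools index the descriptive fields while governance tooling enforces the administrative ones.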
What is Metadata Management?
Metadata management is how metadata and its various forms are managed through processes, administrative rules, and systems to improve the efficiency and accessibility of information. This form of management is what allows data to easily be tracked and defined across organizations.
Why is Metadata Management Important?
Data is becoming increasingly complex as volumes of information continue to rise. This complexity highlights the need for robust data governance practices that maximize the value of data assets and minimize the associated risks to organizational efficiency.
Metadata management is significant to any data governance strategy for a number of reasons. Key benefits of implementing metadata processes include:
- Lowered costs associated with managing data
- Increases ease of access and discovery of specific data
- Better understanding of data lineage and data heritage
- Faster data integration and IT productivity
Where is this data coming from?
Show me the data! Not only does metadata management assist with data discovery, but it also helps companies determine the source of their data and where it ultimately came from. Metadata also makes alterations and changes to data easier to track. Altering sourcing strategies or individual tables can have significant impacts on reports created downstream. When using data to drive a major company decision or a new strategy, executives are inevitably going to ask where the numbers are coming from. Metadata management is what directs the breadcrumb trail back to the source.
With hundreds of reports and data volumes constantly increasing, it can be extremely difficult to locate this type of information amongst what seems to be an organizational sea of data. Without the proper tools and management practices in place, answering these types of questions can seem like searching for the data needle in a haystack. This illuminates the importance of metadata management in an organization’s data governance strategy.
Metadata Management vs. Master Data Management
This practice of managing data is not to be confused with Master Data Management. The two have similar end goals when it comes to improving the capability and administration of digital assets, but the practices differ in their approaches and structural goals. Master data management is more technically weighted, streamlining the integration of data systems, while metadata management focuses on simplifying the use and access of data across systems.
Metadata management is by no means new to the data landscape. Each organization’s use case of metadata will vary and evolve over time but the point of proper management remains the same. With greater data volumes being collected by companies than ever before, metadata is becoming more and more critical to managing data in an organized and structured way, hence its rising importance to one’s data management strategy. | <urn:uuid:a6a5de99-52ad-4882-b825-eb39542a36d9> | CC-MAIN-2022-40 | https://www.inzata.com/the-fundamental-guide-to-metadata-management/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00463.warc.gz | en | 0.93027 | 1,034 | 3.125 | 3 |
The tech world today is talking about three important terms: Artificial Intelligence, Machine Learning and Deep Learning. These names often create confusion. Many think the three terms are one and the same, when in fact there are significant differences between them. They are often used interchangeably, but that isn't the case.
So, what exactly is the distinction between the three – Artificial Intelligence, Machine Learning and Deep Learning? To visualize the difference between them first try to picture the relationship between the three terms.
Visualize them as three concentric circles where Deep Learning is a subset of Machine Learning, which in turn is a subset of Artificial Intelligence. Artificial Intelligence, the 'idea', popped up first; Machine Learning flourished later; and finally Deep Learning, which fits within both, arrived as the breakthrough that is driving the AI boom.
Let’s dive in:
Artificial Intelligence (AI): AI is "Machine exhibiting Human Intelligence." Artificial Intelligence or AI is the broad and advanced term for computer intelligence. The Merriam-Webster dictionary defines it as "a branch of computer science dealing with the simulation of intelligent behavior in computers" or "the capability of a machine to imitate intelligent human behavior."
Artificial intelligence can refer to anything from a computer program playing rummy or a game of chess, to Facebook recognizing the picture of a friend before you manually tag them, to voice recognition inventions like Google Home or Amazon Echo - powerful speakers and home assistants which answer human questions or commands.
If you go deeper, AI can be categorized into three broader terms - Narrow AI, Artificial General Intelligence (AGI) and Superintelligent AI. Narrow AI is technology that performs a specific task as well as, or better than, humans themselves can. Image classification on Pinterest is one example of Narrow AI technologies in practice. Don't you think that these technologies, interestingly, exhibit some dimensions of human intelligence? If yes, then how?
The ‘how’ part takes us to the next concentric circle and the space of ‘Machine Learning’.
Machine Learning (ML): ML is "The construction of algorithms that help achieve Artificial Intelligence." Machine Learning is a subset of Artificial Intelligence. It is one of the most promising AI techniques: it takes in data, learns from it (builds an algorithm) and predicts results. The whole premise of ML is that the system trains itself, using algorithms and large amounts of data, to perform tasks.
What is Not Machine Learning? – Hand-coded software that works from specific instructions to perform a specific task.
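The distinction can be made concrete in a few lines: the classifier below contains no rules about its labels at all - its behaviour comes entirely from the examples it is given (a minimal nearest-neighbour sketch, not any particular production system):

```python
# Minimal 1-nearest-neighbour classifier: no hand-coded rules;
# swapping the training data changes the behaviour, not the code.
def predict(train, point):
    """Label a point by copying the label of its closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(train, key=lambda example: dist(example[0], point))
    return nearest[1]

# Toy training data: (feature vector, label)
train = [
    ((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"), ((9.1, 8.7), "large"),
]
print(predict(train, (1.1, 0.9)))  # -> small
print(predict(train, (8.5, 9.2)))  # -> large
```

Hand-coded software would need an explicit rule for every case; here, adding more labeled examples is all it takes to handle new ones.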
Large sets of data help ML outclass earlier AI approaches to facial, object, image and speech recognition. A Machine Learning system works, or makes predictions, based on patterns. Computer vision is, to date, one of Machine Learning's finest application areas. However, it has traditionally required hand-coded classifiers like edge detection to get the task done, producing results which are good but not something that could beat human intelligence.
Deep Learning: DL is "A subset of Machine Learning." Deep Learning is a technique for implementing Machine Learning. It delivers a new level of accuracy for many important problems like recommender systems, sound recognition, etc. It uses a set of algorithms inspired by the structure and function of the brain, called "neural networks".
Deep Learning combines Machine Learning techniques with neural networks, helping to inform human decisions. It requires huge sets of data and large numbers of parameters, which makes it expensive.
A deep learning algorithm could learn what a crocodile looks like. It may use a huge number of resources (datasets of crocodile images) to understand how a crocodile differs from an alligator.
A device with Deep Learning capabilities can scan humongous amounts of data (a fruit’s shape, its color, size, season, origin, etc.) to define the difference between an Orange and an Apple.
Two major differences between ML and DL:
- Deep Learning automatically discovers the features that are important for classification, whereas in Machine Learning these features must be specified manually;
- Unlike Machine Learning, Deep Learning requires significantly larger volumes of data to work well, and thereby requires heavy, high-end machines.
Of course, the differences between Artificial Intelligence, Machine Learning and Deep Learning are subtle and not as obvious as that of determining a difference between two fruits! This is because Deep Learning is the next evolution of Machine Learning! And Machine Learning is one of the ways to achieve artificial intelligence!
Why don’t you give us a shout here so that we can demonstrate how your enterprise can use Machine Learning & AI to create models that reveal insights for predictive risk mitigation and faster response to challenge varied business situations. Click here to explore our AI/ML solutions. | <urn:uuid:dfdbb50b-d1dc-4f6f-b39e-547889639ac2> | CC-MAIN-2022-40 | https://www.cloudmoyo.com/blogs/the-difference-between-artificial-intelligence-machine-learning-and-deep-learning/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00463.warc.gz | en | 0.92293 | 1,005 | 3.609375 | 4 |
5G is the next generation of mobile internet connectivity that will power businesses, homes, and cities. It brings together existing services, adding new technologies that focus on the applications rather than the equipment that links them together. It also provides the ability to combine communication links and technologies in new ways and leverage new bands of the spectrum, such as the powerful mmWave bands, which can carry huge quantities of information.
What is 5G mmWave?
As mentioned, 5G opens up wider bandwidths and leverages new bands of spectrum that were not being used to their full potential before - such as mmWave. Millimetre waves have very short wavelengths, ranging between 10mm and 1mm, and are produced by very high frequency radios. The wavelengths are small but powerful and can carry huge quantities of information. With expert engineering, they can provide reliable connectivity with fibre-equivalent data speeds of up to 40Gbps.
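Those wavelength figures follow directly from the relation λ = c / f; the 57-71 GHz band discussed below corresponds to roughly 5.3 mm down to 4.2 mm:

```python
# Wavelength from frequency: lambda = c / f.
C = 299_792_458  # speed of light in m/s

def wavelength_mm(freq_ghz):
    return C / (freq_ghz * 1e9) * 1000  # metres -> millimetres

for f in (30, 57, 71, 300):
    print(f"{f} GHz -> {wavelength_mm(f):.2f} mm")
# 30 GHz (~10 mm) and 300 GHz (~1 mm) bound the mmWave band;
# 57-71 GHz sits at roughly 5.3-4.2 mm.
```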
5G mmWave enables significantly faster and more reliable communication networks, which in turn enable a whole host of existing and new use cases that were previously limited by speed, delay, reliability, and cost. This includes applications for high-speed transport, remote healthcare, manufacturing, defence and entertainment.
What is the unlicensed mmWave band?
The unlicensed 57–71 GHz frequency band is available in North America and 59-63 GHz is available globally, with the upper bands above 64 GHz being less susceptible to oxygen absorption. Blu Wireless's technology operates across the full license-exempt band, 57-71 GHz, including 64-71 GHz, which crucially lies outside the oxygen absorption peak at 60 GHz and is therefore well suited for long-range (1 km or more) wireless links. Our 5G mmWave products are implemented within a small (300mm & 4kg), low power (50W) form factor unit, with integrated low-profile phased array antenna. These can easily be mounted on existing infrastructure or vehicles and support Point-to-Point, Point-to-Multi-Point, and mesh network architectures to provide maximum flexibility for a broad variety of network deployments and requirements.
Range and data rate at mmWave frequencies are functions of propagation conditions, antenna, radio and baseband performance. Range can be further extended by operation above 65 GHz as the additional attenuation due to oxygen absorption falls from 15 dB/km to less than 1 dB/km. Our equipment has solved the challenge of range associated with mmWave and can deliver carrier-grade performance at ranges up to 4 km.
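The benefit of moving above 65 GHz compounds linearly with distance, as a back-of-envelope calculation shows (oxygen absorption only; a real link budget would also include free-space path loss, antenna gains and rain fade):

```python
# Extra loss from oxygen absorption alone, in dB, over a link.
def absorption_loss_db(atten_db_per_km, range_km):
    return atten_db_per_km * range_km

# Near the 60 GHz peak (~15 dB/km), a 4 km link gives up 60 dB
# to oxygen alone; above 65 GHz (<1 dB/km) the same link loses
# under 4 dB - the difference between impractical and practical.
print(absorption_loss_db(15, 4))  # -> 60
print(absorption_loss_db(1, 4))   # -> 4
```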
Furthermore, our 5G mmWave equipment delivers robust quality of service in the presence of uncontrolled interference (both for co-channel and adjacent channel) in unlicensed conditions where multiple un-coordinated networks may be operating. Line of Sight operation for DN-DN (distribution node – distribution node) and DN-CN (distribution node -client node) connections is implemented through dynamic mesh beamforming routing across a DN network.
The factors that set 5G mmWave apart from 4G and fibre
This technology is self-aligning and self-optimising. Gone are the days of time-consuming installations and costly maintenance, as mmWave is seamless, robust, and wireless, meaning expensive digging costs can be avoided, and it can be affordably deployed on already existing lampposts.
Moreover, the new protocols for the mmWave spectrum have more efficient ways of managing all the users in the bands while improving the latency. This key advantage can also improve the overall reliability of wireless connections. Setting up a link quickly, blasting data at a high rate and then shutting down is the most power-efficient way of transmitting data.
Another benefit of 5G mmWave is the openness of the unlicensed band, which has the potential for start-ups and new entrants to harness and improve the technology. Using standard hardware, whether as a design block, a chip, a module or a complete system, allows them to build new types of equipment that operate at speeds that were unimaginable a decade ago. A new generation of start-ups is raising millions of dollars to develop systems that take advantage of mmWave technologies and 5G speeds, and the unlicensed band is a key element in many of their business plans.
At the same time, it’s important to stress that mmWave can very much support carrier grade connections. Our equipment has been designed to provide low-cost carrier-grade backhaul to 4G or 5G small cells. As a result, Internet Service Providers are able to offer ‘neutral hosting’ to mobile operators for either addressing ‘not spots’ or facilitating the deployment of ultra-dense small cells or Wi-Fi hotspots, ensuring better coverage for everyone.
Just as 3G connections have remained in the 4G age, 4G will remain as 5G connections become more mainstream. mmWave technology can even be leveraged to improve 4G connections and broadband connections.
The future of 5G mmWave
In the future, there will be a range of services and applications that will benefit from, and come to rely on, 5G mmWave technology. We will see massive data uploads for data-intense applications, like public transport or aircraft, for instance. We will also see networks with a virtual core, as well as virtualised RAN moving to Open RAN. Advancements in these areas will allow the market to open up to a broader number of vendors, offering customers more choice and flexibility.
To learn more about our 5G mmWave technology and how it can enable your communication networks, get in touch with us today. | <urn:uuid:3718407b-93f6-478d-ad3c-a2797365abfc> | CC-MAIN-2022-40 | https://www.bluwireless.com/insight/all-you-need-to-know-about-5g-mmwave/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00663.warc.gz | en | 0.940172 | 1,137 | 2.828125 | 3 |
Spawned in the mainframe days of computing, grid today is being taken out of the realms of academia and research and being used by enterprises in an attempt to ease the process of homogenizing heterogeneous and siloed compute environments.
Because grid computing puts a layer of virtualization, or abstraction, between applications and the operating systems (OS) those applications run on, it can be used to tie together all a corporation's CPUs and use them for compute-intensive application runs without the need for stacks and stacks of new hardware.
And because the grid simply looks for CPU cycles that are made available to the grid through open grid services architecture (OGSA) APIs, applications simply interact with the CPU via the grid's abstraction layer regardless of OS, said Tom Hawk, IBM's general manager of Grid Computing. In this way, Windows applications can run on Unix and Unix applications can run on Windows and so on.
“We’re exploiting existing infrastructure through some fairly sophisticated algorithmic scheduling functions — knowing which machines are available, pooling machines into a broader grouping of capacity on our way towards exploiting those open APIs so that we really, truly do separate the application from the infrastructure,” he said.
Basically, grid can be thought of as similar to the load balancing of a single server but extended to all the computers in the enterprise. Everything from the lowliest PC to the corporate mainframe can be tied together in a virtualized environment that allows applications to run on disparate operating systems, said Hawk.
“The way I like to think about it really simply is the internet and TCP/IP allow computers to communicate with each other over disparate networks,” he said. “Grid computing allows those computers to work together on a common problem using a common open standards API.”
Some companies in the insurance industry, for example, are utilizing grid to cut the run-time of actuarial programs from hours to minutes, allowing this group to use risk analysis and exposure information many times a day verses just once. In one example, IBM was able to cut a 22-hour run-time down to just 20 minutes by grid enabling the application, said Hawk.
But any large, compute-intensive application, such as those used in aerospace or the auto industry to model events or the life sciences industry, can be (and are) grid-enabled to take advantage of a company’s unused CPU cycles, said Ed Ryan, vice president of products for perhaps the oldest commercial grid company, Platform Computing. By doing so, a company can reduce its hardware expenditures while raising productivity levels through the faster analysis and retrieval of critical information.
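The core mechanic - splitting a compute-intensive job into chunks and fanning them out across whatever CPUs are free - can be sketched on a single machine with a process pool (a stand-in for a real grid scheduler, which would dispatch across many heterogeneous hosts):

```python
# Single-machine stand-in for a grid: a pool of worker processes
# chews through independent chunks of a large job in parallel,
# one worker per available CPU core.
from multiprocessing import Pool

def risk_model(scenario):
    """Placeholder for a compute-intensive task, e.g. one actuarial run."""
    return sum(i * i for i in range(scenario)) % 97

if __name__ == "__main__":
    scenarios = list(range(10_000, 10_016))
    with Pool() as pool:
        results = pool.map(risk_model, scenarios)
    print(len(results))  # 16 independent runs, computed in parallel
```

A grid extends this pattern beyond one box: the scheduler plays the role of the pool, and idle machines across the enterprise play the role of the worker processes.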
By utilizing the compute resources of the entire enterprise, CPU downtime is put to productive work running programs that once had to wait until nightfall before enough CPU time was available. Servers, which typically have a very low CPU utilization rate, can be harnessed to run more applications more frequently and faster. But this can get addictive, said Ryan.
“Our biggest customers go into this to drive up their asset utilization and what ends up happening is their end-user customers get hooked on having more compute power to solve their problems,” he said.
What this means to the average CIO, who typically has stacks of hardware requests waiting for attention in the inbox, is they can provide this power while throwing most of the new hardware requests into the circular file.
Even data retrieval and integration is being targeted by at least one firm for grid enablement. Avaki is taking grid to a new level by using it as an enterprise information integration (EII) engine that can either work with or bypass altogether current EII efforts, said Craig Muzilla, vice president of Strategic Marketing for Avaki.
In fact, Avaki’s founder is confident grid will become so pervasive in the coming years it will be commoditized as just a standard part of any operating system. That is why Dr. Andrew Grimshaw founded Avaki as a EII vendor.
“For the CPU cycles it’s maybe a little bit more straightforward,” said Muzilla. “Instead of having to go buy more servers to speed things up or do analysis faster, to run the application faster I can go harvest the untapped CPU cycles. We think eventually that kind of compute grid technology will be embedded in the operating system so we don’t think long-term it’s that attractive for ISVs.”
Grid also plays right into the hands of companies looking to implement on-demand, utility or service-orientated architectures (SOA) since it enables the integration of disparate, heterogeneous compute resources by its very nature. Therefore, on-demand environments can piggy-back on the grid to achieve the integration and productivity promises of those methodologies, said IBM’s Hawk.
“Right now, I’d say the No. 1 reason customers are deploying this technology is to gain resolution or to fix specific business problems they’re having around either computing throughput or customer service,” he said. “The real cool thing here, long-term, is about integration and about collaboration and that’s why I keep harping on this concept of productivity.” | <urn:uuid:aad63f5b-5337-4a4a-9aa4-837485498541> | CC-MAIN-2022-40 | https://www.datamation.com/erp/making-the-case-for-grid/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00663.warc.gz | en | 0.934712 | 1,089 | 2.65625 | 3 |
For universities and researchers across the country, access to certain National Oceanic and Atmospheric Administration (NOAA) weather data could be prohibitively expensive.
A new partnership between George Mason University in Fairfax, Va., and Ligado Networks is hoping to demonstrate the feasibility of delivering NOAA real-time weather data to users across the country at a lower cost, using a cloud-based network.
“The network we’ve developed will give the university unprecedented access to real-time public weather data, making it possible for the school’s weather research programs to better study our atmosphere and develop useful tools that will benefit the broader American public,” said Doug Smith, Ligado Networks’ president and chief executive officer.
The partnership will help students and scientists with researching, tracking, and predicting the weather.
“Extreme weather events have a huge impact on people, including their families, homes and businesses,” said Deborah Crawford, Mason’s vice president for research. “Faster and more accurate climate modeling and weather prediction will help people and organizations–including emergency responders–better prepare for and respond more quickly to weather-related events such as tornadoes, floods and wildfires, saving lives and livelihoods.”
As part of the partnership, the university and Ligado will compare the delivery of weather data from existing satellite systems with the new cloud-based content delivery network that Ligado has developed. As part of the comparison, George Mason and Ligado will examine speed and reliability of the data delivery to users nationwide. The new partnership also includes reviewing the accuracy of existing weather forecasting models and advance detection of meteorological conditions, with the hope of improving the models. The new information extraction tools will be available, for free, to the public.
“This type of network could also be expanded so schools, libraries, and the general public have access to NOAA data, which will go a long way to advancing science, technology, engineering, and mathematics education,” Smith said. “It’s hard to imagine all that may be possible by opening up access to this data, and together with Mason, we look forward to exploring those possibilities over the coming years.” | <urn:uuid:51e222b9-ec55-4795-873d-37e0c090f6e2> | CC-MAIN-2022-40 | https://origin.meritalk.com/articles/cloud-helps-george-mason-university-deliver-weather-data-in-real-time/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00663.warc.gz | en | 0.931237 | 455 | 2.734375 | 3 |
This article was originally published on T&D World on April 03, 2020.
Utilities are shifting into a new posture to prevent the spread of coronavirus, but there are other pathogens to worry about too: Namely malware, phishing and ransomware. In fact, since the pandemic led utilities to change certain workplace and jobsite policies, phishing attacks have gone up.
The Tennessee Valley Authority reported an increase of 130% in phishing attacks, for example. Aviram Jenik, CEO and Co-Founder at Beyond Security, said COVID-19 and the response to it have introduced some new vulnerabilities that cyber attackers are exploiting.
"First, because we are all bombarded with COVID-19 related information, it is easy for an attacker to send malware or a phishing email pretending to be information relating to COVID-19. There are so many emails being sent and received with such information that it is hard to notice another one," Jenik said.
With almost every company sending employees COVID-19 related guidelines, Jenik said, it becomes easy for an attacker to send a fake email pretending to be, for example, a link to a COVID-19 information portal where they can capture the victim's credentials.
The increase in online traffic that accompanied stay-home orders and employees working from home also helps to conceal cyber criminal activity. People working from home are themselves susceptible to cyber attack, Jenik said.
“Our home networks are not as secure as the corporate network, not to mention the fact that we share our devices with our kids, who are prone to downloading malware from suspicious websites,” Jenik said. “In addition, working from home often requires a certain amount of access to the office network which makes it a convenient stepping stone for attackers who can attack and compromise a home computer easily, and use the VPN connection to hop from the home network to the corporate network.”
To boot, cyber attacks often count on people not paying attention or being distracted. Acting without thinking things through is common in disaster situations, Jenik said.
With phishing, in which a fake email or website funnels your information to the scammer, the primary objective is to gain your credentials. From there, your logins, passwords and other information can be traded on the dark web to those who might use it to mount an attack on the organization you represent. The coronavirus pandemic has created an environment that is friendly to these kinds of break-ins.
“Especially with everyone working from home, more sensitive servers are accessible remotely and obtaining these credentials is more valuable than ever,” Jenik said.
With many utilities and grid operators working with skeleton crews to maintain social distancing, it is easier for attackers to overwhelm the defenders.
“This does not have to be a noisy attack; it could be attackers subtly trying to attack server after server, knowing that most of the admins are not there to see the attacks. While the defenders stay home, the attackers keep on attacking,” Jenik said.
In the age of COVID-19, Jenik said automation is more important than ever to stay safe. Companies must learn to do more with less, and automate security testing, log filtering and manage security events autonomously whenever possible.
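As a minimal illustration of what "automating log filtering" can mean in practice, the script below scans SSH-style auth-log lines and flags source addresses with repeated failed logins. The log format and the five-attempt threshold are assumptions for the example, not any particular product's behavior:

```python
from collections import Counter

def flag_brute_force_sources(log_lines, threshold=5):
    """Count 'Failed password' events per source IP and return,
    sorted, any source at or above the threshold. The log format
    mimics a typical sshd auth log, but is only illustrative."""
    failures = Counter()
    for line in log_lines:
        if "Failed password" in line and "from " in line:
            # Take the token right after 'from' as the source IP.
            ip = line.split("from ")[1].split()[0]
            failures[ip] += 1
    return sorted(ip for ip, n in failures.items() if n >= threshold)

sample = (
    ["sshd[101]: Failed password for admin from 203.0.113.7 port 22"] * 6
    + ["sshd[102]: Failed password for ops from 198.51.100.2 port 22"] * 2
    + ["sshd[103]: Accepted password for ops from 198.51.100.2 port 22"]
)
print(flag_brute_force_sources(sample))  # ['203.0.113.7']
```

A script like this can run on a schedule and feed a ticketing or alerting system, so a skeleton crew only reviews the sources that stand out instead of reading raw logs.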
There is no easy answer for how utilities can face up to this worldwide crisis, but it would benefit utilities to use this time as an opportunity to improve their security tactics, Jenik said.
“What have you done manually that you can now automate? How can you do the same with less people or less work hours? Making these optimizations today will help make you stronger to weather the virtual storm and will make you more efficient once things bounce back. Crisis builds character in organizations as well as people,” Jenik said.
Mobile Technology is Shaping Humanity’s Future
Human beings have come a long way over the centuries.
Throughout time, we’ve always looked for the most convenient and reliable way to communicate – whether it was through drums and horns, smoke signals, letters, or even in-person conversation. The rise of the digital world has given way to better opportunities for communication than ever before.
Today, we can engage in conversations with people around the world, accessing real-time audio, video, and even text messaging at the touch of a button. Technology has played a pivotal role in making society a better place. The ability to access that technology via mobile devices like tablets and smartphones has changed the way that we interact with others, complete tasks, and even perform at work.
According to Pew Research, more than 5 billion people around the world have a mobile device, and over half of those devices are smartphones. We’re living in an environment where it’s genuinely possible to be always on, and constantly connected. The question is, what does that mean to humanity as we know it? How does mobile technology serve communities across the globe?
How Mobile Technology Transformed the Planet
The chances are that your smartphone is an everyday part of your toolkit – a lightweight device that you carry with you wherever you go – whether you’re at home or at work. Despite being small enough to fit in your pocket, and lightweight enough that you’ll barely notice it throughout the day, the smartphone has played a monumental role in shaping human interactions throughout the 21st century.
When the iPhone launched in 2007, we had no idea how significantly the communication landscape was about to change. Mobile devices suddenly became more than just phones or reading devices. Today, our devices are catch-all platforms for education, communication, and collaboration, with functionality that’s continually evolving and improving.
The evolution of mobile technology means that everyone now has access to a pocket-sized PC – an environment that they can turn to for information, assistance, and even emergency support. We’re beginning to discover just how valuable the smartphone can be when it comes to things like healthcare and emergency services.
The Role of Mobile Technology in Healthcare
Mobile phones allow for easy communication between friends, coworkers, and family members at a moment’s notice. Not only can you connect with your tribe through voice with your smartphone, but you can also reach out through video, text message, and social media too. While this reliable access to various forms of communication has many benefits to offer, one of the areas where it can provide the most value is in healthcare.
The rise of concepts like “Telemedicine,” which creates digital portals where doctors and patients can communicate at a distance, has transformed the way we think about healthcare. Hospital staff can provide diagnosis and treatment to support people in rural parts of a country without asking them to visit a treatment center. Healthcare companies can reduce the costs associated with keeping teams connected, no matter how globally dispersed healthcare experts may be.
Healthcare portals even allow patients to communicate with physicians via their smartphones, providing instant updates about their condition, complete with information taken from wearable devices that monitor things like heart rate, blood pressure, and more. Qualcomm recently partnered with a healthcare program in Arizona to use mobile technology to monitor pulmonary and cardiac patients. The patients wear a device connected to a mobile application. This application collects biometric data and sends that information to a physician so that the doctor can provide consistent, informed advice on how to manage a condition. The service also allows patients to set appointments and communicate with care managers through their devices, leading to significantly fewer hospitalizations among the patients monitored.
By allowing physicians to work more effectively with patients in a remote environment, telemedicine solutions can provide a range of benefits for those in need of support. For instance, a mobile medical strategy can help patients communicate clearly with doctors who don't speak the same language, through AI-driven natural language processing and translation apps. Telemedicine can also protect people in environments where visiting a doctor for certain reasons would cause social stigma, by allowing them to set up video conferences with the specialist they need.
From an economic perspective, telemedicine and mobile technology in the healthcare environment can also reduce costs associated with connecting physicians and patients. While telemedicine and apps may never replace trips to a doctor’s office or hospital entirely, this mode of treatment has already shown significant promise in reducing treatment costs for patients who suffer from chronic illnesses, both mental and physical.
Using Mobile Technology in Emergency Conditions
Mobile technology also has a significant role to play when it comes to managing emergencies. For years, we’ve relied on smartphones to ensure that we can access the help that we need when our car breaks down, or something goes wrong. Some mobile phone companies even provide panic buttons and GPS tracking devices with their phones, which makes it easier for responders to reach those in danger.
Going forward, mobile technology will continue to play a crucial role in helping hospital staff, Red Cross volunteers, and specialists provide support during natural disasters and emergencies. Collaborative applications like Deltapath's inTeam can even help first responders stay connected and coordinated as they assist injured people who need help. With push-to-talk services on communication apps, people in areas without internet connections will still be able to collaborate with their team members. This makes communication more efficient in the emergency environment while keeping costs low for healthcare companies.
As new technology like 5G connectivity, artificial intelligence, and IoT connected devices continue to deliver new possibilities to the mobile world, mobile devices will continue to present new ways of serving humanity. Already, two-thirds of the largest hospitals in the US provide access to mobile health apps, and the global mobile health app market is set to reach a value of approximately $111 billion by 2025.
The healthcare environment is just one example of how mobile devices are no longer “just phones,” they’re crucial tools transforming the way that we connect, communicate, and survive. | <urn:uuid:aaf2a61e-05b4-426d-83fc-a52cef08d4bb> | CC-MAIN-2022-40 | https://cn.deltapath.com/newsroom/thrive-global-humanitys-future-and-mobile-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00663.warc.gz | en | 0.949509 | 1,265 | 3.0625 | 3 |
The primary goal of technology should be to improve our lives in some way. So far that has seen us embrace computers, the Internet, smartphones and most recently wearable gadgets. However, many are predicting that the future will not see us hold or wear technology, but have it directly implanted into our bodies.
Already, the transhumanism movement is seeing technology implants gain greater acceptance, but many still feel uneasy about the ethics involved when we attempt to improve our bodies artificially. In response to the advances made in body modification technology, we’ve looked at five high-profile examples below.
For many years, individuals have used technology to help solve medical problems. Artificial pacemakers have been implanted into humans since the 1950s and prosthetic limbs, in their most basic form, have been used for centuries.
Now limb replacements are becoming increasingly advanced, with the DEKA arm being one of the most notable. Developed by the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army Research Office, DEKA can carry out a series of complex movements controlled by electrical signals sent from electrodes connected to the user’s muscles. The electrodes transmit signals to the arm, which can carry out 10 different types of movements.
The DEKA arm has given those that are missing limbs the ability to perform delicate tasks, such as grasping a bottle, which wouldn’t be possible with traditional prosthetics.
Neil Harbisson’s antenna
Self-confessed cyborg Neil Harbisson is one of the most famous exponents of body modifications and has used technology to overcome the extreme form of colour blindness that has afflicted him since birth.
Harbisson is unable to see colour at all and so decided to implant a permanently attached antenna into his skull in 2004. The “Eyeborg,” as it is known, allows him to experience colours by translating them into sounds. An Internet connection also allows Harbisson to receive phone calls directly into his skull, and his activism has seen the British government recognise him as the world’s first cyborg.
Deep brain stimulation
Deep brain stimulation involves the insertion of electrodes into three target sites within the brain and has proven hugely effective at tackling Parkinson’s disease.
The electrodes are connected to a neurostimulator placed under the skin in the chest or stomach area, which delivers high-frequency stimulation to specific areas of the brain. This changes some of the brain’s electrical signals associated with Parkinson’s and is often used in partnership with medication.
The procedure is working so well that it could remain the primary surgical treatment for the condition for the next two to three decades.
A number of companies around the world are now offering employees the opportunity to have computer chips implanted into their hands to help them in their place of work.
The radio-frequency identification (RFID) chip is inserted under the skin, following only a brief moment of pain, and can be put to various uses. Most often, the chip is used to unlock doors, providing a more secure solution than employee ID cards.
Others have placed an electronic business card on the chip, which can be accessed via a smartphone, or use the implant to access their car or computer.
A company called Dangerous Things even allows you to buy a DIY chip implanting kit that means you can use the technology in whatever way you want.
The Circadia 1.0
Sticking with the DIY theme, biohacker Tim Cannon has developed a Fitbit style device that is implanted under the skin.
The Circadia 1.0 is not the most elegant implant, clearly protruding from underneath Cannon’s skin, but it does offer some personal utility. The device can be connected to appliances in his home and is programmable via tablet. In fact, by monitoring various health metrics, the Circadia 1.0 can automatically provide Cannon with a helping hand.
"So if, for example, I've had a stressful day, the Circadia will communicate that to my house and will prepare a nice relaxing atmosphere for when I get home: Dim the lights, let in a hot bath,” he explained.
While consumers may not quite be ready for Cannon’s rough and ready approach to technological implants (he inserted the Circadia 1.0 without any anaesthetic), with some refinement similar body modification could become widespread in the not too distant future.
A new Deloitte study revealed that smartphone users in the country are using fingerprints as passwords.
British smartphone users have adopted biometric mobile security methods to a significant degree. Deloitte released a study indicating that 20 percent of smartphone users in Great Britain authenticate using fingerprints. This suggests that the general public is becoming increasingly comfortable with the concept.
The mobile security study was conducted with the participation of 4,000 consumers across Great Britain.
The biometric mobile security study was called “There’s no place like phone.” That report determined that among the respondents, 63 percent were using PINs and passwords for mobile phone authentication. Another 21 percent were using mobile device fingerprint sensors for that purpose.
The report stated, “We expect ownership of fingerprint readers to continue increasing rapidly.” It also added that “Many millions of people are likely to acquire a handset with a fingerprint reader over the coming year (either as a new or second-hand phone) and some people who currently have a fingerprint reader may start using it, as more apps offer this functionality.”
The report provided a number of reasons that biometric mobile security may be stronger than other forms.
It pointed out that using fingerprint authentication technology is quick, simple and inconspicuous. Moreover, its successful completion isn’t dependent on certain ambient conditions as is the case with many other forms of biometrics. Bright sunlight, for example, doesn’t reduce the effectiveness of this method. Similarly, a noisy room won’t change the accuracy of the scan. That said, according to the report, 2 percent of participants did use facial recognition or voice recognition to authenticate.
The outcome of this study is not unlike those from a prior Visa Europe study. That research indicated that people in Great Britain feel that they can trust government agencies and banks to keep their biometric data safe. As a result, they feel that they aren’t risking unauthorized access of their biometric mobile security data.
When those consumers were asked if they would trust this type of mobile security technology to confirm their identity, 85 percent said they would trust it with their banks. Another 81 percent would trust this method with certain payment methods. Seventy percent trust global online brands with this method. Finally, 64 percent said they would use this with their smartphone companies. | <urn:uuid:7b9acf18-5f04-4511-a923-1585394b4cee> | CC-MAIN-2022-40 | https://www.mobilecommercepress.com/tag/visa-europe-study/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00663.warc.gz | en | 0.957447 | 478 | 2.546875 | 3 |
Comparing 4-6 digits entered by the user with the correct value in storage, how hard can that be? It's just numbers. Indeed, comparing the entered digits with the stored ones is very easy, but doing so securely is far from trivial.
Let’s define the problem first: we assume a verification system (e.g. a remote server) that somehow needs to make sure that a user entered the correct PIN, before performing a certain action. Sending the PIN to the verification system (even over a secure connection) and then comparing it with a stored one is a security nightmare. First of all, the verification system will see both the entered PIN and the reference PIN. It will also somehow need to store the reference PIN.
So, what does it mean to verify a PIN securely? Ideally only users know their PINs. This implies that a PIN cannot be learned by observing PIN verification and also that the parties involved in PIN verification cannot learn the PIN. As such, one can be reasonably sure that it was you that entered your PIN. There is of course no stopping someone from entering a PIN as a real user would. To mitigate this threat, rate limiting controls prevent brute-forcing (trying all possible PIN codes until hitting the right one). Typical examples of such controls are limiting the number of consecutive failed attempts, and having an exponential back off after each failed attempt.
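The rate-limiting controls described above are simple to model. The sketch below is illustrative only (a real verifier must keep this state server-side, persist it, and update it atomically); it applies an exponential back-off after each consecutive failure and a hard lock after too many:

```python
import time

class PinAttemptLimiter:
    """Lock out PIN verification for base_delay * 2**(failures-1)
    seconds after each consecutive failure, and lock permanently
    after max_failures. Illustrative model, not production code."""

    def __init__(self, base_delay=1.0, max_failures=10, clock=time.monotonic):
        self.base_delay = base_delay
        self.max_failures = max_failures
        self.clock = clock  # injectable for testing
        self.failures = 0
        self.locked_until = 0.0

    def allowed(self):
        if self.failures >= self.max_failures:
            return False  # hard lock: require out-of-band recovery
        return self.clock() >= self.locked_until

    def record_failure(self):
        self.failures += 1
        delay = self.base_delay * 2 ** (self.failures - 1)
        self.locked_until = self.clock() + delay

    def record_success(self):
        self.failures = 0
        self.locked_until = 0.0

limiter = PinAttemptLimiter()
limiter.record_failure()
print(limiter.allowed())  # False: locked out for 1 second
```

With a one-second base delay, ten consecutive failures already impose over 17 minutes of cumulative waiting, which makes online brute-forcing of even a 4-digit PIN space impractical.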
We’ll explore the two facets of the problem: storing the PIN and sending the PIN to the verification system, and explain why some classical solutions don’t work. Ideally, a solution should avoid storing or processing PIN codes in plain at the verification system. The less the verification system (and thus an attacker breaking into it) can learn, the better.
Storing the reference PIN
An immediate reflex when dealing with PIN codes is grasping for password solutions. So, what about password hashing to protect PIN codes stored on the server? Salted hashing is a popular option for protecting stored passwords. It performs a one-way operation, making it hard(er) to recover the original password from the stored hash. For PIN codes this mechanism does not work, there are simply too few possible values for the PIN, so you can try them all in a split-second. PIN codes are – by definition – extremely weak passwords.
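It is easy to demonstrate why. The sketch below recovers a 4-digit PIN from its salted hash by simply trying all 10,000 candidates; plain SHA-256 is used for brevity, and even a deliberately slow password hash only changes the constant factor, not the conclusion:

```python
import hashlib
import os

def hash_pin(pin, salt):
    """Salted hash, as a password store would do it."""
    return hashlib.sha256(salt + pin.encode()).digest()

def crack_pin(stored_hash, salt):
    # The salt is stored alongside the hash, so an attacker who
    # steals the database has both. 10,000 candidates is nothing.
    for candidate in range(10_000):
        pin = f"{candidate:04d}"
        if hash_pin(pin, salt) == stored_hash:
            return pin
    return None

salt = os.urandom(16)
stored = hash_pin("4821", salt)
print(crack_pin(stored, salt))  # 4821
```

The loop completes in a fraction of a second on commodity hardware, which is exactly why the hashing tricks that work for strong passwords do nothing for PINs.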
An alternative to storing PIN codes in plain form is making sure they only exist in plain form inside secure hardware. Hardware security modules (HSMs), however, are expensive and tedious to operate.
Learning the PIN
Somehow it seems obvious that you need to send the entered PIN to the server, in order to check it. You can use secure channels (such as TLS) to protect it from eavesdroppers. Be sure to take all additional measures such as for example certificate pinning to avoid Man-in-the-Middle attacks. However, even with these additional measures, the server will still see the entered PIN in plain.
To avoid exposing the PIN to the server and at the same time avoiding storing the PIN on the server, a popular escape route is using the PIN to locally encrypt some data. Instead of using the PIN code, that stored data is used for authentication to the server, after decryption based on the PIN code. This opens up the possibility of local brute-forcing. Once someone obtains the stored file, all PINs can be tried to check which one is able to correctly decrypt the data. Afterwards, that data can be used for further authentication and the PIN is not required anymore. On top of that, choosing the correct cryptographic algorithms and implementations, taking care of unexpected properties, is a tedious task. Even the slightest error will create a security disaster.
Let’s use some fancy cryptographic protocol we found online! But are you sure it can be applied to PIN codes? We have seen cases where password-authenticated key agreement was being used with PIN codes instead of passwords. Only problem: they were never designed to deal with PIN codes – extremely weak passwords as you know. If you do not properly understand what these protocols actually do and under what assumptions they are secure, don’t shoehorn them to do what you think they might be able to do.
Finally, it should be impossible for someone in control of the server (attacker or insider) to perform a PIN verification for a given user, without that user’s cooperation. If not, this leaves the server open to brute-force attacks. Even when an HSM is used and rate limiting is enforced inside the HSM, it should not be possible to perform PIN verification without cooperation of the user as otherwise one could try up to the maximum number of attempts for each user, leading to a certain percentage of users’ PINs being compromised.
How can nextAuth help?
Next to our classical authentication solution (which is the better option if you want to do mobile authentication), we also offer separate modules, through a mini-SDK, implementing part of our core functionalities. One of these modules is the PIN verification module. It allows you to securely verify a PIN code inside your mobile app with a server: without the server seeing the entered PIN, without the server storing the reference PIN and without the vulnerability of local brute-forcing on the client or the server! And it also does not require expensive HSMs. | <urn:uuid:ac3d8128-0f45-4483-beec-90276cac74fb> | CC-MAIN-2022-40 | https://www.nextauth.com/verifying-a-pin-code-securely-is-hard-here-is-why/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00663.warc.gz | en | 0.933022 | 1,089 | 3.15625 | 3 |
An Uninterruptible Power Supply (UPS) is a backup power source that activates when the main source of power fails, but what happens when its batteries run out of power? Having a generator on-site is always a good idea in the event of extended outages. In addition, having a system to remotely monitor your critical equipment can be the difference between a quick-fix and a very expensive repair.
Skip this article and see a remote monitoring device.
Although complex, a UPS has a very simple overall design. Every UPS has power inputs (for the intake of commercial power during normal operation), power outputs (to connect protected equipment), and backup batteries (to prevent interruption of power to protected gear when commercial power is lost). It also has a control system that quickly switches to backup battery power when the main source of electricity goes down.
The word "uninterruptible" means that the power supply will act quickly enough to prevent the gear from ever losing power when the main power source goes dark. This usually means that a UPS system must be capable of activating backup power within 25ms of a power loss.
A UPS is, by nature, redundant. This means that it provides an important protective barrier against data loss, outages, and expensive hardware damage (by smoothing out voltage anomalies).
In consumer applications, a UPS may only have enough battery reserve to last for a few minutes. The intent of such a short backup power supply is only to allow the safe shutdown of connected computer gear.
In a telecom or data center environment, however, the batteries of a UPS may last for several hours or more. If commercial power failures are most likely expected to be rare and brief, a UPS may be the only backup power source at a remote site. However, at least one diesel or propane generator can also be present to provide backup power.
Whatever your particular scenario is, if you depend on a UPS system to protect critical gear at your remote sites, then you want to remotely keep an eye on it. Let's explore some more facts about the UPS, and the most important techniques and key factors that you should keep in mind to achieve a successful UPS monitoring system.
As I said previously, an uninterruptible power supply is vital protection against loss of data and costly hardware damage.
Unfortunately, though, many network managers fail to properly monitor their UPS systems. The main cause for this is that most modern UPS systems for use in industrial applications include a built-in web interface for ups critical tracking. Although this obviously allows "monitoring," one critical failure prevents it from being "proper monitoring."
Using an uninterruptible power supply's own interface to monitor its performance and uptime defeats the purpose of such monitoring. What happens if the UPS fails? Well, so too does the monitoring interface you have relied on.
Instead, the industry's best practice is to deploy dedicated UPS monitoring devices. These compact units (1 RU or less) collect important status information from virtually any UPS backup system.
These monitoring devices are called "RTUs" - short for Remote Terminal Units or Remote Telemetry Units - and they'll send alerts back to critical personnel via LAN, phone voice message, serial connection, T1, fiber, or any other available transport.
Take a look at the DPS RTU devices here.
This way, companies will have remote access to their UPS systems and can track and log the voltage levels of each single battery cell, providing a good assessment of the overall health of the battery string. Even better, RTUs will monitor much more than just your UPS system. Just about every piece of telecom, transport, and switching gear you have will also benefit from external monitoring.
A good RTU will contain its own internal UPS with backup batteries large enough to continue operation for at least several hours after a power failure. Some of the top-quality RTUs will last for up to 10 hours.
Your battery monitoring system needs to provide you with continuous voltage readings. Without regular monitoring or at least control over the sampling rate, a problem can linger for too long before you're notified. That's why sampling your batteries at too long of intervals can be so problematic.
Your battery cells can easily become fully discharged between lengthy interval readings, which means you risk losing power at your sites and won't have any idea until the entire site suddenly goes dark. With real-time monitoring and customized interval samples, you can be assured you'll know about these kinds of emergencies before they bring down your vital network gear.
I've seen many clients roll out uninterruptible power supply monitoring systems at dozens of sites to protect the battery cells there from expensive damage. The battery monitoring systems that they deploy cover both VRLA (Valve-Regulated Lead-Acid) batteries and flooded batteries. Overcharging, at a level of 20 amps per 100 amps of battery capacity, is monitored carefully.
Monitoring voltage to prevent deep discharge is perhaps the most important role of the uninterruptible power source battery monitoring solution. As an example, if you discharge batteries at 48V and they drop to 42V, you have probably damaged the batteries.
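A monitoring device reduces this to threshold comparisons. The sketch below uses the 48 V nominal and 42 V deep-discharge figures from the example above; the overcharge cutoff is an assumed placeholder, and real limits come from the battery manufacturer's data sheet:

```python
def check_string_voltage(voltage, nominal=48.0, low_cutoff=42.0, high_cutoff=56.0):
    """Classify a battery-string voltage reading.
    Thresholds are illustrative for a 48 V lead-acid string."""
    if voltage <= low_cutoff:
        return "CRITICAL: deep discharge - damage likely"
    if voltage >= high_cutoff:
        return "CRITICAL: overcharge"
    if voltage < nominal:
        return "WARNING: discharging below nominal"
    return "OK"

print(check_string_voltage(54.2))  # OK (typical float voltage)
print(check_string_voltage(41.8))  # CRITICAL: deep discharge - damage likely
```

An RTU would run this check on every sample and escalate the CRITICAL cases immediately, rather than waiting for the next polling cycle.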
You also want your battery monitor to have the ability to work with different battery types and voltages. All batteries are not the same. They have different orientations, connectors, and different voltage ranges - for instance, you might have a standard string of batteries or a string with four 12V cells and propriety connectors. You need a battery monitoring system that is very adaptable.
Some battery voltage monitoring systems can't handle large quantities of battery cells. It's absolutely key that you have full support to monitor each of your battery cells. Otherwise, you'll find yourself vulnerable to the weakest link in the battery string.
Most of your vital sites will have A/B power battery systems. Look for a single device that is capable of monitoring both strings of batteries. Not all strings are created equal - so make sure your monitoring solution can monitor up to a 125 VDC string.
Tracking and logging the voltage levels of each single battery cell provides you a good assessment of the overall health of the battery string.
How important are your batteries to your network? With your up-time depending on your batteries, it's simply too risky to leave such a vital aspect of your network unmonitored. All it takes is a single bad jar or fully discharged battery and your entire network can come to a screeching halt. You need to be confident that your power system will remain running even when you aren't there to monitor it.
Since a single bad battery jar can degrade the performance and lifespan of an entire array of batteries, it's critical you deploy a superior battery monitoring system. Monitoring the entire string is better than no monitoring at all - but it's not the most effective way to improve reliability. String voltage doesn't tell you how many cells have degraded and to what extent.
While some monitoring is better than no monitoring, having a good monitoring system will save you a lot of trouble in the future. Most systems have one sensor for the entire string of batteries. It may alert you that there is a problem. However, this system has no way of knowing which battery is causing the problem.
And that's the best-case scenario. In most cases, one battery will go bad, but the others on the string will pick up the slack. This puts a strain on the remaining batteries, but you're never notified because the sensor reads that everything is in working order.
A good battery monitor will have sensors to monitor each of your cells individually - looking not just for low voltages - but a difference between the cells and the average of all the cells. A system like this will not only save you network downtime but will also preserve the lifespan of your batteries.
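That "difference from the average" check is easy to express in code. In this sketch the 0.15 V tolerance and per-cell voltages are illustrative; actual limits depend on the cell chemistry and the manufacturer's specifications:

```python
def find_weak_cells(cell_voltages, tolerance=0.15):
    """Return indexes of cells whose voltage deviates from the
    string average by more than `tolerance` volts."""
    avg = sum(cell_voltages) / len(cell_voltages)
    return [i for i, v in enumerate(cell_voltages)
            if abs(v - avg) > tolerance]

# 24 cells of a 48 V string; cell 7 is degraded.
cells = [2.17] * 24
cells[7] = 1.85
print(find_weak_cells(cells))  # [7]
```

Note that the total string voltage here is still about 51.8 V, which looks healthy; only the per-cell comparison exposes the degraded jar.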
Since your UPS system is the first line of defense when commercial power fails, it's critical that you know how much power you have remaining. You can tie the string output voltage to the RTU for monitoring, and your RTU will extrapolate "battery life remaining" from that voltage measurement.
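A first-order way to turn a voltage reading into "battery life remaining" is linear interpolation between a full-charge voltage and the low-voltage cutoff. Real lead-acid discharge curves are nonlinear and depend on load and temperature, so the endpoint values below are placeholders, not recommendations:

```python
def battery_life_remaining(voltage, full=54.0, empty=42.0):
    """Rough percent-remaining estimate for a 48 V string from a
    single voltage reading, clamped to the 0-100 range."""
    fraction = (voltage - empty) / (full - empty)
    return round(max(0.0, min(1.0, fraction)) * 100)

print(battery_life_remaining(54.0))  # 100
print(battery_life_remaining(48.0))  # 50
print(battery_life_remaining(41.0))  # 0
```

A real RTU would refine this with a chemistry-specific discharge table, but even this crude estimate is enough to answer "do I have hours or minutes left?" during an outage.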
In fact, some of the better RTUs will have their power inputs internally tied to an analog circuit. This reduces your wiring requirements, as the very same wires that are supplying battery-plant power to your RTU are also being monitored for voltage.
Battery temperature is also a good general measure of battery string health. A temperature that rises too high is a strong indicator that something may be wrong.
There are some very granular battery-monitoring systems available that tie a sensor pod to every cell in your battery string. But these tend to be quite expensive - to the point where many people say "I could replace my entire battery string for less than what that monitoring system costs."
A pair of analog inputs (one each for voltage and temperature) is a good monitoring technique that you can install at a relatively small cost.
There are some very complex ways to measure your batteries (sensor node on every cell, capacitance, internal resistance, etc.), but the simplest and easiest way to start is to track voltage to monitor battery life remaining, plus temperature to detect overheating before damage occurs.
No matter the size of your company, a flexible range of notification options gives you effective reporting methods.
When you go with DPS products, battery threshold alarms can be reported in the following ways:
When it comes to monitoring your batteries - and therefore protecting your up-time - you want true flexibility. Having the capability to receive 24/7 alerts (via text message, email, voice alerts, etc.) about the status of your batteries can be the difference between a power outage and keeping power to your gear.
Even if you have a 24/7 dedicated NOC (Network Operations Center), receiving on-the-fly, mobile alerts from your battery monitoring system can give you an edge in avoiding preventable outages. If you do not have an uninterruptible power supply with email notification, look for a battery monitor that can alert you via email, text message, paging, etc. With notifications that can reach you even when you're out of the office, you can be assured you won't be left in the dark.
Learn more about battery monitoring with our white paper - Monitor Your Battery Cells for Superior Reliability. Download it here.
When looking for UPS monitoring systems, make sure to get a device with a web browser interface that will present you with powerful remote control and monitoring tools. Instead of driving all the way out to your site to configure or manage your device, you'll be able to do it right from your desk. This can save money on fuel costs and expensive man-hours.
Even if your company is large and you have an enterprise-grade SNMP manager, a web interface lets you drill down when you need to. An intuitive, easy-to-use web interface beats any other method of managing your battery monitoring.
Do you know the status of your remote site battery strings? Would you know if one single cell was falling out of its ideal voltage or temperature range? At many companies, battery cell monitoring is a commonly overlooked opportunity to reduce costs and improve reliability.
This actually creates a huge problem. If you've ever had battery failures in the past, you know just how painful they can be. A single bad cell can greatly affect an entire string of batteries if you don't catch it early. An entire site can go dark if your batteries aren't working properly or become fully discharged.
Remote battery alarm monitoring and control isn't always as simple as slapping a single voltage monitor on each string. To really know the big picture, you need a high level of detail.
Fortunately, you can always count on the BVM G3. It integrates battery sensing into remote alarm monitoring. This creates a single unit that gives you the capability to monitor alarms from your equipment, environmental conditions, and battery strings.
Here are the main features of the battery monitoring system itself:
This battery monitoring solution gives you complete visibility over battery conductance, voltage, temperature, and resistance. These proven key indicators will give you advance warning of deteriorating battery cells. This means you can avoid being blindsided by a faulty or fully drained string.
If you need more than a handful of RTUs, it's important to deploy an efficient master station. The T/Mon LNX is the best option if you want a cost-effective monitoring solution.
Monitoring UPS systems is an important part of any truly reliable network. You simply can't afford to leave your battery strings vulnerable. While some monitoring is better than no monitoring, the right monitoring system makes all the difference. If you want to get more information about how to monitor your UPS system, or want to speak with me or someone on my team about designing a monitoring solution for your network, please contact us today.
Any user with a casual passing interest in IT security will at some stage encounter the paradoxically named expression ‘false positive’. Not quite the tautological twist that it might at first sound like, a false positive is the term we use to describe the “false” identification of a piece of software code as a “positive” identification match with code belonging to a virus, worm, spam-related phishing scam — or in fact any other form of malware as defined by an anti-virus protection suite.
When do false positives occur?
False positives can occur when a spam filter positively identifies a legitimate message as harmful. A spam filter can reside on either a user’s desktop (and therefore be said to be “client-side”) or on a back office server (where it is said to be “server-side“) in a company network environment. The result is the same in either scenario, as the message is “bounced” back to the email sender and/or quarantined, tagged as potentially harmful and ultimately deleted.
But false positives can occur outside of straightforward email filtering, in scenarios where any software application code is analysed for patterns that have been identified as belonging to malware.
If an “app” (or an application extension – see below) exhibits behaviour identified and/or associated with malicious activity, such as attempting to modify the computer’s operating system or related files, or freezing a memory address, then it may be flagged as malicious even when its actions were intended by the user – a false positive.
This kind of scenario might very typically come about if a user happens to initiate a “game trainer” or cheat extension to an installed video game or similar application/software program. The trainer tries to execute a smaller .exe program or sections of code to modify the game’s behaviour and, as this is essentially classed as an “exploitative action”, the user’s anti-virus suite blocks the operation even though the user wanted the action to happen. Hence, a false positive has occurred.
This situation can also happen during the wider general use of a computer too. Consider the fact that there are quite literally millions of viruses out there and billions upon billions of lines of binary software code. It is not beyond the realms of possibility to consider that one piece of virus code could be matched with that of a legitimate software program’s operation.
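To illustrate, here is a deliberately naive signature scanner in Python. The signature bytes are invented for the example; real anti-virus engines use far more context than a raw byte match, which is exactly why coincidental matches are rare:

```python
# Hypothetical illustration: a naive scanner flags any data that contains a
# known malware signature. A benign file that happens to contain the same
# byte sequence gets reported too - a false positive.

SIGNATURES = {
    "Example.Worm.A": b"\xde\xad\xbe\xef\x13\x37",  # made-up signature
}

def scan(data: bytes) -> list:
    """Return the names of all signatures whose bytes appear in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

# Benign data that coincidentally contains the signature bytes:
benign = b"ordinary program bytes" + b"\xde\xad\xbe\xef\x13\x37" + b"more code"
print(scan(benign))  # ['Example.Worm.A'] -- flagged, although harmless
```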
Clearly, the big brand anti-virus manufacturers develop their systems to a level of sophistication far beyond the “random coincidence” factor that we are suggesting here, but some awareness of this possibility is useful background knowledge for anyone who uses a computer.
What can users do about false positives?
So if a computer system falsely classifies a piece of non-malicious software code as spam, adware or malware of any kind, is there anything a user can do? The short answer is yes – and first actions should be focused on updating your anti-virus suite to a current-year version. If you are running up-to-date anti-virus protection, then it is still prudent to “update virus definitions” by initiating an online product update, which all reputable protection suites will offer as a normal operational option.
Being aware of the existence of false positives is good basic background knowledge for any user, especially if you intend (as most of us inevitably will) to download application extensions or add-ons of some kind or another.
Nowadays, most of us use smartphones. Yes, probably most of us use them more than is healthy. There are those, however, that have come to depend on their mobile device so much that it completely dominates their lives. As people become even more attached to their phone, the impact of this behavior becomes more and more detrimental.
The Skinny of It
“Everything in moderation” is a great piece of advice. If you drink too much water, your body’s sodium levels become dangerously low and can make you sick. It can even put you in a coma. So then is drinking water good for you? Of course, but too much of a good thing is always going to have negative effects.
Smartphone use is the same. Sure, you can now handle multiple tasks in a short period of time via your smartphone. You can keep conversations going with your friends, your family, your co-workers, and that special someone, while walking down the street. You can play games, read a book, and scroll through social media to keep up on the world around you. You can work: write and send emails, update spreadsheets, use collaboration tools, and communicate through several methods. A wealth of knowledge is available in your pocket. Your smartphone can simultaneously be a home phone and a work phone.
Smartphone addiction is taking these benefits too far. It’s finding more in the soft glow of a Retina or Super AMOLED display than the rest of us can see. It’s ignoring the world around you to invest your time and energy into whatever is on the other side of the device. Young people tend to have the biggest problem. People between the ages of 15 and 24 check their phones an average of 150 times per day. In fact, 60 percent of surveyed college students said that they consider their smartphone usage excessive.
The Signs of Smartphone Addiction
Smartphone addiction is similar to any other addiction that the DSM-5 outlines, but the most similar is actually drug abuse. This means that mobile device addiction is actually a physical dependence on a mobile device. Symptoms include:
- Conscious use of a smartphone in dangerous situations or where its use is prohibited (e.g., while driving or walking down stairs)
- Phone use that causes social conflict
- Phone use that causes a loss of interest in other social or group activities
- Withdrawal, panic, and anxiety when smartphone isn’t in hand
- Lack of focus
- Social anxiety
- Relationship stress
- Eye pain
- Neck pain
- Dependence on digital validation
Effects of Smartphone Addiction
Besides not being able to focus on tasks and getting less than ample sleep, smartphone addiction has been shown to have some serious effects on a person’s personality. Among those found to have a smartphone addiction, depression and anxiety are more pronounced. In fact, in one study, these people had a 270-percent higher chance of having depression. Depending on a person’s personality traits, smartphone addiction can really be a hindrance to building a life that is productive and fruitful.
If you think you or someone you love has a smartphone addiction there are some things you can do to help. They include:
- Monitor usage – The best way to get over any addiction is to abstain from those behaviors. Giving yourself or a loved one a limit of time they can use their device can have positive effects. There are settings on all modern smartphones that can help track usage.
- Don’t use your phone for everything – There’s no denying that the smartphone is a one-stop-shop for many of yesterday’s physical tasks. To keep from using your phone too much, buy a physical alarm clock, carry a pen and paper with you, read real books. If you want to unlock yourself from your phone, have contingencies for the things you do on your phone.
- Turn off notifications – One of the main triggers for the smartphone addict is the constant stream of notifications. If you turn off notifications, you will be less distracted by your phone buzzing on your desk or in your pocket.
- Set aside time for smartphone tasks – Do you use your phone to check social media? With all the available options and all the people you correspond with sending you content, it’s hard to unplug. Setting a time where you are able to use your phone for communication and recreation is a great way to limit your exposure to your phone.
Look, we all use our phones constantly, but if it is negatively affecting your life, you need to unplug. | <urn:uuid:04d7f8cf-a47a-4546-b114-d97e47651084> | CC-MAIN-2022-40 | https://ctnsolutions.com/smartphone-addiction-and-its-effects/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00063.warc.gz | en | 0.951048 | 942 | 2.59375 | 3 |
Access Control Lists (ACLs) are one of the security and control mechanisms used in routers. They are an important lesson in the Cisco CCNA 200-301 and CCNP ENCOR 350-401 certifications. ACLs mainly filter traffic coming into a router or going out from it. In other words, with the help of access control lists, we can filter traffic coming from anywhere and going anywhere. We can create access lists using various parameters such as source IP address, destination IP address, protocol, and port number.
For example, with an access control list, we can deny some users access to a specific server or service. We can allow only the people in a specific network to use FTP towards another network, or prevent one network from pinging another. There can be many combinations and many different ACL lines according to your needs.
There are different access control list types used in networking. Each of them serves different purposes and needs, and you can choose the type that fits your requirement.
These access control list types are given below:
Now, let’s briefly explain these ACL types. We will explain them in detail in the following lessons.
Standard Access-Lists are ACLs which use only the source address of the traffic. In other words, they filter traffic according to its source. ACL numbers 1-99 and 1300-1999 are used for standard access control lists. Standard ACLs are applied close to the destination.
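As a sketch, a standard ACL filtering on source addresses only might be configured like this (the ACL number and addresses are hypothetical, chosen for illustration):

```
! Standard ACL 10: permit hosts from 192.168.1.0/24, deny everything else.
! Only the source address is checked.
Router(config)# access-list 10 permit 192.168.1.0 0.0.0.255
Router(config)# access-list 10 deny any
```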
Extended Access-Lists are enhanced versions of standard ACLs. In extended ACLs, source and destination addresses, port numbers, and protocol types are used to filter the traffic. ACL numbers 100-199 and 2000-2699 are used for extended access control lists. Extended ACLs are applied close to the source.
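An extended ACL can match on source, destination, protocol and port together. A hypothetical sketch (the addresses and the server are invented for illustration):

```
! Extended ACL 110: block Telnet (TCP port 23) from 192.168.1.0/24 to the
! server 10.0.0.5, then permit all other IP traffic.
Router(config)# access-list 110 deny tcp 192.168.1.0 0.0.0.255 host 10.0.0.5 eq 23
Router(config)# access-list 110 permit ip any any
```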
Named Access-Lists are ACLs which use ACL names instead of ACL numbers. They can be used with both standard and extended ACLs. This type of ACL is easier to remember because of its descriptive name.
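A named ACL replaces the number with a descriptive name. A minimal sketch, with a made-up name and addresses:

```
! Named extended ACL "BLOCK-FTP": deny FTP (TCP port 21) from one subnet,
! permit everything else.
Router(config)# ip access-list extended BLOCK-FTP
Router(config-ext-nacl)# deny tcp 192.168.1.0 0.0.0.255 any eq 21
Router(config-ext-nacl)# permit ip any any
```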
ACLs are created according to their type, and every line added afterwards must match that type. This means that if you are using a standard ACL, you can use only source addresses in its lines; if you use an extended ACL, you can use source and destination addresses together with protocol or port information.
To use Access Lists in a network, there are some steps. These steps are given below:
First, we need to create the ACL. To do this we will need an ACL number, or a memorable name, for our access control list.
Second, we should add the required entries to the ACL according to our needs. These entries consist of one or more permit/deny lines.
Then, we determine the interface to which we will apply this access list. Depending on the ACL type, the best location can differ.
After determining the interface, it is time to determine the direction (inbound or outbound) in which the ACL is applied. This can also differ according to the ACL type.
Lastly, we add the line that applies our ACL to the interface.
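Putting these steps together, a minimal sketch might look like this (the ACL number, addresses and interface name are hypothetical):

```
! Steps 1-2: create the ACL and add its permit/deny entries
Router(config)# access-list 120 deny icmp 192.168.1.0 0.0.0.255 10.0.0.0 0.0.0.255
Router(config)# access-list 120 permit ip any any
! Steps 3-5: choose the interface and direction, then apply the ACL
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip access-group 120 in
```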
ACLs take effect not when they are created but when they are applied to an interface. Once applied, they control the traffic of that interface and, according to their entries, determine what happens to that traffic.
After an ACL is created and applied to an interface, whenever traffic arrives at that interface, the router checks the lines of the access control list in order. If it finds an entry that matches the traffic, it stops and acts according to the matched line: a deny line drops the matching traffic, and a permit line allows it.
After checking all the lines, if you have not used a “permit ip any any” entry, all remaining traffic is rejected because of the invisible implicit deny at the end of the list. The implicit deny drops any traffic that does not match one of your entries. So, if you use an ACL only to deny some specific traffic, you should add a “permit ip any any” line to accept all other traffic. If you do not add this entry, not only is the traffic in your deny lines denied, but all other traffic is denied as well. This can cause critical traffic drops, so you should be careful during access list configuration.
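The first-match rule and the implicit deny can be simulated in a few lines. This is a teaching sketch matching on source address only (as a standard ACL would), not router software:

```python
# Minimal simulation of ACL evaluation: entries are checked in order, the
# first match wins, and an invisible implicit deny rejects anything that
# matched no entry. Addresses are invented for illustration.

def evaluate(acl, packet_source):
    for action, source in acl:
        if source == "any" or source == packet_source:
            return action          # first match wins; stop checking
    return "deny"                  # implicit deny at the end of every ACL

acl = [("deny", "192.168.1.10"), ("permit", "any")]
print(evaluate(acl, "192.168.1.10"))  # deny   (matched the first line)
print(evaluate(acl, "10.0.0.7"))      # permit (matched 'permit any')

acl_without_permit = [("deny", "192.168.1.10")]
print(evaluate(acl_without_permit, "10.0.0.7"))  # deny (implicit deny)
```

The last call shows the pitfall described above: without a final permit line, traffic you never mentioned is silently dropped.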
Geoff Conrad reports on some of the hotter properties at the International Solid State Circuits Conference last month.
In February, the world’s top chip designers, silicon architects and device devisers gathered in New York to strut and boast and gaze into their crystal balls to frighten the opposition with their plans for the future. The pilgrims to the annual International Solid State Circuits Conference were given previews of a whole range of spectacular chips as the semiconductor industry showed just how far it had managed to stretch the limits of technology in the past year.
Staid old DEC
To show the speed of developments: last year the star of the show was a 1M-bit dynamic random access memory, this year five 4M-bit devices were on show, but the star came from Japan’s NTT Electric Communications Laboratories – a 16M-bit dynamic holding 2M-bytes on a single chip. Of the processors, the design that was really over the top came from, of all people, Digital Equipment Corp, the staid DEC. This was for an array of 262,144 processors – 8,192 chips each with 32 processors – and 384 companion router chips each with 64 data inputs and 64 data outputs which allows each element to communicate with any other in the array. The massively parallel architecture of the full-scale array is claimed to pump out 10 Gflops, 10,000m floating point operations per second, or 2,600,000,000,000 4-bit operations per second. The processing element chip has 242,000 transistors and each of the 32 individual processing elements has 1K of static random access memory, two shift registers of programmable size, a 4-bit adder, an arithmetic-logic unit, two 1-bit registers, and neighbour and router communication paths. A 4-bit operation in each processor takes 100nS to execute, allowing the whole chip to handle 320m 4-bit operations per second. Or, by expanding in nibble-serial fashion it can handle 40m 32-bit operations per second. Each processor has the logic to connect to the memory of three adjacent memory chips to give it access to 4K of memory. Hewlett-Packard, which already has a reduced instruction set computer at the heart of its Spectrum Precision Architecture commercial and scientific machines, showed two other RISC chips at the ISSCC. One was a 30MHz 15 MIPS, 32-bit chip designed to implement a set of 140 instructions using direct hardwired decoding and execution.
No fewer than seven internal 32-bit buses are used to link the various on-chip go-faster features: 25 control registers; a shift/merge unit; a five-stage, three-word-deep instruction pipeline; and a 32-bit arithmetic logic unit. The chip includes logic for decoding and prioritising traps and interrupts and has a special interface to support data transfers among the cache, the CPU and the co-processors, which handles the copy-in and copy-back traffic between the cache and the main memory. Hewlett’s other 32-bit part was a stripped-down basic RISC chip with 164,000 transistors and a peak performance of 8 MIPS. Virtually all instructions requiring more than one clock cycle to execute – apart from load-store, co-processor and branch instructions – have been eliminated, and the total number of instructions has been reduced to an absolute minimum. The Hewlett-Packard microprocessor uses a common multiplexed data and address bus and a five-stage pipeline. One of the universities that continued work in RISC technology after its start at IBM – Stanford University’s Centre For Integrated Computing – unveiled its third generation of 32-bit RISC chips: the MIPS-X. The Stanford researchers have simplified even further the instruction set for the chip, using a simple instruction format that can be decoded very quickly, allowing an instruction to be decoded every cycle. MIPS-X uses the conventional RISC load-store architecture similar to the earlier MIPS Computer chips and most other RISC machines, but the number of instructions has been pared down to a very basic 37, each 32 bits long. But, it is claimed, the key to its speed and high throughput is the large 2K-bytes of on-chip instruction cache and the ability to fetch two words per cycle, reducing the off-chip instruction bandwidth. The 150,000-transistor double-metal n-well CMOS CPU has a peak operating frequency of 20MHz.

AT&T Bell Laboratories presented two papers on its high-speed, 32-bit Crisp – CMOS reduced instruction set processor – that can execute instructions at up to 16 MIPS with a 16MHz clock. The 172,000 transistor Crisp is a memory-to-memory registerless machine with only 25 instructions and four addressing modes. Crisp gets its speed by being organised into two logically separate machines: a prefetch decode unit and an execution unit, each with a three-stage pipeline. It contains seven static random access memory arrays, totalling 13K-bytes. Unlike other RISC machines it has no hardwired address or data stacks. Instead, 32 internal stack-cache registers are allocated and mapped onto the on-chip statics, allowing the physical cache to be changed without affecting the control software.
ISSCC was a hardware show, but the development of a RISC machine involves at least as much work on an associated smart compiler to take advantage of the architecture’s on-chip registers, pipelines and other quirks of the design. But apart from AT&T’s mention of its development of the technique of branch folding (where branches are executed along with other non-branching instructions), the software support for the RISC chips was ignored. Meanwhile, even more work is going into further developing the conventional CISC (Complex Instruction Set Computer) machines to take advantage of the new developments and higher densities now available. DEC, for example, described a VAX-compatible 32-bit single-chip microprocessor with a host of advanced architectural features: an on-chip 1K-byte instruction and data cache with tag and data parity, pipelined microinstruction execution, overlapped instruction prefetching, parallel instruction decoding, and on-chip memory management. The 180,000-transistor part uses a set of 304 instructions and runs at 25MHz.
The unprecedented demand for well-trained cybersecurity workers continues to grow. Some experts predict that there will be a global shortage of two million cybersecurity professionals by next year. Enlisting the next generation of skilled cybersecurity workers and training existing employees will help build stronger defenses and restore confidence among digital citizens.
According to Harvard Business Review, important attributes of accomplished cybersecurity professionals include curiosity and a passion for learning, problem solving skills, strong ethics and a keen understanding of risks. In addition, job seekers with nontraditional backgrounds may bring new experience and perspectives to the position. And, a variety of industries ‒ ranging from education, financial institutions and banks to fashion, design and retail – are hiring. The bottom line is that the profession is dedicated to helping make our borderless online world safer and more secure for everyone.
Although they are often behind the scenes, these experts are truly on the front lines and have a measurable impact on our digital lives. One leading example of a company that prioritizes security and has long invested in hiring top cybersecurity talent is Intel. Security engineers and researchers at Intel not only strengthen the security of its own products, but also learn and share with the broader community to help collectively develop and accelerate the adoption of more secure technologies across the entire computing industry.
Parents, caregivers, counselors and teachers can play a significant role in paving the way for children to pursue cybersecurity careers. It’s important for these influencers to learn about and have conversations with kids about the breadth of opportunities available. In addition to a focus on STEM ‒ a curriculum based on educating students in science, technology, engineering and mathematics ‒ there are quite a few interesting and fun options to explore.
“A highly skilled, motivated and passionate cybersecurity workforce is just as critical to the internet’s security as everyone’s role in helping to protect it,” said Russ Schrader, NCSA’s executive director. “Inspiring kids when they’re young can propel them into a gratifying career that matches their interests in a specific sector like education, finance or health. In addition, jobs in cybersecurity are highly portable and pay well while doing something good.”
For adults thinking about a new career or re-entering the job force, consider re-inventing yourself and pursuing various positions in cybersecurity. Technical skills can be acquired through a number of ways, including traditional college courses, vocational training, industry certifications and on-the-job experience. Experts acknowledge that a workforce with diverse expertise and backgrounds has a greater chance of defending our assets.
Top tips for the cybersecurity job seeker
- Get credentialed: Four out of five cybersecurity jobs require a college degree.
- Get experience: Test the waters through volunteer work and internships; offer to help IT professors at your local college/university or employer to gain insight and experience. Think about becoming a white hat hacker and help top tech companies find bugs within their software.
- Get smart: Keep up with the latest on internet security; follow top cybersecurity personalities on Facebook or Twitter and stay on top of the headlines. Join the conversation #CyberAware on Twitter and Facebook.
- Get ready: A great place to find out if a cybersecurity career is right for you is to start at the National Initiative for Cybersecurity Careers and Studies (NICCS). From career resources to learning more about jobs in the field, NICCS is a go-to-guide to learning about and joining the ranks of a cybersecurity professional.
Advice for parents, teachers and counselors
- Volunteer at school, an after-school program, boys and girls clubs and community workshops to teach kids about online safety and cybersecurity careers.
- Expose students to opportunities in the field of cybersecurity by hosting an open house at your company to talk about what your cybersecurity department does.
- Inspire students to learn about cybersecurity by mentoring a team in a cyber challenge or hosting events and afterschool programs.
- Work with your schools or community-based organizations to create an internship program for hands-on learning.
- As a parent, learn about the “educational steps” to a career in cybersecurity and about community organizations that host cyber camps to educate kids about internet safety and security. | <urn:uuid:ae70e516-d95b-4a6f-a5e8-c2450cdce5c8> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2018/10/11/infosec-professionals-shortage/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00063.warc.gz | en | 0.941144 | 891 | 2.53125 | 3 |
Where are all the database administrators (DBAs)?
According to the U.S. Bureau of Labor Statistics (BLS), the number of database admin jobs in the USA between 2000 and 2016 increased by just over five per cent (6,000), up from 108,000 in 2000 to 114,000 in 2016.
At the same time, the amount of data generated and stored has grown exponentially and continues to do so. There is now more data—and more databases—in the world than ever before. And the number keeps increasing. So, can the role of DBA keep pace with this massive growth in both the volume and importance of data and databases?
It can, thanks to automation. While complex tools and systems are increasingly taking on roles and responsibilities that DBAs once provided (such as backup and restore, service outages, and query tuning), there’s still an important role for this community. In fact, rather than displacing DBAs, automation is not only redefining the role, but also liberating it from the more onerous and mundane aspects of the job.
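As a hypothetical illustration of the kind of chore that moves from DBA to machine, here is a sketch of an automated backup-freshness check; the database names and timestamps are invented:

```python
# Hypothetical automation sketch: verify that every database has a recent
# enough backup and list the ones that need attention, replacing a manual
# daily check a DBA once performed by hand.

from datetime import datetime, timedelta

def stale_backups(last_backups, now, max_age=timedelta(hours=24)):
    """Return databases whose most recent backup is older than max_age."""
    return sorted(db for db, ts in last_backups.items() if now - ts > max_age)

now = datetime(2019, 1, 15, 9, 0)
backups = {
    "orders":    datetime(2019, 1, 15, 2, 0),   # fresh
    "customers": datetime(2019, 1, 12, 2, 0),   # three days old -> alert
}
print(stale_backups(backups, now))  # ['customers']
```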
Automation brings freedom
Thanks to automation, DBAs can now administer a larger number of databases than was previously possible. Furthermore, automation frees them up to perform more “human” tasks. Rather than having to constantly monitor network availability and server performance or implement disaster recovery measures, they instead get to focus on more advanced and complex scientific data matters like architecture, design, engineering, and analytics.
Data is the lifeblood of every aspect of our increasingly digitalised society. Data security and privacy are therefore mission-critical to an organisation’s daily operations. Consequently, automated security, privacy, and access management are included in the widespread cloud-based storage and hosting services offered by the likes of Amazon® and Microsoft®.
These functions were once the responsibility of the DBA. Today, a low-cost automated security feature can identify and respond to the common security threat of human error, whether it’s highlighting a vulnerability in newly-written code or alerting the DBA to an attack.
A new role and responsibilities
Humans still have a significant part to play in data security in terms of planning and implementing an effective security strategy. An empowered DBA who’s liberated from the humdrum day-to-day tasks of database administration has the time and space to take a holistic view of the data in his or her charge.
They can make informed decisions as to which data assets require a particular level of security and privacy—and can design the database with the necessary safeguards and access rights in place.
This is one of the positive impacts of automation: freeing the team to do advanced, human-based tasks such as architecture, design, and analytics. That’s why the role and responsibilities of DBAs today increasingly focus more on application processing and data analytics.
The position formerly known as DBA
The term “database administrator” is itself open to a range of interpretations. The BLS has no way of knowing if other organisations and companies share its definition of a DBA’s role and responsibilities.
From my experience, the DBA’s focus is—or rather, used to be—data recovery. But this is no longer the case now that data recovery is an automated feature within a larger hosted cloud or Data as a Service (DaaS) bundle.
The changes resulting from the expansion of automation, plus the increasing reliance on cloud, mean that the term “data professional” is a far more accurate way to describe the position formerly known as DBA.
These people are uniquely positioned to see and understand how data moves in and out on a daily basis. They’re able therefore to take on other interconnected roles such as developer, architect, business analyst, or data scientist.
How to become a “data professional”
So, what’s the best advice for someone who wants to become a data professional in 2019?
Firstly, don’t specialise in anything that’s now available as an automated feature in Microsoft Azure or AWS®. The rapid pace of change means deep knowledge of these features is nice-to-have, but won't be necessary as time goes on.
Secondly, focus on learning tasks humans will need to do for a longer time, including data security, design, architecture, and analytics. There’s a huge demand worldwide for those with analytics in their skillset, and that’s only set to increase.
Thanks to automation, the role of the DBA is evolving into a blend of developer and data architect. Many of the tasks that DBAs oversee today, such as backups, restores, security, configuration, and query tuning, can be (and are being) automated away.
Time to focus on the things we’re good at
Far from replacing DBAs, automation is instead changing the role and liberating these professionals to do more than was previously possible. Thanks to breakthroughs in automation, we can now let tools and systems take on the tasks that they’re designed to be good at, such as backup and restore, security, configuration, and query tuning.
In turn, this means that humans can focus on the things they’re good at: asking questions, analysing and interpreting results, and arriving at conclusions. The result? Data professionals can move up the value chain and spend their time working with and analysing data to identify trends and extract insights to help organisations make informed choices and decisions. Being skilled and proficient in this area can set an individual up for the next twenty years.
Thomas LaRock, Head Geek, SolarWinds
Image Credit: The Digital Artist / Pixabay
Database design is a process that takes place early in the development life cycle of a new system. It’s an important step that shouldn’t be rushed, because bad design practices can cause businesses headaches and extra costs down the line, especially when changes are required to support business growth.
Good database design pays off. Here are some common database design problems.
1. Poor planning
Would you start building a house without a plan? The same goes for databases. It’s easy to get carried away when a project has received the green light from management. It’s best to take your time and get it right the first time to avoid costly disasters and unexpected downtimes due to a sluggish database.
Some of the questions that need to be answered: do developers need to be engaged to build software that interfaces with the database? What will the scope of the project entail? How long is the entire project intended to run?
An experienced database professional must be consulted to gather business requirements and to understand how the business works to establish how the database solution will meet those needs.
2. Communication issues between the business and tech
The communication between the business and the technical specialists must be sound to reduce the risk of requirements not being sufficiently understood. Both sides must not be afraid to ask questions and seek clarification. The purpose of the database is to store data so it can be accessed later.
A good database designer will know how to assess business requirements and understand what data needs to be represented, how it’s going to be accessed and at what rate, and what the operational volume will be. To do this, good communication must exist between the business and the designer.
3. A poorly documented database
Another common database design problem, one that ties into poor planning, is a lack of quality documentation throughout the development lifecycle. This includes comments within the database itself, for example within stored procedures. If sound documentation isn’t produced, making changes becomes a problem, because developers have no reference material to draw from.
The planning phase should involve appropriate documentation and database diagrams such as an entity relationship diagram depicting the central tables and columns.
4. Ignoring Normalization
Normalization is a set of rules for how data in a database is organized and stored. It is a process that should be planned before a database is built to ensure its longevity, so the database doesn’t require too much maintenance or waste excessive disk space. Normalization supports the idea that every table should have a purpose and should cater to that purpose only.
Normalization, to the third normal form, if possible, keeps databases as efficient as they can be. Creating a normalized database eliminates duplicate or redundant data and inconsistent dependencies.
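To make this concrete, here is a minimal sketch using Python's built-in sqlite3 module; the customer and order tables are invented for illustration. In a normalized design, each customer fact is stored exactly once, so an update touches a single row and can never leave the data inconsistent:

```python
import sqlite3

# Hypothetical customer/order schema, normalized so that customer facts
# live in one table and are referenced by key, instead of being
# repeated (and possibly diverging) on every order row.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT NOT NULL)""")
con.execute("""CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    item        TEXT NOT NULL)""")

con.execute("INSERT INTO customers VALUES (1, 'Ada', 'ada@example.com')")
con.executemany("INSERT INTO orders VALUES (?, 1, ?)",
                [(1, "widget"), (2, "gadget")])

# The email changes in exactly one place ...
con.execute("UPDATE customers SET email = 'ada@new.example' WHERE customer_id = 1")

# ... yet every order immediately sees the new value via the join.
rows = con.execute("""SELECT o.order_id, c.email
                      FROM orders o JOIN customers c USING (customer_id)
                      ORDER BY o.order_id""").fetchall()
print(rows)  # [(1, 'ada@new.example'), (2, 'ada@new.example')]
```

A denormalized design that repeated the email on every order row would need two updates here, and a missed one would leave the rows silently disagreeing.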
5. Redundant tables and fields
If a database isn’t planned properly it can lead to redundant tables and fields, which can be a nightmare for developers to maintain; this is why normalization is so important. Database redundancy is where the same data is stored in two or more tables within a database. This means that if that data needs to be updated, it must be updated in multiple places, which can introduce errors and data inconsistency.
6. Poor naming standards
It may sound obvious, but names given to tables and fields are important. Having consistent naming conventions makes it easier to maintain a database or to expand the database and add more functionality. Often when database design and supporting documentation is rushed naming conventions go out the window. It may seem a small thing, but improper naming conventions increase risk and costs in database administration resources.
7. Bad referential integrity
One of the best tools that a database can provide is the enforcement of referential integrity. Every table should be given a primary key. When one table’s primary key is referenced by a column in another table, that column is called a foreign key, and the dependency should be enforced within the database to ensure data integrity. It is essential to the health of the database that referential integrity is set up correctly early in the planning phase.
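As an illustration, again using Python's sqlite3 module with invented department and employee tables, an enforced foreign key makes the engine reject an orphaned row outright. Note that SQLite, unlike most database servers, only enforces foreign keys when the pragma is switched on:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked to
con.execute("CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE employees (
    emp_id  INTEGER PRIMARY KEY,
    dept_id INTEGER NOT NULL REFERENCES departments(dept_id))""")

con.execute("INSERT INTO departments VALUES (1, 'Engineering')")
con.execute("INSERT INTO employees VALUES (100, 1)")  # fine: department 1 exists

try:
    con.execute("INSERT INTO employees VALUES (101, 99)")  # no department 99
    rejected = False
except sqlite3.IntegrityError as err:
    rejected = True
    print("rejected:", err)  # FOREIGN KEY constraint failed

print(rejected)  # True: the engine refused to store an orphaned employee
```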
8. Insufficient indexing
Using database indexes increases database performance, particularly for larger databases with complex queries and stored procedures that read, write, and update data. An index is a way of sorting data so it can be accessed quickly when a request is made. There is a downside, however: indexes take up storage space and add overhead to writes. In the planning phase, once tables and stored procedures have been defined, an experienced database professional can assess what indexing is required and strike the right balance to have the database running at an optimum level.
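The effect is easy to observe with SQLite's EXPLAIN QUERY PLAN; the table and index names below are invented for the sketch. Before the index exists, the engine scans every row; afterwards it searches the index directly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)")
con.executemany("INSERT INTO events (user_id, payload) VALUES (?, ?)",
                [(i % 100, "x") for i in range(10_000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite intends to execute the query.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM events WHERE user_id = 42"
before = plan(query)
print(before)  # SCAN: every one of the 10,000 rows is examined

con.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan(query)
print(after)   # SEARCH ... USING INDEX idx_events_user: a direct lookup
```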
9. Not taking advantage of database engine features
A database engine is a powerful piece of software. A good database engineer will know how to harness this power to help guarantee that information in the database is correct and secure. Some of the database engine tools include stored procedures to access data, views to provide a quick and efficient way to look at data, functions to allow for complex calculations and triggers to automate changes when an event occurs.
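Here is a small sketch of two of these engine features, a view and a trigger, using Python's sqlite3 module; the account tables are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE audit_log (account_id INTEGER, old_balance REAL, new_balance REAL);

-- A view: a stored, reusable way of looking at the data.
CREATE VIEW overdrawn AS
    SELECT id, balance FROM accounts WHERE balance < 0;

-- A trigger: the engine records every balance change automatically.
CREATE TRIGGER log_balance AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")

con.execute("INSERT INTO accounts VALUES (1, 100.0)")
con.execute("UPDATE accounts SET balance = -25.0 WHERE id = 1")

overdrawn_rows = con.execute("SELECT * FROM overdrawn").fetchall()
audit_rows = con.execute("SELECT * FROM audit_log").fetchall()
print(overdrawn_rows)  # [(1, -25.0)]  -- the view reflects the change
print(audit_rows)      # [(1, 100.0, -25.0)]  -- the trigger logged it
```

The application code never had to remember to write the audit row; the engine guaranteed it.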
10. Not considering the best way the database should be accessed and secured
Early on a database designer should consult with the business and other technical specialists on the best way for the database to be accessed. Does it need a front-end application to be built? This will also help to determine the best database software (for example SQL Server) for the overall IT solution. Database security should be considered in the design phase as well as user permissions and access levels.
Database design is an important step in the development lifecycle of any IT solution. Doing it right can save time and money. Talk to the experts at Everconnect to find out how they can support good database design. | <urn:uuid:46d98bb7-735b-49e0-9363-b58a800183df> | CC-MAIN-2022-40 | https://everconnectds.com/blog/10-common-mistakes-in-database-design/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337504.21/warc/CC-MAIN-20221004121345-20221004151345-00063.warc.gz | en | 0.914345 | 1,160 | 2.625 | 3 |
A very interesting observation has come up in a survey conducted by the Microsoft Corporation. As per the survey, 20% of college students in the US feel that their high school maths and science courses did not equip them well for their college courses.
The survey, commissioned by the Redmond-based operating system giant, put questions to parents as well as students about science, technology, engineering and math (STEM) education.
Microsoft said that in the coming 7 years there will be 1.2 million job openings in the STEM field, but at the same time there will be a scarcity of qualified college graduates to fill this huge number of posts.
Some more surveys were conducted on the same or related matters, and the findings say that 90% of parents in the US think that STEM education should be prioritised, but only 49% think it should be the top priority.
Further, of the 50% of parents who would like to see their child pursue STEM studies, only 24% are ready to pay extra money to help the child succeed in science and math courses. The survey among students revealed that 64% of female and 49% of male students feel that K-12 STEM education prepared them well for their college courses.
[Image courtesy: Microsoft Survey (PPTX)]
The UK public wouldn't really feel comfortable driving alongside autonomous vehicles, a new study by Goodyear and the London School of Economics says. More than half (55 per cent) of UK drivers feel that way, compared to 39 per cent in 10 countries in Europe, also part of the survey. More than a quarter (28 per cent) would, on the other hand, feel comfortable, similar to the rest of Europe (30 per cent).
The main concern is with security, followed by issues of principle. More than four fifths (83 per cent) of respondents fear 'autonomous cars could malfunction'. In other ten countries, 71 per cent of respondents had the same fears. Almost two thirds (64 per cent) think humans should be in control of their vehicles, and 78 per cent believe the car should have a wheel.
“Our study explores how the road might evolve with the arrival of Autonomous Vehicles,” says Carlos Cipollitti, Director of the Goodyear Innovation Centre Luxembourg.
"Enabling a "social interaction" between human drivers and AVs will be a crucial part of this process. As an active contributor to the debates on road safety and innovation, Goodyear is exploring some of the key areas that are shaping the future of mobility. We hope that the insights generated by this research will help all relevant stakeholders to work together towards a successful introduction of AVs.”
UK respondents are aware of the security advantages autonomous vehicles can bring, with 41 per cent agreeing “most accidents are caused by human error, so autonomous vehicles would be safer”. Less than a quarter (22 per cent) disagrees. Almost half (44 per cent) believe machines would be better drivers, as they have no emotions, and 65 per cent think machines don’t have the common sense to interact with human drivers. Ten per cent disagree.
Image source: Shutterstock/LifetimeStock
A SAS Data Set is a collection of observations (rows) made on some variables (columns). A single observation consists of one or more values, one for each variable; each value may be numeric or character. A spreadsheet is a very simple, easy-to-use way of organizing data; a SAS Data Set is a more powerful and complex tool for the same job. Typically the values in an observation are measurements made on a certain type of animal or experimental unit under consideration, or they could be observational data. The observations are usually saved so that they can be used later by SAS programs for statistical analyses or in some other way.
Why SAS Data Set?
- Data organization:
Improve performance and efficiency by performing data analysis using SAS Data Sets. SAS is a leader in statistical analysis, business intelligence, data mining, text mining, and advanced analytics. As a result, thousands of large enterprises around the world use SAS.
SAS Data Sets allow researchers and data scientists to work together more easily. Hence, it allows them to quickly and easily share large data sets with one another.
Reduce the time to market by using SAS Data Sets in new and innovative ways. By filing data sets, you can add value to your organization and keep up with competitors by streamlining your processes.
Build better products, improve services, and develop new methodologies by viewing the same data from different perspectives with SAS Data Sets.
Take advantage of multiple opportunities in a single, flexible environment by gaining faster access to accurate information with SAS Data Sets.
SAS Data Sets help to reduce the margin of error by allowing you to understand your data more thoroughly. For example, with SAS you can identify and remove data errors before they impact your organization.
As the threat landscape changes and advances with time, being able to address the most common types of cyber security vulnerabilities has gained the utmost importance. In this article, we will consider various types of cyber security vulnerabilities and how you can mitigate them.
As information becomes an organization’s most important asset, cyber security gains increasingly more priority. In order to successfully conduct your business and preserve the hard-earned reputation of your organization, you need to be able to protect your data from data breaches, malicious attacks, hackers and other threats.
The average data breach cost in 2021 is $4.24 million, a 10% rise from 2020 findings. This also represents a new data breach cost peak in the entire history of the IBM and Ponemon Institute report. This is especially relevant as 90% of web applications are vulnerable to hacking, and 68% of those are susceptible to the breach of sensitive data.
With the recent advancements in technology and rising trend of remote working, organizations have an increased amount of vulnerabilities, such as end-points. We will take a closer look at the most common types of cyber security vulnerabilities and what you can do to alleviate them.
But first, we need to define what security vulnerabilities are in cyber security.
In order to define a cyber security vulnerability, first, we need to understand what a vulnerability is. A vulnerability, in broad terms, is a weak spot in your defense.
**Every organization has multiple security measures that keep intruders out and important data in.** We can think of such security measures as the fence that circumvents your yard. Vulnerabilities are cracks and openings in this fence.
Through security vulnerabilities, an attacker can find their way into your systems and network, and even extract sensitive information. Bearing in mind that a chain is as strong as its weakest link, we can assume that the security posture of your organization is as strong as its vulnerable spots.
Now having defined a vulnerability, we can narrow down our definition to cover cyber security vulnerabilities. The term cyber security vulnerability refers to any kind of exploitable weak spot that threatens the cyber security of your organization.
For instance, if your organization does not have a lock on its front door, this poses a security vulnerability, since one can easily come in and steal anything valuable.
Similarly, if your organization does not have proper firewalls, an intruder can easily break into your networks and network assets and steal important data. Since the assets under threat are digital, not having proper firewalls poses a cyber security vulnerability.
Having defined a cyber security vulnerability, we must also understand the difference between a system vulnerability, a threat and an exploit. Otherwise, we cannot perceive what we are encountering, and therefore will not be able to manage cyber security risks effectively.
Exploit: Once a cyber attacker finds a weak point, exploitation is the next step by using a vulnerability to mount an attack. An exploit is a piece of code, or a program, to benefit from a security vulnerability.
Threat: A threat is a hypothetical cyber event where a cybercriminal attempts to take advantage of a vulnerability. It is a malicious act that aims to damage or steal data, or disrupt your organization's digital assets. Cyber threats include computer system viruses, data breaches, Denial of Service (DoS) attacks, and other attack vectors.
Vulnerability: To define once again, a security vulnerability is an error, flaw or weakness in a system that could be leveraged by a cybercriminal to compromise network security.
Of course, there are various types of security vulnerabilities. Let’s take a closer look at them now.
According to the CWE/SANS Top 25 List, there are three main types of security vulnerabilities:
Faulty defenses refer to porous defense measures that fail to protect your organization from intruders. There are various defense techniques including authorization, encryption and authentication.
When employed properly, these techniques have the ability to protect your organization from a great deal of cyber attacks. On the other hand, with poor implementation, they create an illusion of security while exposing your organization to grave risks.
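As one example of a properly implemented authentication defence, passwords should be stored as salted, slow hashes rather than as plaintext or a fast unsalted digest. Here is a minimal sketch using only the Python standard library; the iteration count is illustrative, and production systems should follow current guidance:

```python
import hashlib, hmac, os

ITERATIONS = 200_000  # illustrative; tune to current security guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a random salt and a slow PBKDF2 hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("letmein", salt, digest))                       # False
```

Done poorly (plaintext storage, no salt, a fast hash), the same feature creates exactly the illusion of security described above.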
Resource management practices include transferring, using, creating and even destroying the resources within a system. When management of resources is poor or risky, your organization is prone to have vulnerabilities like path traversal, use of potentially dangerous functions, buffer overflow, and much more.
When the interaction between components of your system and/or network is insecure, your organization is exposed to many threats including SQL injection, open redirect, cross-site scripting, and much more.
In order to ensure that your organization is free from such vulnerabilities, it is critical to pay the utmost attention to how data circulates across your networks and systems. If you can secure the circulation of data, most aforementioned vulnerabilities and threats can be considered solved. Yet you must also consider unique vulnerabilities and develop appropriate solutions for each.
There are specific cyber security vulnerabilities that are targeted by attackers more often, especially computer software vulnerabilities. Below you can find a list of the top three cyber security vulnerabilities that have caused the most harm to organizations in this decade.
In order to pose as the original user, malicious attackers can hack user sessions and identities by compromising authentication credentials. In the past, multi-factor authentication was vastly popular, but due to its difficulties in use, password authentication prevailed.
Two-factor authentication, on the other hand, is still a widely implemented security process that involves two methods of verification. One method is usually password verification. Frequently used types of authentication technology are username/password, one-time password and biometric authentication.
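The one-time passwords used in two-factor authentication are typically generated with the HOTP and TOTP algorithms standardised in RFC 4226 and RFC 6238. A minimal sketch using only the Python standard library, checked against the shared secret from the RFC test vectors:

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at=None, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP keyed by the current 30-second time window."""
    now = time.time() if at is None else at
    return hotp(secret, int(now // step))

secret = b"12345678901234567890"   # the shared secret from the RFC test vectors
print(hotp(secret, 0))             # 755224 (RFC 4226, counter 0)
print(totp(secret, at=59))         # 287082 (RFC 6238, T = 59s, 6 digits)
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is no longer enough to pass authentication.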
An injection flaw is a vulnerability which allows an attacker to relay malicious code through an application to another system. This can include compromising both backend systems as well as other clients connected to the vulnerable application.
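A minimal sketch of an SQL injection flaw and its standard fix, using Python's sqlite3 module with an invented users table. The vulnerable version splices user input into the SQL text, so a crafted value rewrites the query; the parameterized version treats the same input strictly as data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
con.executemany("INSERT INTO users VALUES (?, ?)", [("alice", 1), ("bob", 0)])

user_input = "nobody' OR '1'='1"   # a classic injection payload

# Vulnerable: the input becomes part of the SQL text itself.
unsafe_sql = f"SELECT name FROM users WHERE name = '{user_input}'"
leaked = con.execute(unsafe_sql).fetchall()
print(leaked)    # every user is returned!

# Safe: a parameterized query keeps the input out of the SQL text.
filtered = con.execute("SELECT name FROM users WHERE name = ?",
                       (user_input,)).fetchall()
print(filtered)  # [] -- no user is literally named "nobody' OR '1'='1"
```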
Security misconfiguration gives attackers a chance to gain unauthorized access to some system data or functionality. Generally, such flaws evolve into a complete system compromise.
The business impact depends on the protection needs of the application and data.
**Logsign SOAR empowers your SOC team to achieve a delicate balance between automated and manual processes for vulnerability management.** It assists your team in:
- Adding manual information about vulnerabilities
- Using contextual information about assets and vulnerabilities
- Enriching alerts with endpoint information and CVE data
- Adding information about vulnerabilities to an incident
- Calculating the risk and impact of an incident
- Allowing the SOC team to remain in control of mitigation measures and patch management
Now is the time to consider advanced security practices such as Logsign SOAR and manage security vulnerabilities effectively!