Two categories of protocol exist at the network layer: routed and routing.
A routed protocol is a Network Layer protocol that is used to move traffic between networks. Routed protocols allow a host on one network to communicate with a host on another, with routers forwarding traffic between the source and destination networks. IP, IPX, and AppleTalk are all examples of routed protocols.
Routing protocols are used by routers to exchange reachability information and determine the best paths along which routed protocols are forwarded. RIP, IGRP, EIGRP, OSPF, IS-IS, and BGP are all examples of routing protocols.
Routing protocols can be classified in two ways:
- Based on the algorithm used: distance vector or link state
- Based on their scope: IGP (interior) or EGP (exterior)
Distance vector versus link state
Distance vector algorithms use the Bellman-Ford algorithm. This approach assigns a number, the cost, to each of the links between each node in the network. Nodes will send information from point A to point B via the path that results in the lowest total cost (i.e. the sum of the costs of the links between the nodes used).
The algorithm operates in a very simple manner. When a node first starts, it only knows of its immediate neighbours, and the direct cost involved in reaching them. Each node, on a regular basis, sends to each neighbour its own current idea of the total cost to get to all the destinations it knows of. The neighbouring node(s) examine this information, and compare it to what they already ‘know’; anything which represents an improvement on what they already have, they insert in their own routing table(s). Over time, all the nodes in the network will discover the best next hop for all destinations, and the best total cost.
When one of the nodes involved goes down, those nodes which used it as their next hop for certain destinations discard those entries, and create new routing-table information. They then pass this information to all adjacent nodes, which then repeat the process. Eventually all the nodes in the network receive the updated information, and will then discover new paths to all the destinations which they can still reach. Examples of distance vector protocols are RIP, IGRP, and EIGRP.
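The per-neighbour update step described above can be sketched in a few lines of Python. This is an illustrative toy, not any real protocol's behaviour (no split horizon, timers, or wire format):

```python
def merge_vector(own_table, neighbour, link_cost, advertised):
    """Merge a neighbour's advertised distance vector into our table.

    own_table maps destination -> (total_cost, next_hop);
    advertised maps destination -> the neighbour's own cost to reach it.
    Returns True if anything improved (so we should re-advertise).
    """
    changed = False
    for dest, cost in advertised.items():
        candidate = link_cost + cost          # total cost via this neighbour
        if dest not in own_table or candidate < own_table[dest][0]:
            own_table[dest] = (candidate, neighbour)
            changed = True
    return changed

# A starts out knowing only its directly connected neighbour B (link cost 1).
table_a = {"B": (1, "B")}
# B advertises that it can reach C at cost 2 and D at cost 5.
improved = merge_vector(table_a, "B", 1, {"C": 2, "D": 5})
```

Running the same advertisement twice changes nothing the second time, which is exactly why the network eventually converges.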
When applying link state algorithms, each node uses as its fundamental data a map of the network in the form of a graph. To produce this, each node floods the entire network with information about what other nodes it can connect to, and each node then independently assembles this information into a map.
Using this map, each router then independently determines the least-cost path from itself to every other node using a standard shortest paths algorithm such as Dijkstra’s algorithm. The result is a tree rooted at the current node such that the path through the tree from the root to any other node is the least-cost path to that node. This tree then serves to construct the routing table, which specifies the best next hop to get from the current node to any other node. An example of a link-state protocol is OSPF.
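A minimal sketch of that computation, using Python's standard heapq module on a hypothetical four-router topology (real OSPF implementations are far more involved):

```python
import heapq

def dijkstra(graph, source):
    """Least-cost paths from source. graph: node -> {neighbour: link_cost}.
    Returns a routing table: destination -> (total_cost, next_hop)."""
    dist = {source: 0}
    next_hop = {}
    queue = [(0, source, None)]        # (cost so far, node, first hop on path)
    visited = set()
    while queue:
        cost, node, hop = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            next_hop[node] = hop
        for neigh, weight in graph.get(node, {}).items():
            new_cost = cost + weight
            if neigh not in dist or new_cost < dist[neigh]:
                dist[neigh] = new_cost
                # the first hop is the neighbour itself when leaving the source
                heapq.heappush(queue, (new_cost, neigh, neigh if hop is None else hop))
    return {d: (dist[d], next_hop[d]) for d in next_hop}

topology = {
    "R1": {"R2": 1, "R3": 4},
    "R2": {"R3": 1, "R4": 5},
    "R3": {"R4": 1},
    "R4": {},
}
table = dijkstra(topology, "R1")
```

Note how R1 reaches every destination through R2, even R3, because the two-hop path (cost 2) beats the direct link (cost 4).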
IGP versus EGP
An IGP (Interior Gateway Protocol) is a protocol for exchanging routing information between gateways within a single autonomous system. RIP, EIGRP, and OSPF are examples of IGPs.
Exterior Gateway Protocol (EGP) is a protocol for exchanging routing information between two neighbor gateway hosts in a network of autonomous systems. EGP is commonly used on the Internet to exchange routing table information. The most important EGP is BGP. The Border Gateway Protocol (BGP) is the core routing protocol of the Internet. It works by maintaining a table of IP networks or ‘prefixes’ which designate network reachability among autonomous systems (AS).
Burglars could spy on you using your home security cameras, according to researchers from Queen Mary University of London (QMUL) and the Chinese Academy of Science. Criminals could tell when someone was at home without even watching the footage, because the cameras upload data in unencrypted form and the volume of that traffic increases when a camera records motion. This type of information could allow criminals to differentiate various types of motion, such as sitting or running. The research utilized data from a large Chinese manufacturer of connected Internet Protocol (IP) cameras. The study was the first to investigate the privacy risks of video streaming traffic generated by such cameras and was published at the IEEE International Conference on Computer Communications.
The nature of the study
The joint researchers analyzed over 15.4 million streams of data from 211,000 active home security camera users of both free and paid services. The devices used in the study were IP home security cameras directly connected to the internet and which do not require a computer to upload data. Some of the brands investigated include 360, Hikvision, Nest, Netgear, and Xiaomi.
Privacy risks found in the home security cameras
The associated privacy risks originate from the operational design of the home security cameras. To keep production costs low, the cameras are designed to upload data every time motion is detected, and the volume of data uploaded in unencrypted form increases when motion occurs.
This creates a predictable pattern that allows third parties to infer when someone is at home without ever watching the footage.
The attackers could monitor the traffic from home security cameras over an extended period and discover the pattern. Using this information, they could predict when the homeowner was most likely to be in the house.
Dr. Gareth Tyson, a Senior Lecturer at Queen Mary University of London, said that an attacker would need only modest technical knowledge to monitor the data, and that there was even a chance someone could develop a program for the purpose and sell it online.
The researchers noted that they had not witnessed this form of attack in the wild, but it remains a possibility.
The study authors found that the privacy risks were present even on brands such as Xiaomi and Google-owned Nest.
Mitigating the privacy risks
The researchers said vendors should randomly inject data into their systems to mitigate the privacy risks stemming from the predictable pattern generated by motion detection. They were also working on ways to maintain video clarity after injecting this electronic noise into the cameras' traffic.
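The proposed mitigation amounts to traffic padding. Below is a toy sketch of the idea (the bucket size and keep-alive behaviour are assumptions, not what any vendor ships): pad every upload to a fixed-size bucket so a quiet keep-alive and a motion-triggered clip produce the same on-the-wire volume.

```python
import secrets

BUCKET = 64 * 1024  # illustrative: pad every upload to a multiple of 64 KiB

def pad_upload(payload: bytes) -> bytes:
    """Append random bytes so the upload size only reveals its bucket,
    not the exact, motion-dependent volume of video data."""
    padding = (-len(payload)) % BUCKET
    return payload + secrets.token_bytes(padding)

quiet_clip = pad_upload(b"\x00" * 10_000)   # idle keep-alive upload
motion_clip = pad_upload(b"\x00" * 60_000)  # motion-triggered upload
```

An eavesdropper now sees two identically sized uploads, at the cost of extra bandwidth, which is precisely the trade-off the researchers were exploring.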
The scientists advocated for the development of intelligent home security cameras that understood the privacy risks associated with predictably uploading data. Such cameras could assess the level of risk associated with the motion detected and only upload when threats were found. For example, the camera could ignore motion associated with pets or children and only upload when a human intruder was detected.
Home security cameras have become very popular, with the global market expected to reach $1.3 billion by 2023. The growing number of home security cameras will widen the attack surface, putting more people at risk. While the research did not dwell on the risks of uploading data in unencrypted form, IP camera vendors should also address that issue to reduce the possibility of live video footage being intercepted.
The Complete Guide to IoT Security and What Every Business Owner Needs to Know
Technology plays an enormous role in our lives: our eyes are always on our phones, and we turn on the TV right after we get home – we may even consider, in a certain way, that electronic gadgets are part of our family, like Mildred from Fahrenheit 451, Bradbury’s famous dystopia. We must not forget, though, that although technology has contributed hugely to the evolution of human civilization, our devices can also be a source of threats, especially if they are connected to the Internet. Wi-Fi routers, smart TVs, smart cameras, smart locks, smart lights, voice assistants, some medical devices, and Internet-connected cars all fall under the category of the so-called Internet of Things, and may very easily become targets for cybercriminals.
What Is IoT Security?
The Internet of Things (IoT) describes the physical objects that are embedded with software, sensors and other technologies that allow them to connect and exchange data with other devices and systems over the Internet.
Consequently, IoT security refers to all the measures we can take to ensure the (cyber)security of these devices, while also keeping in mind the various dangers that threaten them.
Both manufacturers and consumers prefer these devices. Consumers like them for the added functionality (it’s easier to watch Netflix if the TV already has an Internet connection).
Manufacturers, however, like IoT devices because they allow them to silently collect information about how consumers use their products. As a result, they can then tailor future products around these usage patterns.
The Rise of IoT Devices
The emergence of IoT has been fostered by an impressive series of factors that include:
Connectivity. A host of network protocols for the Internet now easily connect sensors to the cloud and to other “things”, streamlining data transfer.
Access to low-cost and low-power sensor technology. Nowadays, manufacturers use affordable and reliable sensors.
Cloud platforms. Cloud platforms’ increase in availability enables both businesses and consumers to benefit from their advantages, without having to manage them.
Machine learning and analytics. The advances in machine learning and analytics plus the vast amounts of data stored in the cloud allow companies to gather insights faster and more easily.
Rise of conversational artificial intelligence (AI). IoT devices (like the digital personal assistants Alexa, Cortana and Siri) can now benefit from natural-language processing due to advances in neural networks.
As i-SCOOP shows, “In 2020 the number of IoT endpoints is forecasted to reach 5.8 billion endpoints, as mentioned a 21% increase from 2019. […] The fastest-growing segments in terms of IoT endpoints installed base: building automation, automotive and healthcare. The second-largest user of IoT endpoints is physical security, says Peter Middleton. Here building intruder detection and indoor surveillance use cases will drive volume.” Other industries use this kind of technology as well, so this growth tendency only underscores the importance of IoT security for business.
IoT endpoints 2018, 2019 & 2020 – selected segments (source and more information)
Benefits of IoT Devices
The major benefits of IoT secure devices for your business are the following:
- They increase the productivity and efficiency of business operations.
- They create new business models and revenue streams.
- They easily connect the physical business world to the digital world, which saves time and creates value.
The tricky part is that, whether we use them as home consumers or in our workplace, these devices really are convenient – they allow us to turn lights on and off remotely, unlock the front door when we are not even in the building, or get Alexa or Siri to check our calendar for us.
As Peter Milley says, in his paper Privacy and the Internet of Things,
This convenience comes at a price. The unfortunate reality is the companies making these devices, although well steeped in the challenges of manufacturing physical products, are not as well versed in software development. […] Appliance makers create back-door access for support personnel or hard-coded passwords and encryption keys to simplify manufacturing and support with little regard for security. Furthermore, they rarely take into account the need for regular patch maintenance and rely too heavily on the end-user to make security changes to their products.
IoT Security Risks
Identity and access management
Identity and access management is typically related to end-users, but it also extends to devices and applications that require network and resource access. What they have access to and the legitimacy of their request in the first place must always be verified, because devices left exposed in various locations can be easily attacked and used by cybercriminals to infiltrate your organization.
Data is crucial for IoT operations, and it’s also critical that its integrity remains intact. Take measures to ensure that your data has not been manipulated, whether at rest, in transit, or in use. Don’t forget about personal data either: this kind of information, and any data generated by an IoT device, must be protected through encryption, whether in transit or at rest.
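One lightweight way to detect manipulation is a keyed integrity tag. The sketch below uses Python's standard hmac module; the device key and message format are hypothetical, and in practice the tag complements (rather than replaces) TLS and at-rest encryption:

```python
import hashlib
import hmac

DEVICE_KEY = b"per-device-provisioning-key"  # hypothetical; unique per device

def integrity_tag(message: bytes) -> str:
    """Compute a keyed tag; any tampering with the message changes the tag."""
    return hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    # compare_digest avoids leaking information through comparison timing
    return hmac.compare_digest(integrity_tag(message), received_tag)

reading = b'{"sensor": "door-cam", "motion": true}'
tag = integrity_tag(reading)
```

Without the device key, an attacker who alters a reading in transit cannot produce a matching tag, so the change is detected.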
The sheer number of devices
Another aspect that threatens IoT security is the sheer number of devices in use. Every new system or device that is integrated provides another point of access for potential attackers, which raises the stakes exponentially.
The simplicity of the devices
Simplicity and ease of use are crucial principles within the IT and electronics industry. Every software and device out there is designed to be as easy to use as possible, so as to not confuse consumers and discourage them from using the product.
Unfortunately, this often means that some products cut corners, and don’t implement security features consumers might find “too clunky”.
IoT devices are used in more and more sectors, and even the simplest devices (like a fish-tank thermometer in a casino, which gathered tens of GB of personal data and exposed it to hackers) can be potential gateways to private segments of a company’s network.
Poor software updates
What’s more, many Internet of Things creators don’t even patch or update the software that came on their devices. If your device has a software vulnerability (nearly 100% chance that it does), there’s little you can do to prevent an attacker from exploiting it without help from the manufacturer.
Insecure user interface
A device’s user interface is generally the first place a malicious hacker will look for vulnerabilities. For instance, an attacker might abuse the “I forgot my password” feature to reset the password, or at least find out your username or email.
A properly designed device should also lock out a user from attempting to log in too many times. This stops dictionary and brute force attacks that target passwords and greatly secures your device credentials.
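A minimal sketch of such a lockout counter (the thresholds and in-memory storage are illustrative; a real device would persist this state and typically also throttle per source IP):

```python
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 300

class LoginGuard:
    """Track failed logins per user and lock the account after too many."""

    def __init__(self):
        self.failures = {}  # username -> (failure count, time of last failure)

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        return count >= MAX_ATTEMPTS and (now - last) < LOCKOUT_SECONDS

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

    def record_success(self, user):
        self.failures.pop(user, None)  # a good login clears the counter

guard = LoginGuard()
for _ in range(5):
    guard.record_failure("admin", now=1000.0)
```

After five failures the account stays locked for the cooldown window, which turns a millions-of-guesses brute force attack into a handful of guesses per hour.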
In other cases, the password might be sent from the device to the central server in plain text, meaning it isn’t encrypted. Pretty bad if someone is listening in on the device and reading all of your data.
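One common fix is a challenge-response exchange, so the password itself never crosses the wire. The sketch below is a simplified illustration using Python's standard hmac module; real protocols such as SCRAM also salt and iterate the stored secret, and TLS remains the baseline protection:

```python
import hashlib
import hmac
import secrets

def server_challenge() -> bytes:
    """The server sends a fresh random nonce with every login attempt."""
    return secrets.token_bytes(16)

def client_response(password: str, nonce: bytes) -> str:
    """The device answers with an HMAC over the nonce, never the password."""
    return hmac.new(password.encode(), nonce, hashlib.sha256).hexdigest()

def server_verify(known_password: str, nonce: bytes, response: str) -> bool:
    expected = hmac.new(known_password.encode(), nonce, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

nonce = server_challenge()
response = client_response("correct horse battery", nonce)
```

An eavesdropper who captures the response learns nothing reusable, because the next login uses a different nonce and therefore a different response.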
The physical protection and disposal of connected devices
Anyone with physical access to some products can extract plaintext passwords, private keys, and root passwords from them. As companies adopt and upgrade IoT, it’s also important to think about protection during the use and disposal of old or defective smart devices.
Malware on an industrial scale
Hackers are developing ever more dangerous kinds of malware, so companies must not forget to secure the industrial control systems that are connected to, and depend on, IoT devices.
Innovation always includes the possibility of opening potential loopholes for data protection. The fines levied for GDPR exposure show that the European Commission regulators are very serious when it comes to ensuring that personal data remains private. There are some new security laws on the horizon that promise to hold device manufacturers accountable for vulnerable entry points, yet companies need to take more responsibility for the imperfections within their own IT architecture.
Inertia is, generally, one of the greatest cybersecurity threats of today. Technology constantly evolves and hackers devise ever more elaborate strategies to get what they want, yet many companies still rely on security tools developed decades ago.
Already, the safety systems of a Saudi Arabian oil refinery have been targeted by the Triton industrial malware. Vast amounts of personal data have been accidentally exposed at British Airways, Marriott Hotels, and various local authority organizations. A group of hackers got access to impressive amounts of a casino’s sensitive information by using an Internet-connected thermometer in an aquarium. Don’t let anything like this happen to your company!
Moreover, researchers have proven more than once that it’s possible to physically take control of a car by breaking into apps that control onboard software. For now, this has only been done in experimental situations, but as Internet-connected cars gain ground, it’s only a matter of time until it happens to someone, somewhere.
Smart devices can be hacked in a number of ways, depending on the type of vulnerability the attacker decides to take advantage of.
The main types of attacks against IoT devices are:
Every piece of software has its vulnerabilities – it’s nearly impossible not to. Depending on the sort of vulnerability, attackers can exploit them in multiple ways.
Buffer overflows. This happens when a device tries to store too much data in a temporary storage space. The excess data then spills over into other parts of the memory space, overwriting them. If malware is hidden in that data, it can end up overwriting the code of the device itself.
Code injection. By exploiting a vulnerability within the software, the attacker is able to inject code into the device. Most often, this code is malicious in nature, and it can do a multitude of tasks, such as shutting down or taking control of the device.
Cross-Site Scripting. These work with IoT devices that interact with a web-based interface. Basically, the attacker infects the legitimate page with malware or malicious code, and then the page itself will infect the IoT device.
The most frequent and well-known malware attacks on PCs target a device’s login credentials. But recently, other types of malware such as ransomware have made their way onto IoT devices.
Smart TVs and other similar gizmos are most exposed to this kind of threat, since users might accidentally click on malicious links or download infected apps.
Password attacks such as dictionary or brute force target a device’s login information by bombarding it with countless password and username variations until it finds the proper one.
Since most people use weak passwords, these attacks are fairly successful. Not only that, but according to one study, nearly 60% of users reuse the same password. So if an attacker gets access to one device, they get access to all devices.
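A toy illustration of why default or common passwords fall immediately: a dictionary attack only has to hash a short wordlist. The wordlist and hash scheme here are deliberately simplified for illustration:

```python
import hashlib

COMMON_PASSWORDS = ["admin", "123456", "password", "qwerty", "letmein"]

def dictionary_attack(stored_hash, wordlist):
    """Hash each candidate until one matches the stored value."""
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
            return candidate
    return None

# A device shipped with the default password "letmein" falls instantly:
leaked_hash = hashlib.sha256(b"letmein").hexdigest()
cracked = dictionary_attack(leaked_hash, COMMON_PASSWORDS)
```

A long, random password simply never appears in such a wordlist, which is why the password-hygiene tips later in this article matter so much.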
Sniffing / Man-in-the-middle attacks
In this attack, a malicious hacker intercepts the Internet traffic that goes into and out of a smart device.
The preferred target is a Wi-Fi router, since it contains all of the traffic data sent from the network, and can then be used to control each device connected to it, even PCs or smartphones.
Spoofing works by disguising device A to look like device B. If device B has access to a wireless network, then a disguised device A will trick the router into allowing it on the network. Now that the disguised device A can communicate with the router, it can inject malware into it. This malware then spreads to all the other devices on the network.
Internet of Things devices are prime candidates for a botnet. They are both easier to hack and harder to diagnose if they’re compromised.
Once a device is enslaved, it can be used for a wide variety of cybercriminal activities, such as DDoS attacks, sending spam emails, performing click fraud (basically using the enslaved device to click an ad), and Bitcoin mining.
Mirai is the biggest IoT botnet we know about, and it was built on the backs of default passwords and usernames.
Taking control of an IoT device doesn’t sound so menacing at first glance. After all, it’s not as if a malicious hacker could poison you if he hacked your coffee maker.
But things quickly get serious if the attacker takes control of your car as you’re driving it. This isn’t even a hypothetical situation; it has actually been done, albeit by cybersecurity researchers – whitehat hackers were able to hack into a car’s braking and acceleration systems.
Some people now use smart locks to secure their homes, but ultimately they’re just software on hardware. At DEF CON 2016 (the biggest hacker conference in the world), researchers tested out 16 smart locks and proved how many of them used very simple security features such as plain text passwords. Others were vulnerable to device spoofing or replay attacks.
Smart devices process a lot of personal information, such as:
- medical data
- location data
- usage patterns
- search history
- financial information, etc.
8 Tips for Flawless IoT Security:
Pay special attention when you choose your IoT device providers
Make sure that you choose a well-known and reliable supplier, one that will likely still be around for a long time. IoT devices require regular updates, especially when new security flaws appear, so you need a manufacturer that, over the years, provides patches and fixes any security bugs that may arise.
Invest in a network analysis tool
Monitor activity and quickly identify potential security issues by investing in a network analysis tool. This way you will not risk missing instances of information being accessed without permission or at unexpected hours – both signs that can point to a breach of your company’s IT system through an IoT device.
Consider network management protocols a priority
IoT device manufacturers often include a built-in protocol that allows the monitoring of internal activity. This usually isn’t enough if you want top security, so it’s crucial for your business to choose IoT devices that support the Simple Network Management Protocol (SNMP). SNMP is a worldwide standard for network management, which allows devices to be monitored by intrusion detection and prevention systems.
Consolidate your network’s security
It’s crucial to have an up-to-date router, with a firewall enabled because it can be the first point of attack. If the router is compromised, your entire network will be vulnerable.
Make sure your IoT devices get patched up
Responsible manufacturers release security updates regularly, but you must also make sure that your IoT devices actually receive the latest patches. If you happen upon a device that no longer receives updates, consider whether its benefits outweigh the potential impact of an attack on your company.
Remove unsupported operating systems, applications and devices from the network
Improve your business’s IoT security by conducting an inventory to check which operating system each device is running. If an operating system is no longer receiving patches, the device shouldn’t be connected to the network.
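A simple inventory check can automate this. The OS names, branches, and end-of-support dates below are hypothetical, purely to illustrate the logic:

```python
# Hypothetical end-of-support dates per (OS, release branch), ISO format.
SUPPORTED_UNTIL = {
    ("cam-os", "2.x"): "2024-12-31",
    ("cam-os", "1.x"): "2020-06-30",
    ("hub-os", "3.x"): "2026-01-31",
}

def unsupported_devices(inventory, today):
    """Return devices whose OS branch is past its end-of-support date
    (or has no known support window at all)."""
    flagged = []
    for device in inventory:
        eol = SUPPORTED_UNTIL.get((device["os"], device["branch"]))
        if eol is None or eol < today:  # ISO dates compare correctly as strings
            flagged.append(device["name"])
    return flagged

inventory = [
    {"name": "lobby-camera", "os": "cam-os", "branch": "1.x"},
    {"name": "dock-camera", "os": "cam-os", "branch": "2.x"},
    {"name": "old-sensor", "os": "sensor-os", "branch": "0.9"},
]
flagged = unsupported_devices(inventory, "2022-09-01")
```

Devices the check flags are candidates for removal from the network, or at least for isolation on a separate segment.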
Narrow down internal and external port communication on your firewalls
Companies should restrict outbound communication unless that communication is strictly necessary. As Cyber Security Services says,
Ports 80 and 443, typically associated with the internet, are common services that are open from the corporate network. But 80/443 might not be required for other VLANs associated with specific device types. These two ports are known to pose significant network threats since they allow web surfing, are rarely monitored and offer an entry path into the network. It is very common for malicious hackers and identity thieves to use those ports to exfiltrate data, as they are often left open in most organizations. This could allow a backdoor into the organization.
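In code terms, this is a default-deny egress policy: a port is reachable from a VLAN only if it is explicitly allowed. The VLAN names and port sets below are hypothetical examples, not recommendations:

```python
# Illustrative egress policy: which outbound ports each VLAN may use.
EGRESS_ALLOW = {
    "office": {80, 443, 53},
    "cameras": {443},        # cameras only talk TLS to their cloud service
    "building-hvac": set(),  # no internet egress at all
}

def egress_allowed(vlan, port):
    """Default-deny: traffic passes only if the VLAN explicitly lists the port."""
    return port in EGRESS_ALLOW.get(vlan, set())
```

Unknown VLANs get an empty set and therefore no egress at all, which is the safe failure mode the quoted advice is driving at.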
Last but not least, change default passwords!
This may seem like common sense, but you must ensure that the default passwords are changed for every IoT device on your network. The new passwords should also be rotated periodically and stored in a password vault.
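Python's secrets module makes generating a strong, unique replacement password trivial (the length and character set below are illustrative):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length=20):
    """Cryptographically random password; store it in a password vault,
    and never reuse it across devices."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

new_password = generate_password()
```

Generating a fresh value per device means that even if one credential leaks, the rest of the fleet stays safe.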
Heimdal™ can also help you with IoT Security. Here’s how!
You can ensure your IoT devices’ security by choosing our Threat Prevention – Network solution, an Intrusion Prevention System that can actively protect your network and is delivered as SaaS. Threat Prevention Network can shield your organization from DNS queries to unwanted domains by stopping communication between infected devices and malicious servers, which guarantees that every device used in the perimeter of your company’s network will pose no danger to your business. Here we include any (possibly compromised) personal device that your employees or visitors use to connect to your corporate network.
Heimdal® Threat Prevention
Your organization’s protection can be also enhanced in the case of remote work with Heimdal’s Threat Prevention – Endpoint module, our proactive DNS security solution deployed at the endpoint level.
As i-SCOOP says, “despite challenges, different speeds and the fast evolutions which we will see until the first years of the next decade, the Internet of Things is here.” It is also a fact that, at the end of the day, the number of IoT security breaches is only going to grow.
IoT is one of the biggest technological trends since the smartphone and promises to be just as impactful. Unfortunately, the promise and opportunity it offers are just as tempting for cybercriminals as they are for regular customers. Consequently, securing connected devices can no longer be treated as optional – it is mandatory.
Drop a line below if you have any comments, questions, or suggestions regarding IoT security – we are all ears and can’t wait to hear your opinion!
A virtual private network (VPN) is a type of connection you can use over the internet to secure your identity as you surf the web. Many articles by our team have already stressed the importance of online anonymity, and how you can secure your internet sessions with little-to-no knowledge of networking.
A VPN is important because it acts as a barrier between you and other internet users. Although VPNs can be used for streaming services and gaming, the most common use for these networks is privacy and security. Some VPNs are free and some are paid, but all VPNs have terms and conditions that you need to abide by. Paid VPNs tend to be higher in quality and more reliable in terms of connectivity and coverage.
You may have tried to connect to a Wi-Fi network before and received the notification that it was an “Unsecured Network.” When you connect to networks that are not secure, your sensitive information is at risk and could easily be compromised. A user can hide their digital footprint on such a network by connecting to it and then enabling a VPN, ensuring that their personal information cannot be viewed by anyone else.
VPNs can be added on your iOS device by downloading VPN apps from the App Store, or by configuring your own VPN profile in your device’s Settings app. Most of the time, downloading a VPN service from the App Store is the easiest route to take.
Call LI Tech Advisors!
LI Tech Advisors is a Long Island, New York-based Managed IT service company. When you partner with LI Tech Advisors as your next IT services company, you’ll have a partner who has over 30 years of experience working with organizations across Long Island.
There are many vulnerabilities that criminals can exploit for a ransomware event, and organizations must account for them all. Well-known tactics like phishing and brute force attacks receive the most attention. However, more obscure threats like “malvertising” can also become the organization’s Achilles heel.
Malvertising is the delivery of malicious code by tricking victims into clicking on seemingly harmless online advertising. First named back in 2015, malvertising remains an effective method for gaining unauthorized access and information from unsuspecting victims and installing malware such as viruses, worms, Trojans, spyware, adware, and even ransomware. Hackers infect devices via a website pop-up or mobile device alert that prompts the user to click and visit another website. This may also be referred to as a “drive-by download,” where the victim unknowingly visits a malicious ad on the website that searches the computer for vulnerabilities.
In some instances, hackers have targeted online advertising agencies to gain access to their systems. Once compromised, their now-infected network sends out malicious ads to legitimate websites. More recently, an Eastern European malvertising campaign demonstrated hackers’ ability to adapt to evolving technology. Hackers used malvertising to deploy malware that compromised Internet of Things devices, which have grown ubiquitous and often lack basic security protections.
So how can you prevent being the victim of a drive-by download? Build good security hygiene with your networks and data.
- Keep your browsers and plugins current on updates,
- Keep your security software and your operating system updated with the latest updates,
- Use continuous monitoring to detect vulnerabilities and threats, and
- Utilize updated ad-blocking plugins to help block those malvertisements.
Cybersecurity often comes down to a single point of failure. But organizations can defeat malvertising and other threats with layered security, including updating code, monitoring endpoints, filtering spam and malware, and training employees to recognize threats.
Cybersecurity Best Practices for Remote Learners
As so many students and parents navigate remote learning again this Fall, there is an abundance of helpful online articles providing suggestions on how to create designated learning spaces, minimize distractions, set schedules, communicate with teachers, and above all, keep a positive attitude (and perhaps an adequate selection of wine for mom and dad).
With students now spending the majority of their days online, it is never too early to teach cyber awareness to the next generation. Schools can help by mitigating ongoing cybersecurity risks and providing information security best practices within their distance learning plans. Some basic best practices for students, which can also be shared with employees, are listed below:
1.) Think before you click!
Remind all students (and parents) of the risks associated with clicking links and opening attachments from unknown senders, and the need to verify the legitimacy of an email before responding or providing any information. With so much information being consumed daily from news outlets, public health agencies, and schools as their plans for the Fall semester are continuously changing, it is not uncommon for an anxious parent to quickly open an email that claims to be an update from their child’s schools without taking the time to carefully review and look for any indications of fraud. Schools should not be asking for sensitive information via email, so think twice before responding to emails requesting it.
2.) Keep ALL systems updated
With both the parents and the kids at home working and schooling on the same network, a failure or breach of one person will quickly spread to all the others on the same network. Encourage families to verify their home wireless routers, as well as the operating systems, web browsers, and applications on all devices used for remote learning are automatically installing security updates and software patches. Anti-virus and anti-spyware software should also be installed and kept up to date. Some colleges provide anti-virus software at no charge for students, faculty, and staff. Some schools may also be providing students with dedicated devices for remote learning, but many students are also using personal devices to access school materials, so it is important to keep those devices secured and up to date.
3.) Use secure Wi-Fi connections
Remind students and parents to verify their home Wi-Fi network is password-protected (and not with the default password!).Secure connections will keep strangers from easily accessing your network. Consider setting up separate virtual networks for each person or group (i.e. “work” network for parents, “school” network for the kids) to provide some separation.Even better, create separate virtual networks for your family and another “guest” one for friends or relatives who come to visit and need access.This way any malware on their device(s) cannot spread across the network and infect your family as well.
4.) Use strong passwords
Reinforce the need to create strong passwords and explain why it is important not to share passwords with anyone else. Use unique passwords for each website/application; if the same password is reused for more than one computer, account, or website, all of those systems can be compromised if one account is. For younger students, parents will want to be the ones keeping track of passwords and logging in as necessary.
5.) Check privacy and security settings
Review and adjust privacy settings on web applications, games, social media, and video-conferencing tools so student profiles are set to the strictest privacy setting, and check the safety and security settings on any new programs that are downloaded. Use parental controls to help block child accounts from accessing specific websites, applications, or functions. Parental controls can also help monitor a child’s use of connected devices and set appropriate time limits (although, I think we can all agree any guidance on the recommended amount of screen time has gone out the window in 2020!).
6.) Practice video conferencing
If students are Zoom-ing into class or using other video conferencing software, teach them appropriate online etiquette and practice the application controls including how to share camera, adjust the volume, view participants, and mute/un-mute (If I had a dollar for every time adults on conference calls fail to unmute themselves before speaking….well, perhaps we can all do our part to make the next generation more capable!) Also remind students that everyone can see them at all times so they need to behave appropriately.
7.) Preach the importance of online behavior
With younger children online much more than many parents previously allowed, it is important that they understand the implications of posting or sharing too much information. Tell them that anything they say online should be able to be said in front of their entire school, family, and church. Also be aware of the risks of cyberbullying, as well as potential online predators, and explain to your students the consequences of acting inappropriately online. It may feel like they are being “watched” less than in the physical classroom, but stress that they are still expected to act responsibly and to report any unusual behaviors or requests they receive online.
8.) Interact with students’ online environments
Parents should view and play with their kids’ online environment as much as possible in order to fully understand their online world. Knowing the web applications, games, and social media sites kids are using allows parents to make sure they are age-appropriate and are being used safely. Parents can also learn how to limit messaging or online chat and location-sharing functions, as these can expose students to unwanted contact and inadvertently disclose physical locations.
Additional guidance from our Offensive Security team below:
[Wallace]: Whether your remote learners are in a shared space, or their own rooms, help them to be mindful of what's in the background on video calls. Turn on the camera and take a good objective look at the space in open view. Practice good Operational Security by removing personally identifiable objects, and any sensitive data written or printed in the field of vision.
Along with practicing the controls for conferencing, ensure they have a good grasp on how to do some basic troubleshooting in the event that their peripherals fail, including Wi-Fi connections. Knowing how to check or replace batteries on wireless devices, re-seating headset connections, or changing devices on the fly will make for a smoother day for both you and your remote learner.
Also, don't forget to back up their important work regularly! Many of has have experienced the heartache of drive failure, at the most inconvenient of times. While some schools may be providing dedicated devices with services that automatically save to the cloud, not all remote learning platforms are the same. Ensure that your remote learner doesn't lose work by asking where important work is saved. If they are saved locally, provide a dedicated external drive or network device for your learner to back up important files to, and add it to their regular routine. | <urn:uuid:5e74a2d5-94fd-4606-8a27-e19aba886516> | CC-MAIN-2022-40 | https://www.campusguard.com/cybersecurity-for-remote-learners | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00116.warc.gz | en | 0.947254 | 1,409 | 2.84375 | 3 |
By now, you have probably heard of Apache Hadoop (opens in new tab). The name is derived from a cute toy elephant, but Hadoop is all but a soft toy. Hadoop is an open source software project that offers a new way to store and process big data.
The Hadoop software framework is written in Java for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware.
While large Web 2.0 companies such as Google and Facebook use Hadoop to store and manage their huge data sets, Hadoop has also proven valuable for many other more traditional enterprises based on its five big advantages.
Let's take a look.
1. Hadoop is scalable
Hadoop (opens in new tab) is a highly scalable storage platform, because it can store and distribute very large data sets across hundreds of inexpensive servers that operate in parallel.
Unlike traditional relational database systems (RDBMS) that can't scale to process large amounts of data, Hadoop enables businesses to run applications on thousands of nodes involving thousands of terabytes of data.
2. Cost effective
Hadoop also offers a cost effective storage solution for businesses' exploding data sets. The problem with traditional relational database management systems is that it is extremely cost prohibitive to scale to such a degree in order to process such massive volumes of data.
In an effort to reduce costs, many companies in the past would have had to down-sample data and classify it based on certain assumptions as to which data was the most valuable.
The raw data would be deleted, as it would be too cost-prohibitive to keep. While this approach may have worked in the short term, this meant that when business priorities changed, the complete raw data set was not available, as it was too expensive to store. Hadoop, on the other hand, is designed as a scale-out architecture that can affordably store all of a company's data for later use (opens in new tab).
The cost savings are staggering: instead of costing thousands to tens of thousands of pounds per terabyte, Hadoop offers computing and storage capabilities for hundreds of pounds per terabyte.
Hadoop enables businesses to easily access new data sources and tap into different types of data (both structured and unstructured) to generate value from that data.
This means businesses can use Hadoop to derive valuable business insights from data sources such as social media, email conversations or clickstream data. In addition, Hadoop can be used for a wide variety of purposes, such as log processing, recommendation systems, data warehousing, market campaign analysis and fraud detection.
4. Hadoop is fast
Hadoop's unique storage method is based on a distributed file system that basically 'maps' data wherever it is located on a cluster. The tools for data processing are often on the same servers where the data is located, resulting in much faster data processing.
If you're dealing with large volumes of unstructured data, Hadoop is able to efficiently process terabytes of data in just minutes, and petabytes in hours.
5. Resilient to failure
A key advantage of using Hadoop is its fault tolerance. When data is sent to an individual node, that data is also replicated to other nodes in the cluster, which means that in the event of failure, there is another copy available for use.
The MapR distribution goes beyond that by eliminating the NameNode and replacing it with a distributed No NameNode architecture that provides true high availability. Our architecture provides protection from both single and multiple failures.
When it comes to handling large data sets in a safe and cost-effective manner, Hadoop has the advantage over relational database management systems, and its value for any size business will continue to increase as unstructured data continues to grow.
Michele Nemschoff is Director of Corporate Marketing at Quantcast (opens in new tab).
The Apache Software Foundation (opens in new tab) was founded in 1999 and provides support for the Apache Community of open-source software projects, which provide software products for the public good.
Apache is known for other open source software such as:
OpenOffice (opens in new tab) is the leading open-source office software suite for word processing, spreadsheets, presentations, graphics, databases and more. It is available in many languages and works on all common computers.
Geronimo (opens in new tab) is an open source server runtime that integrates the best open source projects to create Java/OSGi server runtimes that meet the needs of enterprise developers and system administrators. Its most popular distribution is a fully certified Java EE 6 application server runtime.
Tomcat (opens in new tab) is an open-source web server and servlet container developed by the Apache Software Foundation (ASF). Tomcat implements several Java EE specifications including Java Servlet, JavaServer Pages (JSP), Java EL, and WebSocket, and provides a "pure Java" HTTP web server environment for Java code to run in. | <urn:uuid:5e099fbf-5277-42f4-9395-683e7b91d0c9> | CC-MAIN-2022-40 | https://www.itproportal.com/2013/12/20/big-data-5-major-advantages-of-hadoop/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00116.warc.gz | en | 0.949297 | 1,057 | 3.265625 | 3 |
Over the past decade, we have witnessed the emergence of revolutionary innovation, of which the evolutionary significance is yet to be fully recognised. Of course, we’re referring to the blockchain, cryptocurrencies, and more generally the phenomenon that we describe as the Internet of Value. The blockchain and related technologies have the opportunity to transform the world of finance, and other value systems, in exactly the manner by which the Internet has transformed the way we exchange information.
There have been some key milestones leading up to this point: the launch of Bitcoin in 2008; the emergence of altcoins from 2011 onwards; the launch of Ethereum in 2015. We call this Layer 1 - the foundational level - with the economic function of value creation, and the technical one of ensuring the basic functionality of accounting and transfer of crypto assets. All this is implemented on the basis of distributed registries and with the conditions of interaction strictly regulated at the code level.
It would seem that, finally, technology had appeared which would enable us to create, digitise, and transmit value, just as we create and transmit information on the classic Internet, without the mediation and restrictions of a third party. But mass adoption remains unrealised. And the challenge is not only that of inertia; that very few people in the world have yet realised the significance of this innovation. The problem lies primarily in limitations of a purely technological nature, chief among them being:
1. Limited scalability and effectiveness of basic crypto-ecosystems, and as a result:
- low network throughput;
- low transaction speed.
2. Constantly growing need for memory, for local storage of copies of distributed registries.
3. High transfer fees (the cost of consensus will always be higher than the cost of confirmation in non-distributed databases).
Of course, some blockchain systems are trying to solve these problems. Some even do. But their efforts rest mainly with reducing decentralisation and lowering resistance to censorship (which is more suitable for enterprise solutions than for the universal Internet of Value (opens in new tab)). Furthermore, these efforts are isolated to individual projects, without providing an answer to the shared problems of the existing ecosystems: Bitcoin, Ethereum and others.
So, here we face at least one additional challenge, namely:
4. The absence or extreme limitation of interoperability; the impossibility of interaction between different blockchain-ecosystems.
All four of these restrictions relate to the problems of Layer 1. The problems of the first group (1-3) are already being addressed both by the developers of individual blockchains and ecosystems, as well as by special projects, the broad goal is to create some kind of overlay over popular blockchain systems. Examples might include projects such as Lightning for the Bitcoin ecosystem or Raiden for Ethereum. Therefore, solutions that address the problems of the first group are referred to as the so-called Layer 2.
Layer 2 is actually an off-chain superstructure above the baseline (Layer 1), designed primarily to solve problem number 1 – scalability. Most proposed solutions involve state channels or side chains. The major benefits include increased productivity and reliability as well as trustless operation.
Layer 2 projects such as Lightning Network, Raiden Network, Trinity Network (payment channels), as well as Celer Network and Counterfactual (generalised state channels) belong to state channel projects. Meanwhile, Plasma, RSK, and Liquid use side chains to solve the problem of scalability for basic blockchain systems.
But, as we discussed above, none of them solves our problem number 4 – the problem of interoperability. And although projects such as LN and Raiden state that they will in the future, that’s quite a challenge for any project focused on a specific DLT. The fact is that the provision of atomic untrusted transactions between two different blockchains is an extremely difficult task due to differences in technological approaches and the misalignment between project schedules.
But why is this such a serious problem?
Let's try to imagine a physical world without interoperability, using some historical examples. In the 19th century, the United States experienced a boom in railway construction. Several large companies were engaged in the construction and development of a network of railways. But imagine the consequences if each of these companies had built a railroad to their own standards, incompatible with the others? Different track gauges, different rail shapes, etc. Perhaps one of these hypothetical widths or shapes would be more optimal than that used today, or some of these standards would be better suited for specific tasks. However, each company would have to forge its own line to each point, while producing separate cars and locomotives compatible with their proprietary standards, and so on. Instead of one universal railway station in each village, we would have to build several different ones for each of the incompatible railway networks.
What do you think would happen in this case? Would the railroad still have become an essential conduit for the development of new territories, the improvement of economic ties between regions, etc.?
In other words, how would a lack of interoperability have affected the adoption of this technology?
Some developers of Layer 2 solutions state that they plan to solve the problem of interoperability in the future. But today, nevertheless, they continue to improve their projects focusing on a particular blockchain; at best, on compatible blockchains (for example, Bitcoin forks, or smart contract projects similar to Ethereum, etc.)
On the other hand, should Layer 2 projects even try to solve the problem of interoperability? In our opinion, this problem belongs to a different level; namely, Layer 3. And this means that it’s better to solve it separately and with the help of more specialised approaches.
It might be appropriate to draw an analogy here with the development of the Internet itself. The Internet has its own technological layers, each of which performs a specific function:
- Link layer: various technologies and data transfer protocols between devices on the local network, for example, Ethernet, WiFi, PPP, HDLC, etc.
- Network layer: protocols that transfer data over a wide area network.
- Transport layer: protocols responsible for the complete delivery of data, etc.
The peculiarity of the Network layer is that it is as abstract as possible from lower-level protocol technologies. Its task is to ensure the possibility of global data transmission, or, in our terms, to ensure interoperability for various devices and local area networks (LANs), binding all of them into a single global network (the Internet).
A similar main task, in our opinion, should be accomplished by projects of the Layer 3 level: ensuring interoperability and functioning of separate blockchain-ecosystems in a single global Internet of Value. The main characteristics that should be inherent to projects of the Layer 3 level are:
- Should be based on off-chain technologies.
- To be blockchain-agnostic, i.e. not be tied to a particular blockchain-ecosystem.
- To provide the possibility of trustless multi-asset transactions, i.e. allow easy exchange of one cryptoasset for another in the process of making a payment.
- To ensure payments’ atomicity.
As for atomicity, we should return to the analogy using the conventional Internet. It transpires that the Transport layer is in charge of the analogue of atomicity: it’s rendered into a separate technological layer there. Whether a similar state of affairs will eventuate in the case of the Internet of Value will only be known with time. Meanwhile, this problem should be tackled one way or another.
In general, precisely because of the characteristics discussed above, Layer 3 technologies are not restricted to ensuring interoperability and combining individual blockchain systems into a single global network. They can also connect traditional financial systems and physical assets, or other non-blockchain value carriers. Just as the Network layer in the conventional Internet abstracts from specific low-level data transfer technologies and ensures their global interaction, Layer 3 technologies in the Internet of Value can provide interoperability and global value transfer, regardless of where this value comes from or what its basic carrier is.
Furthermore, Layer 3 technologies in the Internet of Value can become the basis for the functioning of the following practical solutions and services:
- Payment systems
- Decentralised cross-chain exchanges (Cross-chain DEX)
- Cross-border Payments (“Crypto VISA”)
- dAPP scaling
- Solutions in the field of Internet of Things (IoT), and others.
However, the most significant benefits will flow from the network effect of the Internet of Values. Each new service, and each new participant attracted by this service, will increase the value of the entire Internet of Value network for all participants, including the various DLT projects.
Imagine if someone in the 1970s, exposed to only a few local area networks, were asked to describe a global Internet of the future - one that would allow half of the world's population to instantly share any information. Imagine if they were asked to predict how this would affect the economic, professional, and social life of people. In the 1970s, it would be very difficult to predict what is now commonplace to us.
Similarly, at our present stage of evolution, it is very difficult to imagine where a similar development - what we call the Internet of Value - will lead us.
It’s likely that, at some point in the future, the basic DLT technologies (Layer 1) will perform the main function of the carrier of value, a kind of custodianship of specialised value. Developers will focus primarily on the thorough implementation of these basic and natural functions.
Meanwhile, the logistical functions for value transfer will be folded into the superstructures of the technological levels – Layer 2, 3, 4. Their task will be to solve a particular level of problems such as scalability, interoperability, and atomicity. Being separated into distinct levels of technology, they will be able to act beyond the limitations of lower level technologies.
This is especially true for interoperability. After all, a given blockchain-ecosystem, whatever it may be, is unlikely in itself to become what we could call the Internet of Value, just as WiFi could not become the internet. But all of them working together can. However, this will happen only in the event of the emergence of a Layer 3 technology which can ensure their interoperability.
We in the GEO Protocol team have been working on research in this area since 2015, and we are developing a solution that, we hope, can become part of the future Internet of Value.
Max Demyan, CEO, GEO Protocol (opens in new tab)
Image Credit: GEO Protocol | <urn:uuid:3b6dee69-d51b-4557-8287-a99007b30a11> | CC-MAIN-2022-40 | https://www.itproportal.com/features/the-need-for-layer-3-in-the-internet-of-value/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00116.warc.gz | en | 0.936575 | 2,203 | 2.78125 | 3 |
NIST is a physical sciences laboratory and non-regulatory agency of the United States Department of Commerce. In response to the unprecedented number of cyberattack against organizations, NIST published SP 800-207 which provides organizations with a systematic guideline for updating their network cybersecurity to a zero trust framework, also known as perimeterless security, that modernizes security in a world where users are accessing the internet remotely and address the inadequacies of traditional network defenses.
The zero trust security model is “never trust, always verify”. The iboss Zero Trust Edge platform was purpose built from the ground up leveraging one of the world’s largest proprietary security cloud architecture which was designed take all the guess work out of implementing NIST’s guidelines making it easy for schools to increase their cybersecurity posture and protect its users and resources from cyber-attacks.
Legacy security architectures were based on a ‘castle and moat’ framework focused on restricting user access from outside the network.
This approach was effective because your school’s sensitive data (HR, Payroll, student health records, PII, etc.) resided on school owned servers hosted inside of school managed data center and accessed by users while on campus and on school managed devices.
Today, the vast majority of software is cloud hosted and being access by users and contractors remotely and on personal devices.
The result is your school’s sensitive data is now residing across multiple third party clouds and accessed from users who are not inside the ‘castle’ rendering traditional security ineffective in preventing cyberattacks. By shifting security to the cloud, iboss is able to reduce breach risk and protect your data where it resides, in the cloud.
Military grade zero trust security is essential to protect against social engineering attacks
Education is by far the top target for cyber attacks. From Aug. 14 to Sept.12, 2021, educational organizations were the target of over 5.8 million malware attacks, which represented 63% of all attacks. The preferred approach by cyber criminals is social engineering.
iboss Zero Trust Edge incorporates the next gen firewall tools which incorporate multiple attributes of a user when accessing cloud apps including the user’s physical location at the time of access, type of device they are accessing from, device security posture, user privileges and dynamically limits what can be accessed on the device based on these attributes.
This results in greater protection against ransomware, lower downtime for students and lowers the resources needed to manage security for the technology team.
In 2020, 77 ransomware attacks on U.S. schools and impacted 1.3 million students, resulted in 531 days of downtime at a cost of $6.6 billion in economic terms. Ransomware is a challenge for schools due to its unique dynamic with teachers, staff and administrators need to access files and web links as part of standard business operations. Unfortunately, it only takes one exposure to cripple the network.
iboss Zero Trust Edge incorporates Browser Isolation which allows users to access files and web links through a virtual pane of glass in the cloud decoupling the user from the actual files. If a file is malicious, the school’s network is completely isolated from the malicious content as the file never has the chance to touch the user’s device.
This results in massively lower exposure to ransomware, reduces student downtime due to a breach and potentially saves the school millions in losses from productivity, resources and paying ransoms.
High risk activity from at risk students isn’t always constrained to activity performed on campus. Since the iboss Zero Trust Edge follows the student to provide protection at all times, it has the ability to provide visibility to high risk activity while the student is off campus and at home.
This is critical in providing the visibility needed to prevent tragedies on campus and create the investigative reports necessary to deal with a high risk circumstance.
With one to one initiatives, the goal is to assign one laptop to every child so that digital learning can continue at home. The challenge is that CIPA compliance must be met for all school owned devices regardless of location. With traditional web gateway appliances, filtering students while at home is challenging as the equipment for filtering typically resides on campus. With the iboss Zero Trust Edge, security follows the student, as it is delivered in the cloud. The school issued device is always connected to iboss and without the use of a VPN.
This ensures that the same level of protection and compliance is applied to a student regardless of whether they are on campus or at home. This also alleviates the challenges of securing devices by leveraging a cloud solution that can be deployed in minutes.
- Users Go Mobile
Desktops change to laptops and users are no longer in the network perimeter.
- Encrypted Traffic
Unencrypted traffic shifts to encrypted HTTPS traffic due to mobility.
- Cloud Applications and Files
Applications move from servers to SaaS, and data & files move to online storage & access.
- Bandwidth Explosion
Increased bandwidth usage due to remote users accessing remote cloud data needed for productivity.
- Security Moves to Cloud
Data Center network security moves to the cloud where the users, applications, and data live.
|Education Security Features|
K12 Web Filtering Competitor
|Category Based Web Filtering to meet CIPA compliance|
|Secure Web Gateway with Proxy|
|Instant Messaging Application Controls|
|High Risk/At Risk Student Monitoring|
|Encrypted Traffic Inspection and Protection (HTTPS Decrypt)|
|Block Anonymizing Proxies|
|Protect Home Users without a VPN|
|Comprehensive Reporting down to the user level|
|Reporting - including Real-Time Dashboards, Drill Down Reports, Reporting Templates|
|Cloud Connector Agents for Windows, Mac, iOS, Chromebooks, Linux and Android|
|Secure Access to Third Party Cloud Apps|
|Ransomware Detection and Prevention|
|Infected Device Detection and Isolation (CnC Callback Prevention)|
|Unified Zero Trust Service Edge|
|Zero Trust Resource Access Policies|
|Zero Trust NIST 800-207 Criteria-Based Access Policies|
|Asset and Device Posture Checks|
|Inline Data Loss Protection (PII, HIPAA, etc.)|
|Exact Data Match for DLP|
|SAML & OIDC Identity Provider (IdP) Integration|
|Log Forwarding to SIEM - Syslog, SCP, SFTP| | <urn:uuid:cb5866ca-7170-4f95-9892-e12ac371fddc> | CC-MAIN-2022-40 | https://www.iboss.com/education/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335304.71/warc/CC-MAIN-20220929034214-20220929064214-00116.warc.gz | en | 0.914827 | 1,388 | 2.546875 | 3 |
What is a Bailiwick?
By Joe St. Sauver
One of the most common questions DNSDB users have is, "What's a 'bailiwick'?"
It's easy to see what motivates this question. If you use the web interface to DNSDB, the bailiwick field is front and center on that web form (see the red boxed area in Figure 1):
Figure 1. Sample DNSDB Search Web Interface
Alternatively, if you use the DNSDB API (either via your own code or through one of the command line API demonstration clients such as dnsdb_query.py), you'll have seen references to "bailiwicks" there, too. See Figures 2-5:
Figure 2. DNSDB API Information from https://api.dnsdb.info
Figure 3. More DNSDB API Information from https://api.dnsdb.info
Figure 4. Bailiwick Reference in dnsdb_query.py Command Synopsis
Figure 5. An Example of How "bailiwicks" Get Mentioned in the Output From dnsdb_query.py
Bailiwick, bailiwick, BAILIWICK! Clearly, "bailiwick" is a term you'll routinely bump into when playing around with DNSDB.
2. "Bailiwick" is Commonly Seen – But Often Not Understood
Our impression is that while the term "bailiwick" is quite commonly seen in conjunction with DNSDB, few users understand what it actually means, and because of that, only a few users take advantage of bailiwick filtering when working with DNSDB.
In fact, we believe that most users simply "tune out" the whole bailiwick "thing" entirely. Fortunately, simply ignoring bailiwicks, while suboptimal, isn't catastrophic. DNSDB will deal with any silence about bailiwicks by politely giving you results for ALL potentially-relevant bailiwicks by default.
For example, consider someone hypothetically interested in the name servers used by the domain ieee.org over the last week. They could use the dnsdb_query.py API demonstration client to make that query (using the command syntax decoded in section 3 below).

The output from that request returned three blank-line-separated "chunks" of data:
Nameserver information from the Zone File Access (ZFA) program. This will always be for a bailiwick equal to the top level domain in use (in this case, "org").
Nameserver information for a bailiwick equal to the top level domain (as above), BUT based on observed data contributed by Farsight's sensor operators (rather than being based on ZFA data).
Nameserver information for a bailiwick equal to the second-level domain (in this case ieee.org), again based on observed data contributed by Farsight's sensor operators.
In this simplistic example, the data was entirely consistent across bailiwicks, and there might be a temptation to scoff and say, "Pshaw! This whole bailiwick thing is just a boondoggle and a waste of time! There's no difference in the results for the different bailiwicks!"
It's true that there was no difference this time, but that will NOT always be true, as we'll show below.
In fact, if you DON'T learn about bailiwicks, how they're derived, and what they can potentially do for you, you may find yourself missing out on some potentially valuable filtering capabilities. For instance, if your DNSDB query found more total results than you can normally display, specifying a bailiwick is one potentially easy way to get the volume of results down under your cap.
And if you don't know about bailiwicks, you run the risk of accidentally misinterpreting the data you're shown, or always finding yourself wondering in the back of your mind, "So what's that dang bailiwick thing?"
Finally, if you don't know about bailiwicks, you may find yourself expecting to see some data in DNSDB (such as evidence of DNS cache poisoning) that you'll actually never see, because Farsight's gone to a fair amount of effort to intentionally filter cache poisoning traffic out of DNSDB.
If you stick with this blog post a little more, we'll help you see how bailiwicks are actually quite important and potentially helpful to your work. Let's dive in.
2. Bailiwicks in Real Life and Bailiwicks in the Domain Name System
If we set aside the DNS world for the moment, and just focus on the everyday world, you may have previously encountered the term "bailiwick" in reference to one's areas of talent or expertise.
For instance, if you've ever heard me attempt to play a musical instrument, you'll know that playing music is "not my bailiwick": I've never had music lessons, I can't read music, and I'm tone deaf. That lack of expertise and talent is definitively noticeable. You'd never mistake me for Billy Joel's "Piano Man", and in fact I can only wish that I might someday be as much of a musician as the most excellent "Kazoo Men".
In the DNS world, "in-bailiwick" has two formal and not particularly approachable definitions in RFC 7719, "DNS Terminology". The first of those reads in part:
In-bailiwick: (a) An adjective to describe a name server whose name is either subordinate to or (rarely) the same as the zone origin. In-bailiwick name servers require glue records in their parent zone [...]
For example, under definition (a), ns1.example.com might be an "in-bailiwick" name server for the example.com zone, while a name server under some other domain (say, ns1.dns-host.net) would be "out-of-bailiwick" for example.com.
This definition is the way the term "in-bailiwick" is most commonly used by people who are part of the technical DNS community. [We recognize that this definition may not mean much to you if you're not a DNS geek, and that's okay.]
There's a second definition for in-bailiwick mentioned in RFC 7719, and while it is equally arcane, this is the one that's actually relevant to DNSDB (and this post):
(b) Data for which the server is either authoritative, or else authoritative for an ancestor of the owner name. [...]
By paying attention to a name server's "bailiwick" as defined in definition (b), DNSDB can avoid mistakenly accepting DNS results from untrustworthy sources. Thus, bailiwick checking is an important part of how Farsight actively works to keep garbage out of DNSDB. We'll come back to this data quality assurance function in section 4.
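Both definitions ultimately rest on a "subordinate to or equal" comparison between DNS names. Here is a small sketch in Python (our own illustration, not Farsight's actual implementation; the function name is invented) showing how such a test can be done label by label:

```python
def in_bailiwick(name: str, zone: str) -> bool:
    """Return True if `name` is subordinate to, or the same as, `zone`.

    Comparison is done label by label (not as a raw substring), so
    "badexample.com" is NOT treated as in-bailiwick for "example.com".
    """
    name_labels = name.rstrip(".").lower().split(".")
    zone_labels = zone.rstrip(".").lower().split(".")
    if zone_labels == [""]:        # the root zone "." contains every name
        return True
    return name_labels[-len(zone_labels):] == zone_labels

# ns1.example.com is in-bailiwick for example.com; a name server
# under a different domain is out-of-bailiwick for it.
print(in_bailiwick("ns1.example.com.", "example.com."))   # True
print(in_bailiwick("ns1.dns-host.net.", "example.com."))  # False
```

Comparing labels rather than raw substrings matters: a naive substring test would wrongly treat badexample.com as in-bailiwick for example.com.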
For now, let's see how an analyst might use the bailiwick field to filter DNSDB traffic.
3. Selecting Data By Bailiwick
As an analyst, you've got a choice about what sort of data you retrieve from DNSDB for review:
Do you want to focus on name server information that's officially registered with a domain TLD registry via a registrar?
Or, recognizing that a domain owner may elect to "call an audible" and change the name servers his domain is using "on the fly," do you want to focus on the data returned by the domain's actual name servers?
To see how these can be different, consider the name servers for the domain handwashmaterials[dot]com (since that domain is listed on the Spamhaus Domain Block List at the time we wrote this article, we've "defanged" that domain by replacing the dot in that name with the string literal "[dot]" in an effort to keep you from accidentally visiting what may be a potentially problematic site).
Checking domain whois for that site, we can find its officially registered name servers. At the time this was written, the registry whois reported:
$ whois handwashmaterials[dot]com
Domain Name: HANDWASHMATERIALS[dot]COM
Registrar: MONIKER ONLINE SERVICES LLC
Sponsoring Registrar IANA ID: 228
Whois Server: whois.moniker.com
Referral URL: http://www.moniker.com
Name Server: NS32.HANDWASHMATERIALS[dot]COM
Name Server: NS33.HANDWASHMATERIALS[dot]COM
[etc]
The registrar whois reported the same name servers. If you were looking just at whois, you'd assume that the name servers shown were the be-all, end-all answer to the question "What name servers are being used by this domain?"
What do we see if we look in DNSDB?
To keep the output manageable, let's specify that we only want to see the domain's NS records, only for the top-level "com" bailiwick, and only for the last week. The required command and resulting output look like:
$ dnsdb_query.py -r handwashmaterials[dot]com/NS/com --after=7d
;; bailiwick: com.
;; count: 152
;; first seen in zone file: 2016-09-14 16:01:48 -0000
;; last seen in zone file: 2017-02-13 17:02:32 -0000
handwashmaterials[dot]com. IN NS ns32.handwashmaterials[dot]com.
handwashmaterials[dot]com. IN NS ns33.handwashmaterials[dot]com.

;; bailiwick: com.
;; count: 9,373
;; first seen: 2016-09-15 02:34:41 -0000
;; last seen: 2017-02-13 22:13:26 -0000
handwashmaterials[dot]com. IN NS ns32.handwashmaterials[dot]com.
handwashmaterials[dot]com. IN NS ns33.handwashmaterials[dot]com.
Decoding the command we just executed:
dnsdb_query.py: This is one of Farsight's sample command line clients
-r: Run a "left hand side" ("rname") query
handwashmaterials[dot]com: This is the domain we're interested in
/NS: Return Name Server records only
/com: Filter out any records not from the top-level "COM" bailiwick
--after=7d: Exclude any records more than a week old
Decoding the output from the command we just executed, note that the first chunk of those results was pulled from the dot com zone file. We can tell that data comes from the zone files by noting the "first seen in ZONE FILE…" comment. [more on zone files below]
The second chunk of results is largely identical except for the timestamps and the source of the results. The second chunk came from data contributed by Farsight sensor operators (note the bare "first seen" comment, with no reference to "zone files").
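The chunk structure of dnsdb_query.py's output is regular enough to split apart mechanically. The sketch below (our own illustrative code, not part of any Farsight tooling) separates the blank-line-delimited chunks and uses the presence of a "first seen in zone file" comment to tell ZFA data apart from sensor data:

```python
def parse_chunks(output: str):
    """Split dnsdb_query.py-style output into chunks of metadata plus records."""
    chunks = []
    for block in output.strip().split("\n\n"):
        meta, records = {}, []
        for line in block.splitlines():
            if line.startswith(";; "):
                # Comment lines look like ";; bailiwick: com."
                key, _, value = line[3:].partition(": ")
                meta[key] = value
            elif line.strip():
                records.append(line.strip())
        meta["source"] = ("zone file" if "first seen in zone file" in meta
                          else "sensor")
        chunks.append({"meta": meta, "records": records})
    return chunks

sample = """;; bailiwick: com.
;; count: 152
;; first seen in zone file: 2016-09-14 16:01:48 -0000
;; last seen in zone file: 2017-02-13 17:02:32 -0000
example.com. IN NS ns1.example.com.

;; bailiwick: com.
;; count: 9,373
;; first seen: 2016-09-15 02:34:41 -0000
;; last seen: 2017-02-13 22:13:26 -0000
example.com. IN NS ns1.example.com."""

for chunk in parse_chunks(sample):
    print(chunk["meta"]["bailiwick"], chunk["meta"]["source"])
```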
All of the above is good so far.
However, if we now make a new query, asking for the name servers used by that domain this past week AND specify a more specific bailiwick ("handwashmaterials[dot]com" instead of just "com"), we get a totally different answer:
$ dnsdb_query.py -r handwashmaterials[dot]com/NS/handwashmaterials[dot]com --after=7d
;; bailiwick: handwashmaterials[dot]com.
;; count: 8,073
;; first seen: 2016-07-12 07:52:02 -0000
;; last seen: 2017-02-14 05:04:09 -0000
handwashmaterials[dot]com. IN NS ns1.handwashmaterials[dot]com.
handwashmaterials[dot]com. IN NS ns2.handwashmaterials[dot]com.
Note the "ns1" and "ns2" name servers rather than the "ns32" and "ns33" name servers seen previously!
This domain owner has "called an audible" and effectively changed the name servers he wants to have used for his domain, and he's done so without updating the name servers officially listed for his domain!
Note that the two sets of name servers (ns32/ns33 versus ns1/ns2) could be on totally different sets of IP addresses, and could potentially deliver totally different responses when queried. Moreover, many users, perhaps accustomed to just checking whois or looking at copies of zone files for bulk name server information, might not even know that ns1.handwashmaterials[dot]com. and ns2.handwashmaterials[dot]com. existed or were in use.
YOU, on the other hand, with the "power of bailiwicks" and DNSDB in your professional repertoire of techniques, understand what you're seeing if you see different results for a TLD-level bailiwick (such as "com") vs. a 2nd-level domain bailiwick (such as "handwashmaterials[dot]com."). You can dig in and follow an analysis target even if they attempt to jink and dive away from your investigation.
We have one more subtopic we need to tackle before we wrap up this post. Let's take a minute or two to explain how in-bailiwick/authoritative name servers can be found. We'll describe that process for:
The root zone,
A top-level domain, and
A 2nd-level domain.
This process of paying attention to bailiwick filtering is operationally critical to keeping potentially misleading garbage out of DNSDB.
4. So How DOES Farsight Know What Nameservers Are Genuinely IN-Bailiwick/Authoritative For The Root Zone?
When it comes to deciding what name servers are in-bailiwick or out-of-bailiwick, it all starts with the apex or "root" of the DNS hierarchy (normally shown as a bare "dot").
The name servers for that root zone cannot be retrieved via the DNS because until you know how to reach the root zone, DNS won't work. (nice circular dependency, eh?)
To overcome this circular dependency, DNS gets "bootstrapped" from a relatively small static root zone "hints" file, shipped as part of each DNS server's software.
The "hints" file specifies the name servers that are to be relied upon for the root zone, and the IPv4 and IPv6 addresses where those root name server instances live. For example, for the first and last root name servers defined in that file:
. 3600000 NS A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4
A.ROOT-SERVERS.NET. 3600000 AAAA 2001:503:ba3e::2:30
[...]
. 3600000 NS M.ROOT-SERVERS.NET.
M.ROOT-SERVERS.NET. 3600000 A 202.12.27.33
M.ROOT-SERVERS.NET. 3600000 AAAA 2001:dc3::35
Given a true copy of that file, we now explicitly know the full set of name servers we SHOULD trust for the root zone.
Because we've been given an exhaustive list of name servers we SHOULD trust to act as name servers for the root zone, we ALSO know that we SHOULDN'T trust ANY OTHER name servers that may spontaneously "volunteer" to be root zone name servers. Any other name server would be an "out-of-bailiwick" (untrustworthy) name server for the root zone, and must be disregarded.
Let's check DNSDB for the root name servers which we've seen over the last month:
$ dnsdb_query.py -r ./NS --after=30d
;; bailiwick: .
;; count: 2,484
;; first seen in zone file: 2010-04-13 18:39:17 -0000
;; last seen in zone file: 2017-02-12 20:00:04 -0000
. IN NS a.root-servers.net.
. IN NS b.root-servers.net.
. IN NS c.root-servers.net.
. IN NS d.root-servers.net.
. IN NS e.root-servers.net.
. IN NS f.root-servers.net.
. IN NS g.root-servers.net.
. IN NS h.root-servers.net.
. IN NS i.root-servers.net.
. IN NS j.root-servers.net.
. IN NS k.root-servers.net.
. IN NS l.root-servers.net.
. IN NS m.root-servers.net.

;; bailiwick: .
;; count: 6,596,511,613
;; first seen: 2010-06-24 03:10:38 -0000
;; last seen: 2017-02-13 16:10:48 -0000
. IN NS a.root-servers.net.
. IN NS b.root-servers.net.
. IN NS c.root-servers.net.
. IN NS d.root-servers.net.
. IN NS e.root-servers.net.
. IN NS f.root-servers.net.
. IN NS g.root-servers.net.
. IN NS h.root-servers.net.
. IN NS i.root-servers.net.
. IN NS j.root-servers.net.
. IN NS k.root-servers.net.
. IN NS l.root-servers.net.
. IN NS m.root-servers.net.
You'll note that this list of servers (a.root-servers.net through m.root-servers.net) agrees with the contents of the root zone hints file, and you'll also recognize the now-familiar pattern of data from the Zone File Access Program alongside data from actual sensor contributions.
All of the above is "as it should be." However, you may wonder if Farsight has ever seen out-of-bailiwick name servers for the root zone. As a matter of fact, we have. You can see this historically if you query DNSDB without time fencing your results:
$ dnsdb_query.py -r ./NS
[selected output only shown below]

;; bailiwick: .
;; count: 2
;; first seen: 2013-01-15 18:07:19 -0000
;; last seen: 2013-01-15 18:07:19 -0000
. IN NS .

;; bailiwick: .
;; count: 2
;; first seen: 2015-10-16 09:38:23 -0000
;; last seen: 2015-10-16 09:38:23 -0000
. IN NS b.nic.dk.

;; bailiwick: .
;; count: 2
;; first seen: 2015-10-16 08:22:22 -0000
;; last seen: 2015-10-16 08:22:22 -0000
. IN NS c-dns.pl.

;; bailiwick: .
;; count: 1
;; first seen: 2013-01-26 17:16:50 -0000
;; last seen: 2013-01-26 17:16:50 -0000
. IN NS 127.53.0.1.

;; bailiwick: .
;; count: 2
;; first seen: 2015-10-16 10:53:01 -0000
;; last seen: 2015-10-16 10:53:01 -0000
. IN NS ns-nl.nic.fr.

;; bailiwick: .
;; count: 5,403,834
;; first seen: 2013-01-05 20:58:00 -0000
;; last seen: 2013-01-28 21:09:59 -0000
. IN NS ns1.trafficz.com.
. IN NS ns2.trafficz.com.

;; bailiwick: .
;; count: 1
;; first seen: 2014-04-15 19:15:07 -0000
;; last seen: 2014-04-15 19:15:07 -0000
. IN NS a.gov-servers.net.

;; bailiwick: .
;; count: 2
;; first seen: 2014-09-25 03:28:20 -0000
;; last seen: 2014-09-25 03:28:20 -0000
. IN NS a.rokt-servers.net.
. IN NS b.rokt-servers.net.
. IN NS c.rokt-servers.net.
. IN NS d.rokt-servers.net.
. IN NS f.rokt-servers.net.
. IN NS g.rokt-servers.net.
. IN NS h.rokt-servers.net.
. IN NS i.rokt-servers.net.
. IN NS j.rokt-servers.net.
. IN NS k.rokt-servers.net.
. IN NS l.rokt-servers.net.
. IN NS m.rokt-servers.net.

;; bailiwick: .
;; count: 235,800
;; first seen: 2013-01-09 18:08:12 -0000
;; last seen: 2013-01-28 20:06:50 -0000
. IN NS ns0.dnsmadeeasy.com.
. IN NS ns1.dnsmadeeasy.com.
. IN NS ns2.dnsmadeeasy.com.
. IN NS ns3.dnsmadeeasy.com.
. IN NS ns4.dnsmadeeasy.com.

;; bailiwick: .
;; count: 1,833,521
;; first seen: 2013-01-05 22:14:59 -0000
;; last seen: 2013-01-28 21:10:56 -0000
. IN NS ns1.ndoverdrive.com.
. IN NS ns2.ndoverdrive.com.

;; bailiwick: .
;; count: 2,315
;; first seen: 2013-01-20 17:06:01 -0000
;; last seen: 2013-01-28 20:31:41 -0000
. IN NS ns1.devnameserver.com.
. IN NS ns2.devnameserver.com.
Why do we need to look at historical data to find out-of-bailiwick entries for the root domain? These days Farsight's filtering of out-of-bailiwick data is much improved, and there is no recent out-of-bailiwick root zone data to look at.
So what would have happened if you were to have TRUSTED a non-authoritative/out-of-bailiwick name server, like the historical ones mentioned above?
Let's consider a simple example using the name www.uoregon.edu. Resolving that name normally, we see:
$ dig www.uoregon.edu +short
drupal-cluster5.uoregon.edu.
18.104.22.168
18.104.22.168 is the known/expected IP address for that host… no problem so far.
Now let's try one of the servers we saw that was formerly willing to answer for domains outside its bailiwick. Is it possible that it could still be doing so?
$ dig +aaonly +norecurse www.uoregon.edu @ns1.ndoverdrive.com +short
126.96.36.199
ns1.ndoverdrive.com is still answering queries it shouldn't be.
126.96.36.199 is NOT what we expected to have returned for www.uoregon.edu! The importance of bailiwick filtering quickly becomes evident when you consider concrete examples like this one of how a user could be led astray by trusting an out-of-bailiwick root name server!
5. What About Name Servers for Top Level Domains Like .com, .net, etc.? How Do We Know What Name Servers Are "In-Bailiwick" For Those TLDs?
Just as a root hints file was provided for the root zone, we can also retrieve a copy of the root zone file (root.zone), which contains the delegations for the TLD zones.
If you look at that file, you'll see that it has the authoritative name servers for each TLD. Any name servers OTHER than those listed for a given Top Level Domain would be "out-of-bailiwick" for that TLD.
MORE GENERALLY, however, we don't need to rely on downloading the root.zone zone file to learn which name servers are in (or out of) bailiwick for TLDs. Since we've bootstrapped the root zone, we can just ask the trusted root name servers (or passively watch existing traffic to those name servers), and let those trusted servers tell us which name servers we should, in turn, trust for any specific TLD. For example, actively querying for the biz TLD:
$ dig +aaonly +norecurse biz @h.root-servers.net
[...]
;; QUESTION SECTION:
;biz. IN A

;; AUTHORITY SECTION:
biz. 172800 IN NS a.gtld.biz.
biz. 172800 IN NS b.gtld.biz.
biz. 172800 IN NS c.gtld.biz.
biz. 172800 IN NS e.gtld.biz.
biz. 172800 IN NS f.gtld.biz.
biz. 172800 IN NS k.gtld.biz.

;; ADDITIONAL SECTION:
a.gtld.biz. 172800 IN AAAA 2001:502:ad09::30
f.gtld.biz. 172800 IN AAAA 2001:500:3682::12
k.gtld.biz. 172800 IN AAAA 2001:503:e239::3:2
a.gtld.biz. 172800 IN A 184.108.40.206
b.gtld.biz. 172800 IN A 220.127.116.11
c.gtld.biz. 172800 IN A 18.104.22.168
e.gtld.biz. 172800 IN A 22.214.171.124
f.gtld.biz. 172800 IN A 126.96.36.199
k.gtld.biz. 172800 IN A 188.8.131.52
[...]
6. What About Finding In-Bailiwick Name Servers for 2nd-Level Domains?
In-bailiwick name servers for the hundreds of millions of 2nd-level domains are too numerous and too dynamic to efficiently load via a static file (although TLD Zone File Access Program files do exist and do have the data that's needed if you wanted to try to do that).
In reality, however, the information that Farsight needs for bailiwick filtering 2nd-level domains will typically get bootstrapped from observed sensor traffic.
As an exercise, however, we could directly query the appropriate TLD server for a 2nd-level domain of interest. For example, if we wanted to find what name servers are in-bailiwick for the arbitrarily-selected domain panasonic.biz, we could ask one of the nameservers we found for the biz TLD in part 5 of this article:
$ dig +aaonly +norecurse panasonic.biz NS @a.gtld.biz
[...]
;; QUESTION SECTION:
;panasonic.biz. IN NS

;; AUTHORITY SECTION:
PANASONIC.biz. 7200 IN NS A1-35.AKAM.NET.
PANASONIC.biz. 7200 IN NS A10-64.AKAM.NET.
PANASONIC.biz. 7200 IN NS A12-66.AKAM.NET.
PANASONIC.biz. 7200 IN NS A16-64.AKAM.NET.
PANASONIC.biz. 7200 IN NS A11-65.AKAM.NET.
PANASONIC.biz. 7200 IN NS A13-67.AKAM.NET.
Fortunately we don't need to do that sort of thing by hand; it gets handled automatically for us by DNSDB.
We know that this has been a rather long answer to what sounded like a short and simple question, but we hope that you've come away from this post with a better understanding of what bailiwicks are, why they're important to the accuracy and usability of Farsight Security's DNSDB passive DNS data, and how they can help your DNSDB investigations.
For more information about obtaining access to DNSDB, please contact Farsight Security, Inc., Sales at https://www.farsightsecurity.com/order-services/.
Joe St Sauver, Ph.D. is a Scientist with Farsight Security, Inc. | <urn:uuid:2bd07374-02ef-4d43-a6f6-8bffda27efbb> | CC-MAIN-2022-40 | https://www.farsightsecurity.com/blog/txt-record/what-is-a-bailiwick-20170321/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336921.76/warc/CC-MAIN-20221001195125-20221001225125-00116.warc.gz | en | 0.810434 | 6,341 | 2.53125 | 3 |
DNS Request flood is a DDoS attack which sends DNS request packets to a DNS server in an attempt to overwhelm the server’s ability to respond to legitimate DNS requests.
If DNS services are unavailable to legitimate users, it can completely cripple most modern networks, since FQDNs (fully qualified domain names) are used to provide most services.
As seen in Image 1, a DNS request uses the UDP protocol with a destination port of 53.
“Image 1: DNS using UDP”
Image 2 highlights the UDP packet containing the query information, which consists of a name, a type, and a class. The name is the FQDN to retrieve the IP for. The type specifies the record to be fetched; common ones are A, which retrieves the IPv4 address, MX, which retrieves the domain's mail exchanger records, and so on. The class will be IN (which stands for Internet) most of the time.
“Image 2: The name, type and class of a DNS request”
Images 3 and 4 show the server’s response with the result of the query. There you can see that identifying the request-response pair can be done using the Transaction ID. Depending on the request type the server may respond differently.
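To make the request fields described above concrete, here is a minimal sketch (our own illustration, using only Python's standard library) that builds a DNS query packet and reads back the Transaction ID used to pair a request with its response:

```python
import struct

def build_dns_query(txid: int, qname: str, qtype: int = 1, qclass: int = 1) -> bytes:
    """Build a bare DNS query: 12-byte header plus one question."""
    # Header: ID, flags (RD bit set), QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE (1 = A record) and QCLASS (1 = IN, "internet").
    question += struct.pack("!HH", qtype, qclass)
    return header + question

def transaction_id(packet: bytes) -> int:
    """The first two bytes of any DNS message are the Transaction ID."""
    return struct.unpack("!H", packet[:2])[0]

pkt = build_dns_query(0x1234, "example.com")
print(hex(transaction_id(pkt)))  # 0x1234
```

The same `transaction_id` helper works on a response packet, which is exactly how a request-response pair is matched up in a capture.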
“Image 3: DNS Request Transaction ID”
“Image 4: DNS Response”
Analysis of the DNS Request Flood in Wireshark – Filters
As mentioned in the Technical Analysis, DNS uses the UDP protocol, so the most basic filter that can be used is "udp". Furthermore, to identify DNS packets specifically, the "dns" filter can be used. Finally, to identify the response for a specific request or vice versa, use "dns.id == <needed_id>".
If you see a single source sending many such requests, it could be an attacker.
Download Example PCAP of DNS Request Flood
*Note: IP’s have been randomised to ensure privacy.Download | <urn:uuid:8e20ee89-cd12-45c0-b18e-fb356d010db2> | CC-MAIN-2022-40 | https://kb.mazebolt.com/knowledgebase/dns-request-flood/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337398.52/warc/CC-MAIN-20221003035124-20221003065124-00116.warc.gz | en | 0.865211 | 418 | 2.78125 | 3 |
Identifying potential threats is vital for an organization's proactive security and data management measures.
What is a Vulnerability Assessment?
A vulnerability assessment identifies and classifies security holes in a computer network infrastructure. Vulnerability assessments can also forecast the effectiveness of countermeasures and evaluate their actual effectiveness after they are put into use.
Vulnerability assessments typically:
- define and classify networks
- assign relative levels of importance to resources
- identify potential threats to each resource
- develop a strategy to deal with the most serious potential problems first
- define and implement ways to minimize the consequences if an attack occurs
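The "most serious problems first" step is essentially a ranking exercise. A toy sketch (our own illustration; real assessments use richer scoring models, such as CVSS combined with asset value):

```python
def prioritize(findings):
    """Rank findings by (asset importance x vulnerability severity).

    findings: list of dicts with 'resource', 'importance' (1-5),
    and 'severity' (1-10) keys; higher products get handled first.
    """
    return sorted(findings,
                  key=lambda f: f["importance"] * f["severity"],
                  reverse=True)

findings = [
    {"resource": "intranet wiki", "importance": 2, "severity": 9},
    {"resource": "payment gateway", "importance": 5, "severity": 7},
    {"resource": "test server", "importance": 1, "severity": 10},
]
print([f["resource"] for f in prioritize(findings)])
```

Note that a high-severity hole on a low-value test server ranks below a moderate hole on a critical system, which matches the "relative importance" idea above.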
Side-channel attacks on AES are not new, but previous attacks required direct access to the target. Now security experts from Fox-IT and Riscure show how to covertly recover the encryption key from AES implementations.
The attacker needs to observe input or output data to launch this attack, so it is possible against publicly available network encryption devices. Instead of the traditional oscilloscope method, the security experts used radio hardware here.
The experts used a kit composed of a magnetic antenna connected to an external amplifier and bandpass filters, all bought online, then fed into software-defined radio via a USB stick; the recording equipment can range from extremely high-end radio gear down to a €20 USB dongle.
The experts used the kit to read the signals for one block of AES-256 encryption on a SmartFusion2 target running on the ARM Cortex-M3 core. They could see a clear, distinct pattern at each stage. Here is the PDF written by the security experts from Fox-IT, which demonstrates the complete analysis.
We see I/O to and from the Cortex-M3, calculations for the key schedule, and the 14 encryption rounds. To extract the key, instead of measuring a single signal, they observed many different encryption blocks with different inputs and attempted to model how the device leaks information.
They took a set of encryption blocks and correlated either the (plaintext) input or (ciphertext) output data with their measurement traces, checking how well the measurements correlate with the number of "1" bits in the data (i.e. the data's Hamming weight). By executing this method, the experts could pick out the correct value from the 256 possible values of a single byte.
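The core of that correlation step can be sketched in a few lines. This is our own simplified model (a single sample point, a perfect Hamming-weight leak, no noise), not the researchers' code; a real attack targets an intermediate value such as the S-box output and works on thousands of noisy traces:

```python
def hamming_weight(x: int) -> int:
    """Count the number of '1' bits in a byte value."""
    return bin(x).count("1")

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_key_byte(plaintexts, leakage):
    """For each of the 256 candidate key bytes, predict the Hamming
    weight of (plaintext XOR candidate) and keep the candidate whose
    predictions correlate best with the measured leakage."""
    best_k, best_r = None, float("-inf")
    for k in range(256):
        predicted = [hamming_weight(p ^ k) for p in plaintexts]
        r = pearson(predicted, leakage)
        if r > best_r:
            best_k, best_r = k, r
    return best_k

# Simulate a device that leaks HW(plaintext XOR key) for key byte 0xA7.
plaintexts = list(range(256))
leakage = [hamming_weight(p ^ 0xA7) for p in plaintexts]
print(hex(best_key_byte(plaintexts, leakage)))  # 0xa7
```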
Using this approach only requires us to spend a few seconds guessing the correct value for each byte in turn (256 options per byte, for 32 bytes, so a total of 8192 guesses). In contrast, a direct brute-force attack on AES-256 would require 2^256 guesses and would not complete before the end of the universe.
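The arithmetic behind that comparison is easy to check (our own back-of-the-envelope code):

```python
per_byte_options = 256       # candidate values for one key byte
key_bytes = 32               # AES-256 key length in bytes
side_channel_guesses = per_byte_options * key_bytes
brute_force_guesses = 2 ** 256

print(side_channel_guesses)            # 8192
print(brute_force_guesses > 10 ** 77)  # True: astronomically larger
```

The side-channel attack turns one huge joint search into 32 small independent searches, which is where the speedup comes from.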
With the small loop antenna, the attack works only at a range of a few cm, and the experts could not reach their goal of 1 m: as distance increases, the signal drops out. So they switched to a log-periodic PCB antenna, which made the attack succeed even from 30 cm.
For now the tests were performed in a controlled lab environment, and it is not clear how the attack will perform in noisy real-world environments; the technique may need to be improved, perhaps with additional expensive equipment.
In practice, this setup is well suited to attacking network encryption appliances. Many of these targets perform mass encryption and the ciphertext is often easily captured from somewhere else in the network. | <urn:uuid:a6f281a8-69ce-4418-99aa-11ea797b952b> | CC-MAIN-2022-40 | https://gbhackers.com/aes-256-keys-can-sniffed-within-seconds-using-e200-worth-hardware-kit/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00316.warc.gz | en | 0.922276 | 539 | 2.703125 | 3 |
In preparation of your CCNA exam, we want to make sure we cover the various concepts that we could see on your Cisco CCNA exam. So to assist you, below we provided a CCNA Wireless Cliff Notes article. This section will probably be most helpful to review immediately before you take your Cisco CCNA certification exam on test day!
Wireless Communications (IEEE 802.11 standards, Wi-Fi)
Another new concept on the CCNA exam is the wireless network. The IEEE 802.11 standard, also known as Wi-Fi, specifies the standards for communication between your wireless devices. Since this is the future, you know Cisco will be increasing coverage of this topic on future CCNA exams. But right now there is not too much information covered on the current basic CCNA exam, so they have recently come out with a CCNA Wireless certification that goes much more in depth than what you need to know for this exam.
Advantages to wireless are the elimination of cables and the freedom of movement. Disadvantages include lack of range, reliability and security. But rest assured, they are improving on the shortfalls of wireless each and every day.
802.11b support 11 Mbps at 2.4 GHz frequency
802.11a support 54 Mbps at 5 GHz frequency
802.11g support 54 Mbps at 2.4 GHz frequency
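For quick review, the three standards above can be captured in a small lookup table (an illustrative study aid of our own, not exam material):

```python
WIFI_STANDARDS = {
    "802.11b": {"max_mbps": 11, "band_ghz": 2.4},
    "802.11a": {"max_mbps": 54, "band_ghz": 5.0},
    "802.11g": {"max_mbps": 54, "band_ghz": 2.4},
}

def standards_on_band(band_ghz: float):
    """Return the standards operating on a given frequency band."""
    return sorted(name for name, spec in WIFI_STANDARDS.items()
                  if spec["band_ghz"] == band_ghz)

print(standards_on_band(2.4))  # ['802.11b', '802.11g']
```

Seeing that 802.11b and 802.11g share the 2.4 GHz band also explains why 802.11g gear is typically backward compatible with 802.11b clients.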
BSS is a single access point which provides network connectivity for its clients.
In ESS, each access point still defines a BSS, but a group of access points and their BSSes form an ESS. This way they can overlap coverage areas so you will not lose network connectivity.
WEP is the Wired Equivalent Privacy standard which is a part of the 802.11 standard and is one of the WLAN security standards.
WPA is the Wi-Fi Protected Access security standard that is a hybrid of proprietary and standards based protocols.
WPA2 is the second version of the WPA security standard. It is based on 802.11i and is not backward compatible with the older standard.
Line-of-Sight: a direct line between the wireless device and the receiver with no obstructions in between (i.e. cell phones do not require Line-of-Sight but TV remotes do).
I hope you found this article to be of use and it helps you prepare for your Cisco CCNA certification. I am sure you will quickly find out that hands-on real world experience is the best way to cement the CCNA concepts in your head to help you pass your CCNA exam! | <urn:uuid:49ae357e-910f-4f6e-9dea-aba802afb02d> | CC-MAIN-2022-40 | https://www.certificationkits.com/ccna-wireless-cliff-notes/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334515.14/warc/CC-MAIN-20220925070216-20220925100216-00316.warc.gz | en | 0.940251 | 536 | 2.8125 | 3 |
Both tape-based and disk-based archives are growing at tremendous rates that are exceeding the density increases in storage technology and the reliability of storage. Humans are pack rats and the amount of data we save keeps growing, and this is unlikely to change. Part of the reason is that we do not know when data becomes unimportant, and becomes important again, and there is no standard framework (see Data is Becoming Colder, by Jeff Layton).
Since we do not have the tools to know when and if we should delete data, we archive everything. This is one of the reasons we have more and more data that is archived and must be protected. Protecting files in a large archive requires generating a checksum for each file, as well as regular validation of every file in the archive to ensure data integrity. When a checksum is invalid, you need software that uses a secondary valid copy of the file and then replaces files that are corrupted with valid copies.
I was recently talking with a customer who has large preservation archives and I stated that checksum verification can turn your archive problem into a high performance computing (HPC) problem. My definition of a preservation archive is an archive where the stated goal is that the information in the archive must be bit-wise exactly the same forever unless the file was rewritten for something like a format change (e.g., PDF 1.3 to 1.5).
The customer paused and asked why, and as I started to explain the answer he suggested that I write an article on the subject. It dawned on me that large preservation archives require significant amounts of computational power, memory bandwidth, PCIe bus bandwidth and storage bandwidth, and this is not much different architecturally than HPC computing, which is very computation- and I/O-intensive.
Today, many preservation archives are well over 5PB and a few are well over 10PB with expectations that these archives will grow to more than 100PB. With archives this large, the requirements for HPC architectures for checksum validation are not much different than many of the standard HPC simulation problems, such as weather, crash, and other simulations.
Most HPC problems require large numbers of floating point operations, but some problems, such as genetic pattern matching, also require significant integer performance. In large archives, checksums should be validated regularly; how regularly depends on the quality of the hardware and the amount of data, but even good hardware can go bad and corrupt your data.
Some archive systems use commodity hardware, which has well-known reliability issues including, but not limited to, memory without parity, low-end network adapters, and consumer-level disk drives that have much higher silent data corruption issues compared to, for example, ECC memory, high-end RAID controllers with SAS disk drives, enterprise-level tape systems, etc. Checksums must be regularly validated and checksum algorithms must be robust, which requires significant computational resources.
To validate the checksum for a file, the whole file must be read from disk or tape into memory, the checksum algorithm must be applied to the data that was read, and the newly calculated checksum must be compared to the stored checksum. (The stored checksum should itself be checksummed, so you can be sure you are comparing against a valid value.) With large archive systems this is often an ongoing process whether the data resides on disk or tape, but checksum validation is particularly critical for disk-based archives with consumer-grade storage.
HPC problems almost always involve CPU cores that are waiting on memory requests. In fact, some people have jokingly said that this is the definition of an HPC problem. Similarly, checksum calculations require significant memory bandwidth and will have idle cores. Since the whole file must be read into the core and have the checksum algorithm applied exactly once, there is no data reuse in any of the caches as the file streams through them on its way to the core.
You would think that most of the memory bandwidth would be used reading data into memory; since all of the files reside on disk or tape, these files must be read into memory. This actually becomes a write from the PCIe bus into memory and then a read from memory into the core to calculate the checksum. So for checksum calculations, memory usage in terms of reads and writes is nearly 50/50, as files are written into memory from the PCIe bus and read from memory into the cores and processed. Of course, at the end of the process the checksum must be compared to the originally generated checksum.
The PCIe bus is likely the most critical element of the system architecture, given that historically many PCIe buses do not run at their rated performance. With most CPU architectures today, memory bandwidth is at least 2X, and sometimes 8X or more, the performance of the PCIe buses, and memory bandwidth is in turn slower than CPU performance. This means that PCIe bandwidth is critical for checksum calculations: buying machines with poor PCIe bus bandwidth will limit checksum verification speed, because you need to get the data into memory.
With the PCIe 3.0 standard recently ratified, you can expect to see PCIe 3.0 systems later this year. This will help by doubling PCIe performance over PCIe 2.0. The problem is that memory bandwidth has increased at a far greater rate with the latest generation of technology from vendors such as AMD, IBM and Intel. This imbalance limits how much data you can read from storage for checksum validation.
Storage bandwidth is the long pole in the checksum validation tent given that storage performance has not kept pace with either PCIe bandwidth or memory bandwidth. Though flash technology has much higher bandwidth than rotating storage, it is not cost effective for large archives.
Storage resources must be able to read the data at a reasonable rate. Say you have a 10PB archive and want to validate checksums every 30 days. That would require close to 4GB/sec of sustained bandwidth (10PB / (30 × 24 × 3600 seconds) ≈ 3.9GB/sec), and that figure does not include ingest and file recalls from users. This means that storage systems must be able to read at roughly 4GB/sec from disk or tape into memory. Sustaining that rate continuously is not practical given the high cost, but the validation requirements, and how often you want to validate your archive, must be designed into the architecture and should be a major architectural consideration.
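The arithmetic behind that estimate is simple enough to sketch (using decimal petabytes, 1PB = 10^15 bytes):

```python
def required_bandwidth_gbs(archive_pb, interval_days):
    """Sustained read bandwidth (GB/s) needed to re-validate an archive
    of archive_pb petabytes once every interval_days days."""
    total_bytes = archive_pb * 10**15
    seconds = interval_days * 24 * 3600
    return total_bytes / seconds / 10**9

# A 10PB archive validated every 30 days needs roughly 3.9GB/sec of
# read bandwidth before any ingest or user-recall traffic is added.
print(round(required_bandwidth_gbs(10, 30), 2))  # → 3.86
```

Scaling the archive to 100PB at the same 30-day interval pushes the requirement to roughly 39GB/sec, which shows why the validation interval must be an explicit architectural decision.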
The reliability of digital data can be impacted by many factors, from bit rot to bad hardware to the statistical probability of silent data corruption based on standard channel error rates. Checksum validation is critical to keeping archive data valid, as is having multiple copies. Using robust checksums improves the validation process but increases the computational requirements.
The key is to have a balanced system that meets the requirements for checksum validation, ingest and access. Balancing CPU, memory, PCIe and storage bandwidth is often a difficult part of the architectural planning process.
The only difference between large archives and large HPC problems is the network interconnection between the nodes, which in the case of HPC is usually InfiniBand given the need for high performance and low latency.
Large preservation archives could benefit from some of the architectural techniques developed in designing HPC systems. With large archives you cannot expect the data to come back intact years later without regular checks, which reduce the probability that more than one copy of the data will be corrupted.
Henry Newman, CEO and CTO of Instrumental, Inc., and a regular Enterprise Storage Forum contributor, is an industry consultant with 29 years experience in high-performance computing and storage.
Biologists at the University of Florida organized research to explain the significance of finger proportions in men and women.
Understanding finger proportions helps doctors understand behavior and disease; it is useful for treatment and for predicting the risk of specific medical conditions. Biologists Martin Cohn, Ph.D., and Zhengui Zheng, Ph.D., of the department of molecular genetics and microbiology at the UF College of Medicine, show that male and female digit proportions are determined by the balance of sex hormones during early embryonic development.
Finger bones differ in levels of sensitivity to Androgen and Estrogen
Receptors in males and females respond differently to the levels of these hormones, which affects the growth of specific digits. The discovery provides a genetic explanation linking finger proportions with features ranging from sperm counts, aggression, sexual orientation, musical ability and sports prowess to health problems such as autism, depression, heart attack and breast cancer. The digit ratio had long been thought to be influenced by sex hormones, but experimental evidence was still inadequate.
The researchers showed that hormonal signals control the rate at which skeletal precursor cells divide, and that different finger bones differ in their levels of sensitivity to androgen and estrogen. Since Roman times, people have associated the hand's fourth digit with the wearing of rings, and in different cultures around the world a longer ring finger in men has been taken as a sign of fertility.
The biologists found that the developing limbs of mouse embryos are packed with receptors for sex hormones. The scientists manipulated the signaling effects of androgen, also known as testosterone, and estrogen; more estrogen resulted in a feminized appearance.
The growth of the developing digits is controlled directly by androgen and estrogen receptor activity. The findings also point to further differences between the sexes, and suggest that our fingers can tell us about the hormonal signals we were exposed to during a short period of our time in the womb.
The study identified 19 genes sensitive to prenatal testosterone and prenatal estrogen. There is growing evidence that a number of adult diseases have fetal origins.
Every day, organizations around the world are subjected to a ransomware attack. Ransomware attacks can take many forms, in fact, the variety and ingenuity of these attacks increases as the business community becomes more aware of the challenges and adept at meeting them. But all forms of ransomware follow the same basic pattern: an employee receives an email containing an attachment.
The email is written in such a way as to coax the user into opening the attachment: it purports to be time-sensitive information from a superior or an invoice from a vendor, for example. Upon opening the attachment, a virus runs that encrypts information on the local computer. The user is then greeted with a dialog box or window informing them that their information is locked, and they must pay a ransom to regain access to it. Learn more about ransomware attacks and see how ransomware protection can help your organizations.
Even though a ransomware attack directly affects only the user that opens it, the entire organization can suffer because of mapped network drives or even shared cloud storage.
The challenge is that ransomware attacks grow more sophisticated as corporations become more aware of the problem. Since ransomware is usually launched via email, defensive strategies must focus on email security, including cloud-based Office 365 anti-phishing services.
By the time any business is aware that they are the target of a ransomware attack, the damage has already been done. Once a user clicks on a malicious link or attachment, access to local data on that employee’s computer is locked. In order to unlock the data, some form of ransom must be paid. In about 91% of cases, the vector for ransomware is incoming email, often in the form of a spear phishing attack that purports to be from a sender known and trusted by the victim.
Examples of Ransomware Attack – Variations on a Theme
While there are many different types of ransomware, all follow the same basic pattern and have the same goal: to extort payment from your organization by making the information vital to your organization’s success inaccessible.
Here are some of the more commonly seen variations on the theme of data kidnapping.
CryptoLocker and its spiritual successor, CryptoWall, share the dubious distinction of being the reason for the more widespread awareness of ransomware in recent years. Some form of ransomware has been in existence since the early days of the internet, but it only became a household word with the emergence of CryptoLocker. With the shutdown of the original CryptoLocker botnet in 2013, CryptoWall and its successors emerged. Today, variations on the CryptoLocker approach are still widely used. The original CryptoLocker attacked files on Microsoft Windows computers, encrypting them with PKE, and storing the private keys on the CryptoLocker servers.
Crysis, like most newer forms of ransomware, is capable of encrypting both local and shared network drives as well as removable media, meaning it can spread throughout a corporate network extremely quickly. It makes use of a very strong encryption algorithm that is nearly impossible to crack within a reasonable period of time. Double file extensions are often used to make the file appear non-executable to Windows users. Crysis has also been disguised as an application installer in addition to being an email attachment.
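The double-extension trick mentioned above (a name like invoice.pdf.exe displays as invoice.pdf when Windows hides known extensions) can be screened for mechanically at an email gateway. A minimal illustrative filter; the extension lists here are examples, not exhaustive:

```python
import os

# Extensions that actually execute on Windows (illustrative, not exhaustive).
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat", ".cmd", ".js", ".vbs", ".ps1"}
# Document-style extensions commonly used as the decoy half of the name.
DECOY_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".jpg", ".txt"}

def looks_like_double_extension(filename):
    """Flag names such as 'invoice.pdf.exe' where a document-style
    extension is used to disguise an executable one."""
    root, last = os.path.splitext(filename.lower())
    _, second_last = os.path.splitext(root)
    return last in EXECUTABLE_EXTS and second_last in DECOY_EXTS
```

A check like this is only one layer; it should sit alongside attachment sandboxing and sender verification, not replace them.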
It takes a “franchise” approach to ransomware, outsourcing the distribution and infection tasks to partners, who are then cut in for a share of the profits. This approach ensures rapid spread of infection and maximizes revenue within a short time frame.
Rather than encrypting files, Jigsaw deletes them until the ransom is paid. After one hour, a single file is deleted, and the number of deleted files increases with each hour. After 72 hours, all remaining files are deleted.
Its ransom demand approach begins with an “invoice” in an email. When the invoice is opened, its content is obscured, and the user is directed to enable macros in order to unscramble it. Once macros are enabled, the payload goes to work, using AES encryption to lock down a wide variety of file types.
It takes a wholesale approach: rather than locking individual files, it overwrites the master boot record. After the computer is restarted, the operating system no longer boots.
TorrentLocker (sometimes referred to as CryptoLocker) is usually sent out as an attachment to a spam email targeted at specific regions. It uses AES encryption not only to lock files but also to grab email addresses from the user's contact list in order to continue propagating itself.
WannaCry is spread through the EternalBlue Microsoft exploit and has become one of the most damaging and widespread examples of ransomware in the world. Over 125 thousand companies in over 150 countries have been affected by this malware, which demands ransom payments in BitCoin, as well as installing backdoors for future exploits on infected systems.
What Can Be Done About the Threat of Ransomware?
The only adequate defense against ransomware attacks is two-pronged: strong ransomware protection technology to prevent phishing must be coupled with secure and accessible email backup and archiving that gives users access to email in the event the organization falls victim to attack. DuoCircle’s Advanced Threat Defense is a multi-layered approach to email threat protection that pulls all the features you need together in a single integrated solution to fight…
- Phishing attacks
With Advanced Threat Defense, DuoCircle protects your employees (and your entire enterprise) from spam, malware, ransomware, phishing, and malicious attachments. Our sophisticated classification engine detects and defends your entire organization against these threats in real-time, and with the highest possible level of accuracy.
Advanced Threat Defense from DuoCircle provides:
- Protection from malware and zero-day attacks, with 100% availability.
- Spam protection that eliminates 99% of all incoming spam with a false positive rate of less than one in ten thousand.
- Unlimited users and unlimited inbound message volume
- Protection against domain name spoofing
- Blocking of malicious attachments.
- Real-time activity logs, with access to the email queue and click reporting
- Smart Adaptive Quarantine, which puts the burden of sorting spam messages on the sender rather than the recipient.
- A thirty-day backup queue – 30 days of MX backup service included
- Chat, email and phone support is available 24/7 | <urn:uuid:03a72fc8-e88e-4931-a771-5cd9252691ee> | CC-MAIN-2022-40 | https://www.duocircle.com/phishing-protection/ransomware-attacks-will-you-be-ready | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00316.warc.gz | en | 0.938134 | 1,345 | 2.546875 | 3 |
Northrop Grumman's Pegasus XL rocket has carried a satellite designed to help NASA study the border between terrestrial and space weather as well as its potential impact on communications systems, people and technologies.
The company said Thursday its Stargazer L-1011 aircraft carried Pegasus as part of the launch's first phase, after which the rocket ignited and sent the Northrop-built Ionospheric Connection Explorer satellite to a 357.3-mile orbit.
ICON's design is based on Northrop's LEOStar-2 satellite bus and serves as the ninth Northrop-manufactured and launched science satellite for NASA. Previously, the agency certified the air-launched Pegasus rocket to carry its small satellites as a Category 3 vehicle.
Steve Krein, vice president of civil and commercial satellites at Northrop, said initial data show that ICON is “in good health and performing as expected.“ He added that the company has been manufacturing satellites to support NASA missions over the past 35 years.
Northrop is working on the JPSS-2 and Landsat-9 satellites for NASA, which will operate as larger spacecraft based on the LEOStar-3 bus.
Researchers Propose COVID-19 Tracking App
Smartphone App Would Warn of Proximity to Those With Virus While Protecting Privacy
Researchers at Boston University have written a research paper that proposes creating a smartphone app that uses short-range transmission technologies that can inform users if they have been in close proximity to a person infected with COVID-19 – while maintaining privacy.
In the paper, researchers Ari Trachtenberg, Ran Canetti and Mayank Varia describe the use of short-range transmission technologies, including Service Set Identifier broadcasts, near-field communication and Bluetooth, to generate a random identification number for the user's device that avoids divulging personal details. This could help avoid some of the privacy issues that come with tracking GPS or real-time location data.
"We believe that the privacy guarantees provided by the scheme will encourage quick and broad voluntary adoption," according to the researchers, who advocate a voluntary approach to using this type of smartphone tracking app.
Current methods of tracking the location of individuals who could potentially be infected with COVID-19 rely on collecting cell phone location data and combining it with personal information, which is an "unprecedented encroachment on individual privacy," the researchers note.
In March, the Trump administration spoke with technology firms, including Google and Facebook, about ideas for using real-time location data from smartphones to help track COVID-19 cases. But lawmakers, attorneys and security professionals expressed concerns about the concept (see: Should Location Data Be Used in Battle Against COVID-19?).
Since then, Google devised a COVID-19 Community Mobility Report that provides insights into trends in individuals' movements by using aggregated, anonymized data.
On Monday, the European Data Protection Supervisor, the EU's independent data protection authority, called for creation of one app to track COVID-19 cases, arguing that the use of too many apps would put citizens' privacy at risk.
Using the App
The Boston University researchers propose creating a smartphone app that would create a randomly generated identification number that can be shared with those in the community who also use the app.
Each smartphone using the app would broadcast a random token number to ensure that there's no obvious link to the user's personal information. The number would change every few minutes to ensure privacy and reduce the possibility of hacking. The app would record all the tokens it receives from nearby devices, according to the paper.
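The token scheme as described, random identifiers rotated every few minutes with all received tokens logged locally, can be sketched as follows. The class name, rotation interval, and token size are illustrative choices, not details taken from the paper:

```python
import secrets
import time

ROTATION_SECONDS = 10 * 60  # rotate every ten minutes (illustrative interval)

class TokenBeacon:
    """Broadcast short-lived random tokens and record tokens heard from
    nearby devices, with no link back to personal identity."""

    def __init__(self):
        self._token = None
        self._born = 0.0
        self.sent = []      # tokens this device has broadcast (kept locally)
        self.observed = []  # tokens received from nearby devices

    def current_token(self, now=None):
        # Generate a fresh 128-bit random token once the old one expires,
        # so no stable identifier is ever broadcast.
        now = time.time() if now is None else now
        if self._token is None or now - self._born >= ROTATION_SECONDS:
            self._token = secrets.token_hex(16)
            self._born = now
            self.sent.append(self._token)
        return self._token

    def hear(self, token):
        # Log every token received over Bluetooth, NFC, or SSID broadcast.
        self.observed.append(token)
```

On a positive diagnosis, the user could voluntarily upload their `sent` list; other devices then check it against their own `observed` logs for overlap.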
If a user is diagnosed with COVID-19, they could voluntarily share that data through the app, Trachtenberg explains in a LinkedIn post.
"When a person is tested positive for COVID-19, the person could choose (through the administrating medical professional) to voluntarily share their list of random numbers - either their own generated numbers or the numbers that the app observed," Trachtenberg notes.
The data and tokens could then be uploaded to a trusted website, such as the U.S. Centers for Disease Control and Prevention, to help track cases. At the same time, the app could connect to a database, which could alert users if they have been in contact with someone who has tested positive, according to the researchers. That could help with decisions about self-quarantining or testing.
"The random numbers break up the user's location history by varying them over time," Trachtenberg says. "The approach is dead simple and does not require any sophisticated math."
The researchers are seeking feedback on their proposal. The team acknowledges that the app needs security in place to ensure that identification tokens can't be spoofed and prevent counterfeit versions from appearing in app stores.
"Perhaps by far the greatest hurdle for this app is adoption - very quickly getting a large body of people to use the application, including medical professionals who are administering tests," according to the paper.
The researchers believe that they have the technology to create the app, but they are seeking more input from healthcare professionals, their report states. | <urn:uuid:c898ddc1-9c7f-434b-a225-d2d3f8eada4d> | CC-MAIN-2022-40 | https://www.bankinfosecurity.com/researchers-propose-covid-19-tracking-app-a-14073 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00316.warc.gz | en | 0.936339 | 812 | 2.65625 | 3 |
The IoT cybersecurity firm, Armis, has revealed eight vulnerabilities in the implementation of Bluetooth in several operating systems, including Android, Windows, Linux, and iOS, successful exploitation of which could allow hackers to take complete control of a device. Indeed, these are the most severe vulnerabilities found in Bluetooth in recent years and are worrying due to their ability to be spread over an air interface. They have been termed ‘The BlueBorne Vulnerabilities’.
Airborne attacks on mobile devices date back to the Cabir worm, an attack that presented the first proof of concept of a Bluetooth malware that was spread fast and wide, and even penetrated enclosed air-gapped networks.
The BlueBorne vulnerabilities are the result of a complex protocol which has been discarded and ignored by the research community for years, along with two common misconceptions regarding Bluetooth. The first misconception is that Bluetooth cannot be intercepted via the air, the second being that it always requires some sort of user interaction. The BlueBorne vulnerabilities prove both assumptions wrong as merely having Bluetooth on a device switched on renders it vulnerable to an attack.
What is crucial to understand here however is the sheer magnitude of this set of vulnerabilities. It is simply breathtaking; virtually any device with a Bluetooth interface is susceptible to at least one of BlueBorne's vulnerability sets. Since the discovery of BlueBorne, all operating system manufacturers have issued patches mitigating the vulnerabilities, and on iOS, the vulnerabilities only affect versions prior to iOS 10. On September 9th 2017, Google issued a security update for its Android users.
Check Point SandBlast Mobile however can protect mobile devices from this threat, both on iOS and Android, by helping to verify that mobile devices on your network are in compliance with the latest OS versions and security patches.
In addition, any active exploitation of the Android Bluetooth stack will be detected by SandBlast Mobile on device detection, giving you an extra layer of protection.
Here’s how to make sure you are protected:
In Settings->Policy Settings->Device, change to the following configuration: | <urn:uuid:b920447a-7690-4602-985e-b5de38ec64bb> | CC-MAIN-2022-40 | https://blog.checkpoint.com/2017/09/12/blueborne-new-set-bluetooth-vulnerabilities-endangering-every-connected-device/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00316.warc.gz | en | 0.940125 | 429 | 2.796875 | 3 |
What we detect
Who creates malware and why?
Have you ever wondered who creates malware? Or why they do it? Find out more about the people behind the threat – the script kiddies, virus writers, and cybercriminals – and what motivates them.
Trojans, viruses, worms, dialers – the programs we detect have lots of different names. Find out how Kaspersky Lab and other antivirus companies classify the many different types of programs which can harm your computer or your data.
History of Malicious Programs
Do you know the name of the first computer virus? Or perhaps you want to find out when the first email worm was created. This section covers the evolution of malicious programs from their initial appearance to the present day.
What if my computer is infected?
With the number of threats rising every day, you may find that your computer has been infected. Find out more about the symptoms of infection, and what steps you should take to clean your computer. | <urn:uuid:aa648aac-30ac-47ba-ad0b-f5a9d2bbb90e> | CC-MAIN-2022-40 | https://encyclopedia.kaspersky.com/knowledge/detected-objects/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00316.warc.gz | en | 0.940481 | 206 | 2.828125 | 3 |
Encryption of VoIP traffic was, for some of us, a humorous concept. I remember as a young development professional how much fun it was to use a packet sniffer to capture the boss's packets and reassemble his email over the LAN. Years before that, when I worked at the phone company as a central office test engineer, it was not uncommon to find an interesting phone call and plug it into the overhead paging system to provide entertainment for the late night test crew. There are times I still think the concept of encryption on VoIP is humorous, but it is becoming less funny all the time as we move toward end-to-end VoIP with no TDM at all, in a world populated by terrorists and other evildoers. In any VoIP environment today, you can at some point use the usual tapping tools to capture a phone call as it hits the TDM gateway and is converted from VoIP to traditional analog or digital signals. From an induction coil to a lineman's butt set, you can still intercept a VoIP call as it crosses the TDM boundary.
Now that VoIP is being used end to end, we need a mechanism for encrypting at least the media stream. Today we generally do that with SRTP, an IETF standard, in combination with AES. AES, the Advanced Encryption Standard, was adopted by the US Government and comprises three block ciphers: AES-128, AES-192 and AES-256. Each AES cipher has a 128-bit block size, with key sizes of 128, 192 and 256 bits respectively. This standard has generally replaced the former Data Encryption Standard, or DES. It is important to understand the difference between encryption and authentication. Determining that a signal is "authentic" and originated from a source we believe to be authentic, and encrypting the contents of that communication, are two very different issues. Media authentication and encryption ensure that the media streams between authenticated devices (i.e. we have validated the devices and identities at each end) are secure and that only the intended device receives and reads the data. We need to encrypt both the media (i.e. the voice) and the signaling information (e.g. DTMF digits). In most VoIP systems today, SRTP, or Secure RTP, is implemented to ensure media encryption. Understand that this encryption is not passed through to the TDM network, so once the media stream leaves the VoIP environment it is subject to eavesdropping.
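To make the encryption/authentication distinction concrete, here is a toy version of the SRTP pattern: encrypt the payload with a keystream, then append a truncated HMAC tag over the ciphertext. Real SRTP derives its keystream with AES in counter mode; since AES is not in the Python standard library, this sketch substitutes a SHA-256-based keystream purely for illustration and must not be used as actual cryptography:

```python
import hmac
import hashlib

def _keystream(key, index, length):
    # Illustrative stand-in for AES counter mode: derive a per-packet
    # keystream from the session key and the 48-bit packet index.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + index.to_bytes(6, "big") +
                              counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def protect(enc_key, auth_key, index, payload):
    """Encrypt an RTP-style payload, then append an authentication tag
    over the ciphertext (encrypt-then-MAC, as SRTP does)."""
    stream = _keystream(enc_key, index, len(payload))
    ciphertext = bytes(p ^ s for p, s in zip(payload, stream))
    tag = hmac.new(auth_key, index.to_bytes(6, "big") + ciphertext,
                   hashlib.sha1).digest()[:10]  # SRTP's default is an 80-bit tag
    return ciphertext, tag

def unprotect(enc_key, auth_key, index, ciphertext, tag):
    # Authenticate first: a bad tag means the packet is rejected before
    # any decrypted data is ever trusted.
    expected = hmac.new(auth_key, index.to_bytes(6, "big") + ciphertext,
                        hashlib.sha1).digest()[:10]
    if not hmac.compare_digest(expected, tag):
        raise ValueError("authentication failed: packet rejected")
    stream = _keystream(enc_key, index, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, stream))
```

The point of the sketch is the separation of duties: the keystream hides the voice, while the tag proves the packet came from the authenticated peer and was not altered in transit.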
Clearly, now that we are able to employ VoIP end to end, SRTP/AES encryption has very powerful ramifications for both the good guys and the bad guys!
Does your Microsoft Outlook integrate with your phone system? This functionality is getting to be the "minimum daily adult requirement" feature in the VoIP vendor space. We all just expect that our phone system "knows" about our "contacts". We don't dial phone numbers anymore! We enter names and the phone system gets the number out of our contact list and places the call. Often, an incoming phone call to our desktop will cause our contact information to be displayed. Some integrations enable your phone system to change user profiles and call handling modes based on your Outlook contact. Of late I have been wondering how far this integration can go. I mean, if I have a conference call scheduled in my Microsoft Outlook, shouldn't the phone system know about that? My thinking is the phone system should just call me, remind me of the conference, and then ask me to approve joining the meeting!
Using a VoIP phone system and installing one are two entirely different user experiences! Most product development efforts necessarily focus on end user features and benefits. Anyone who has ever installed a VoIP system knows that if design engineers ever actually installed a system, we would have a range of exciting new configuration automation tools! Case in point: LLDP. Until version 9, ShoreTel IP phone deployment was a two-step process. First you would install your handsets on the network and they would boot up in the native VLAN. If you were deploying 100 desktops, this meant that the phones would eat up 100 of your native VLAN DHCP leases. The phones would then obtain their VLAN tag and reboot in the correct VLAN. The native VLAN lease damage, however, was already done, not to mention the wait for that second-boot DHCP broadcast request for service. A number of vendors had previously created proprietary discovery protocols to overcome this behavior: Cisco has always had CDP (not to be confused with the Enterasys Cabletron Discovery Protocol); Nortel had NDP; Extreme Networks had EDP; and Foundry had FDP. In 2005 an industry standard, LLDP, was adopted and later extended to become LLDP-MED, or Link Layer Discovery Protocol for Media Endpoint Devices. ShoreTel, rather than inventing yet another vendor-proprietary protocol, has adopted this industry standard, greatly simplifying IP phone deployment. LLDP-MED allows network devices to advertise their identity and capabilities through a multicast. This enables the phones to come up on the network in the correct VLAN, eliminating the multi-boot requirement. Now, this is not a feature that an end user will notice or appreciate, but those of us who have to spend hours deploying VoIP desktops say cheers!
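Under the hood, an LLDP advertisement is just a sequence of TLVs: a 7-bit type and a 9-bit length packed into two bytes, followed by the value. A minimal illustrative encoder and decoder is below. Real LLDP-MED VLAN assignment uses the organizationally specific TLV (type 127) carrying a Network Policy payload; the simpler System Name TLV (type 5) is shown here:

```python
import struct

def encode_tlv(tlv_type, value):
    """Pack one LLDP TLV: 7-bit type and 9-bit length in two bytes,
    followed by the value bytes."""
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def decode_tlvs(frame):
    """Yield (type, value) pairs, stopping at the mandatory
    End-of-LLDPDU TLV (type 0)."""
    offset = 0
    while offset < len(frame):
        (header,) = struct.unpack_from("!H", frame, offset)
        tlv_type, length = header >> 9, header & 0x1FF
        offset += 2
        value = frame[offset:offset + length]
        offset += length
        if tlv_type == 0:
            break
        yield tlv_type, value

# Example: a System Name TLV (type 5) followed by End-of-LLDPDU (type 0).
frame = encode_tlv(5, b"shoretel-phone-1") + encode_tlv(0, b"")
```

A phone listening for these multicasts can parse out the advertised voice VLAN before it ever sends a DHCP request, which is exactly what eliminates the double boot.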
update 2/15 see this link http://support.drvoip.com/admin/knowledgebase_private.php?article=43&back=1 | <urn:uuid:c0e9a8b3-90d5-4739-bbe2-d25d075bca64> | CC-MAIN-2022-40 | https://drvoip.com/tag/voip/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00316.warc.gz | en | 0.949333 | 1,116 | 2.609375 | 3 |
Spear phishing is an email scam targeted towards a specific individual, organization or business. Cybercriminals are targeting these businesses and high earning individuals because they can be much more lucrative for them. We don’t want anyone to get phished, so in this article, we break down what spear phishing attacks are and tips to help keep you and your business safe.
What is a Spear Phishing Attack?
To better explain, we need to describe a little bit about the difference between phishing and spear phishing. Let’s get some definitions out of the way to get started:
Like its namesake (fishing), phishing is going in search of a 'catch': in this case, valuable personal information that attackers can leverage to get money.
Phishing is the act of sending fake emails designed to look exactly like an actual message from something such as a bank, credit card company, or paid service that you use. The intent is to get you to click on a link that will redirect you to a scam website to collect your legitimate login and account details. Other times, the attack is an attachment in the email that looks legitimate but will actually infect your computer with malware.
Phishing is essentially a 'volume business' for the scammers, meaning they will send emails continuously to millions of people, without any research, just trying to get as many victims as possible.
Phishing vs Spear Phishing Attacks
As the name implies, spear phishing is a much more targeted approach. Rather than try to grab many small victims of little value, scammers attempt to catch just a handful of big targets that may be worth a lot of money. As opposed to phishing, spear phishing is often carried out by more experienced scammers who have likely researched their targets to some extent.
Like a regular phishing attack, intended victims are sent a fake email. It will contain a link to a website controlled by the scammers, or an attachment with malware inside. In spear phishing, however, the email will often be customized to match the specific target organization.
Customization in this instance can mean creating a fake email address that looks almost identical to an actual co-worker (sometimes only a difference of a single letter in the domain name). The body of the email itself may use ‘letterhead’ and corporate logos appropriately matching the company it is supposedly coming from.
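Domains one character away from a trusted name can be caught mechanically with an edit-distance check. A small illustrative sketch, not a complete defense:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender_domain, trusted_domains):
    """Flag a domain that is almost a trusted one: a single edit away,
    but not an exact match."""
    return any(0 < edit_distance(sender_domain, d) <= 1
               for d in trusted_domains)
```

A production filter would also handle homoglyphs (such as rn for m) and internationalized domain names, which single-character edits alone do not cover.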
The end goals of phishing and spear phishing differ. In phishing, the goal is to steal an individual's bank, credit card, and personal details, which are then sold off on the black market. Easy and fast money is usually the goal.
In spear phishing, passwords and system access are often the main goals. Why get a handful of bank account passwords when you might be able to break into the bank’s system and get every customer’s password? That is the appeal of spear phishing to the scammer.
Phishing vs Whaling Phishing Attacks
One final definition to add to this glossary of techniques is whaling. Whaling can be viewed as an even more refined version of spear phishing: it specifically targets victims who are C-suite executives at a company.
If spear phishing usually targets employees or small businesses (the ‘fish’), then the ‘whale’ in whaling is the ‘big fish’: a high-level member of an organization. To attract their attention, emails may appear to be legal threats or important complaints.
Spear Phishing Real Life Examples
Here are some examples of real-world spear phishing attacks that have been in the news. They show how financially damaging these scams can be.
Austrian manufacturer lost $55 million and replaced CEO
FACC, an Austrian manufacturer of airplane parts, allegedly lost $55 million to a spear phishing scam in 2016. FACC has not released the full details of what transpired, but it is thought that there was some kind of whaling attack, involving impersonation of high-level financial executives. The CEO of FACC was replaced.
Employee impersonation cost company $47 million
In a similar case, NBC News found that Ubiquiti Networks, a computer networking company, was allegedly scammed out of $47 million. Like FACC, the company declined to release full details, but Ubiquiti said the scam involved the impersonation of employees and was targeted at their finance department.
Lithuanian man takes $100 million from Big Tech
According to the US Justice Department, from 2013 to 2015 a Lithuanian man named Evaldas Rimasauskas allegedly ran a clever scheme that made him millions. He created a shell company in Latvia with a name identical to a computer hardware company. He then sent spear-phishing emails to the top Silicon Valley corporations that did business with that hardware company. His messages had him posing as that legitimate computer hardware company, with his copycat name acting as cover. By then “billing” these companies he allegedly raked in over $100 million before being discovered by federal authorities.
Common Spear Phishing Email Traps
Here are some common situations that spear phishers use to target high priority companies and employees. Notice how they manipulate trust and the mindset of the user in order to gain access.
- An email about an unpaid invoice from a business partner; the attached invoice itself is actually a piece of malware.
- An “urgent request” for an immediate wire transfer, supposedly from a top-level officer at a company.
- A supposed security alert email from a software service a company uses, urging that the user immediately follow the included link to reset their password. The link leads to a scammer-controlled site, where the password info is stolen.
- Asking for copies of employees’ W-2 information, supposedly from a top-level payroll/HR officer at a company.
- A PDF detailing a new business procedure, appearing to be sent from a top-level executive. The PDF is malware.
- A link to an online document that needs review (like a Google Doc). The link leads to a malicious website.
- A phone call from “tech support” for software your business uses, asking for login information so they can “fix a problem.”
Be wary: scammers working in spear phishing often take the extra time to research their victims, using publicly accessible data like staff contact lists, in order to aim their attacks at the right target and to know whose identity they will fake in their spear phishing email message.
Tips To Avoid Spear Phishing Attacks
Here are some suggestions that may help avoid spear phishing attacks.
Run it by the actual person
If an email from a supervisor or business partner seems surprising or unusual, confirm with them that it is something they actually sent you. Do not reply directly to the suspicious email. If you are able, confirm the email message with the individual either in person or by phone.
Train your employees
Have employees in your organization use anti-phishing training software to learn how to avoid falling for email scams, phishing, and spear phishing attacks. If you can build a knowledge base and a work culture based around security, you have a better chance of avoiding problems.
Check and recheck the email address
If you cannot immediately confirm with the person who allegedly sent you a suspicious email, recheck the email address. Sometimes the sender’s name appears to be correct, but you may notice the email address itself is wrong. (Think of it like using a fake return address on old-fashioned ‘snail mail’.) Other times, a more sophisticated scammer may alter only a few letters in the address or the domain name. (Rather than website.com, the scam email may be from something like wesite.com [missing a letter], or something like website.ru [altered domain].)
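The lookalike-domain check described above can even be automated. The sketch below compares a sender’s domain against an allow-list of trusted domains using a similarity ratio; the domain list and the 0.8 threshold are hypothetical, purely to illustrate the idea:

```python
from difflib import SequenceMatcher

# Hypothetical allow-list of domains your organization actually deals with.
TRUSTED_DOMAINS = ["website.com", "yourbank.com"]

def lookalike_score(sender_domain):
    """Return the trusted domain most similar to the sender's, with a 0-1 ratio."""
    best = max(TRUSTED_DOMAINS,
               key=lambda d: SequenceMatcher(None, sender_domain, d).ratio())
    return best, SequenceMatcher(None, sender_domain, best).ratio()

best, score = lookalike_score("wesite.com")
# A near-miss (high similarity to a trusted domain, but not an exact match)
# is exactly the "missing a letter" trick described above.
suspicious = score > 0.8 and best != "wesite.com"
```

A real mail filter would also normalize case, strip subdomains, and check for swapped top-level domains, but the core red flag is the same: almost-but-not-quite matching a name you trust.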
‘Hover’ and review
Most email software allows the user to hover the mouse over the sender’s name on a piece of unopened email, revealing the full address of the sender. This can be an easy way to determine if the email address is legitimate. Similarly, many web browsers allow users to hover over link text to view the destination URL. If the site seems suspicious, it could be a link to a scam website that contains malware.
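The ‘hover and review’ check works because the visible link text and the real destination URL are two separate things in an HTML email. The sketch below pulls out both so they can be compared; the scam URL shown is made up for illustration:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (visible text, actual href) pairs from an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://scam.example/login">https://website.com/login</a>')
for text, href in auditor.links:
    # Visible text that looks like a URL but differs from the real target
    # is the same mismatch that hovering reveals to a human.
    mismatch = text.startswith("http") and not href.startswith(text)
```

This is what security gateways do at scale: flag any message whose displayed link text disagrees with where the link actually points.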
Have varied passwords
Try to have different passwords across the accounts and software your business uses. If a spear phishing attack is successful and a scammer is able to gain access to one password, they could still be blocked by others.
On the other hand, if all passwords are simple variations of each other, you may have handed over everything. It could just be a simple series of guesses for the scammer to correctly get all the rest of your logins.
Have security software
Having anti-virus and anti-malware software on your organization’s systems may help lessen the impact of a phishing attack, in the event an employee falls for one. The successful implementation of anti-malware software may be able to ‘quarantine’ the malware and keep it from gaining access to data.
Run it by an expert
If you are still not certain whether an email is legitimate or not: try to confirm it with the appropriate IT professional within your organization.
Here are some recent statistics on phishing attacks that give an idea of the scope of the problem and its dangers:
- Up to 30% of phishing emails sent are opened. It only takes one to be a potential security vulnerability for an organization.
- An average company of 10,000 employees can spend up to $3.7 million to deal with phishing attacks.
- Up to 90% of phishing emails can contain ransomware.
- Phishing attacks numbered in the hundreds back in the early 2000s, but now number in the hundreds of thousands.
If you follow these anti-spear phishing tips you will be able to keep yourself and your company safe. | <urn:uuid:062e2d09-c59c-4f12-9540-bfe20683aaec> | CC-MAIN-2022-40 | https://inspiredelearning.com/blog/spear-phishing-explained/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00316.warc.gz | en | 0.951964 | 2,096 | 2.609375 | 3 |
A new cybersecurity industry report has revealed that a group of cybercriminals is infecting government, university and enterprise computer networks with malware created for the National Security Agency. The security firm that released the report described the hacking campaign as something “that exceeds anything we have ever seen before,” noting that the malware being used is very flexible and incredibly hard to detect.
The cybercriminal organization
Researchers revealed that the group even appears to have a connection with Stuxnet, the computer worm responsible for sabotaging Iran’s nuclear enrichment program in 2010. Stuxnet was later revealed to be a joint project between the U.S. and Israel. Even more worrying, according to the report, is that the hackers have been using a tool known as GROK – something exclusively used by the NSA’s cyber-warfare unit. Use of GROK by the U.S. government was revealed in the classified NSA files leaked by former contractor Edward Snowden.
This malware is scary, but nothing new
While the findings of the report are alarming, this is just the latest in a string of incidents in which government-grade malware has been used for corporate espionage. Hackers backed by China have stolen files from power plants containing business plans, and Russian cyberspies have infected the corporate networks of oil and gas companies. However, the researchers responsible for the study don’t believe the Equation Group is backed by one particular government. There is evidence the cybercriminals have hacked into Chinese hospitals, Iranian banks and aerospace companies, Russian universities, rocket science research institutions, military facilities and even Pakistani government agencies.
The hackers used the malware to monitor keystrokes on enterprise machines and steal documents using legitimate credentials. In one particular scenario, the group programmed the malicious software to look specifically for shipping contracts and inventory price lists related to oil sales.
To defend against malicious software infecting their privileged network, many companies have started to employ a layered security solution. Faronics Anti-Virus provides protection for multiple endpoints and leverages a variety of strategies, such as Web filtering, firewalls, anti-rootkit and anti-spyware, to protect against cybercriminals and keep important information and systems safe. | <urn:uuid:0f68f0b3-a4c7-4373-a503-2b1c297a2401> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/new-super-sneaky-malware-highlights-enterprise-need-for-layered-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00316.warc.gz | en | 0.949343 | 449 | 2.515625 | 3 |
9.12.2022 • Blog
Nobody wants to unknowingly download malicious software to their computer or other devices, causing viruses or malware to spread across the network. Without proper security, you could fall victim to this and fork out thousands of dollars.
The number of security options that are out in the world to protect your network can be overwhelming at first. With so many advances in technology, this has led to the need for more advanced security options.
Coeo does not offer EDR but we have helped thousands of people secure their network and know what it takes to prevent you from being a victim of malicious software.
We want to help educate you on everything network security so you know how to combat cyber-attacks and malicious software.
By the end of this article, you will know what Endpoint Detection and Response is, the difference between it and Antivirus Protection Software, as well as if Endpoint Detection and Response is something your organization should invest in.
What is Endpoint Detection and Response (EDR)?
Endpoint Detection and Response (EDR) is a cybersecurity technology that continually monitors endpoints so that malicious actors in your network are detected quickly.
EDR detects a threat that exists in your network already, whether it be a software virus or malware, and contains it so it does not spread throughout your network.
Once the threat is contained, EDR analyzes and defines the nature of the threat and notifies your IT team.
Studying the threat will give information on its behavior which can be conveyed to the cyber threat intelligence system so it can help develop and evolve to address and detect future threats.
EDR will give your IT team information on the threat such as the parts of your network that have been affected, what the threat is currently doing, and how to stop the attack altogether.
EDR constantly monitors your endpoints and analyzes them for threats that may be in your network. EDR does not prevent threats from reaching your network; it only detects them and notifies you when a threat enters your network.
Before the system eliminates the malicious software from your computer, it first gathers critical information about the software and the attack. The system has to figure out where the threat came from originally which can be used to enhance future security measures.
The system also pins down applications and files that have been affected by malicious software. It also checks the malicious software to see if has replicated itself to spread throughout more of your network.
Once the affected files and software are pinned down and the threat is contained, the threat is eliminated and the affected files and software are restored.
EDR vs antivirus software
You may have read all of this information about EDR and thought it is similar to antivirus software. In a lot of ways they are similar and perform a lot of the same tasks but what are the differences?
● Detection and management
EDR places importance on what to do when you respond to a threat. It provides tools that aid in the investigation of threats and the management of those investigations.
Alarms will show up in a panel and someone will come in and work them using the tools and data present within EDR. It typically will include log management and the monitoring of systems to provide additional detail around an attack.
These logs that EDR provides give data that would correlate events as to how a virus appeared.
For example, it can show you that you received a virus as a result of a user clicking on a specific link that showed up in an email. It would also show other users that received the same email.
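The correlation described in that example can be sketched in a few lines. The log records and field names below are hypothetical, purely to illustrate how one user’s click can be tied back to every recipient of the same malicious email:

```python
# Hypothetical log records; in a real deployment these would come from the
# mail gateway and web proxy logs that the EDR tool aggregates.
mail_log = [
    {"user": "alice", "subject": "Invoice #4411", "url": "http://bad.example/x"},
    {"user": "bob",   "subject": "Invoice #4411", "url": "http://bad.example/x"},
    {"user": "carol", "subject": "Team lunch",    "url": None},
]
click_log = [{"user": "alice", "url": "http://bad.example/x"}]

# Any URL that somebody actually clicked marks the campaign.
flagged_urls = {e["url"] for e in click_log}

# Everyone who received a message carrying a flagged URL is in the blast radius,
# even if they have not clicked yet.
also_received = [e["user"] for e in mail_log if e["url"] in flagged_urls]
```

Here alice’s click flags the URL, and the mail log then shows that bob received the same message — exactly the kind of pivot an analyst performs in an EDR console.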
● Detection and management
Antivirus software does not go as in-depth as EDR does. With antivirus, you may get an alarm that a virus was detected and mitigated on an endpoint, but that’s about it.
Should you install EDR into your network?
Since antivirus and EDR can be used simultaneously, it is often recommended that both be included in your network for as much protection as possible.
EDR is newer software, so it has many more features when it comes to detecting and responding to threats than antivirus software does.
Antivirus software is useful, but with the continuous advancement of technology, it is recommended that you have EDR as well.
At the end of the day, it is your decision on what to install or not install into your network but this article can be used to help you decide if EDR is right for your organization.
Next steps to better securing your network with EDR
By now you have an idea of what EDR is, the differences between EDR and antivirus software, and if you should install EDR into your network. With the continuous advancement of technology, it can be hard to keep your network secure.
EDR is just another piece of technology on the market to help you not become a victim and keep a safe, secure network. Nobody wants to be a victim and EDR can help you avoid that inconvenience.
Coeo takes pride in being fully transparent with you and giving you all of the information and tools you need to help you avoid becoming a victim of malicious software.
Coeo understands the stress a cyber-attack, malware, or virus can put on an organization. While we don’t offer EDR as a product, we understand its importance and want you to be as prepared and secure as possible.
If you are new to network security, we recommend reading these articles to help educate you on additional ways you can secure your network:
After reading these articles, you will have a better understanding of network security and know more ways that will help you secure your network. If you have any additional questions you can schedule an appointment to meet with our sales team. | <urn:uuid:1a8d72fd-9e80-465d-a4e0-bbba72890132> | CC-MAIN-2022-40 | https://www.coeosolutions.com/news/endpoint-detection-response | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00516.warc.gz | en | 0.959382 | 1,187 | 2.6875 | 3 |
Creating digital makers through digital intelligence
- Posted on May 27, 2022
- Estimated reading time 5 minutes
Have you ever been in a situation that required you to utilize emotional intelligence?
Emotional intelligence helps individuals recognize, understand, and manage their own emotions as well as recognize, understand, and influence the emotions of others. This skill is built over time through repeated experiences and encounters, and it’s integral to our livelihood. I liken emotional intelligence to a survival tactic that helps us better navigate the world we occupy. An individual who has not mastered the art of digital intelligence will deeply struggle to understand the implications of technology.
Coined by Dr. Yuhyun Park, digital intelligence refers to a comprehensive set of technical, cognitive, meta-cognitive, and socio-emotional competencies that are grounded in universal moral values and that enable individuals to face the challenges and harness the opportunities of digital life. This means our actions are governed by our ability to acquire knowledge and then utilize that knowledge to navigate structures put in place by our digital world for the betterment of humanity. Because the world we live in is heavily influenced by the digital landscape, it is important that all individuals, regardless of socioeconomic background, cultivate this knowledge. Our collective future depends on our ability to comprehend how our digital lives are improved, worsened, or manipulated by evolving technologies.
Just as there are consequences for disregarding emotional intelligence, there are consequences for disregarding digital intelligence. In the context of emotional intelligence, it might look like a delayed project with heavy financial implications because colleagues are not able to listen to one another with empathy. In the context of digital intelligence, it could look like unknowingly deploying artificial intelligence with racist, sexist or agist programming. Perhaps the team failed to test and consider how their biases could potentially be encoded in the algorithmic model. In this context, while it may have not been the intent to program discriminatory ideology into code, genuine intent is not enough to limit widespread consequences. It is good to be genuine, but it is critical to also make informed decisions based on those intentions.
The need to cultivate digital intelligence
We see organizations across all industries taking the initiative to navigate these complex topics with digital ethics practices and programs. Executives are realizing that a comprehensive digital strategy should include aspects such as diversity in decision makers, sustainability, transparency in design, governance, and a strong culture of responsible engineering. Organizations have taken the time to have frank discussions about the implications of their products, while consulting with external experts and investing in responsible innovation. When organizations get this wrong, their reputation, brand, and revenue stand to be impacted. But when they get it right, they’re likely to see higher satisfaction among employees and customers or new business opportunities from investors and partners who have similar goals.
Simultaneously, as important as it is for organizations to get digital ethics right, it is equally important for individuals to take responsibility. It is not enough to solely rely on organizations to lead the charge. We must understand what it looks like to implement an ethical approach in our respective communities and roles. That includes having substantial conversations about the impacts of technology, ideally with people of different socioeconomic and cultural backgrounds. These conversations help us develop, cultivate, and sustain our digital intelligence.
Two ways to cultivate digital intelligence
My recommendations for cultivating digital intelligence are straightforward. One recommendation is to introduce the topic of digital ethics to children and young adults as they learn more about technology. We should convey that they do not have to compete against technologies like artificial intelligence, but instead, that they are to be the shapers of such technologies. They are to be critical thinkers as they examine how technology can contribute to the good of humanity. It benefits our society if leaders of tomorrow can grasp and develop their digital intellect early on. As history collides with technology, they will need to learn about historic events like the 2008 Housing Crisis in America and how models encoded with human prejudice, misunderstanding, and bias were programmed into software systems that managed the financial market (Source: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, O'Neil, C. (2016)). They will need to learn about racial profiling and how artificial intelligence has contributed to mass incarceration. Educators must advocate for this knowledge to be part of their students’ repertoire as they will eventually occupy positions that require critical thinking in our digital world.
Second, when it comes to the topic of advanced technologies like artificial intelligence, we should all take into consideration situations for which technology is not the right solution. There is this notion that artificial intelligence is useful and practical in all situations. At what point do we stop and say, “Perhaps this tool is not needed” or “Is there a better alternative?” Just because the technology exists does not mean it has to be applied. Critical thinking needs to be applied so that we drive technology and not the other way around. Developing digital intelligence in this sense can help us explore what it means to address the problems that plague our society.
The path forward
We all have a personal responsibility to develop our digital intellect so we can better navigate the difficult decisions we’re sure to face in the near future. Decisions can have generational consequences and it could take massive amounts of time and resources to reverse certain negative outcomes. Because of the magnitude of impact technology can have, ethical concerns should have a large audience among global organizations as well as everyday tech users. For those who work in tech, we become better technologists if we are able bring this understanding to our work. For those who work outside of tech, consider your input to be just as valuable as we develop this intellect across all socioeconomic backgrounds.
Now that you have this information, I challenge you to take the time to develop your own digital intelligence. The world needs its critical thinkers and decision makers to be the shapers of today and tomorrow.
As always, we look forward to your input, and if you’re interested in a more in-depth discussion or help on this topic, you can contact us directly or post a comment below. | <urn:uuid:f3ba69a2-ed6b-4660-8355-977b273724a9> | CC-MAIN-2022-40 | https://www.avanade.com/en/blogs/avanade-insights/digital-business/creating-digital-makers | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00516.warc.gz | en | 0.944904 | 1,241 | 2.65625 | 3 |
Whether a business is relatively small or a huge global corporation, it is vital for them to follow standards to help ensure their business runs smoothly. One of the most common issues a business can face is when it suffers from a lack of information security. Whether it’s stolen credit card details, mishandled personal information or even intellectual property, businesses are obliged to protect this sensitive data. To help companies keep their data and information assets secure from threats, it’s important to understand security standards such as the ISO 27000 series. This will be important to protect financial information, customer data, employee details and also intellectual properties. In this article, we’re going to explain what ISO/IEC 27000 is, why you should use it as a standard and also discuss some of the advantages of achieving certification to those standards.
What Is ISO/IEC 27000?
Also known as the ISO 27000 Family of Standards, it’s a series of information security standards that provide a global framework for information security management practices. They’re published and developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
ISO/IEC 27000:2018 focuses on information technology, security techniques and information security management systems. This particular standard involves an overview and vocabulary used by the ISO 27000 series standards and serves as a general introduction to the more common ISO/IEC 27001:2013, also known as ISO 27001.
What Is ISO/IEC 27001?
The ISO 27001 standard explains the requirements for an organization’s information security management system (ISMS). It enables organizations to prove that they meet regulatory requirements that are related to information security and it demonstrates that the company is committed to protecting sensitive and confidential data.
The ISO 27001 standard provides a framework for organizations to use when protecting information. This is often done through the use of different technologies, auditing practices and tests. It also helps to improve staff awareness on ISO 27001 so that internal incidents have a low risk of breaking ISO 27001 standards due to uninformed or untrained staff.
In most cases, an organization will have a number of different security controls that it uses to regulate the flow of information in and out of the business. However, these controls are often disjointed without an ISMS governing them. This is because security controls are often implemented as point solutions to specific areas of the business for convenience but cannot be monitored or controlled from a central area. An ISMS seeks to simplify these security controls in order to make data security easier to manage. It’s a systematic approach that helps to manage sensitive company data in order to secure it and it can be applied to virtually any business that uses technology regardless of its size.
The ISO 27001 standard requires that your data management staff are capable of systematically examining the organization’s information security risks. This means taking into consideration all vulnerable points in your system, the threats that could be posed to these weaknesses and also the impact it could have on your overall data management solution. It also requires that you design and implement a comprehensive suite of information security controls that can address risks that would be deemed as dangerous or risky. Lastly, it also requires that your management staff adopt a management process that ensures that all of your information security controls meet the information security needs of the organization.
|Related Reading: What is ISO-27001 Compliance?|
Why Use the ISO 27000 Series Standards?
The ISO 27000-series standards are designed to assist companies in managing cyber attack risks and internal data security threats. As an organization grows, it becomes more complex and the technological solutions are open to more vulnerabilities that aren’t immediately obvious. Cyber criminals pose a constant threat to all industries that make use of networked technologies and it can become incredibly difficult to protect your data.
In addition, the ISO 27000-series standards focus on helping companies implement effective and affordable solutions that can assist in protecting personal data, corporate data and intellectual properties. Among the standards, ISO 27001 is arguably the most popular because it’s currently the only standard that can provide a company with an audited certification. However, ISO 27001 isn’t the only standard that can provide an organization with assistance in how they protect their business. For instance, ISO 27005 provides guidance on conducting risk assessments for your information security and ISO 27032 provides general guidance on the best practices to enforce cyber security measures.
What Are the Advantages of Following ISO 27000 Series Standards?
There are a number of useful advantages to following the ISO 27000 series standards. For starters, it allows an organization to protect business-critical data and also helps to safeguard employee and customer details. This can help give your customers and employees more faith in your processes, drastically improving your reputation and potentially avoiding any hits to how trustworthy you are in the eyes of your audience.
Data breaches can also come with expensive fines especially if you breach standards such as the General Data Protection Regulation. These expensive fines can be incredibly damaging to not just your financial situation but also your reputation. Penalties may also halt your business which can be devastating, often enough to completely ruin your business. Lastly, following the ISO 27001 series standards and receiving certification for ISO 27001 mean that you’ll improve customer confidence and show that your company is capable of abiding by the strongest and most trusted security practices.
It’s important to remember that while the ISO 27000 series of standards is already well-defined, it’s a constantly evolving standard that will continue to be updated as new technologies and threats appear. By adopting these new standards and always ensuring that you’re up-to-date with ISO 27000 regardless of your chosen industry, you’ll always be able to protect your organization’s most sensitive data and build trust with both employees and customers. | <urn:uuid:3556fde4-3722-412b-a82b-c05ff421da72> | CC-MAIN-2022-40 | https://www.bitlyft.com/resources/what-is-iso-27000 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00516.warc.gz | en | 0.937726 | 1,205 | 2.953125 | 3 |
Can Quantum Computing Be Beneficial to Healthcare
(BBN.Times) Can the faster processing ability of quantum computing be beneficial to the healthcare industry? Author Naveen Joshi provides several predictions about how quantum computing will benefit healthcare in the future, writing that the benefits will be many and will come in different ways. The merging of quantum computing and healthcare can help clinicians determine the therapy best suited to a patient, based on several patient characteristics like age, comorbidities, gender, co-medications, and genetic make-up. In addition to determining the best therapy, quantum computing can improve healthcare systems in many other ways.
The research industry is spending millions and millions to study the interaction of different drugs. The healthcare contract research organizations’ market size is predicted to grow up to $54.7 billion by 2025, at a CAGR of 6.6%. It takes several years to get a proper understanding of the effect of one drug in combination with others. Quantum computing can significantly shorten that period, as it has enough computational power to visualize all the possible outcomes. Quantum computing can also help to provide accurate medical imaging and medical therapies. Here’s how:
Improved Imaging Solutions
Quantum imaging machines can generate extremely precise imaging that allows visualization of single molecules. Machine learning algorithms and quantum computing together can aid a physician: machine learning can help detect abnormalities in the human body, and quantum computing can help interpret the results of treatment. Traditional MRIs can identify areas of light and dark, and the radiologist must then evaluate the issues. But quantum imaging solutions can differentiate between tissue types, which allows more detailed and precise imaging.
Radiation beams are used to destroy affected cells or stop their multiplication completely. Minimizing damage to the surrounding cells is a major challenge of radiation therapy, and arriving at an optimal radiation therapy plan requires numerous simulations. With quantum computers, the possibilities for each simulation can be evaluated easily and quickly, allowing physicians to determine the best therapy plan faster.
Before artificial light was created, we got the majority of our light from the sun, and after the sun went down, people were essentially without light. Today, most people always have some kind of light to brighten their evenings. We take this access for granted and often do not think about the effect these lights have on our eyes.
When we use artificial light at night, however, it doesn’t come without a cost, and that cost can be more than just a monetary expense. Any light used during the night throws the body’s biological clock, also known as the circadian rhythm, out of whack. The real downside is that studies show the side effects of exposure to blue light can include high or low blood sugar, heart trouble, being overweight, and even cancer. However, not all light colors affect us in the same way.
During the day, blue lights are good for us because they keep us attentive, help us react in a timely manner, and help keep us in good spirits. But despite how they benefit us during the day, blue lights are the worst offenders when it comes to lighting at night. Electronics with screens are rapidly being used more, and the same goes for energy-efficient lighting. The increased use of these sources of light increases the level of exposure to blue wavelengths that people get, especially after the sun goes down. Being exposed to this sort of lighting at night has been linked with a myriad of health risks.
A good deal of research has connected working the graveyard shift and being exposed to light during the night with several different kinds of cancer (such as breast and prostate), diabetes, heart disease, and being overweight. No one knows for sure why being exposed to light during the night is so bad for us, but what we do know is that light exposure suppresses the production and release of melatonin in our bodies. Melatonin is a hormone that affects circadian rhythms, and there is some preliminary research suggesting that lower levels of melatonin could explain how the use of light during the night is connected to cancer. Even if a light is dim, it can affect a person’s circadian rhythm and the production and release of melatonin in their body.
We’ve Got The Blues
While any type of light can lower the production of melatonin, blue light during the night does so at a much higher rate. Harvard researchers conducted a study comparing the effects of green light to the effects of blue light. The blue light suppressed the production and release of melatonin for twice as long as the green light and shifted circadian rhythms by twice as much (three hours in comparison to an hour-and-a-half).
The University of Toronto later studied the effects of bright light on people wearing goggles that blocked blue light versus a control group of people without goggles. The results suggested that people who work graveyard shifts, and people who like to stay up late or all night, could protect their eyes by purchasing and wearing goggles that block blue light. However, these goggles block out other colors as well, so they should not be used indoors during the night. If a person wants goggles that only block out blue light, they may have to pay as much as $80 for specialty equipment.
What Can You Do?
If you’re concerned about the side effects of blue light, consider using red light, since it is the color of light that suppresses melatonin levels the least and has less of an effect on circadian rhythms than other wavelengths. Don’t look at any form of bright screen for three hours prior to going to bed. For those of you who work during the night, consider using goggles that block out blue light. Finally, be sure to expose yourself to bright light during the day, as this will allow you to sleep better at night and stay more alert during the daytime.
With the ubiquitous use of electronic devices that emit blue light these days, this is a problem we can’t afford to ignore. If you follow these guidelines, however, exposure to blue light can be helpful rather than harmful. You may notice that you’ll sleep better at night and feel more awake during the day.
Burglary and robbery are two very different crimes, but people often confuse them. When thinking about burglary vs robbery, we should know that the former stands for the unlawful entry of a structure with the intent to commit a felony. In contrast, robbery represents taking something from someone by force or threat of violence.
No one wants to become a victim of either of these crimes. However, it’s essential to make clear distinctions between the two to be better prepared.
What Is Burglary?
Burglary stands for illegal entry into a structure or onto a property with the purpose of committing a crime. Let’s note that the building in question doesn’t necessarily need to be a home or a commercial one. In addition, there don’t have to be any physical signs of breaking and entering; the culprit can also trespass through an open door or window.
Different states have different laws governing burglary, but all of them require three crucial elements to be present.
- unauthorized entry (breaking and entering, or entering without permission)
- occupancy (the structure or building in question must be occupied at the time of entry)
- the intent to commit a crime (there must be proof that the person who committed burglary intended to commit another crime once inside the occupied structure)
The penalties for burglary depend on several factors. These include the nature of the crime, the type of property, and whether the crime involved violence. Examples of punishments are fines, probation time, and time in prison.
First-degree burglary is the most serious and is typically treated as a felony. Penalties can include hefty fines and even life imprisonment in some states. Second-degree burglary charges involve up to 15 years of probation or prison, along with surcharges and fines.
Third-degree burglary offenders may have to pay lower fines, face up to five years in prison, or receive probation time. Fourth-degree burglaries are usually treated as misdemeanors, so they typically involve jail sentences of up to three years or fines of no more than $3,000.
Example of Burglary
A typical burglary example is breaking into a home or business to steal money or property. Others include breaking into a car to steal valuables or using a false key to enter a locked building.
Having a home security system, like Frontpoint, helps in these situations. Burglars are looking for easy targets, and a home security system makes your home a much less attractive option. Moreover, statistics show that over 20% of Americans have an alarm system.
What Is Robbery?
A robbery is a theft through violence or the threat of violence — the offenders confront the victims directly and use force to steal their belongings. There are three different levels, depending on the amount of force used. Moreover, a set of elements needs to be present for an act to be treated as robbery.
You may be wondering: “what is aggravated robbery?” Robberies that involve weapons or the ones where the victims suffer physical injuries are called “armed” or “aggravated,” in contrast to simpler thefts where the victim isn’t harmed.
Elements of Robbery
Robbery is a serious crime that involves a person taking another person’s property by force. The elements include:
- the property must belong to another person
- the offender must have taken the property from that person
- there must have been some form of force or threat used to take the property
- the intent to steal the property must have been present, and the individual must have intended to deprive its owner of it permanently
The sentence for robbery always depends on the level of the crime:
- Level one (robbery with minimal force)
This could result in a fine or up to a year in jail.
- Level two (robbery with the use of a weapon)
This will result in a lengthy prison sentence or even life imprisonment.
- Level three (robbery involving the use of a weapon or force or a severe injury)
This is a serious offense, with sentences ranging from three years to life imprisonment.
What are the penalties for armed robbery charges? If found guilty of robbing while armed, the offender will be subject to a maximum sentence of 30 years in prison.
If you are caught in possession of items that may be used to commit the crime in question, it could be considered that you were going equipped for theft. Factors such as the nature of the crime and the value of the items stolen also affect the sentence. For example, if the robbery involved a firearm, the prison sentence can be extended by up to 15 years.
What Is a Robbery? — Examples
To better differentiate it from other crimes, here are two examples of robberies:
- A man walks into a bank and demands money from the teller, stating he has a weapon in his pocket. He takes the money and escapes before the police can apprehend him.
- A group of thieves approaches a man on the street and threatens him with violence unless he hands over his wallet and phone. The man quickly surrenders his belongings, fearing for his life. The thieves walk away with the stolen goods and escape before the police arrive.
Burglary vs Robbery — Similarities and Differences
Both of these crimes involve stealing from a victim, but they still have some critical differences.
Robbery typically means taking property from a person by force or threat of force. This can include physical violence, such as hitting or punching the victim or using a weapon.
On the other hand, burglary typically means breaking into a home or a commercial building with the intent to commit a crime. Examples include picking a lock, prying open a window, or breaking down a door. Moreover, two-thirds of the 2.5 million annual burglaries are home break-ins.
However, burglary vs robbery facts also reveal some similarities between these two crimes. For example, both include the intent to steal something, and the former can (but doesn’t have to) involve the use of violence, which is always present in a robbery.
Let’s look at these examples to see the differences:
| Burglary | Robbery |
| --- | --- |
| A person breaking into your home with force while you are not there and taking your belongings | A robber forces the victim to withdraw money from an ATM through threats |
| Thieves entering through a window of a house and stealing jewelry, cash, and other valuables while the occupants aren’t home | A masked group holds up a convenience store with firearms |
Home Invasion vs Burglary vs Larceny vs Theft vs Robbery — Most Commonly Confused Crimes
These five are some of the most commonly confused crimes in the legal system. Keep in mind that each has its own set of elements and penalties. Even though people most often confuse burglary and theft, the other three are similar enough to further add to the confusion. The following sections will provide details on each of these crimes so that you can make a clear distinction.
Burglary vs Theft
Burglary is defined as the unlawful entry into a structure to steal something. To charge someone with this crime, the prosecutor must prove that the person intended to commit a crime once they entered the building. On the other hand, theft stands for taking another person’s property without consent, intending to deprive the victim of it permanently. It also often occurs without the victim being aware.
The difference between theft and burglary is that, in the case of theft, the prosecutor doesn’t need to prove you intended to commit the crime once you took the property; they only need to prove that you wanted to keep it for yourself.
The penalties for these two crimes depend on the value of the property involved and the state where they occurred. Most often, they are punishable by fines and/or jail time.
Robbery vs Theft
At their core, both of these crimes involve taking or trying to take someone else’s money or property without their permission. However, the main difference is whether the perpetrator uses force or threats of physical harm during the act.
For instance, in a theft, the offender takes the property without the use of violence and usually without the victim’s knowledge. In a robbery, though, they use force or the threat of force to take the goods from the victim.
Another difference between robbery and theft is that the robber takes the property directly from the victim. In a theft, the criminal takes the goods while out of sight and while the victim is away. It also stands for all kinds of stealing, such as intellectual property theft, identity theft, and theft of services.
The penalties for theft and robbery also depend on the property’s value and the crime’s circumstances. This can vary widely between states and countries, so it’s essential to check the laws in your jurisdiction for precise information.
Larceny vs Robbery
In general, larceny refers to the theft of physical items, such as cars or jewelry, though some definitions extend it to the theft of services, such as cable TV or Internet service. It can also happen when the owner is not around.
Both crimes are treated as theft of personal property. However, robbery involves the use or threat of force, unlike larceny-theft. For example, if someone steals your car at gunpoint or intimidates you with a weapon to make you hand over your purse, that’s robbery.
The critical difference between the two crimes is the use of force or threat of force in robbing a person versus the lack of confrontation in larceny. Both are usually considered felonies, meaning they’re punishable with a lengthy prison sentence.
Robbery vs Burglary vs Theft
Robbery, burglary, and theft are all crimes that involve taking property that doesn’t belong to you. However, they differ according to the type of property in question, the location, whether the perpetrator used force or threats, and other circumstances.
Robbery involves taking something from a person by force or the threat of force through confrontation. On the other hand, the burglary definition states that the criminal has to enter a structure to commit a crime, with or without force, and often with the intent to steal something.
At the same time, in a theft, the culprit takes the goods without the use of force while the victim is unaware. It also doesn’t refer only to stealing physical items; the goods in question may also include intellectual property and identity.
Burglary vs Robbery vs Home Invasion
As mentioned, burglary stands for illegally entering premises to commit a crime. This can be anything from stealing property to vandalizing the building. The perpetrators can use weapons and force to enter the building or rely on more constructive means, such as using a key to enter or tricking the owners into letting them in. In most states, it is classified as a felony offense.
When comparing robbery vs burglary, it is important to say that the former stands for taking or attempting to take the goods from the victim by using threats, force, or weapons. While it may be part of a burglary, it can also occur in the street and always involves confrontation with the victim. State laws typically consider it a severe offense, so it’s often classified as a felony.
Moreover, when considering robbery vs burglary in California, we can say that the latter has a shorter minimum sentence of imprisonment — it can result in felony probation or two to six years in prison. On the other hand, robbery may lead to misdemeanor or felony probation or three to six years of jail time. Finally, in terms of fines, offenders of both crimes may need to pay up to $10,000.
Home invasions are also a type of burglary, but they are more violent. Home invaders tend to be armed and are often focused on specific items or people within the building. They may use intimidation tactics, threats, or violence to get what they want.
Final Words on Robbery vs Burglary
These two terms refer to different offenses but are often used interchangeably; burglary stands for the unauthorized entrance of a building to commit a crime, while robbery involves using force or threats of force to obtain the property.
It is essential to understand the different aspects of these two crimes to stay safe and prevent them from happening. To learn more about them or receive advice on how to stay safe, make sure to speak with a trusted legal professional.
People Also Ask
What is aggravated burglary?
Aggravated burglary involves using a weapon or explosive during the act of stealing. One of the examples is breaking into a home with a gun. The punishments for this type of crime are more severe since it poses a greater risk to public safety. Offenders who commit this form of burglary face lengthy jail sentences and consequences such as fines and community service.
Is robbery a felony?
Yes, robbery is a felony. Under federal law, it means taking something of value from another person by force or threat of force. It is a severe offense, punishable by imprisonment, fines, restitution to victims, and other penalties. If you have been charged with robbery, it is crucial to speak with an experienced criminal defense attorney. They can help you understand the charges against you and the consequences.
Is burglary a felony?
Except for fourth-degree burglary, which is most often regarded as a misdemeanor, burglary is treated as a felony. This means it is punishable by a lengthy prison sentence and hefty fines. Moreover, the punishment varies depending on the severity of the crime and the jurisdiction in which it occurred. In some cases, the severity may increase if the offender inflicted physical harm or possessed a weapon during the burglary.
Is burglary a violent crime?
There is no definitive answer to this question. However, a few factors may influence whether or not burglary is treated as a violent crime. These include the victim’s perception of violence, the severity of the crime, and whether or not the perpetrator used force.
An example of a violent crime is when a burglar uses weapons or other forms of physical violence against a victim. Cases in which the victim feels threatened or intimidated during a burglary incident can be regarded as violent crimes, too.
What is more serious, robbery or burglary?
In general, robbery is a more severe crime because it often involves violence or the threat of violence. Burglary, on the other hand, usually stands for unauthorized entry into a building or structure, often when the occupants aren’t there, so it doesn’t necessarily include the use of violence. As such, penalties for robbery are usually more severe than for burglary.
When comparing burglary vs robbery, the seriousness of a crime can vary depending on the situation and potential harm to victims. For example, if a firearm or another deadly weapon is involved, the jail sentence can be extended to 15 years. | <urn:uuid:2378c7f7-ee09-4674-8980-3357f8c59d61> | CC-MAIN-2022-40 | https://safeatlast.co/guides/burglary-vs-robbery/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00516.warc.gz | en | 0.95392 | 3,113 | 2.796875 | 3 |
In our previous article (How to Scale and Manage Millions of Metrics), we looked at correlations in terms of name similarity, but there are other types of similarities that occur between metrics.
Abnormal Behavior Similarities
Metrics can be correlated by their abnormal behavior. When anomalies often appear at the same time in two metrics, we can assume there is a correlation between them.
If two metrics are independent, the probability that they will have overlapping anomalies gets lower and lower the more anomalies they have. This is why it’s important to identify abnormal similarity.
Calculating Abnormal Similarity
The procedure for calculating abnormal similarity is almost identical to the procedure for name similarity, the only difference is in the way that the sparse vector representation is calculated.
How is this done? Anomalies are discovered in each of the metrics for a fixed period, say the last 90 days. The metric is represented as a binary sparse vector the size of this period, so that whenever the metric is abnormal it is designated by a ‘1’ in this vector and whenever it is normal it is designated by a ‘0’.
The next steps are exactly the same as done with name similarity.
The hashes of each of the transformed time series are compared, and for each hash group the exact similarities between its members are calculated.
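As a sketch, the binary-vector representation and the pairwise similarity step might look like the following. The metric names and anomaly days are made up, and Jaccard similarity is assumed as the measure; the hash-grouping step that makes this scale is only noted in a comment:

```python
from itertools import combinations

def anomaly_vector(anomaly_days, period=90):
    """Binary vector over the period: 1 on days the metric was anomalous."""
    days = set(anomaly_days)
    return [1 if d in days else 0 for d in range(period)]

def jaccard(u, v):
    """Similarity of two binary vectors: |intersection| / |union|."""
    inter = sum(1 for a, b in zip(u, v) if a and b)
    union = sum(1 for a, b in zip(u, v) if a or b)
    return inter / union if union else 0.0

# Hypothetical metrics, each with the days (0-89) on which it was anomalous.
metrics = {
    "cpu":     anomaly_vector([3, 17, 42, 60]),
    "latency": anomaly_vector([3, 17, 42, 61]),
    "signups": anomaly_vector([8, 55]),
}

# At scale, this all-pairs loop is replaced by hashing (LSH), so that only
# metrics landing in the same hash group are compared exactly.
for (n1, v1), (n2, v2) in combinations(metrics.items(), 2):
    print(f"{n1} ~ {n2}: {jaccard(v1, v2):.2f}")
```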
Another alternative which is applicable for other types of similarities as well, is instead of calculating similarities between each pair in the hash group, a clustering algorithm can be run within each hash group, such as LDA (latent Dirichlet allocation), using the similarity as the distance measure. This is sometimes very useful because groups of similar metrics are often more interesting than just pairs.
Normal Behavior Similarities
When two metrics behave similarly, and they have similar patterns, then it can be assumed that there is a correlation between them.
By looking at these examples, it can be assumed that the first three metrics and the last one are probably correlated.
Computing the normal behavior similarities is not so simple. One way to do it is to use various types of linear correlation methods such as Pearson’s correlation, or cross correlation if there is a lag between the metrics. Linear methods are very problematic. They’re very sensitive to trends and to seasonal patterns. Even after removing these using de-trending and de-seasoning techniques, they often don’t work well. They are super sensitive to scale; the two metrics in the following example are very different most of the time.
Over a very small period with extremely high values, they are similar. This small period may totally bias the similarity between them. In this specific example the similarity is almost 0.99, which is very high.
In our experiments with linear methods, despite tweaking, the results always had either too many false positives or too many false negatives. Another problem is scaling: using the original values of the metrics, it’s impossible to avoid a high dimensional dense vector representation. For example, 90 days of hourly sampling intervals induces a vector of cardinality 2160. In order to use LSH (locality-sensitive hashing) to scale, we must have a sparse or low dimensional vector representation.
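The scale-sensitivity problem described above is easy to reproduce. In this sketch (with synthetic data), two independent series have a near-zero Pearson correlation until a single shared extreme value pushes it close to 1:

```python
import math
import random

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
x = [random.random() for _ in range(200)]   # two independent metrics
y = [random.random() for _ in range(200)]
baseline = pearson(x, y)                    # near zero, as expected

x[100] = y[100] = 1000.0                    # one shared extreme spike
spiked = pearson(x, y)                      # the spike dominates: near 1

print(f"before spike: {baseline:.3f}, after spike: {spiked:.3f}")
```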
Getting a Sparse or Lower Dimensional Vector Representation
How can we get a sparse or lower dimensional vector representation for each of our metrics?
Instead of looking at the metric as a sequence of values, this metric can be looked at as a sequence of patterns. Take this example:
It can be viewed as a sequence of spike, straight line, straight line, straight line, then spikes and so on.
A pattern dictionary can be created, then using a pattern matching engine each metric can be represented by those patterns. Each metric’s original value sequence is converted to a sequence of patterns. This representation is sparse or low dimensional (depending on the implementation). It is amenable to regular clustering and similarity methods. The same procedure that is used for name correlation and abnormal correlation can be used. IDF (Inverse Document Frequency algorithm) can be used to weight the patterns so that common patterns get lower weights and rare patterns get higher weights.
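A minimal sketch of this representation, assuming each metric has already been converted to a sequence of pattern labels by the matching engine. The pattern names, the IDF formula variant, and the use of cosine similarity are illustrative assumptions, not the exact production implementation:

```python
import math
from collections import Counter

# Hypothetical pattern sequences produced by the pattern matching engine.
metrics = {
    "m1": ["spike", "flat", "flat", "flat", "spike"],
    "m2": ["spike", "flat", "ramp", "flat", "spike"],
    "m3": ["ramp", "ramp", "flat", "ramp", "ramp"],
}

# IDF weighting: patterns appearing in fewer metrics get higher weights.
n = len(metrics)
df = Counter(p for seq in metrics.values() for p in set(seq))
idf = {p: math.log(n / c) + 1.0 for p, c in df.items()}

def to_vector(seq):
    """Sparse TF-IDF style vector over pattern labels."""
    tf = Counter(seq)
    return {p: tf[p] * idf[p] for p in tf}

def cosine(u, v):
    """Cosine similarity of two sparse vectors stored as dicts."""
    dot = sum(w * v[p] for p, w in u.items() if p in v)
    norm = lambda vec: math.sqrt(sum(w * w for w in vec.values()))
    return dot / (norm(u) * norm(v))

vecs = {name: to_vector(seq) for name, seq in metrics.items()}
print(f"{cosine(vecs['m1'], vecs['m2']):.2f} {cosine(vecs['m1'], vecs['m3']):.2f}")
```

Here m1 and m2 come out far more similar than m1 and m3, because they share the rare “spike” pattern in the same proportions.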
Creating a Pattern Matching Engine
The pattern matching engine is built using deep learning. This entails training a stacked autoencoder on many time series metrics. The input is the metric’s ordered values (after some preprocessing, including detrending and normalization), and the output is the metric’s matched patterns.
After training the network, the network can be used to convert each of the metrics into their new sparse representation based on patterns. When the metrics have been converted to pattern based representation of each sequence, any similarity procedure can be applied and it can be scaled with LSH.
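One common LSH scheme for this kind of set similarity is MinHash, sketched below. The parameters and hash family are illustrative, and the banding step that produces the actual hash buckets is only noted in a comment:

```python
import random

PRIME = 2_147_483_647  # large prime for the hash family h(x) = (a*x + b) % PRIME

def minhash_signature(items, num_hashes=32, seed=7):
    """MinHash signature of a set of integers (e.g. abnormal day indices)."""
    rnd = random.Random(seed)
    coeffs = [(rnd.randrange(1, PRIME), rnd.randrange(0, PRIME))
              for _ in range(num_hashes)]
    return tuple(min((a * x + b) % PRIME for x in items) for a, b in coeffs)

def estimated_similarity(sig1, sig2):
    """The fraction of matching slots estimates the Jaccard similarity."""
    return sum(s1 == s2 for s1, s2 in zip(sig1, sig2)) / len(sig1)

a = minhash_signature(range(0, 50))     # days 0-49 abnormal
b = minhash_signature(range(5, 55))     # heavy overlap with a
c = minhash_signature(range(200, 250))  # disjoint from a

# In production, signatures are split into bands and each band is hashed to
# a bucket; only metrics sharing a bucket are compared exactly.
print(f"a~b: {estimated_similarity(a, b):.2f}, a~c: {estimated_similarity(a, c):.2f}")
```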
The Value for Anodot
To show the value for Anodot, here are some weekly statistics that we’ve collected.
- Collected: 230 million metrics over the week.
- Detected: 260 thousand anomalies.
- Of these, 25 thousand were significant anomalies with high scores.
- After the similarity procedure, these were reduced to 4,700 correlated anomaly incidents.
Taking a look at some other stats, where data was collected for over a month, we calculated:
- ~350 million abnormal correlations [358,543,449]
- ~430 million normal correlation [436,796,325]
- ~370 million name correlations [371,020,643]
- Overall over 1 billion correlations in a month
Summary: Find relationships between different metrics
To summarize, it’s very important to correlate, or find relationships between, different metrics. This can be used to bring order to the data and to cluster anomalies, which reduces the number of anomaly incidents and helps identify and analyze problems or discover opportunities. Knowing the relationship between metrics also allows for more accurate prediction and forecasting. For example, knowing that the sales of a product are increasing, we can predict that the sales of a related product will also increase or decrease accordingly.
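As a toy illustration of that last point, a least-squares fit of one metric against a correlated one can be used to forecast it. The sales figures below are made up:

```python
def fit_line(x, y):
    """Ordinary least-squares fit: y ≈ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

product_a = [10, 12, 15, 18, 22, 27]   # weekly sales of product A
product_b = [21, 25, 29, 37, 43, 55]   # correlated sales of product B

slope, intercept = fit_line(product_a, product_b)
forecast = slope * 30 + intercept      # expected B sales if A reaches 30
print(f"b ≈ {forecast:.0f} when a = 30")
```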
Presented at O’Reilly’s Strata Data Conference, New York, Sept. 2017 | <urn:uuid:56a86aa9-859a-4547-baf0-3eee30ecc9bf> | CC-MAIN-2022-40 | https://www.anodot.com/blog/finding-data-insights-from-relationships-between-metrics-part-3/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00516.warc.gz | en | 0.932489 | 1,269 | 2.953125 | 3 |
How to format a hard drive
If you are planning on ditching your old computer, formatting a hard drive is almost always a necessary thing to do
When it comes to deleting personal and private data or purging a PC of sensitive information before selling it on or disposing of it, the process of formatting a hard drive can be indispensable in Windows 10 or macOS.
Formatting a hard drive is one way to completely clean it of data, almost reverting it to its factory settings, which is handy if you no longer wish to use the PC or laptop and want to pass it on to another person.
But aside from also cleaning data, it can help when setting up a fresh installation of Windows, as a formatted hard drive will be fresh, clean and ready for a new install of Windows.
Furthermore, if you have a faulty hard drive, a full reformat can go some way to removing problems with it if they are not physical issues.
There are several ways to reformat a hard drive. One such way is to quick format a selected hard drive, which will go some way toward wiping the data and partitions of the targeted drive. But it will not fully purge all the data on that drive, meaning the quick format process is better suited to people who wish to carry out a fresh install of an operating system than to those who want to fully clean a hard drive of all its data before getting rid of it.
As such, there are other methods you can use besides this, though all come with benefits and downsides.
Partitions may seem like separate drives - but in reality, they are just the divisions that appear on a single disk - with every hard drive being made up of at least one, or more, partitions.
There can be any number of partitions on a disk, and having them makes it easier to get to work on one particular section that you need to format without tampering with the data stored elsewhere on the disk. This gives you the flexibility, for instance, to isolate one partition for the operating system installation files. If, however, you are aiming to format the hard drive as one entity, these partitions must first be removed.
What file system?
The file system you use when formatting a hard drive will depend on which operating systems you use.
Windows uses NTFS and macOS uses HFS, so they’re incompatible with each other. A file system called exFAT works with both Mac and Windows. exFAT is better than the FAT32 file system it supersedes: FAT32 has a maximum file size limit of 4GB, whereas exFAT can work with files as large as 16EB (exabytes). The exFAT file system also performs better than FAT32.
In computing, the term zero day refers to the unknown. If a vulnerability, exploit, or threat of any kind is not known to security researchers, it can be classified as a “zero day attack”.
Threat actors actively look for existing zero day vulnerabilities they can exploit, or work to create new ones. The goal? To launch malware or network attacks while victims are unaware and unprepared to protect themselves.
Zero day malware exploits unknown vulnerabilities. Traditional antivirus solutions rely on known quantifiers such as signature-based methods to detect malware. To protect against the unknown, organizations can leverage next-generation antivirus (NGAV) solutions, which leverage machine learning to detect zero day malware.
In IT security, the term zero day is used to describe vulnerabilities or threats that are not yet discovered or patched by the vendor or user. This term is used to define vulnerabilities after the fact; usually after a successful or attempted attack is discovered.
Zero day can also be applied to malware, although it may not be used consistently. Some references to zero day malware define it as malware that is used to exploit zero day vulnerabilities. Other references define zero day malware as malware that is not yet known by the security community or security solutions. This means there are no signatures or hashes that can be used to identify malware.
Based on how the term zero day is used to define vulnerabilities, it is more consistent to use this term to refer to unknown malware. This is because many zero day vulnerabilities can be exploited by well established malware that is repurposed. In these cases, the malware was not created specifically to exploit the unknown vulnerability. This definition of zero day malware (i.e. unknown malware) is the one used in the rest of this article.
Traditional antivirus (AV) solutions use signature-based methods to detect malware and attacks. Signatures are strings of characters found in metadata, file names, or inside of files that identify an item as malware or related to malware. This method requires knowing that malware exists, having a sample of malware to pull signatures from, and for solutions to have a list of signatures against which new files are compared.
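Signature matching as described above can be sketched in a few lines: hash the content and compare against a hypothetical database of known-bad hashes. The EICAR anti-virus test string stands in for a malware sample here.

```python
import hashlib

# The EICAR anti-virus test string: harmless, but flagged by convention.
EICAR = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

# Hypothetical signature database: SHA-256 hashes of known-bad content.
KNOWN_BAD_HASHES = {hashlib.sha256(EICAR).hexdigest()}

def is_known_malware(data: bytes) -> bool:
    """Signature check: does this content's hash appear in the database?"""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES
```

The weakness is visible immediately: change a single byte of the sample and the hash no longer matches, which is exactly why signature-only detection misses zero day malware.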
Using these methods, legacy AV solutions can detect around 57% of attacks and malware, and as attackers develop new methods for exploiting vulnerabilities, this number is decreasing. New types of malware, such as fileless malware, operate outside traditional file-based methods, relying instead on scripts, macros, and system processes. Since there is no specific file associated with the malware, no signature can be created.
Because legacy AV solutions rely on signature-based detection, organizations are restricted to only being able to respond reactively. Organizations are also limited to whatever signatures or definitions their solution can ingest. This is fine for traditional malware but is inadequate for modern variations.
In contrast to legacy AV, next-generation antivirus (NGAV) technology combines machine learning and behavior detection technologies with signature-based methods. These technologies enable NGAV to identify zero day malware and other unknown threats based on suspicious patterns of events. Additionally, because NGAV incorporates machine learning, it is not restricted to reactive protections and can instead investigate activity as it occurs.
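As a toy illustration of behavior-based detection (the process names and the rule itself are illustrative assumptions, not any vendor's actual logic), consider flagging a document application that spawns a shell: a pattern often associated with exploit payloads, regardless of whether the payload's signature is known.

```python
# Hypothetical knowledge for a single behavioral rule.
DOCUMENT_APPS = {"winword.exe", "excel.exe", "acrord32.exe"}
SHELLS = {"cmd.exe", "powershell.exe", "bash", "sh"}

def suspicious_spawn(parent: str, child: str) -> bool:
    """Flag a document reader launching a shell, whatever the binary's hash."""
    return parent.lower() in DOCUMENT_APPS and child.lower() in SHELLS
```

Unlike a signature check, this rule still fires when the payload is brand new, because it looks at what the processes do rather than what their files contain.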
The unknown nature of zero day malware makes it unpredictable and challenging to both detect and defend against. To detect this type of threat, you need to implement proactive, in-depth security strategies. Below are a few practices and tools you can use to ensure that your systems are defended against zero day attacks.
Ensuring that your infrastructure, devices, and applications are up to date is essential to minimizing your risk. Even though zero day threats are by definition not yet patched, older patches may prevent these threats from being exploited. This is also true for zero day malware. Even if malware is unknown, protections against similar, known malware may prevent it from being used successfully.
Endpoint protection platforms (EPPs) are designed to layer protections over your endpoints. These platforms often incorporate a range of security tools, including NGAV, web application firewalls (WAFs), and endpoint detection and response (EDR).
The purpose of endpoint security is to centralize your security measures, enabling you to more effectively detect and investigate suspicious events, such as unexpected processes, data transfers, or downloads. It lets you combine traditional and modern methods of protection, layering reactive and proactive measures for greater security.
EDR solutions are proactive monitoring and response solutions that you can use to protect your perimeter and endpoint devices. These solutions specialize in providing visibility into endpoint activity and can enable you to automate responses to suspicious events before an attack occurs.
These solutions use machine learning and behavioral analysis methods to compare traffic and events to known acceptable and unacceptable behavior. This enables solutions to detect potential threats in real-time, including potential zero day malware. These threats can then be stopped at your perimeter, preventing malware from spreading beyond the affected device.
Consider segmenting your networks
Segmenting your network involves applying access controls to isolate your various services and components. It enables you to layer security measures and can significantly reduce the amount of damage a successful attack can cause.
Segmentation can be useful in mitigating the damage caused by zero day attacks since it prevents malware’s spread. When components are segregated, authorization and authentication measures prevent attackers from being able to easily move laterally through networks.
Additionally, segmentation enables easy sandboxing (strict isolation) of suspicious activity or files. This enables teams to investigate potential zero day malware without affecting the rest of the system.
Enforce the principle of least privilege
Regardless of the threats you are trying to protect against, enforcing the principle of least privilege is best practice. This principle requires that you only give users, devices, and applications the most basic permissions they need to operate. By restricting permissions, you limit the actions that can occur and prevent abuse of access.
In cases of zero day malware, minimal privileges are particularly important since this type of malware often exploits root or administrative privileges. By ensuring that only minimum privileges are provided, you can limit the ability of zero day malware regardless of whether it’s detected.
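On a POSIX system, a service can enforce least privilege on itself by dropping root privileges as soon as they are no longer needed. A minimal sketch follows; uid/gid 65534 is an assumption for the conventional unprivileged "nobody" account.

```python
import os

def drop_privileges(uid: int = 65534, gid: int = 65534) -> int:
    """If running as root, switch to an unprivileged identity; the drop is
    irreversible for the process. Returns the resulting effective uid."""
    if os.geteuid() == 0:
        os.setgid(gid)  # group first: after setuid() we could no longer change it
        os.setuid(uid)
    return os.geteuid()
```

If zero day malware later compromises the process, it inherits only the unprivileged identity, limiting what it can do even though it was never detected.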
Learn more in our article about privilege escalation, which explains how threat actors exploit privileges to launch network attacks.
The Cynet 360 Advanced Threat Detection and Response platform provides protection against threats including zero-day attacks, advanced persistent threats (APT), advanced malware, and trojans that can evade traditional signature-based security measures.
Block exploit-like behavior
Cynet monitors endpoint memory to discover behavioral patterns that are typical of exploits, such as an unusual process handle request. These patterns are common to the vast majority of exploits, whether known or new, which provides effective protection even against zero-day exploits.
Block exploit-derived malware
Cynet employs multi-layered malware protection that includes ML-based static analysis, sandboxing, and process behavior monitoring, along with fuzzy hashing and threat intelligence. This ensures that even if a successful zero day exploit establishes a connection with the attacker and downloads additional malware, Cynet will prevent this malware from running so no harm can be done.
Uncover hidden threats
Cynet uses an adversary-centric methodology to accurately detect threats throughout the attack chain. It thinks like an adversary, detecting behaviors and indicators across endpoints, files, users, and networks, and provides a holistic account of an attack's operation, irrespective of where the attack tries to penetrate.
Accurate and precise
Cynet uses a powerful correlation engine to deliver attack findings with near-zero false positives and without excessive noise, simplifying response so security teams can focus on the incidents that matter.
You can carry out automatic or manual remediation, giving your security teams a highly effective yet straightforward way to detect, disrupt, and respond to advanced threats before they have a chance to do damage. Learn more about Cynet's Next-Generation Antivirus (NGAV) Solution.
Public addresses are assigned by InterNIC and consist of class-based network IDs or blocks of CIDR-based addresses (called CIDR blocks) that are guaranteed to be globally unique on the Internet. When public addresses are assigned, routes are programmed into the routers of the Internet so that traffic to the assigned public addresses can reach its destination. For example, when an organization is assigned a CIDR block in the form of a network ID and subnet mask, that [network ID, subnet mask] pair also exists as a route in the routers of the Internet, and IP packets destined to an address within the CIDR block are routed to the proper destination.

In this post I will show several ways to find your public IP address from the Linux terminal. This might seem unnecessary for ordinary desktop users, but it is handy when you are at the terminal of a headless Linux server (i.e. no GUI, or you're connected as a user with minimal tools). Either way, being able to get your public IP from the Linux terminal can be useful in many cases, or it could be one of those things that just comes in handy someday.
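The [network ID, subnet mask] routing test described above can be reproduced with Python's standard ipaddress module. Here 203.0.113.0/24, a reserved documentation range, stands in for an organization's assigned CIDR block.

```python
import ipaddress

# Hypothetical CIDR block assigned to an organization (documentation range).
BLOCK = ipaddress.ip_network("203.0.113.0/24")

def routed_into_block(destination: str) -> bool:
    """Would a packet to this destination address fall within the block?"""
    return ipaddress.ip_address(destination) in BLOCK
```

Routers make essentially this membership decision for every packet: addresses inside the block follow the advertised route, addresses outside it do not.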
There are two main commands we use, curl and wget. You can use them interchangeably.
Curl output in plain text format:
curl icanhazip.com
curl ifconfig.me
curl curlmyip.com
curl ip.appspot.com
curl ipinfo.io/ip
curl ipecho.net/plain
curl www.trackip.net/ip
curl output in JSON format:
curl ipinfo.io/json
curl ifconfig.me/all.json
curl www.trackip.net/ip?json (bit ugly)
curl output in XML format:
curl all IP details – The motherload
Using DYNDNS (Useful when you’re using DYNDNS service)
curl -s 'http://checkip.dyndns.org' | sed 's/.*Current IP Address: \([0-9\.]*\).*/\1/g'
curl -s http://checkip.dyndns.org/ | grep -o "[[:digit:].]\+"
Using wget instead of curl
wget http://ipecho.net/plain -O - -q ; echo
wget http://observebox.com/ip -O - -q ; echo
Using host and dig command (cause we can)
You can also use host and dig command assuming they are available or installed
host -t a dartsclink.com | sed 's/.*has address //'
dig +short myip.opendns.com @resolver1.opendns.com
Sample bash script:
#!/bin/bash
PUBLIC_IP=$(wget http://ipecho.net/plain -O - -q)
echo "$PUBLIC_IP"
Quite a few to pick from.
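If neither curl nor wget is available, the same lookup can be sketched with nothing but Python's standard library, using one of the plain-text services listed above (the service URL is interchangeable):

```python
import urllib.request

def public_ip(url: str = "http://ipecho.net/plain", timeout: float = 5.0) -> str:
    """Fetch this machine's public IP as plain text (performs a network request)."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("ascii", "replace").strip()

def looks_like_ipv4(text: str) -> bool:
    """Cheap sanity check on the service's reply before trusting it."""
    parts = text.split(".")
    return len(parts) == 4 and all(p.isdigit() and 0 <= int(p) <= 255 for p in parts)
```

Calling print(public_ip()) does the same job as the one-liners above; validating the reply first guards against a service returning an error page instead of an address.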
I was actually writing a small script to track all the IP changes of my router each day and save those into a file. I found these nifty commands and sites to use while doing some online research. Hope they help someone else someday too. Thanks for reading, please Share and RT. | <urn:uuid:967cbc89-7dd2-4b97-b1cb-359de5e64964> | CC-MAIN-2022-40 | https://www.blackmoreops.com/2015/06/14/how-to-get-public-ip-from-linux-terminal/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00716.warc.gz | en | 0.850934 | 725 | 3.203125 | 3 |
The replication of human intellectual processes by machines, particularly computer systems, is known as Artificial Intelligence (AI), and it is one of the modern world's fastest-developing technologies.
The various creative Artificial Intelligence initiatives in various sectors such as Finance, Healthcare, Business, Marketing, Security, Automation, and so on are creating promising trends in the market. These developments are reshaping the world, altering people's perceptions about technology.
Let us look at the Top 10 latest innovations in the field of Artificial Intelligence that are currently trending in 2021.
Top innovations in Artificial Intelligence
Intelligent Process Automation (IPA)
Intelligent Process Automation (IPA) lets companies automate the processing of unstructured data. Unstructured data is more difficult for machines to understand than structured data, yet it makes up the majority of data collected from the real world.
In the banking and financial industries, IPA is utilized with other technologies such as Machine Learning, Cognitive Automation, and Robotic Process Automation. IPA is used by investment bankers to spot discrepancies in data collection that are practically impossible to spot with human eyes.
(Must read: Introduction to Investment Banking)
AI in the Healthcare industry
The Healthcare industry is one of the most important working industries in the world. With the passing days, technology is having a huge impact in the healthcare field and AI is currently supporting the healthcare business in a meaningful and precise way.
(Similar reading: Role of AI in healthcare)
Some of the latest innovations of AI, according to Health IT Analytics, are:
AI has helped in narrowing the gaps in mental healthcare. There are many smart-phone based AI applications that recognize different problems one might be facing with their mental health and offer cognitive behavioral therapy for them.
By identifying damage patterns and studying the types of fractures depicted on x-rays, AI is utilized to detect any symptoms of domestic abuse that may be experienced by individuals. This will allow medical personnel to approach the patient without fear of offending the patient's partner/spouse.
AI can recognize the sort of stroke a patient is having and determine where the clotting or bleeding is occurring. This aids detection since every second matters when someone is having a stroke.
AI has also made it possible to monitor brain health in real-time. The brain is one of the most complex organs of the human body and it produces a lot of complex data and is harder to treat. But, with the help of AI, many things like predicting seizures, identifying early stages of dementia, reading EEG, etc, are possible now.
(Recommended read: Top 9 Healthcare Technologies)
AI with the Internet of Things (IoT)
The Internet of Things (IoT) is a method of connecting a large number of physical objects over the Internet in order to gather and share data. IoT has done miracles in the world of technology, alongside AI.
Many products, such as smart locks, smart plugs, Google Nest, and others, have taken automation to the next level. It is a business solution and a powerful technology in the sphere of business. It is assumed that it comprehends and meets the basic needs of humans.
IoT devices function with instructions, but when combined with AI, they can foresee the requirements of the user and start equipment without the need for human interaction.
(Also read: How is AI integrated with IoT?)
AI in Smart Money
Artificial Intelligence is proving revolutionary in the field of finance. According to Appen, its fastest-advancing applications are as follows:
60% of the companies are now using Natural Language Processing (NLP), which is a subfield of machine learning that helps computers understand and manipulate the natural language spoken by humans.
70% of the firms are now actively using machine learning for business solutions.
Leading financial services businesses are seeing a 19% increase in overall income as a result of their AI projects.
49% of frontrunners have a full AI adoption plan in place, with departments expected to follow, providing them an instant size and speed advantage over competitors.
Today, 45 % of AI frontrunner companies are investing more than $5 million in AI projects, which is three times the amount of early or late adopters.
AI in Automobiles
Artificial Intelligence has made vehicles autonomous. Cars can now operate themselves without driver intervention, and there are six levels of automation that can be built into a vehicle.
At level 0, the car requires human control.
At level 1, the advanced driver assistance system (ADAS) in the car may aid the driver with navigation, acceleration, and brakes.
In some circumstances, the ADAS can handle steering, acceleration, and braking at level 2, but the human driver must maintain total attention to the driving environment during the travel while also completing the other responsibilities.
In some circumstances, the ADS (advanced driving system) can execute all aspects of the driving duty at level 3, but the human driver must be able to restore control when the ADS requests it.
In the remaining cases, the human driver performs the required tasks. In some circumstances where human attention is not required, the vehicle's ADS can execute all driving duties autonomously at level 4.
Finally, level 5 entails complete automation, in which the vehicle's ADS is capable of performing all duties in all situations and no human driver assistance is necessary.
The use of 5G technology will enable full automation by allowing cars to communicate not just with one another, but also with traffic signals, signage, and even the roadways themselves.
AI for Virtual Assistants and Chatbots
We've all heard of voice assistants like Alexa, Siri, and Google Assistant, as well as chatbots that are embedded into many websites to aid and advise new users. A voice assistant is a piece of software that employs NLP, Artificial Intelligence, and speech recognition to interpret and respond to a user's spoken instructions.
Chatbots, on the other hand, are programs that are meant to help a user 24 hours a day, seven days a week, and to reply correctly and answer any questions that the user may have. Most Chatbots and Virtual Assistants have pre-programmed response systems that respond in accordance with particular rules and patterns.
Some voice assistants can now communicate with the user and reply properly thanks to powerful AI. They even improve as they are used more. Siri and Alexa, for example, may converse with the user just like a normal human being!
(Recommended: Examples of AI)
AI in Processors
Because of the abundance of low-cost processors on the market, it is relatively simple to incorporate Artificial Intelligence and related technologies into projects and tasks.
AI-enabled processors or chips are offered from a variety of firms, including NVIDIA, AMD, and Qualcomm, and aid in the improvement of all business operations.
These chips are utilized in facial recognition and object detecting features. Biometrics are widely used in many devices since they improve security and restrict access to any system to only registered users.
AI in Quantum Computing

Artificial Intelligence has been playing a very important role in the progress of quantum computing. AI creates usable applications with classical computers, but it is restricted by their computing capabilities.
Quantum computing can provide artificial intelligence a computational boost, allowing it to solve more difficult tasks. The use of quantum computing for the calculation of machine learning algorithms is known as quantum AI. Because of the processing benefits of quantum computing, quantum AI can assist in achieving outcomes that are not achievable with traditional computers.
It is slowly but steadily expanding as firms embrace technology that allows them to solve problems more quickly. This can aid in the analysis and processing of large amounts of data, allowing the pattern to be quickly retrieved. Quantum AI is improving the sectors of banking and healthcare.
AI for Cybersecurity
Cybersecurity is very essential nowadays as most of the important data is stored on the internet by companies and businesses. Even as an individual one might have many personal data stored on the internet like passwords, photos, documents, etc. It is convenient and easier to find but with it comes the risk of data breach and leaking.
Every business requires internet security since all of their company's key databases, including financial data, plans, and private information, are housed online. Cybersecurity is a must-have for all businesses, making it one of the most essential uses of AI.
Cyber professionals can use Artificial Intelligence to identify and eliminate undesirable noise or data that they may notice. It enables them to be aware of any unusual activity or viruses and to be ready for any assault. In order to decrease cyber risks, it also analyses large volumes of data and improves the system accordingly.
(Also read: Best data security practices)
Robotic Process Automation (RPA)
Robotic process automation (RPA) is a technology that allows the creation, installation, and management of software robots that mimic human actions when interacting with digital systems and software. Many businesses are turning to RPA to improve their operations.
RPA is capable of handling and automating repetitive activities. It can help in the repetition of any task multiple times each day, freeing up human time for other useful pursuits.
RPA is widely utilized in the insurance business, but by incorporating AI into typical insurance RPA procedures, automation can employ image recognition to access and process claims with minimum human intervention.
(Suggested reading: Uses of RPA in manufacturing industry)
With the passing days, Artificial Intelligence is advancing more and more in its domain and setting new trends every day. In this article, we have discussed some of the latest innovations that have been possible with the help of AI. AI has now taken over many fields and has become a very essential part of businesses, healthcare, and other industries. | <urn:uuid:cd126152-c977-4f35-acc8-27712f5900e1> | CC-MAIN-2022-40 | https://www.analyticssteps.com/blogs/10-latest-innovations-artificial-intelligence-ai | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00716.warc.gz | en | 0.944243 | 2,073 | 2.703125 | 3 |
Microsoft Excel Tips to Make Your Life Easier
Microsoft Excel is packed with a lot of powerful features. At times it can seem intimidating to use. But knowing shortcuts can help get more comfortable with the product and save you time. Discover how to use Excel more efficiently with this collection of tips from CBT Nuggets trainer Simona Millham.
Want to learn even more about Excel? Check out her Microsoft Excel 2019 training at CBT Nuggets.
Microsoft Excel Tech Tips
Cell Styles are a great way to save and transfer a custom number format. Click the More button on the Styles gallery on the Home tab. Then choose to Create a New Style and be sure to include your custom number format. That's saved with the workbook, but you'll also notice a Merge Styles option, which allows you to bring in Cell Styles from other open workbooks.
When you have a list of Excel data, click in it somewhere and choose Format as Table from the Home tab. It applies some pretty formatting, yes, but more importantly, it gives you a Table Tools tab on the ribbon with lots of useful options.
When you format an Excel list as a Table from the Home tab, Excel knows you want to keep seeing your column headings as you scroll: the A, B, C column headings are replaced with the table headings.
Have you ever lost the grey gridlines around a cell? That's because the cell fill is set to White rather than No Fill.
You'll probably already know that you can type ' before a number to format it as text. This is useful, for example, if you need to keep the 0 at the beginning for a phone number or something. But what if you've got a list of phone numbers already entered that are missing zeros? The answer is to apply a custom number format.
You may know that if you type "January" in a cell, you can use Excel's fill handle to continue the pattern. But you can also create your own lists. Type the list, select it, go to File, Options, Advanced, and scroll down to General and click Edit Custom Lists. Click Import, OK, and try it out.
Do you wish your list of data was in a row rather than a column or vice versa? Copy the cells, click where you want the new list to start, and click the bottom half of the Paste button. Look for the "Transpose" option or just press "T" — and you're done!
There are some great pre-prepared templates available for you to choose from in Excel. Choose File, New, and if required, search for what you're looking for. You can use them to learn different ways to use Excel.
Want to change defaults in Excel such as the default number format or header and footer? First, create your ideal blank workbook with the required number format or whatever, and save it as a template called book.xltx in your XLSTART folder.
Use the Split button on the View ribbon tab to create different window panes that scroll independently.
If you have cells on one sheet that change when you update figures on other sheets, you might find the Watch Window useful. Click the cells you want to keep an eye on, choose Watch Window from the Formulas tab, and click Add Watch.
For an instant PivotTable, click in your list of Excel data, go to the Insert tab, and choose Recommended PivotTables. Select the option that looks most useful and bingo!
Get Excel to read out what you've just typed. Right-click on your Quick Access Toolbar, choose the customize option, display the Commands Not in the Ribbon list, and then find the Speak Cells on Enter button. We imagine this would be highly amusing to switch on for some unsuspecting colleague.
If you have historical time-based data, try out the new Forecast Sheet option on the Data ribbon tab in Excel 2016!
If you want to create a PivotTable from more than one Excel list, you can. Just make sure that you tick the box to add the data to the Data Model, which means you'll be able to see ALL the tables in your workbook in the field list. So, when you try to use one, you'll be prompted to create a relationship.
Does anyone remember in older versions of Excel that the default was 3 sheets in a new workbook? Now it seems to be 1, but you can change this by going to File, Options, and then General.
Sure, if you want to learn more about an Excel function, you can click "Help on this function" in the Insert Function dialog box, but if you want to browse a list of functions with examples, to potentially uncover some gems, have a look here.
Try out the Find & Select button on the Home tab. We especially like using this to select all formulas on a sheet.
Use Flash Fill rather than the text functions to tidy up your data. Just type the data as you wish it had been inputted and let Flash Fill do the rest!
With Flash Fill, there's no need to use the TRIM function to remove those odd spaces at the beginning of your imported data. You can just type the data as you wish it had appeared —and let Flash Fill do the rest!
With keyboards getting smaller and smaller, here's a useful tip for navigating your Excel list. Double click the bottom edge of the active cell to get to the bottom of the list and top edge to get to the top.
You can compare and merge different versions of a Shared Workbook, but you need to add the Compare and Merge Workbooks button to your Quick Access Toolbar. Please note that this tip comes with an official disclaimer about the use of the Share Workbook option on the Review tab, but if you've used it successfully before, you'll find Compare and Merge useful!
Slot alternative figures into your spreadsheet using the Scenario Manager to see how it affects your calculations. Select the changing cells, go to the Data tab, click What-If Analysis, and choose Scenario Manager. Then create a scenario for each set of figures you want to slot in. As an added bonus, add Scenarios to your Quick Access Toolbar to quickly switch between them. Right-click on the toolbar, choose Customize, and look for Scenarios from the Commands Not in the Ribbon list.
Have you tried Excel's Map Charts? They're a great option if your data has a geographical element. Find them on the Insert ribbon tab.
The next time you want to show how many people have done something, how much money something costs, how many products you've shipped, or how much time something takes, try Excel's People Graph. It's really neat! Find it in the Insert menu for Excel 2013 and 2016.
Oh my goodness, as if six new chart types wasn't enough for Excel 2016, another one has arrived! Find the new Funnel chart in the Waterfall or Stock chart category.
Keyboard & Mouse Shortcuts
One of my favorite keyboard shortcuts is using CTRL+Page Up and CTRL+Page Down to switch between sheets.
To enter the same figure into a group of cells, select the cells, type the number, and press CTRL+Enter.
Press CTRL+SHIFT+7 to add a border around selected cells.
You're probably used to doing this with a right-click on the row heading, but you can press CTRL+9 to hide a row. To unhide, select cells that span the hidden one and press CTRL+SHIFT+9.
Press CTRL+0 as a quick way to hide a selected column.
Have you ever got stuck in some strange number format in Excel? Just press CTRL+SHIFT+~ (tilde) to reset the cell to the General number format.
Press CTRL+D or CTRL+@ to repeat the contents of the cell above. You'll be surprised how often you use this command.
Press CTRL+SHIFT+L to turn the sorting buttons on and off in your lists. Simona says: "I actually discovered this by mistake. I mistyped CTRL+SHIFT+; to insert the current time in my timesheet — and found myself turning on the sorting buttons instead!"
Press CTRL+Spacebar to select a column, or SHIFT+Spacebar to select a row.
Press CTRL+Tab to switch between open workbooks.
If you ever need to enter the same information in the same place on multiple sheets, then you can select them and do this all in one go. CTRL+Click the sheet tabs you want to group (or click and SHIFT+Click a range) and get typing.
To quickly show all formulas on your Excel spreadsheet, press CTRL and that funny key just to the left of your 1 key: CTRL+` (the grave accent, or backtick).
… What IS that key called?! I'll just call it "that" key. Anyway, try it!
Select a row or column and press CTRL+SHIFT+Plus to insert another row or column.
Press SHIFT+F2 to insert a comment.
Press the F4 function key to put the dollar signs into your formula for your absolute cell references. This is much easier than trying to type them in. Keep pressing F4 to toggle through fixing the cell, the column, or the row.
You can press F9 instead of Enter to input the result of the formula rather than the formula itself. Select the bit you want to evaluate in the formula bar and press F9. Press Escape to return to the full formula.
To enter a fraction, type 0, press the spacebar, and type the fraction — including the slash. Excel will display the value as a fraction and store the decimal value.
Select the block of cells you want to enter data in, and then use the Enter key to move through the selection in columns or Tab to move in rows. This is more efficient than using the arrow keys!
If you need to fill weekdays only, use your RIGHT mouse button while you click and drag on the Fill Handle, which is the blob on the bottom right-hand corner of the active cell.
Press ALT+Enter to start a new line within the same cell. This is useful if the Wrap Text button alone doesn't put the line breaks in quite the right place.
For more tips like this, follow Simona on LinkedIn. Start a free week with CBT Nuggets to check out Simona's training for popular tools like Microsoft Word and Skype for Business. | <urn:uuid:cbae9352-98cf-4a37-b784-4d0f5e3ddd44> | CC-MAIN-2022-40 | https://www.cbtnuggets.com/blog/certifications/microsoft/microsoft-excel-tips-to-make-your-life-easier | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00716.warc.gz | en | 0.890534 | 2,182 | 2.609375 | 3 |
The Internet opens up a vast world of experiences, information, and possibilities, but it also comes with some inherent dangers.
Any piece of software designed to cause a computer to malfunction, perform tasks the primary user doesn’t want to do, or steal information from private files, is a computer virus. There are many ways for viruses to get onto your devices, including downloading viruses that are designed to mimic other content like music or videos, opening email attachments containing viruses, or even just visiting a website that launches a virus automatically.
Symptoms of a computer virus include slower than usual performance, the appearance of programs and icons you didn’t intentionally download, and the automatic redirection of your web browser to sites you don’t want to visit.
There are a number of safeguards and good practices that can stop viruses from accessing your devices.
1. Antivirus software runs programs while you surf to alert you of potential threats and stop you from downloading dangerous content from distrustful sources. These programs also run manual or regular automated scans of your device to identify and delete viruses that are already present.
2. Software updates of the programs you use every day often include patches and additions that close loopholes and exploits hackers and virus-makers might use to get access to your data.
3. Smart surfing practices, like only downloading from trusted sources, navigating away from websites that look suspicious, and not opening emails from people you don’t know, are all great ways to avoid inviting viruses into your devices.
We keep a lot of personal data on the Internet. This includes contact information, bank account numbers, or even our Social Security numbers. Without adequate protection, other people can get access to your personal data when you don’t want them to.
What is identity theft? Simply, identity theft is when someone accesses your personal data and uses it without your permission. Criminals use various means, including deception, hacking, and viruses, to steal others’ personal data and use it to do everything from making unauthorized credit card purchases to signing up for services using someone else’s name.
The two biggest components of identity protection online are being careful about giving personal information and making sure others can’t access your devices without your permission.
1. Only give out personal data to trusted sources and be wary of people who contact you claiming to represent a trusted source. Any trusted source, like your bank or a government organization, will publish its policies regarding data security. Unexpected or suspicious phone calls and emails claiming to be from trusted sources are often attempts by criminals to obtain your personal information.
2. Make strong passwords. Whether it’s for your online banking, your email account, or your online gaming ID, create unique, complex passwords for each application. A strong password includes both uppercase and lowercase letters, numbers and non-numerical symbols. To keep all of these passwords in order, you can use secure password management software.
3. Maintain a secure connection, whether wired or Wi-Fi. Wired connections are hard to hack, so most hackers try to break into others' Wi-Fi. When setting up a Wi-Fi network at home, at work, or on the go, make sure to enable secure access that requires a strong password. Also enable the strongest encryption your Wi-Fi router supports (WPA2 or WPA3; avoid the outdated WEP, which is easily cracked) in your router's settings.
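Point 2 above recommends long, unique passwords mixing several character classes. Such a password can be generated rather than invented; the sketch below is a minimal illustration using Python's standard-library secrets module (the length and symbol set are arbitrary choices, not a standard):

```python
import secrets
import string

def make_password(length=16):
    """Generate a random password mixing lowercase and uppercase
    letters, digits, and symbols, as recommended above."""
    pools = [string.ascii_lowercase, string.ascii_uppercase,
             string.digits, "!@#$%^&*()-_=+"]
    alphabet = "".join(pools)
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented.
        if all(any(c in pool for c in pw) for pool in pools):
            return pw

print(make_password())  # e.g. 'mT4!qZ8#rW2_pK9v'
```

Storing the result in a password manager, as the article suggests, avoids having to memorize it.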
Safe surfing may seem daunting with all the extra programs and mindful practices involved, but it’s actually just a shallow learning curve to a greater degree of personal security. With the right suite of programs and the consistent application of data defense behaviors, you can enjoy all the Internet has to offer without worry.
By Yazmin Gray | <urn:uuid:a4257f72-22e1-46ab-bbdb-9489a02f1ad7> | CC-MAIN-2022-40 | https://www.colocationamerica.com/blog/identity-theft-and-device-security-go-hand-in-hand | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00716.warc.gz | en | 0.92638 | 766 | 3.34375 | 3 |
WHAT IS A SPOOFING ATTACK?
In a spoofing attack, a malicious party or program impersonates another device or user on a network in order to launch attacks against network hosts, steal data, spread malware, or bypass access controls. Spoofing is often the way a bad actor gains access in order to execute a larger cyberattack such as an advanced persistent threat or a man-in-the-middle attack.
- IP address spoofing (or IP spoofing): The creation of IP packets with a false source IP address for the purpose of impersonating another computer system and gaining unauthorized access to machines.
- DNS spoofing (aka DNS cache poisoning): A form of computer security hacking in which corrupt Domain Name System data is introduced into the DNS resolver's cache, causing the name server to return an incorrect result record, e.g. an IP address.
- ARP spoofing: Spoofed Address Resolution Protocol (ARP) addresses are sent onto a LAN in order to associate the attacker's MAC address with the IP address of another host, causing any traffic meant for that IP address to be sent to the attacker instead. | <urn:uuid:a98cbf80-fd4b-4a14-b9e9-30f531396f9f> | CC-MAIN-2022-40 | https://www.contrastsecurity.com/glossary/spoofing-attack?hsLang=en | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00716.warc.gz | en | 0.903314 | 234 | 3.671875 | 4 |
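The IP spoofing entry above works because the source address in an IP packet is just an ordinary, unauthenticated field of the header. The Python sketch below builds a minimal IPv4 header with an arbitrary source address; it is an illustration only (the checksum is left at zero, and actually transmitting such a packet would require raw-socket privileges):

```python
import socket
import struct

def ipv4_header(src_ip, dst_ip, payload_len=0):
    # Minimal 20-byte IPv4 header. Nothing in the format verifies
    # that src_ip really belongs to the sender -- that is the
    # loophole IP spoofing exploits.
    version_ihl = (4 << 4) | 5  # version 4, header length 5 * 4 bytes
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, 20 + payload_len, 0, 0,
        64, socket.IPPROTO_UDP, 0,  # TTL, protocol, checksum (0)
        socket.inet_aton(src_ip),   # claimed (possibly forged) source
        socket.inet_aton(dst_ip),
    )

hdr = ipv4_header("203.0.113.7", "192.0.2.1")  # forged source address
```

Defenses such as ingress filtering (BCP 38) work by checking that the claimed source address is plausible for the network a packet arrives from.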
With the increase in technology, the risk of threats and the number of cybercrimes have also increased. According to one cybercrime study, a hacker attack occurs every 39 seconds on average.
The impact on businesses is equally striking: in 2018, the total global cost of cybercrime added up to over one trillion dollars.
Behind this huge number is the fact that people miss minor details in security, and those minor details become significant loopholes that make breaches easier.
All of this supports the observation that the people committing cybercrimes are often more organized, prepared, and collaborative than the people trying to protect systems.
A security breach causes many problems for a system. Common consequences include the leak or loss of data, and even the permanent disabling of the system. In the modern era of technology, no data is beyond reach for an attacker willing to cross every limit. But this doesn't mean that we can't protect ourselves.
Organizing a strong line of defense is the best option we have, and protecting the parts of a system most prone to attack also plays a significant role. This protection can be achieved by following the top 'CIS Critical Security Controls' guidelines.
Top 20 ‘CIS Critical Security Controls’ Guidelines
The ‘CIS Critical Security Controls for Effective Cyber Defense’ is a prioritized set of recommended actions that form a line of defense against cyberattacks. A community of IT experts with firsthand experience defending against the most severe cybercrime developed these control guidelines.
The guidelines apply to a wide range of sectors, including retail, manufacturing, education, defense, and others.
Actively manage all hardware devices requesting to connect to the network: grant access only to authorized devices and block all others.
Actively manage all software present on the network so that only authorized software can be installed and executed; everything else is blocked and prevented from installation or execution.
Continuously acquire, assess, and analyze new vulnerability information to minimize the window of opportunity for attackers.
Control the use of administrative privileges on tools and applications to prevent grave breaches of the system, and properly administer the mechanisms and services that grant those privileges.
Establish and actively manage (track and report on) the security configuration settings for every device, workstation, and server using a configuration management and change control process, so that attackers cannot exploit vulnerable settings or services.
Collect, manage, and analyze audit logs of all events that could help in understanding, detecting, and recovering from threats.
Minimize the opportunities for attackers to manipulate user behavior through their interaction with web browsers and email.
Control the installation, spread, and execution of malicious code, and take appropriate corrective action according to the threat.
Track and control the use of network ports, protocols, and services to limit the system's exposure to external connection attacks.
Use tools and processes that adequately back up data and can recover it properly and in time.
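As a toy illustration of the audit-log control above (collect, manage, and analyze logs), the snippet below tallies failed logins per source address. The log lines and their format are invented for the example; the point is that even very simple analysis turns collected logs into a signal a defender can act on:

```python
from collections import Counter

SAMPLE_LOG = """\
2024-01-01T10:00:01 sshd failed password for root from 203.0.113.9
2024-01-01T10:00:03 sshd failed password for root from 203.0.113.9
2024-01-01T10:00:07 sshd accepted password for alice from 198.51.100.4
"""

def count_failed_logins(log_text):
    # Tally failed login attempts per source IP address.
    fails = Counter()
    for line in log_text.splitlines():
        if "failed password" in line:
            fails[line.rsplit(" ", 1)[-1]] += 1
    return fails

print(count_failed_logins(SAMPLE_LOG))  # Counter({'203.0.113.9': 2})
```

A real deployment would feed such counts into alerting thresholds rather than printing them.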
11) Secure Configuration for Network Devices such as Firewalls, Routers, and Switches
Establish and actively manage the security configuration of network devices in real time to prevent external network threats, and put proper measures in place for when such threats occur.
Detect, record, and analyze any threat coming from outside, and alert the user. Boundary defense is the first line of defense: it watches over all networks for un-trusted services attempting to breach the system's data.
Use proper tools and services to prevent data leakage and to maintain data privacy and data integrity.
14) Controlled Access Based on the Need to Know
Control access across the network so that anyone reaching the data from another network can access only the data they need to know; access to any additional information is prevented.
Use proper tools and services to control the wireless connections on the premises as well as in any wireless client system.
Actively manage the complete account life cycle (creation, use, and deletion) to prevent an attacker from exploiting accounts to access information.
For every role in an organization, identify and learn specific knowledge and skills, which will help one in strengthening the line of defense from any attacks. This program will also help in identifying small loopholes that can create disaster in the future.
Prevent, detect, and correct security weaknesses in any in-house application by managing the security life cycle of the system.
Develop and implement an incident response infrastructure to ensure quick discovery of threats, an appropriate response, and removal of the threat from the system as soon as possible.
Test the full strength of the system by organizing penetration tests of system security. These tests determine the vulnerability of the system as well as the effectiveness of the security measures taken.
The ‘CIS Critical Security Controls’ guidelines are already beginning to revolutionize the world of cybersecurity for many governments as well as private organizations.
They include simple methods focusing on basic controls that block attacks efficiently and also protect the network from more significant attacks in the future.
The controls are agreed upon by a powerful consortium including organizations such as the NSA, DoD, US-CERT, and the Department of Energy nuclear laboratories, and other top forensic organizations and major communities already rely on them. So it is up to you now to make your network more secure.
One of the most impactful documentaries of 2020, “The Social Dilemma” raises important questions about how Artificial Intelligence (AI) is used in social media platforms. While the documentary was revelatory and even disturbing for some viewers, it didn’t offer a full picture of the current digital landscape.
Like many constantly evolving forms of technology, AI is prone to misconceptions thanks in part to inaccurate portrayal in books and movies like “The Social Dilemma.” As Deadline describes the plot, an actor plays a human embodiment of an AI algorithm. He keeps his prey, a high schooler, hooked on social media with a dopamine drip of “Likes,” notifications, friend suggestions, etc.
As an owner-operated managed IT solutions business with 20 years in the industry, we've seen technology be a force for both good and bad. But our rapidly evolving technology is merely a tool: it's up to us what we make of it.
As an example, social media can be used for good, from nonprofits using profiles for fundraising to small businesses leveraging the power of word-of-mouth referrals. Social media platforms merely connect people and their ideas faster.
“AI is growing at an exponential pace. The new algorithms and the new way that people are aggregating different types of data, it can be used for both good and bad. I think we need to educate both children and adults on all the ramifications,” Kaladhar Voruganti of Equinix noted at a recent CompTIA AI Advisory Council discussion.
The discussion about AI use and ethics is ongoing and will continue into the future, and rightly so.
A Defensive Computing Checklist
by Michael Horowitz
CHROMEBOOKS and CHROME OS
ChromeOS is the operating system on Chromebook laptops and Chromeboxes (tiny desktop computers).
Note that configuration settings for ChromeOS live in two places: some are Chrome browser settings, others are ChromeOS settings. The browser settings are available either by clicking the three vertical dots in the top right corner and then clicking "Settings", or by typing chrome://settings in the address bar. From the initial browser settings screen, click on "Chrome OS settings" to see the other settings.
- Suggested Browser Settings:
- Set the default Search Engine to something other than Google. Some pre-defined choices are DuckDuckGo, Ecosia, Yahoo and Bing. Note that DuckDuckGo gets its search results from Bing.
- Advanced -> downloads -> "Ask where to save each file before downloading" should be on
- Privacy and Security -> Security -> "Safe Browsing" should be set to "Enhanced protection" for an account used by a child. However, an adult may want this set to Standard or disabled because it requires browsing data to be sent to Google.
- Cookies and other site data -> turn on "Block third-party cookies". In the same section, maybe turn on "Clear cookies and site data when you quit Chrome". It does not always work perfectly, but it helps.
- Suggested Sync and Google services (in browser settings)
- Turn off "Autocomplete searches and URLs"
- Turn off "Help improve Chrome's features and performance" which "Automatically sends usage statistics and crash reports to Google"
- Turn off "Make searches and browsing better" which "sends URLs of pages you visit to Google"
- Turn off Enhanced spell check which "sends the text you type in the browser to Google"
- Turn off "Google Drive search suggestions" which lets Chrome access the files on Google drive to make suggestions in the address bar
- Suggested ChromeOS settings:
- Just underneath the Preferred search engine, is "Google Assistant". Turn it off, if you don't use it.
- Security and Privacy -> Turn off "Help improve Chrome OS features and performance" which Automatically sends diagnostic and usage data to Google.
- Security and Privacy -> Turn off "Suggest new content to explore"
- Advanced -> Languages and inputs -> "Suggestions" -> Turn off "Emoji suggestions"
- Safety Check-> click the blue "Check now" button to check for missing OS updates, malicious extensions, weak passwords and more
- In the Browser settings -> Cookies and other site data -> See all cookies and site data. This does just what it says. Maybe manually delete stuff here. This page can be bookmarked at
- Bluetooth is enabled by default. If you don't need it, turn it off. The On/Off switch for it is in the box that pops up when you click in the bottom right corner of the screen.
- DNS tip: You can specify an Encrypted DNS provider that works system-wide (for all Google accounts on the Chromebook, and Guest Mode too). As of Chrome OS version 88, do:
Settings -> Security -> Use secure DNS. I am a big fan of NextDNS and you can get a free account at their website, nextdns.io. Then, in Chrome, select the Custom option for secure DNS and
enter a URL such as
where zzzzzz is a NextDNS Profile ID.
- MICROSOFT OFFICE
There are many ways to view/edit Microsoft Office files on a Chromebook. This list is not complete.
- A how-to writeup from Google: Open & edit Office files on your Chromebook
- Any browser should be able to use www.Office.com but a free Microsoft account is required
- You can edit most Office files on ChromeOS if you install the Office Editing for Docs, Sheets, and Slides extension from Google
- You can edit Office files if you install the Office Online Chrome extension from Microsoft. This also requires a Microsoft account
- Guest Mode Explained: Think of it as private browsing mode on steroids. You start with a virgin copy of the operating system. No Android. No bookmarks. No extensions. Just the Chrome browser. While in Guest Mode, you can not create a bookmark or install a browser extension. When you log out of Guest Mode anything and everything you did is thrown away. To save a file from Guest mode, you have to copy it to a USB flash drive before logging out. This is one of my favorite aspects of a Chromebook. One down side is that you can not create a VPN connection while in Guest mode.
- GUEST MODE TIPS
- Safe browsing has three options: Enhanced protection, Standard protection and no protection. Enhanced protection sends browsing data to Google, No protection does not. It is a bit iffy as to whether Standard protection, which is the default (last checked Oct. 2021), phones home to Google. You may, therefore, want to turn off Safe browsing. To turn it off: Settings -> Security -> Safe Browsing. This has to be done every time you enter Guest Mode.
- Turn off the Chrome OS setting "Suggest new content to explore". It is in the Security and Privacy section. The description says that it "...sends statistics to improve suggestions only if you have chosen to share usage data". It is also available from the Address Bar using chrome://os-settings/osPrivacy (Note: case sensitive). This has to be done every time you enter Guest Mode.
- While in the Privacy and security section, verify that "Help improve Chrome OS features and performance" is disabled. The description says "Automatically sends diagnostic
data and usage data to Google".
- Turn off Bluetooth while in Guest mode if you don't need it. This setting seems to stick.
- Turn off the Chrome OS setting for "Emoji suggestions". It is at Languages and Inputs -> Suggestions. This has to be done every time you enter Guest Mode.
- Turn off the browser setting to "Preload pages for faster browsing and searching". It is in Privacy and Security -> Cookies and other site data.
- Maybe change the Search Engine from Google. The only options are Bing and Yahoo! This has to be done every time you enter Guest Mode.
- When a Chromebook wakes up from sleeping, it can either be ready to use immediately, or, require either a PIN or the Google account password to unlock it. There is no one right choice, just be aware that you can opt for security or convenience. The option is in Settings, look for Screen Lock. It is called "Show lock screen when waking from sleep".
- Chromebooks are Wi-Fi creatures, but you can also plug an Ethernet adapter into a USB port and make them more secure by using Ethernet for the Internet connection. It automatically uses Ethernet when available, still, you would be even safer if you disabled the Wi-Fi.
- NEW GOOGLE ACCOUNT ON CHROMEBOOK
When you first setup a new Gmail/Google account on a Chromebook, there are a number of steps. Here are some highlights (as of May 2022):
- Sign in to your Chromebook: enter a Gmail email address here
- There will be a checkbox to review Sync options after the initial setup. I would turn it on, can't hurt.
- There is a checkbox to backup to Google Drive that is on by default. No one right answer.
- There is a "Use Location" checkbox under Google Play apps and services. Turn that OFF as it lets Google spy on you. It is on by default. This is the Android side of the house.
- There is a checkbox that says "Optional: Help improve Chrome OS features and performance by automatically sending diagnostic and usage data to Google."
I would turn that off.
- You can enable or block Google Assistant. I think it is a bit more private to have it off. The choice is "No thanks" or "Turn on"
- There may be an option to sign up for Chromebook SPAM from Google. It used to be on by default and it was never explained. Not sure it still exists.
- If you chose to review Sync settings, you end up at chrome://settings/syncSetup where there is an option:
"Make searches and browsing better. Send URLs of pages you visit to Google". This is On by default, I would turn it Off.
- After setup you are dumped in a Welcome to Chromebook app. If you want to find it later, it is called "Explore"
- BlueTooth is on by default. If you do not need it, then turn it off
- BUYING A CHROMEBOOK?
- HAVE A CHROMEBOOK? Find the software expiration date:
- Starting around Feb. 2020, the expiration date of the Operating System started to be displayed in the About Chrome OS section. Click/press on "Additional Details".
- Google refers to the drop dead date for software updates as the AUE (Auto Update Expiration) date. See Check when your Chromebook's updates will stop from Google.
- The Google AUE is for ChromeOS. They say nothing about Android. In my one experience with a Chromebook past its AUE date, Android apps were still updated and they continued to function.
- How To: Check How Long Until Your Chromebook Stops Getting Updates
by Daniel Golightly for Android Headlines (June 2019)
- KIDS ON A CHROMEBOOK
A child's account is different from a normal Google account. A parent creates a child account using the Family Link app. This will keep the child's account under the control of the parent.
- Google has a SafeSearch option designed to prevent explicit content from showing up in search results. This is a Google search thing, not a Chrome OS thing. There are no settings on a Chromebook for this. To enable it, log into Google or Gmail and go to google.com/safesearch
- Settings -> Privacy and Security -> Security -> "Safe Browsing" should be set to "Enhanced protection" for an account used by a child. However, an adult may want this set to Standard or disabled because it requires browsing data to be sent to Google.
- How to Setup Chromebook for Your Child by Ravi Teja (Aug 2021)
- How to securely set up your own Chromebook for your kid’s remote school learning by Kevin C. Tofel (April 2020). Process starts at Settings -> People -> Parental Controls -> "Set up" button while logged on as the child. Family Link is the name of this feature and it lets adults allow access to only specified websites, limit screen time and approve/block Android apps. Adult gets prompted to install the Family Link mobile app on their phone - it is optional.
- How to prepare a Chromebook for your child with family link
controls by Shubham Agarwal (Nov 2020).
- Kids: Manage your child's account on a Chromebook from Google. It's complicated, with lots of options.
- Sometimes, software requires the user to hit an F (aka Function key) which does not exist on a Chromebook, by default. This article, How to use function keys on a Chromebook (by Kevin Tofel Aug 2022) shows that you can re-define the top row on the keyboard to act as Function Keys with Settings -> Device -> Keyboard -> "Treat top-row keys as function keys".
- There is no Delete key on a Chromebook but you can get the function with Alt-Backspace
- Thinkpad Chromebook: In February 2021, Lenovo released the first Chromebook with a Thinkpad keyboard. These are great keyboards. I blogged about my disappointment with the keyboard in the Chromebook in April 2021: First impressions of the Lenovo Thinkpad C13 Chromebook. It is expensive for a Chromebook and, if the keyboard is the attraction for you, not worth the money.
Printing was never great from a Chromebook, but it has gotten better over time.
- Excellent article: Which printers work with Chromebooks? Here’s a resource.
by Kevin C. Tofel (March 2022). The author says that most recent printers from the last few years will work. The article discusses Brother, Canon, Lexmark, HP, Epson, Ricoh and Kyocera.
- To add a printer, search for "add printer". Some printers can be found and configured automatically, especially those that support Internet Printing Protocol (IPP).
- Expect a very different experience using a Wi-Fi printer vs. a USB connected printer.
- A Wi-Fi printer is best assigned a permanent IP address, something that can be done either by the router or by the printer itself.
- USABILITY TIPS
- To see the extensions installed in Chrome browser enter chrome://extensions
- The ChromeOS task manager is available at both Escape+Search or three vertical dots in top right corner -> More tools -> Task manager
- If text on the screen is too big/small: Chrome OS Settings -> Device -> Displays -> Display Size
- If text on the screen is too big/small: Browser Settings -> Font size
- Mouse pointer too small? Chrome OS Settings -> Advanced -> Accessibility -> Manage accessibility features -> Show large mouse cursor
- If things go really bad:
- Fix hardware and system problems from Google
- Recover your Chromebook from Google. Covers removing and reinstalling the OS.
- Reset your Chromebook to factory settings from Google about Powerwashing
- As of September 2022, PDF files on a Chromebook open in the Gallery app rather than the Chrome browser. This change adds new features: you can fill out forms in PDF files, add text annotations, add highlights (yellow background color) and sign documents in free-hand. See Review, edit & sign PDFs from Google.
- A Chromebook can take dictation. When the option is enabled, a Microphone button will appear in the bottom right corner of the screen next to the time and the Wi-Fi indicator. Enable it: Chrome OS Settings -> Advanced -> Accessibility -> Enable dictation (speak to type).
- File types and external devices that work on Chromebooks from Google. ChromeOS does not support Apple HEIC format pictures. It does support Box and some other cloud file storage systems (and Google Drive of course). Locally, it supports SMB file sharing.
- As of ChromeOS version 90, released April 2021, there is a new Diagnostics app that shows info about the battery, CPU and RAM memory. It also offers tests of each. In addition, it can do a network connectivity test. Find it by searching for "diagnostics" in the search box that pops up after clicking on the start button/circle. The official term for this search box is the "Launcher search bar".
- How to revert Chrome OS to a prior version on a Chromebook by Kevin C. Tofel (Feb. 2022)
- You can transfer files from an Android device to a Chromebook using a USB cable.
- Multiple peeks into the internals of ChromeOS are available from chrome://chrome-urls. Perhaps the most useful is
- 5 reasons Chromebooks are the perfect laptop (for most users) by Jack Wallen for ZDnet (July 2022). They are cheap, fast, reliable and "The ease of use found in ChromeOS is light years ahead of the competition."
- Alt-clicking (right or left click, both work) on an icon on the taskbar is how you can pin or unpin the app from the taskbar.
Created: August 16, 2022
Copyright 2019 - 2022 | <urn:uuid:8a2c3fea-22a2-42a7-88f6-8891e1bb1934> | CC-MAIN-2022-40 | https://defensivecomputingchecklist.com/chromebook.php | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00116.warc.gz | en | 0.87734 | 3,353 | 2.703125 | 3 |
Your smartphone rings and you see “Spam Risk” on screen. Who is this mysterious Spam Risk, and why do they keep calling? We’ll explain—and tell you what you can do about it.
The Quick Answer
If you see “Spam Risk” on your iPhone or Android Caller ID screen when you receive an incoming call, it means that your mobile carrier has already automatically detected that the call is likely from a fraudulent or deceptive source. In general, that means you should ignore the call and not answer it.
In particular, “Spam Risk” is the label that AT&T uses as part of its automatic fraud call blocking system, called AT&T Call Protect, which it first launched on December 20, 2016. It aims to automatically identify and block fraudulent calls (often in conjunction with a paid add-on service) and to also identify suspected spam calls.
If you use another mobile carrier, you might see an alternate label such as “Spam,” “Telemarketer,” “Scam Risk,” “Scam Likely,” “Potential Spam,” or something similar. They all mean roughly the same thing: that your cell carrier has flagged the call as potential spam.
Wait, What Is Spam Anyway?
In tech, “spam” is a term for unwanted communications that come in at a high frequency. The term originated as a Monty Python TV comedy sketch reference that was applied to high volumes of disruptive messages in early online services, then to unsolicited emails on the internet in the 1990s. In the Monty Python sketch, a woman in a restaurant is confronted with a menu full of items made with Spam (the food product) to a frustrating and repetitive degree.
Since then, the term “spam” has generally become applied to any high-volume unwanted communications, including telephone calls and even physical paper mailings at times. Spam is a scourge of the modern connected world, and avoiding it is difficult.
Can I Silence “Spam Risk” Calls?
If you’re tired of being disturbed by calls labeled Spam Risk, you can turn on a feature called “Silence Unknown Callers” on your iPhone (in iOS 13 and up). The feature sends unknown callers (who aren’t in your Contacts list) straight to voicemail and silences any ring or notification from the call.
To do so, open the Settings app and navigate to Phone > Silence Unknown Callers, then flip the switch beside “Silence Unknown Callers” to the “On” position.
After that, any “Spam Risk” calls you receive will no longer ring your phone. You can manually screen the “Spam Risk” calls later by checking your “Recents” list in the Phone app on iPhone or Android. Or you can review voicemails you’ve received on iPhone.
Can “Spam Risk” Be Wrong?
Since the process of labeling calls as “Spam” is automated by your mobile service carrier, it’s possible that the “Spam Risk” label is incorrect. In that case, if you’re expecting an important call and got “Spam Risk” instead, tap the Phone app on your Android or iPhone and check your list of recent incoming calls.
Once there, review the numbers labeled “Spam Risk” to see if any look familiar, and if so, you can call them back or add them to your Contacts list so they don’t get mislabeled again in the future. Good luck!
RELATED: How to Add a New Contact to iPhone | <urn:uuid:9f84499c-5f96-4626-8859-d57d3e1cbb3a> | CC-MAIN-2022-40 | http://dztechno.com/who-is-spam-risk-and-why-do-they-keep-calling-me/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00116.warc.gz | en | 0.936205 | 793 | 2.703125 | 3 |
The information security objectives can differ at different levels of the organisation. These objectives must be established by the organisation according to its functions and the levels at which they apply. They should be consistent with the information security policy and measurable (if practicable). They must be formed considering the information security requirements and the results from risk assessment and treatment; those results will reflect the risk acceptance criteria discussed above. The objectives must be updated regularly, communicated to the organisation's members, and kept in documented form.
Now the organisation has to plan a course of action to achieve its objectives. Planning a course of action include what to be done and how. For example, an organisation’s objective is to secure servers within the organisation, the course of action will be securing its physical location and installing security software to protect it from cyber-attacks (internally or externally).
The plan must also identify which resources are required and ensure their availability. The planning phase will also define who is responsible for achieving the security objectives. The final part of the plan covers how much time is needed to achieve these objectives and how the results will be evaluated.
The Analog Monkey on Our Backs: Wrong-Thinking about Data over Cable
We inherit a lot of things from the past – not all of them good. When the telephone was invented by Elisha Gray, it evolved from wired telegraph lines. This was a natural evolution, since the telegraph lines already existed and were proven, whereas radio technology was still in its infancy. When television was first introduced, it was radiated through the air to receivers in homes, as there were no cable systems initially, but radio broadcast by then had become a proven technology. However, this was not how customers wanted to consume these services. When you are being entertained with pictures and sound, you usually prefer to be seated; but when you want to talk to someone, you frequently are on the go. So, due to technical constraints at the time, we ended up using a wired medium for an ideally mobile product, and a wireless medium for an ideally sedentary product. This helps explain the marketing success of both cell phones and cable television.
As service providers, we are still adjusting the wired and wireless mix, and current technology is still influencing that adjustment.
Claude Shannon (1916-2001)
In 1947 Claude Shannon came along with a remarkable paper, "Communication in the Presence of Noise," on the data capacity of a communications channel in the presence of random noise. He stated that the data capacity C (in bits per second) of the channel is C = B × log2(1 + S/N), where B is the channel bandwidth in Hz, S is the signal power, and N is the noise power.
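Shannon's formula is easy to evaluate directly. The sketch below (plain Python; the 6 MHz channel width and roughly 30 dB SNR are illustrative values, not taken from the paper) shows why extra bandwidth raises capacity faster than extra signal power once the SNR is already high.

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Channel capacity in bits per second: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

base = shannon_capacity(6e6, 1000)   # one 6 MHz channel at ~30 dB SNR
wide = shannon_capacity(12e6, 1000)  # doubling the bandwidth doubles C
loud = shannon_capacity(6e6, 2000)   # doubling the power adds only ~1 bit/symbol
```

With these numbers, `wide` is exactly twice `base`, while `loud` gains only about 10% over `base`.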
This brings up the question of what is the constraint on data capacity for any medium? If you want to increase the data capacity of a channel, is the channel lacking bandwidth, or is signal power constrained? The answer varies by medium. In wireless networks generally bandwidth is scarce. The US government sells it by the Hz, and it is very expensive. In fiber optic cable, signal power is usually required to be relatively low. A single mode fiber starts to go nonlinear at only about 10 milliwatts of power, but it has an enormous amount of bandwidth. On coaxial cable both power and bandwidth are constrained, and increased cable loss at high frequencies increases the noise power at higher frequencies. This is also true for telephony twisted pair, although the attenuation of twisted pair is much greater than coax. Cable’s capacity is not as constrained by Shannon’s Capacity Theorem as it is by common wrong assumptions, some of which are:
We need to deliver analog TV signals at any frequency that the cable plant carries.
Not true. There is no requirement for analog TV delivery at 1.8GHz, which is the highest frequency supported in the DOCSIS® 3.1 specification. In fact, the requirement to deliver ANY analog TV signals is disappearing, or is already gone.
4096 QAM is four times better than 1024 QAM.
Not true again. 4096 QAM transports 12 bits per symbol, and 1024 QAM transports 10 bits per symbol, so 4096 QAM has only 20% more capacity than 1024 QAM. But this 20% comes at an enormous price: roughly four times (6 dB more) the required signal power. On the other hand, if you could somehow find four times more bandwidth, you could use the same 4096 QAM signal power to transmit four times as much data. (Hint: look above 1GHz)
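The bits-per-symbol arithmetic is just a base-2 logarithm; a minimal check in plain Python:

```python
import math

def bits_per_symbol(m: int) -> int:
    """An M-QAM constellation carries log2(M) bits per symbol."""
    return int(math.log2(m))

# 4096 QAM vs 1024 QAM: 12 bits vs 10 bits, only a 20% capacity gain,
# compared with a 4x gain from 4x more bandwidth at the same QAM order.
capacity_gain = bits_per_symbol(4096) / bits_per_symbol(1024)  # 1.2
```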
In RF line amplifiers, steep up-tilts are required.
Not true again. This is a carry-over from analog TV delivery and is no longer required in DOCSIS 3.1 networks. In fact, the optimal way to spend your signal power is explained in Shannon's classic paper, in particular Fig. 8; it is now referred to as the water-pouring (or water-filling) method. Basically, in bands with low noise it is optimal to use a higher percentage of the available signal power, and in bands with high noise it is optimal to use a lower percentage. Besides, it is just not practical from a signal power standpoint to extend the up-tilt to 1.8GHz for analog TV delivery.
A technical paper on this topic has been published on the CableLabs website.
In conclusion, our networks are going all digital, the old requirements have changed, and there is a lot of additional data capacity available on cable networks. Data capacity can be accessed both by using higher frequencies and reallocating signal power more wisely.
Tom Williams is a Principal Architect at CableLabs. | <urn:uuid:ff395062-5445-4a27-8df6-bb25e0dbfb21> | CC-MAIN-2022-40 | https://www.cablelabs.com/blog/the-analog-monkey-on-our-backs-wrong-thinking-about-data-over-cable | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00116.warc.gz | en | 0.952433 | 952 | 2.71875 | 3 |
With the world going digital at a very quick pace, there is an increased sense of caution around cyber-security, as cyber-criminals now have an extended reach to target and attack victims all over the world. Thankfully, we have AI (Artificial Intelligence) and ML (Machine Learning) tools that can help us fight cyber-crime. Integrating the strength of artificial intelligence with cybersecurity has brought instant insights, resulting in reduced response times. Updating existing cybersecurity solutions and enforcing every applicable security layer doesn't ensure that your data is breach-proof, but having the strong support of advanced technologies will ease the task of security professionals.
Artificial Intelligence (AI) is the branch of computer sciences that emphasizes the development of intelligent machines. These machines are developed to think and work like humans.
With an everyday increase in cyber-attacks, we need to up the game and use AI tools to protect ourselves from cyber-attacks. AI tools make it easier to detect and resolve cyber-attacks. AI in cyber-security dives deeper into key areas to find the threats and adjust itself in a suitable way to resolve them.
AI is firstly trained by feeding billions of data artifacts from sources such as blogs and news stories, through machine learning and deep learning techniques, the AI improves itself to “understand” cyber-security threats.
AI then gathers insights and uses reasoning to identify the relationships between threats, such as malicious files or insiders. This process takes seconds or minutes, allowing analysts to respond to threats up to 60 times faster.
While many other industries have seen AI systems replace human workers, this isn't necessarily the case in cybersecurity. Humans, with their insights and instincts, are able to accomplish more when supported by the right set of tools. AI that supports and reacts to human behavior allows cyber professionals to focus on critical tasks and analyze threats; it also informs decisions when remediating a breach. Autonomous cybersecurity doesn't mean cybersecurity without humans.
- Handling huge volumes of security data: AI software running on today’s powerful processors can zip through more data in minutes than humans could tackle in months and list problems and anomalies immediately.
- Acceleration of detection and response times: AI can speed up the detection of genuine problems, rapidly cross-referencing different alerts and sources of security data.
As influential as AI is, and can be, on cybersecurity, will it replace human cyber engineers? The answer is no. However, AI will change the kind of work cyber engineers do, because for IT teams to successfully implement AI technologies in cybersecurity, they will need a new category of experts to train the AI and analyze the results.
AI can help to fill the gap and improve some domains in the cybersecurity sector, although it may create a need for new skill-sets to be learned by humans in the industry. AI and the human workforce are not in conflict with one another in this field; they are more likely to complement each other. The future is bright and full of potential for AI and humans to work together at the front lines of cyber-defense.
AI may be great for processing large amounts of data and replacing manual tasks, but it will never replace an analyst's insights. Some data points require a level of interpretation that even computers and algorithms can't quite comprehend yet.
What You Will Learn
After reading this chapter, you should be able to
Understand the IP version 4 (IPv4) addressing protocol, and the similarities to the addresses used in the postal delivery system
Explain why hosts need two addresses, Ethernet and Internet, and how the Address Resolution Protocol (ARP) is used to determine the Ethernet address of a host given the host's IP address
Understand the TCP and UDP protocols, and how they are used to deliver data to a destination host
Explain the TCP/IP layered model by comparing it to the layer model that was developed for the postal delivery system
Differentiate between classful and classless IP addresses
Learn how to subnet and summarize IP networks
Learn how routers forward packets using the longest match operation
Describe the IP version 6 (IPv6) addressing protocol
In Chapter 1, you examined systems for delivering the mail, planning a road trip, and making telephone calls. Chapter 2 introduced the binary, octal, and hexadecimal numbering systems. You need to understand how computers represent information, and how you can move between number systems to represent binary numbers in a more readable form. In this chapter, the concepts from the first two chapters will be combined to understand the schemes that are necessary to create a scalable computer communication system: the Internet.
To begin our discussion on computer communication over a network, this section looks at the similarities between mail delivery between houses, and data delivery between computers. The endpoints in mail delivery are houses, and the endpoints between electronic data delivery are computers. Certainly there can be other endpoints in both systems. Letters can be delivered from a house to a business, from a business to a house, between two businesses, and so on. Electronic data delivery can be from a news service to your cell phone or personal data assistant (PDA), from your computer to your friend's pager, from environmental sensors in a building to the heating and cooling control systems for that building, and so on. But to keep the discussion simple, it will suffice to concentrate on mail delivery between houses, and electronic data delivery between computers. The first analogy is that an endpoint in a mail delivery system, a house, is equivalent to the endpoint in a computer communication system, a PC. (See Figure 3-1.)
Figure 3-1 Equivalent Endpoints in the Mail and Data Communication Systems
In the mail delivery system, the function of the post office is to deliver mail to a particular house. In the computer communication system, the function of the Internet is to deliver data to a particular PC. Yet, in both systems, the endpoint is not the ultimate destination. For mail, the ultimate recipient is a person. For data, the ultimate recipient is an application such as an e-mail program, a web browser, an audio or video program, an instant messaging program, or any number of wonderful applications that exist today. (See Figure 3-2.)
Figure 3-2 Final Destinations in the Postal and Electronic Data Delivery Systems
Although the ultimate recipient is a person or a software application, the responsibility of the systems stops when the mail, or data, is delivered to the proper house, or computer. However, as part of the address, you still need the ultimate recipient, either a person or an application, even though this information is not used for delivery to an endpoint. The endpoint uses the name or application to enable delivery to the recipient.
Because the two systems are analogous, it is instructive to revisit the format of an address in the mail delivery system and see if you can use a similar format for electronic data delivery:
Name
Street Number, Street Name
City, State
Although there are five distinct pieces of information in the mail address (name, street number, street name, city, and state), you can consider an address to contain only four pieces of information. For endpoint delivery, you can ignore the name field. You are left with

Street Number, Street Name, City, State
The postal system routers (core, distribution, and access) use the state, city, and street names to deliver the mail from the source access post office to the destination access post office. The street number is not needed until the mail arrives at the access post office that is directly connected to the destination street. So, the address can be broken down into
State, City, Street Name
Street Number
The state, city, and street name information enables the mail to get close to the destination (a particular street). The street number is used to deliver the mail to the proper house. What is the analogy in the computer world to houses on a street? Recall from Chapter 1 that a group of computers can directly communicate with each other through a switch residing on a local-area network (LAN). So a LAN is the computer equivalent to a street. (See Figure 3-3.)
Figure 3-3 LAN of Computers Is Similar to a Street of Houses
Chapter 1 also mentioned that computers have an address, and the most common technology used for computer communication is Ethernet. The sample Ethernet address that was presented in Chapter 1 was 00-03-47-92-9C-6F.
Before you learn more about Ethernet addresses, take the following quiz to make sure you understand the concepts described so far:
What number base is used to represent the Ethernet address?
How many bytes are in an Ethernet address?
How many bits are in an Ethernet Address?
How many Ethernet addresses are possible?
Answer: Hexadecimal, because the symbols C and F are not used in the other number bases that we discussed. Computers compute using binary; the hexadecimal representation is for our benefit because it is easier to read and write.

Answer: Six. One hexadecimal digit contains 4 bits, or 1/2 byte. Two hexadecimal digits contain 8 bits, or 1 byte. An Ethernet address contains 12 hexadecimal digits, or 6 bytes.

Answer: 48 (6 bytes at 8 bits per byte).

Answer: 2^48, or 281 trillion, 474 billion, 976 million, 710 thousand, six hundred fifty-six (281,474,976,710,656).
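The quiz arithmetic can be checked mechanically. The sketch below (plain Python, using the sample address quoted earlier in the chapter) derives the byte count, bit count, and total address space.

```python
mac = "00-03-47-92-9C-6F"      # the sample Ethernet address from Chapter 1
octets = mac.split("-")

num_bytes = len(octets)        # 6 bytes (12 hexadecimal digits)
num_bits = num_bytes * 8       # 48 bits
address_space = 2 ** num_bits  # 281,474,976,710,656 possible addresses
```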
An Ethernet address is not a property of your PC. An Ethernet address is a property of the Ethernet card, or built in Ethernet port in your PC. If you put a new Ethernet card in your PC, the Ethernet address of your PC changes.
By itself, an Ethernet address cannot deliver data between two endpoints on the Internet. The reason is that there is no structure to an Ethernet address. There are many manufacturers of Ethernet cards for computers, and each manufacturer is assigned a block of Ethernet address to use for their particular brand of card.
An analogy would be to have 281,474,976,710,656 postal addresses that are sold in a local postal address store. Each local postal address store is given a block of numbers from the total range of numbers that are possible. A postal address is just a number between 0 and 281,474,976,710,655. When you build a house, you would go to the local postal address store and your house would be assigned one of the numbers that hasn't yet been assigned. Everyone in your city would need to get a number assigned from the local postal address store. Because people will not be going to the store in any order, numbers will be assigned randomly throughout the city. The only way that these numbers can be used to deliver mail is if every post office at every level (core, distribution, and access) maintained a list of every number, and the route to reach that number. Therefore, every post office would need to maintain a list of 281,474,976,710,656 addresses and the route to get there. Obviously, this is not scalable. So in addition to an Ethernet address, you need another address that has a structure analogous to the structure of the postal address. What you need is an Internet addressing protocol. | <urn:uuid:37e5ff5f-a9e6-4199-93fb-a49059c194f4> | CC-MAIN-2022-40 | https://www.ciscopress.com/articles/article.asp?p=348253&seqNum=3 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00116.warc.gz | en | 0.919371 | 1,651 | 4.0625 | 4 |
From self-activating street lights to available parking space alerts, our cities are getting increasingly smarter. By 2020, global spend on smart cities will exceed $34bn, including everything from augmented reality to connected cars.
Current demand will continue to accelerate, as consumers increasingly seek ubiquitous, multi-device connectivity. Traditional networking technologies will no longer be enough for high-bandwidth-dependent applications, as they’ll quickly run out of spectrum. The smart cities of the future will depend on innovative technologies, such as 5G mmWave, to provide the low-latency connectivity and ultra-fast speeds of data transfer needed to deliver this new, connected world.
There are three technologies specifically that will be critical for creating the 5G-powered smart cities of the future.
The principle of smart cities is the sharing of data for intelligent applications. This idea of sharing information is at the core of open source technology. For smart cities, dissolving the exclusivity around networking technologies is a substantial benefit. With open source software development, technology providers can take the elements they need for their own network and configure the technology to their own application. It opens up telecoms networks to be applied to a whole host of third-party use cases – enabling anyone to develop a product or application based on these technologies. This means a company focused on connected cars, for example, can easily integrate with 5G mmWave networks in the area, benefitting from 5G speeds of connectivity even at speeds of 160mph, thanks to these high-frequency wavelengths.
The shift towards software-defined networking (SDN), or “software-isation”, is a considerable change for the telecoms industry. Hardware used to be at the core of wireless networking – a hardware switch performing one function, and a box performing another. With SDN, we finally have a means of controlling network performance from end to end with one consistent interface. Bandwidth and performance are managed by the software layers in the network, meaning we now have more control over network responsiveness, allowing for more data-intensive smart city applications.
Integrating a software layer also allows more data to be processed at the edge of the network, bringing the cloud to smart city devices, and reducing data “travel time” to ensure a higher speed of connectivity. This all makes for a smoother, seamless user experience.
Currently, telecoms providers buy licences which enable them to use a part of the commonly used licensed spectrum. However, as user numbers have grown, the licensed spectrum has become overcrowded, impacting connectivity performance and reliability. The logical solution is to find a way to increase the available bandwidth – unlicensed mmWave provides the means to do this. This band has a large amount of spectrum available (14GHz), meaning higher performance and a better user experience for demanding smart city applications; and as no licence is required, more funds can be fed back into providing connectivity for more users. mmWave is also ultra-fast: Blu Wireless's mmWave Typhoon units can achieve fibre-level high-bandwidth connectivity with per-link data rates of up to 3.5Gbps at 64QAM.
Many major global tech firms are already advocating for mmWave as the future of wireless networking, including Arm and Facebook, whose TIP project specifically focuses on use cases for mmWave.
Smart Cities of the Future
In the future, everything will be smart. To prepare for this, we need to embrace this shift towards software-defined networking hardware. By investing in cost-effective, unlicensed small cell mmWave networks, we can provide enough bandwidth for the data-intensive, demanding applications of tomorrow’s smart cities. It will be fascinating to watch this new connected world unfold before our eyes. | <urn:uuid:daa14b5a-9a38-4663-94b7-da48aa9bba07> | CC-MAIN-2022-40 | https://www.bluwireless.com/insight/blog/these-3-networking-technologies-are-critical-for-tomorrows-smart-cities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337836.93/warc/CC-MAIN-20221006124156-20221006154156-00116.warc.gz | en | 0.9246 | 776 | 2.828125 | 3 |
Microsoft Excel: Level One
Who Should Attend
Federal employees, contractors, self taught individuals, and anyone else who wants to learn Microsoft Excel fundamentals - and beyond.
What You Will Learn
- Master the Excel interface and fundamentals, such as mouse controls, keyboard shortcuts, and dialog boxes.
- Select, retrieve, view, and chart data.
- Discover how to use Excel as a database and understand how and why we use spreadsheets.
- Review sorting & filtering, formulas, functions, charts, and graphs.
- Enter labels and values into a workbook.
- Navigate, name, and save a workbook.
- Enter and work with date values and Autofill.
- Edit, clear, replace cell contents, and use the Clipboard.
- Cut, Copy, Paste, Paste Special, and move cells.
- Insert and delete cells, rows, and columns, and adjust row height and column width.
- Use Undo, Redo, and Repeat.
- Find and replace information.
- Understand and effectively incorporate Smart Tags.
- Use the Format Painter to copy formatting.
- Merge cells, align a cell's contents, add borders and colors, and rotate text.
- Create charts, format objects in charts, change a chart’s source data and chart type, and work with a 3-D chart and custom charts.
- Hands-on exercises with time set aside to practice what you have learned.
Why You Should Attend
Our training maximizes learning and allows for more “hands-on” practice. You also receive a copy of Teach Yourself Visually Excel – a full-color, user-friendly manual. | <urn:uuid:23e29b0a-6657-4f80-b5dd-074b4d914a94> | CC-MAIN-2022-40 | https://www.federaltraining.com/courses/Microsoft_Office/Excel_2016_2019_training/introduction.aspx | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00317.warc.gz | en | 0.736398 | 368 | 2.9375 | 3 |
- Quantum Necklaces and accessories claiming to "protect" people from 5G mobile networks are radioactive.
- Dutch Officials issue product alerts and say ‘quantum pendants’ could damage DNA with prolonged use.
The Dutch nuclear safety authority issued a warning about ten products it found gave off harmful ionizing radiation. It urged people not to use the products, which could cause harm with long-term wear. There is no evidence that 5G networks are harmful to health.
The World Health Organization says 5G mobile networks are safe, and not fundamentally different from existing 3G and 4G signals. Mobile networks use non-ionizing radio waves that do not damage DNA.
People who wear “anti-5G” pendants to “protect” themselves from radio frequencies emitted by phone masts have been told by the Dutch nuclear authority that their necklaces are dangerously radioactive. Owners of “quantum pendants” and other “negative ion” jewelry have been advised to store them away, as they have been found to continuously emit ionizing radiation.
The safety agency proclaimed that “due to the potential health risk they pose, these consumer products containing radioactive materials are therefore prohibited by law. Ionizing radiation can damage tissue and DNA and can cause, for example, red skin. Only low levels of radiation have been measured on these specific products and exposure to ionizing radiation can cause adverse health effects.”
This product alert was issued by the Dutch authority for nuclear safety and radiation protection (ANVS) concerning 10 products.
The products identified included an "Energy Armor" sleeping mask, bracelet, and necklace. A bracelet for children, branded Magnetix Wellness, was also found to be emitting radiation. The ANVS added: "However, someone who wears a product of this kind for a prolonged period (a year, 24 hours a day) could expose themselves to a level of radiation that exceeds the stringent limit for skin exposure that applies in the Netherlands. To avoid any risk, the ANVS calls on owners of such items not to wear them from now on."
As governments across the world have started to establish the infrastructure for fast 5G internet, a range of groups has emerged voicing fears over the health effects of mobile telephony, and there have been attacks on transmitters by people who believe they are harmful. The concerns vary from questioning the level of research that has been done into the impact of radio frequencies and proximity to masts, to allegations that 5G is the cause of anything from headaches to immune deficiencies.
"The sellers in the Netherlands known to the ANVS have been told that the sale is prohibited and must be stopped immediately and that they must inform their customers about this." Conspiracy theories have fuelled a market of "anti-5G" devices that are typically found to have no effect.
In May 2020, the UK Trading Standards sought to halt sales of a £339 USB stick that claimed to offer "protection" from 5G. So-called "anti-radiation stickers" have also been sold on Amazon. Despite this, an industry suggesting that certain types of jewelry, including one product mentioned in the Dutch alert that claims to “utilize pure minerals and volcanic ash that are extracted from the Earth,” has burgeoned. WHO has said that 5G is safe and that there is nothing fundamentally different about the physical characteristics of the radio signals produced by 5G compared with those produced by 3G and 4G. Last year, 15 EU member states called on the European Commission to address a spate of conspiracy theories that had led to arson attacks against telecommunications masts. | <urn:uuid:169e9c03-461f-4e0d-9f14-31a78f3e7870> | CC-MAIN-2022-40 | https://industryoutreachmagazine.com/anti-5g-quantum-chains-found-to-be-perilous-and-radioactive/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00317.warc.gz | en | 0.969999 | 771 | 2.65625 | 3 |
Having your IP address end up on a so-called ‘blacklist’ can be a troublesome experience, especially when not anticipated. In most cases, it is a sign that something is wrong on the server(s) you rent or own, or that one of the end users hasn’t followed email sending guidelines. This post is dedicated to those who want to know more about IP address reputation and what can be done to resolve issues identified by other parties.
The ongoing fight against spam
As we have already explained in the Spam blogs (I and II), email spam continues to be an issue. Due to the ever-evolving problem of email spam, there is an understandable need to have measures to combat this. Over the years, several efforts have been made to prevent unsolicited emails from reaching email inboxes by a plethora of means. Many of these proposed solutions have had promising technical white papers but few have actually resulted in an implementation that is either scalable, reliable or both.
What is a blacklist (or DNSBL)?
Nowadays, practically speaking, the most useful identifiers to help with stopping spam en masse are the IP addresses of the servers that emit the unsolicited messages. Thus, the prevention mechanism most often employed by mailserver administrators is a simple block of these ‘bad’ IP addresses. In order to create efficiency in this process the idea of crowdsourcing this data and centralizing it was fostered and ‘DNS-based Blackhole Lists’ (DNSBLs) were born.
DNSBLs are in some way a form of internet police, the “internet sheriffs” you might say. If an IP address gets involved with something the DNSBL operators disapprove of, and they become aware of this, they might decide to put that IP address on their list.
How do IP addresses end up on a DNSBL?
The thing with DNSBLs is that each one of them operates within its own set of rules and with a focus on a certain abuse category. The most common abuse category among these lists is obviously spam but there are also blacklists that focus on hacking, malware, botnets or even Tor exit nodes. The various DNSBLs employ a wide variety of techniques to gather these IP addresses including: mailtraps, honeypots, botnet analysis and crowdsourcing data from participating mail clients.
How do IP address lookups work?
As mentioned already, each DNSBL has its own criteria for designating an IP address as having a bad reputation. This reputation is published by means of a DNS record and the DNS servers run by DNSBL administrators are open to the public to perform lookups of IP addresses on. DNS was originally meant for looking up domain names but it has become the de-facto method to distribute IP address reputation designations due to its low overhead and high scalability.
From a technical perspective, a lookup is done by performing the following steps:
- Reverse the IP address
- Append the DNSBL domain
- Do a DNS lookup of the resulting ‘domain’
This will either result in an ‘NXDOMAIN’ (non-existent domain) response or will return an IP address (usually in the 127.0.0.x range). When an IP address is returned, the queried IP address is ‘listed’. Below, you will find an example of each, using the documentation address 192.0.2.1 as a stand-in:

‘Blacklisted’ IP address: the lookup returns an answer such as 127.0.0.2.

‘Clean’ IP address: the lookup fails, for example:

*** server can’t find 1.2.0.192.zen.spamhaus.org: Non-existent domain
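The three lookup steps above can be sketched in a few lines of Python (standard library only; the default zone and the example IP are illustrative, and any real blocking policy should follow each list's usage guidelines):

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Steps 1 and 2: reverse the IPv4 octets, then append the DNSBL zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "zen.spamhaus.org"):
    """Step 3: resolve the name; an answer means 'listed', NXDOMAIN means clean."""
    try:
        return socket.gethostbyname(dnsbl_query_name(ip, zone))
    except socket.gaierror:
        return None  # NXDOMAIN: this IP is not on the list
```

A single listing should inform rather than dictate a blocking decision; querying several lists and combining the results reduces false positives.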
Most DNSBLs have guidelines on how to use the responses from their DNS server. In general, it is advised to use data from multiple sources before blocking emails. However, many mail servers are knowingly or unknowingly set up to refuse emails from any IP address that is on at least one DNSBL. While not ideal and often not according to guidelines, this is the reality that email senders have to live with; a single, potentially false positive listing can have disastrous results on email deliverability.
Listed, now what?
Once an IP address is listed on a DNSBL, for whatever reason, there is a chance that email deliverability will be affected. This is a problem that needs to be resolved. Luckily, most lists allow for de-listing once the operator of the IP address has confirmed a solution to the problem or incident that caused the listing. An example of a de-listing request form can be found on Barracuda Central’s website. Just as the criteria for listing an IP address differ from DNSBL to DNSBL, the requirements for de-listing are also list-specific. However, in most cases, de-listing requests are processed within 24 hours.
There is one thing that most DNSBLs have in common: the way they deal with removal requests while the source of the problem is NOT taken care of. Often, this will result in more difficulty getting the listing removed in future requests. While mitigation of the cause would initially have been enough to get an IP address de-listed, after invalid removal requests, the DNSBL might now require you to provide additional proof of the resolution. It is thus wise to only request de-listings when you are sure that the problem has actually been resolved.
What about Hotmail/Microsoft?
If you have mail delivery issues to Microsoft-managed domains, it might be because Microsoft is bouncing your emails. If this is the case, you will get the following response from the destination mail server:
“host mx4.hotmail.com[xx.xx.xx.xx] said: 550 SC-001 Mail rejected by Windows Live Hotmail for policy reasons. Reasons for rejection may be related to content with spam-like characteristics or IP/domain reputation problems. If you are not an email/network admin please contact your E-mail/Internet Service Provider for help. Email/network admins, please visit MSN Postmaster for email delivery information and support (in reply to MAIL FROM command)”
Microsoft takes a different approach to preventing spam. The above message doesn’t necessarily mean that your specific IP address is ‘blacklisted’. Lately, more and more ranges are ‘listed’ by default. While the above bounce might indicate otherwise, Microsoft has effectively taken a ‘whitelist’ approach for email delivery originating from certain ranges to their platform. Simply put, Microsoft wants to know what type of email you send before you can send email to their managed inboxes. A request to be whitelisted can be made on this page.
While the Microsoft list is not publicly available, you can request to have access to your IP address status through Microsoft’s Smart Network Data Service.
As Leaseweb offers unfiltered access to the internet, like any other large unmanaged hosting provider, it cannot always prevent the negative effects on network reputation by intentional and unintentional unsolicited – or even malicious – network activity. To mitigate these issues, in addition to actively monitoring our network reputation, we also put effort in educating our customers because, after all, we can only create a safer internet with the collaboration of our customers.
When we identify new issues within our network, we do our best to mitigate them as quickly as possible. To facilitate this, we use every available information source. To support the DNSBL community, we have included several in our Community Outreach Program; a notable one is Spamhaus.
If you run a medium to large sized DNSBL, we are happy to help you out as well by providing free servers for additional mirrors! | <urn:uuid:acea82ef-2e8a-41dc-b94d-3ea9568d083a> | CC-MAIN-2022-40 | https://blog.leaseweb.com/2016/04/14/need-know-ip-address-blacklisting/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00317.warc.gz | en | 0.942356 | 1,653 | 2.703125 | 3 |
Software can also be used to report unusual calling patterns from ‘legitimate’ phones, drawing any that might be running rogue dialler software to the administrator’s attention.
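As a hypothetical illustration of such reporting, the sketch below flags extensions whose outbound call rate exceeds a threshold, a crude signal that rogue dialler software may be at work. The call-record format and the threshold are assumptions for illustration, not any particular product's API.

```python
from collections import Counter

def flag_suspect_extensions(call_log, max_calls_per_hour=60):
    """call_log: iterable of (extension, hour) tuples, one per outbound
    call. Return the extensions whose call count in any single hour
    exceeds the threshold, sorted for stable reporting."""
    counts = Counter(call_log)
    return sorted({ext for (ext, _hour), n in counts.items()
                   if n > max_calls_per_hour})
```

A real system would also weight factors such as destination numbers, call duration, and time of day before alerting the administrator.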
Access to billing systems and records can be protected using conventional IT security measures. A point of weakness that organisations need to be aware of is disaffected staff – especially if they are employed in management of the IP telephony system.
The goal: To cause inconvenience or offence
How it works: SPIT – Spam over Internet Telephony – can be thought of as a new, and potentially more disruptive, way for people to make nuisance calls.
Because VoIP is a data service, the rate at which voice messages can be sent isn’t limited by the number of lines the caller has available or the rate at which numbers can be dialled. Instead, an audio file could be uploaded to a computer and sent to a list of target IP addresses in much the same way that email spam is sent to people’s inboxes. Depending on the performance of the computer and the capacity of the network connection, thousands of ‘calls’ could be made every few minutes.
These might simply promote products and services that recipients don’t want, or they could have a more malicious intent.
How to stop it: While not yet a major problem, experts expect SPIT to become an increasing irritation as IP telephony becomes more commonplace.
Solutions similar to those used to remove spam messages from email inboxes will be required to prevent SPIT reaching its target, wasting the recipient’s time and consuming network resources unnecessarily.
These will need to achieve higher levels of performance, however, to avoid introducing delays that could disrupt ‘legitimate’ calls.
The goal: To listen in to calls or otherwise acquire confidential information.
How it works: VOMIT is an acronym for Voice Over Misconfigured Internet Telephony. It is a technique that can be used when the data packets that make up phone calls are transmitted through a network that also carries data. | <urn:uuid:876cb968-96e2-4f02-b089-05f5479ef6bd> | CC-MAIN-2022-40 | https://it-observer.com/voip-security5.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00317.warc.gz | en | 0.932018 | 435 | 2.546875 | 3 |
What is Scareware
Scareware is a type of malware attack that claims to have detected a virus or other issue on a device and directs the user to download or buy malicious software to resolve the problem. Generally speaking, scareware is the gateway to a more intricate cyberattack and not an attack in and of itself.
Scareware attacks often begin with a pop up ad that appears to be from a legitimate security software provider or the computer’s operating system. If clicked, the scareware ad will direct the user to an infected website where they are given additional instructions to solve their so-called problem. This may include installing a new tool or program, running a computer scan, entering log-in credentials for more information or uploading their credit card information to continue the recovery process. This will often result in the user inadvertently and unknowingly downloading malicious programs, such as malware, ransomware, spyware, a virus or a Trojan onto their device.
Scareware attacks may also be conducted via email. In this type of attack, cybercriminals, also usually disguised as a fake antivirus software program, send a high-priority or urgent email that requests immediate action by the user. Clicking links within the email, which are often presented as ways to resolve the threat or scan the system, result in the user downloading and installing infected files, malicious code or malicious programs.
Scareware is often part of a multi-prong attack which incorporates social engineering techniques and spoofing to heighten the sense of urgency and drive the desired behavior. Scareware attacks, like many forms of malware attacks, are especially troublesome in that the scammer may gain access to the user’s account information or credit card details, which can put the user at risk of identity theft or other forms of fraud.
Scareware vs Ransomware
Scareware commonly falls into the category of a ransomware attack in that the cybercriminals’ end goal is to have the user download ransomware software. Ransomware is a type of malware that denies access to a user’s system and personal information, and demands a payment (ransom) to regain access.
That said, while some types of scareware lead to ransomware attacks, others are more of a nuisance. For example, these attacks may simply flood the screen with pop-up alerts without actually damaging files.
What to do in the event of a suspected scareware attack
If you suspect that you are the victim of a scareware attack, it is important to act quickly and decisively to contain the problem. Follow these steps:
- Disable WiFi or internet access from the affected device and disconnect it from any network.
- If you are using a company-owned device, immediately contact your IT team for further instructions.
- Otherwise, launch a full security scan using a reputable antivirus software provider to look for infected files and known threats, such as malware, ransomware, spyware, viruses and Trojans.
- Restart the device in safe mode and run the sweep again.
- If the scan reveals signs of infection, take it to a licensed and reputable computer specialist. Do not use the computer or mobile device or allow it to connect to a network, even if it appears to be operating normally.
In the event of a scareware attack, users should also take extra steps to safeguard against potentially compromised information. This may include:
- Changing passwords or other log-in credentials
- Performing a scan on other personal devices to ensure they were not inadvertently compromised
- Requesting new credit cards from your bank or financial institution
- Periodically checking your credit report to ensure you were not the victim of fraud or identity theft
Can scareware be removed?
For an individual user, the best way to deal with scareware is to avoid it in the first place. By recognizing the signs of a scareware scam, it is possible to sidestep these cyber threats before any removal is needed.
It is important to keep in mind that reputable antivirus software programs typically do not notify customers of a security incident via pop up ad—and none will require the user to share log-in credentials or credit card information within a pop up window.
Many of the tips offered to avoid scareware scams are similar to the best practices used to prevent malware and spoofing attacks:
- Never click links or download files from pop up ads or unfamiliar email senders.
- Install a pop up blocker and spam filter which will detect many threats and even stop scareware pop up ads and infected emails from reaching your device.
- Invest in cybersecurity software from a reputable antivirus vendor and ensure all installations are up to date.
- Log into your account through a new browser tab or official app—not a link from a scareware alert, email or text message.
- Only access URLs that begin with HTTPS.
- Never share personal information, such as account numbers, passwords or credit card details, via phone, email or unsecured site.
- Use a password manager, which will automatically enter a saved password into a recognized site (but not a spoofed site).
- Enable two-factor authentication whenever possible, which makes it far more difficult for attackers and scareware scammers to exploit.
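The HTTPS and "new browser tab, not the alert's link" advice above can be expressed as a simple pre-flight check before entering credentials anywhere. The trusted-host set here stands in for your own bookmarks and saved sites; the names are purely illustrative.

```python
from urllib.parse import urlparse

def safe_to_enter_credentials(url: str, trusted_hosts: set) -> bool:
    """Return True only when the URL uses HTTPS *and* its host is one
    you navigated to yourself (e.g. a bookmarked login page), never a
    link followed out of a pop-up alert, email, or text message."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in trusted_hosts
```

This is the same logic a password manager applies implicitly: it only auto-fills on a host it has seen before, which is why it refuses a spoofed look-alike site.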
Preventing scareware attacks at the enterprise level
At the enterprise level, protecting against scareware attacks will be similar to protecting against malware, ransomware and other cybersecurity threats. These attack techniques are constantly evolving, making protection a challenge for many organizations. Follow these best practices to help keep your operations secure:
Train all employees on cybersecurity best practices
Employees are on the front line of your security. Make sure they follow good hygiene practices — such as using strong password protection, connecting only to secure Wi-Fi and being on constant lookout for phishing — on all of their devices.
Keep the operating system and other software patched and up to date.
Hackers are constantly looking for holes and backdoors to exploit. By vigilantly updating your systems, you’ll minimize your exposure to known vulnerabilities.
Use software that can prevent unknown threats.
While traditional antivirus solutions may prevent known scareware and ransomware, they fail at detecting unknown malware threats. The CrowdStrike Falcon® platform provides next-gen antivirus (NGAV) against known and unknown malware using AI-powered machine learning. Rather than attempting to detect known malware iterations, Falcon looks for indicators of attack (IOAs) to stop ransomware before it can execute and inflict damage.
Continuously monitor the environment for malicious activity and IOAs.
CrowdStrike® Falcon Insight™ endpoint detection and response (EDR) continuously monitors endpoints, capturing raw events for automatic detection of malicious activity not identified by prevention methods and providing visibility for proactive threat hunting.
For stealthy, hidden attacks that may not immediately trigger automated alerts, CrowdStrike offers Falcon OverWatch™ managed threat hunting, which comprises an elite team of experienced hunters who proactively search for threats on your behalf 24/7.
Integrate threat intelligence into the security strategy.
Monitor systems in real time and keep up with the latest threat intelligence to detect an attack quickly, understand how best to respond, and prevent it from spreading. CrowdStrike Falcon® Intelligence automates threat analysis and incident investigation to examine all threats and proactively deploy countermeasures within minutes. | <urn:uuid:d4e4d51d-778c-4dcc-ace2-a50adc5302bc> | CC-MAIN-2022-40 | https://www.crowdstrike.com/cybersecurity-101/malware/scareware/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00317.warc.gz | en | 0.91986 | 1,532 | 3.09375 | 3 |
What Is Network Security Assessment?

According to the FBI, about 4,000 ransomware attacks have occurred every day since January 1, 2016. Antimalware software isn’t enough to keep dangers at bay. You must carry out a network security audit. But, first and foremost, what is a network security assessment?
The goal of a network security assessment is to look at the network’s security. It’s a network audit or study that looks for flaws or vulnerabilities. It determines which network components require immediate security attention.
It is a method of determining which network components offer a security risk. Vulnerability scanning is another word for network security assessment. A vulnerability scanner and a network assessment tool are similar.
In the next section, you’ll learn more about network security evaluation. You’ll be familiar with the objectives and types of network security evaluations. You’ll also learn about the advantages of doing a network security audit.
When someone gains illegal access to sensitive data, it is called a data breach. Network security evaluation is all about preventing this scenario from occurring. A network security evaluation examines the network’s security setup. This is to ensure that it meets security requirements.
Every procedure has a purpose or goal. An evaluation of network security is no exception. Some of the objectives of a network security assessment are as follows:
- It locates the points of entry and security flaws in your network.
- It detects flaws or vulnerabilities in a variety of programmes, files, and databases, among other things.
- It assesses the impact of an attack from both inside and outside the organisation.
- It assesses network security’s ability to identify and respond to assaults.
- It serves as evidence of support for the advancement of network security.
Consider this a network security assessment checklist to help you out.
The use of network evaluation software allows security vulnerabilities to be resolved more quickly. There are two types of network security evaluations:
| Type | Description |
| --- | --- |
| 1. Vulnerability assessment or basic security audit | This is the traditional approach of finding bugs and vulnerabilities in a network. This process checks both the internal and external parts of a network for any sign of weakness. It also shows the areas where security risks will happen and offers possible remediations. |
| 2. Pen test or penetration testing | Pen testing is an actual simulation of cyber attacks on a network. To beat the black hat hackers, you must think and act like one as well. This examines the true strength of a network. Pen testers and ethical hackers (white hat hackers) are the ones who conduct it. |
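As a toy illustration of the very first step of a vulnerability scan, the sketch below attempts TCP connections to a list of ports and reports which ones accept. Real scanners such as Nmap do far more (service and version detection, scripted vulnerability checks), and anything like this should only be run against hosts you are authorised to assess.

```python
import socket

def scan_open_ports(host, ports, timeout=0.3):
    """Attempt a TCP connect to each port on `host` and return the
    ports that accept the connection. connect_ex returns 0 on success
    instead of raising, which keeps the loop simple."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

An assessment tool would then map each open port to the service behind it and check that service's version against known-vulnerability databases.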
Businesses and organisations must do a network security evaluation. Anyone who runs or is part of a network is at risk. Network security should always be the highest priority. It is vital to discover a network’s flaws before someone else does.
What is a network security evaluation, and how does it benefit you? To answer this question, you must first comprehend the advantages of completing a network security evaluation. The following are the main advantages:
It detects the network components that need to be safeguarded. You learn what software or programme has to be updated or configured properly. Patching or updating software is a critical operation. A patch is a collection of fixes for known faults and flaws. Software upgrades or updates are also included in a patch. The patching method varies depending on the operating system. Patch management software can also assist you.
An evaluation of network security looks for evidence of data breaches. This will notify you if someone is attempting to breach your system or network in advance. It aids administrators in conducting comprehensive scans of client networks. This includes both servers and endpoint devices. Laptops, tablets, and smartphones with a network connection are examples of endpoint devices.
It keeps you informed about the most recent security threats. Being aware of and knowledgeable about cybersecurity is quite beneficial. It is possible to secure your network by holding awareness lectures and meetings. An educated mind is the most effective weapon against security risks.
It also demonstrates to your clientele that you are concerned about their safety. This demonstrates your commitment to protecting their personal information. This fosters client loyalty and satisfaction. A satisfied consumer is a sign of growth and profit for your company.
You now know what network security evaluation is. You were also aware of its objectives and sorts. You’ve now learned about the advantages of network security evaluations. | <urn:uuid:9a73c5b3-bea3-4acd-894c-653c3de2712f> | CC-MAIN-2022-40 | https://cybersguards.com/what-is-network-security-assessment/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00317.warc.gz | en | 0.931515 | 942 | 3.046875 | 3 |
How To Prevent Ransomware Attacks: An Essential Guide
The Internet has changed the world by giving unlimited access to information and global connections. The government, educational institutions, and businesses depend on the web to carry out their daily functions.
Unfortunately, the web is not entirely safe. Security threats are the dangerous side of the Internet. They can lower productivity and damage the reputation of affected organizations. While some cyberattacks are subtle attempts to steal data, there are ransomware attacks that take money from an organization or an individual. | <urn:uuid:fb2b6682-cb96-46ed-b866-52b1e23e4f6d> | CC-MAIN-2022-40 | https://www.mcafee.com/en-us/antivirus/how-to-prevent-ransomware-attacks.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00317.warc.gz | en | 0.922621 | 104 | 2.875 | 3 |
Cyber security and cyber defense sound a lot alike. Some people even prefer to call the latter cyber security defense. However, they’re not the same. I recently had the chance to respond to a LinkedIn post from Larry Cole about terminology for Cyber Security vs Cyber Defense. The conversation with Larry really hits home regarding what we are all doing with technology and services: defending what we consider valuable. I think we have all been wrong in calling it Cyber Security. It’s time to start saying Cyber Defense and act accordingly.
In this article, we’ll cover the cyber defense definition and how it interacts with cyber security. Let’s consider the impact of the distributed encryption attacks like WannaCry, Petya, NotPetya and all the other variants. They have been using encryption offensively to attack what we consider valuable: the information that we use and technology to create, distribute, and enrich ourselves in both a corporate or personal way.
What Is Cyber Security Defense?
Cyber security defense combines the concepts of cyber security and cyber defense into one whole moving machine. Cyber defense is the strategy used to protect networks or systems and the information they contain. This is usually done with network detection and response, firewalls, key management, and more. The goal of cyber defense is to guard networks, identify potential problems and report incidents inside the networks. Cyber security consists of the solutions that help ward off threats.
What Is the Difference Between Cyber Security and Cyber Defense?
Protecting networks from attackers is an ongoing contest. Every network has vulnerabilities that could be exploited, and cyber security defense has to find and close those security flaws before an attacker can take advantage.
Before we go too much further, let’s clarify some definitions for “Security” and “Defense” so we can use that to drive the rest of this post. For fun, let’s also throw in one more word: “Attack.”
- Security: The state of being free from danger or threat.
- Defense: The action of defending from or resisting attack.
- Attack: An aggressive and violent action against a person or place.
If we apply “Cyber” to either definition, we start seeing how a word changes perspective and objectives. When looking at the meanings of cyber security vs cyber defense, cyber security speaks to solutions that make you free from danger or threat. Cyber defense speaks to solutions that actively resist attack. In other words, we can define the term ‘cyber defense’ as a proactive solution to prevent, guard against, and respond to cyber threats and attacks. Cyber defense and cyber security are both important for keeping an organization’s data safe. No need to pit them against each other; cyber defense and cyber security should be combined to provide overall cyber security defense.
Understanding the Dangers of Cyber Attacks
Everyone knows that cyber attacks aren’t a good thing. The ramifications of stolen data are often immense: identity theft, stolen banking information, privacy breaches, and more. It’s important that cyber security defense addresses vulnerabilities with active resistance.
Before we dive into what active resistance means, let’s look at recent ransomware attacks. Considering how WannaCry, Petya, and NotPetya were deployed, each package had objectives whether financial or otherwise. In terms of the NSA tools released by the Shadow Brokers, they were delivery systems as defined by The Register here.
These tools were developed by the NSA to deliver exploits in SMB1, SMB2, RDP, and IMAP payloads that attack systems to destroy, disrupt, or disable targeted systems. WannaCry, Petya, and NotPetya’s use of encryption as an attack against businesses and individuals is not particularly sophisticated despite being effective. This is made self-evident by the challenges the attackers had in coordinating the release of key material for those who opted to pay the ransom. In the case of NotPetya, it looks like the payment behavior isn’t even a working feature – they need better QA/QC. So what we have is the equivalent of a North Korean warhead on an American missile.
Even with the less-than-stellar attack payloads, these weapons took advantage of systems without cyber defense solutions in place. They orchestrated an attack that took advantage of how we work – by exploiting flaws in earlier versions of communications protocols like SMB1. This presents a challenge to anti-virus, firewalls, and other technologies that focus on observation and restriction. The cyber security challenge in light of the above creates a scenario where technology’s requirement to deliver “freedom from threat” correlates with “inability to work”. If we can’t work, what’s the point?
Why is Cyber Defense Necessary for Organizations?
Without a cyber defense strategy, networks are left undefended from data breach campaigns. Cyber defense is necessary for businesses and organizations to protect confidential business and consumer data. This data can be dangerous in the wrong hands and cyberattacks can end up costing your business a great sum of money. For example, without cyber defense, hackers could access your customer’s private information. This could lead to a decrease in customers, less trust in your brand, and even potential legal issues. Cyber defense is important for all industries, including government organizations, telecoms & ISPs, healthcare, financial & banking institutions, industrial and smart metering organizations, the automotive industry, and more.
Applying Cyber Defense
But there is hope! The concept of Cyber Defense — Cyber Active Resistance, can be applied with the same construct as the delivery systems used by the recent spike in ransomware attacks. Just like on the battlefield, Cyber Defense is an act in coordination and resistance. The differences are what technology is used and how the technologies are coordinated to respond to a threat. Just as soldiers have the means to coordinate artillery, machine guns, and grenade launchers to respond to an attack, we need the capacity to coordinate the myriad of cyber security technologies such as firewalls, systems management, identity management, and encryption management to address cyber threats as they occur.
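To make "coordinated active resistance" concrete, here is a deliberately simplified sketch in which one detection event fans out to several cooperating responders. Every class, component name, and action string is invented for illustration; this is not VaultCore's or any vendor's actual API.

```python
# Hypothetical sketch: one detection event triggers coordinated
# responses across several security systems at once.

class Responder:
    """Wraps one security system and the action it takes on an event."""
    def __init__(self, name, action):
        self.name, self.action = name, action

    def respond(self, event):
        return f"{self.name}: {self.action(event)}"

def on_ransomware_detected(event, responders):
    """Fan the event out so firewall, key manager, and systems
    management react together rather than in isolation."""
    return [r.respond(event) for r in responders]

responders = [
    Responder("firewall", lambda e: f"block {e['src_ip']}"),
    Responder("key-manager", lambda e: f"revoke keys for {e['host']}"),
    Responder("sysmgmt", lambda e: f"isolate {e['host']}"),
]
actions = on_ransomware_detected(
    {"src_ip": "203.0.113.9", "host": "fileserver01"}, responders)
```

The point of the sketch is the shape, not the code: each security technology exposes an action, and orchestration means invoking them together in response to a single detection.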
At Fornetix, when we talk about VaultCore’s ability to provide key orchestration, we mean coordinated, actively controlled key management. We learned that if you use standard protocols (KMIP, PKCS#11, CEF, etc.) and deploy them to actively interact with other systems beyond the “Want Key / Have Key” paradigm of encryption key management, you create solutions where you can use encryption key management to actively resist — DEFEND — against an attack by allowing the technology to interact, coordinate, and respond to other systems. Demonstrating that existing security technology can be coordinated like NSA capabilities, Racktop’s Secure Data Protection Platform, VMware vSphere, Seagate’s ClusterStor CL220, and even tactical radios can use orchestrated encryption key management for coordinated active resistance against attackers.
Cyber Defense extends to coordination of patching strategy, leveraging analytics and machine learning, and even how we assert our identity. Each security technology plays its part and becomes an asset to coordinate to respond to Cyber Attacks as they occur. Coordinated, orchestrated defense lets us work with confidence, knowing that the enemy at the gates will be answered.
Ladies and Gentlemen: Cyber Security is dead, long live Cyber Defense. | <urn:uuid:f6b9ff7e-a409-44f1-9b19-4d4636f33b91> | CC-MAIN-2022-40 | https://www.fornetix.com/articles/pivoting-from-cyber-security-to-cyber-defense | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00317.warc.gz | en | 0.935657 | 1,529 | 2.96875 | 3 |
Small- and medium-sized businesses are considered top candidates for ransomware attacks. If you are the owner or executive in charge of such a business, there are some things you can do to protect your company against ransomware.
What Is Ransomware?
Ransomware is a specific type of malware. If your computer system is hacked and ransomware is introduced, your entire system, or at least key data and programs, are encrypted. The only way to get your stuff released is to pay off the cyber-kidnappers.
Ransomware began in earnest in 2013 when cyberthieves began to attack solo computers and lock them. For a few hundred dollars, the cybercriminals unlocked the stricken computer and returned the data and/or system to its owner. It wasn’t long before these hackers realized that the business community, especially small- and medium-sized businesses, is a target-rich environment. Large companies may have taken appropriate precautions against ransomware, precautions that are much harder for companies with smaller budgets to put in place before an attack, yet even a small business can afford to pay a far higher ransom than an individual PC owner.
The actual definition of ransomware is malware or malicious software that has been designed in such a manner that it takes control of the victim’s computer system.
According to a 2015 article in the Wall Street Journal, no known count exists of the businesses that have been hacked and have had ransomware capture their system.
Aside from the obvious costs of downtime from locked systems and/or the ransom paid, members of certain regulated industries face another issue: privacy breaches. The United States Department of Health and Human Services (HHS) Office of the Inspector General has been settling privacy breach issues for staggering sums of money; hospital fines are coming in at more than $2 million for data breaches, and the financial industry is subject to similar privacy laws since HIPAA guides the health industry.
Preventing Ransomware on Your Business Computer System
No matter what precautions you decide to take to manage your ransomware risk, somehow or other, your system may be compromised. The absolute best way to solve the problem of being infected with ransomware is to keep an entire backup copy off the premises but that’s easily retrievable. If you do this, you can restore your system with current data and software programs.
One way to do this is storing your backup in a secure place in the cloud. No cases of ransomware have been reported as having been perpetrated on data in the cloud, which makes it a good place to store your backup copy.
The more you use hosted storage, apps and software from the cloud, the more secure your data becomes.
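Below is a minimal sketch of the first half of that backup strategy: bundling a directory into a timestamped archive. The offsite/cloud upload step is deliberately left out as a placeholder, and in practice this would run unattended on a schedule rather than by hand.

```python
import tarfile
import time
from pathlib import Path

def make_backup_archive(src_dir: str, dest_dir: str) -> Path:
    """Bundle src_dir into a timestamped .tar.gz inside dest_dir and
    return its path. A real setup would then upload the archive to
    offsite or cloud storage (omitted here) and verify it restores."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = Path(dest_dir) / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src_dir, arcname=Path(src_dir).name)
    return archive
```

Keeping the archives on storage the compromised machine cannot overwrite is what makes the backup useful against ransomware: an attacker who encrypts the live system cannot also encrypt a copy it cannot reach.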
Ransomware prevention relies on users following some simple rules and your acquiring robust anti-malware and anti-virus software. Let’s look closer at each.
The primary method of infection by malware is when an email is opened from an unknown source. Teach employees to never open or download attachments from an email when the sender is unknown.
Also, educate your employees about the dangers of unknown sites. Cybercriminals are so sophisticated today that it is easy for them to forge security certificates and logos to display on their phony sites. If your employees have never heard of the “business,” tell them not to click anywhere on the site or download anything from it.
Software Protection From Ransomware
You should install specific software programs on your business computer system to help thwart ransomware attacks. Make sure your virus software is robust and kept up-to-date; do the same for your malware protection. Make sure to update the signatures for both types of software on a daily basis.
Because the cost of a shutdown is so great, many companies opt to use a managed services provider for security protection.
Integris is the trusted choice when it comes to staying ahead of the latest information technology tips, tricks and news. Contact us at (888) 330-8808 or send us an email at firstname.lastname@example.org for more information. | <urn:uuid:1f6a756d-4ef4-47d3-9fd4-00e8806e5e53> | CC-MAIN-2022-40 | https://integrisit.com/cloud-key-ransomware-prevention/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00317.warc.gz | en | 0.943957 | 824 | 2.59375 | 3 |
SSL versus TLS – What’s the difference?
SSL versus TLS
TLS (Transport Layer Security) and SSL (Secure Sockets Layer) are protocols that provide data encryption and authentication between applications and servers when that data is sent across an insecure network. The terms SSL and TLS are often used interchangeably or in conjunction with each other (TLS/SSL), but one is, in fact, the predecessor of the other. SSL 3.0 served as the basis for TLS 1.0, which, as a result, is sometimes referred to as SSL 3.1. With this said, is there a practical difference between the two?
See also our Infographic which summarizes these differences.
Which is more secure – SSL or TLS or TLS v1.x?
It used to be believed that TLS v1.0 was only marginally more secure than SSL v3.0, its predecessor. However, SSL v3.0 is very old, and attacks such as the POODLE vulnerability have shown that SSL v3.0 is now completely insecure (mainly for websites using it). Even before the POODLE was set loose, the US Government had already mandated that SSL v3 not be used for sensitive government communications or HIPAA-compliant communications. If that were not enough, POODLE certainly was. In fact, as a result of POODLE, SSL v3 was disabled on websites worldwide and for many other services.
SSL v3.0 is effectively “dead” as a helpful security protocol. Places that still allow its use for web hosting place their “secure websites” at extreme risk. Organizations that allow SSL v3 use to persist for other protocols (e.g., IMAP) should take steps to remove that support at the soonest software update maintenance window.
Subsequent versions of TLS — v1.1, v1.2, and v1.3 are significantly more secure and fix many vulnerabilities in SSL v3.0 and TLS v1.0. For example, the BEAST attack can completely break websites running the older SSL v3.0 and TLS v1.0 protocols. The newer TLS versions, if properly configured, prevent the BEAST and other attack vectors and provide many stronger ciphers and encryption methods.
As of this writing (May 2018), 11% of websites still support SSL v3.0. However, a significant majority, 92%, already support TLS v1.2+. Check the latest statistics over at SSLLabs.
The trend is, of course, to deprecate the older protocols in favor of the new ones. For example, the use of TLS 1.0 by websites that accept credit cards (and services used by the US government) must stop by June 30th, 2018. Instead, they must use TLS 1.1 with TLS 1.2+ strongly encouraged). As early as 2014, NIST (National Institute of Standards and Technology) revised its guidelines and recommended only the use of TLS 1.1+ for government communications. NIST does indicate that TLS v1.0 is OK for non-government communications, even in its latest 2018 draft updates. Everyone should use TLS 1.2 and 1.3 when possible. However, so many organizations still have older computers (more than five years old) that completely turning off TLS 1.0 and 1.1 support could cause significant disruption.
As a result, we see the transition to TLS 1.2 as being more gradual. Sites and systems that need it (for PCI and government work) are leading the way, followed by those whose users are likely to have more recent computed systems. TLS 1.0 support across the internet will steadily decline in the next several years. However, as we see with SSL 3.0, it will not go away in general for some time to come.
But wait: are not TLS and SSL different encryption mechanisms?
If you set up an email program, you may see different options for “no encryption,” “SSL,” or “TLS” encryption for transmission. This leads one to assume that TLS and SSL are very different things.
In truth, this labeling is a misnomer. When making this choice, you are not actually selecting which method to use (SSL v3 or TLS v1.x). You are merely selecting between options that dictate how the secure connection will be initiated.
No matter which “method” you choose for initiating the connection, TLS or SSL, the same level of encryption will be obtained when talking to the server. That level is determined by the software installed on the server, the configuration, and what the program supports.
If the SSL versus TLS choice is not SSLv3 versus TLS v1.0+, what is it?
There are two distinct ways that a program can initiate a secure connection with a server:
- By Port (a.k.a. explicit): Connecting to a specific port means a secure connection should be used. For example, port 443 for HTTPS (secure web), 993 for secure IMAP, 995 for secure POP, etc. These ports are set up on the server, ready to negotiate a secure connection first and do whatever else you want second.
- By Protocol (a.k.a. implicit): These connections begin with an insecure “hello” to the server and only switch to secured communications after the handshake between the client and the server is successful. If this handshake fails for any reason, the connection is severed. A good example is the command “STARTTLS” used in outbound email (SMTP) connections.
The “By Port” method is commonly referred to as “SSL” or “explicit,” and the “By Protocol” method is commonly referred to as “TLS” or “implicit” in many program configuration areas.
Sometimes, you have only the option to specify the port and if you should be making a secure connection or not, and the program itself guesses from that what method should be used. Many old email programs like Outlook and Mac Mail did that. In such cases, you need to know if the program will try an explicit or implicit connection to initiate security and choose the port appropriately (or else the connection could fail).
To Review: In email programs and other systems where you can select between SSL or TLS and choose the port a connection will be made on:
- SSL means a “by port” explicit connection to a port that expects the session to start with security negotiation.
- TLS means a “by protocol” connection where the program will connect “insecurely” first and use special commands to enable encryption (implicit).
- Using either could result in a connection encrypted with either SSL v3 or TLS v1.0+, based on what is installed on the server and what is supported by your program.
- Both connection methods (implicit and explicit) result in equally secure (or insecure) communications.
Sidebar: It is unclear why the “By Protocol” method is referred to as “TLS” as it could result in either TLS or SSL actually being used. It is likely because the folks who designed the SMTP protocol decided to name their command to switch to SSL/TLS in the SMTP protocol to “STARTTLS” (using “TLS” in the name as that is the newer protocol name). Then email programs started listing “TLS” next to this and “SSL” next to the old “By Port” option which came first. Once they started labeling things this way, that expanded to general use in the configuration of other protocols (like POP and IMAP) for “consistency.” I am not certain if this is the real reason, but based on my experience dealing with all versions of email programs and servers over the last 15 years, it seems very plausible.
Both methods ensure that your data is encrypted as it is transmitted across the internet. They also enable you to be sure that the server you are communicating with is the server you intend to contact and not some “middle man eavesdropper.” This is possible because servers supporting SSL and TLS must have certificates issued by a trusted third party. These certificates verify that the domain name they are issued for belongs to the server (all about SSL certificates). Your computer will warn you if you try to connect to a server and the certificate it gets back is not trusted or doesn’t match the site you are trying to connect to.
So then, should I choose TLS or SSL?
If you are configuring a server, you must install software that supports the latest versions of the TLS standard and configure it properly. This ensures that your users’ connections are as secure as possible. Using an excellent security certificate will also help a lot. Choose one with 2048+ bit keys, Extended Validation, etc. You should avoid using SSL v3 and use only strong ciphers, especially if compliance is required.
If you are configuring a program (especially an email program) and have the option to connect securely via SSL or TLS, you should feel free to choose either one, as long as your server supports it.
Note: many web browsers have special (usually hidden) preferences that allow you specifically enable/disable SSL v2, SSL v3, TLS v1.0, etc. In these cases you are actually telling the browser what versions of these security protocols you will allow your browser to use when establishing secure connections. We recommend turning off SSL v2 and SSL v3 (they provide no real security). A few web sites still support SSL v3 only; if you encounter one of these, please let them know that they are seriously behind the time and doing themselves and their visitors a serious disservice by pretending to provide safety while actually only providing broken, ancient encryption.
What happens if I do not select either one?
If neither SSL nor TLS is used, then the communications between you and the server can quickly become a party line for eavesdroppers. Your data and login information are sent in plain text for anyone to see; there is no guarantee that the server you connect to is not some middle man or interloper. For more on this, see the case for email security.
Does LuxSci support these security protocols?
SSL/TLS form the basis of client-server security used by LuxSci for all of its services. Our web servers do not support SSL v3.0 and support TLS v1.2. We use only strong, NIST-recommend ciphers for compliance reasons. We offer a variety of ports for connecting securely to POP, IMAP, and SMTP using both implicit and explicit methods for establishing TLS encryption. LuxSci also offers WebMail over SSL and provides SSL for web hosting clients.
To ensure the integrity and security of your data, LuxSci strongly recommends taking advantage of our secure capabilities, such as the enforced use of PGP, S/MIME, TLS, and email Escrow protocols.
What about TLS v1.3?
TLS v1.3 is the latest and greatest version of TLS. It became an internet standard on March 25th, 2018. According to NIST, organizations should plan to support TLS v1.3 by January 1st, 2020, or sooner.
TLS v1.3 brings many significant changes over TLS v1.2. Some of these include:
- All new ciphers. The ciphers used with TLS v1.3 are incompatible with previous versions of TLS.
- Drop weak security. Many things are known to be cryptographically weak, such as MD5, RC4, and weak elliptic curves, have been completely dropped, so it will be impossible to use them with TLS v1.3.
- Drop seldom-used features. Features that are little used, like compression and “change cipher” ciphers, have been dropped to simplify and strengthen the protocol.
- Faster. TLS v1.3 speeds up the client-server negotiation of security, making secure connections faster to initiate.
- No Corporate/ISP Eavesdropping. With TLS v1.3, it is no longer possible for organizations to seamlessly monitor secure connections by passively decrypting and re-encrypting them.
What is next? Who knows, maybe “TLS v1.4” will start to include some of Google’s New Hope post-quantum algorithms. | <urn:uuid:2929ccf2-fb09-4e82-afd8-9ba4b3b2a78d> | CC-MAIN-2022-40 | https://luxsci.com/blog/ssl-versus-tls-whats-the-difference.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00317.warc.gz | en | 0.942752 | 2,616 | 3.40625 | 3 |
What does a phishing email look like? We've compiled phishing email examples to help show what a spoofed email looks like to prevent against phishing attacks.
Brand deception phishing is the most common example of phishing people will come across. Brand deception phishing occurs when an attacker mimics a trusted company in an email and asks someone for their personal information like credit card numbers or login credentials.
What exactly is phishing?
Phishing is the act of using social engineering to steal information from victims through email or text message by impersonating another person or company. Emails will usually be disguised as a customer invoice, password reset, or login request.
Links and attachments often contain malware that is used to steal sensitive information and gain a foothold inside a company network. For scam emails that request password resets, a fake copy of a real website is used to trick the user into logging in, thus stealing their credentials.
These scams are responsible for millions of dollars of lost revenue every year, and is one of the most popular forms of cybercrime to date.
Examples of phishing emails
There are multiple types of scams that use different techniques to try and steal data from recipients. These can range in complexity, payload type, and how hard they are for an average person to detect. Let’s review a few of the most popular types of phishing emails.
Domain spoofing can be done directly to the email header, when the attacker tries to actually use and send from our example banktrust.com. Email authentication, specifically DMARC records, can be used by receiving mail servers to check to ensure that the server that sent the email is allowed to send emails on behalf of that domain.
DMARC records have a line of text that contains all of the servers that are allowed to send on behalf of that domain. When an email is received the receiving mail server can run an DMARCcheck on that domain to ensure that the server is listed as authorized to send. If the server is not authorized to send on behalf of that domain the DMARC check will fail.
Without DMARC email authentication, attacks would run rampant across the internet. Luckily DMARC is a widely adopted standard and in use almost everywhere.
Lookalike domains are when an attacker uses a domain that appears to be legitimate but is actually different and uses slightly altered letters and numbers that make it hard to tell the difference. For example, a scammer trying to impersonate the domain of banktrust.com may register the domain banktust.com (note the slight difference in spelling) and begin to send password reset emails from that address.
Lookalike domains can also be applied to websites as well. Using the banktust.com example, an attacker can send an email from that fake address that points to a fake website that is a clone of the real banktrust.com. The page is monitored by the attacker and once the victim enters their credentials, that information is stolen.
Some domain spoofing attacks are quite sophisticated and utilize techniques such as cross site scripting (XSS) attacks that make identifying fake URLs and web pages even more difficult.
While most email scams use thousands of messages to find a few victims, spear phishing takes the complete opposite approach. By extensively researching a target company, attackers customize a spear phishing campaign around how that company operates in an attempt to seem as legitimate as possible.
This could include registering a similar looking domain, using stolen email signatures, company logos, and even names of individuals that are known within the company. Stolen information is often leveraged to craft messages that appear real and urgent. Sometimes these messages go as far as learning the company structure and exploiting the hierarchy to create false urgency in the phishing email.
Spear phishing can impersonate both internal staff members, or known and trusted vendors that the organization has a relationship with. Since spear phishing doesn’t rely on a single tactic to succeed it can be tough for an untrained eye to spot a problem. Implementing a phishing defense system can help automatically detect and stop these types of attacks.
You can think of whaling as an even more targeted version of spear phishing, where the attackers now begin to impersonate senior representatives within a company. They use this knowledge of company hierarchy to pressure other staff into sending funds, resetting passwords, or clicking on links without hesitation.
With whaling there is usually a sense or urgency or pressure that appears to come from a senior staff member within the company. The victim, which is usually just an employee at the company, will feel pressured into completing the task quickly.
This is sometimes also referred to as CEO fraud, as the whaling usually aims to impersonate c-level executives within an organization in order to gain access to the most valuable information a company has access to.
Whaling techniques have evolved over the years and could request the victim to do a number of tasks such as reset their login passwords, buy gift cards, or forward sensitive information such as tax forms or other company documents.
Attackers can impersonate staff relatively easily by searching on the target company website for information, and guessing the formatting of the email account they wish to impersonate. Stolen company logos, signatures, and phone numbers are also used to make these emails appear more legitimate.
Consumer phishing impersonates well-known brands and then targets consumers prompting them to update their account information, or fix an issue with their account. This can lead the victim to either click on a malicious link that steals their credentials, or call a fake hotline where scammers will ask the victim for their personal information, and sometimes even their credit card numbers.
Like all forms of this scam, the attack relies on impersonation, but chooses to masquerade as already known and trusted companies in hopes that recipients of the phish will be less on guard when the message comes from a brand they like and trust.
How to identify phishing emails
No matter what type of email you may encounter, there are few ways you can identify if that email is legitimate or not.
Carefully check the sending domain. This is often the most important step in identifying a scam email. Many times recipients will glance at the From field and skim through the rest of the email. Attackers can format emails to look identical to internal emails using signatures, logos, and fonts that all look like a real email.
When DMARC email authentication is in place to block domain spoofing, attackers will leverage lookalike domains to confuse victims. If an email doesn’t seem right, spend an extra minute or so verifying that the email address in the From field is actually who you think it is. If you’re still not sure, consider contacting your IT department or contacting the sender by phone using a number that you already have on file not listed in the email.
Preview links before clicking. Even if an email appears to be legitimate, it’s best practice to preview a link before clicking on it. This can be done in almost all email browsers by hovering your mouse over a link for a few seconds without clicking. If the link appears to be directed to a strange domain, or something that looks gibberish, it’s best to take caution and not click the link.
Even with the link preview technique, attackers can perform redirects from that page. For example, the email link could go to Dropbox, which is a real service. But within that dropbox link is a document that contains another link that redirects you to somewhere else that attempts to install malware or steal your information.
Does the email suddenly feel urgent? Urgency and scare tactics are used in most phishing attempts in order to scare victims into acting quickly without thinking their actions through. Before taking action, review the sender's addresses to verify if it is real. Official services such as Chase will come from chase.com or jpmorgan.com. If you think the email is real but still aren’t 100% sure, consider calling the service or person from a number you already know, or find outside of the email in question.
Be on the lookout for misspellings. In the case of mass phishing campaigns, emails are usually poorly spelled or contain other punctuation errors. Many of these massive scam operations are stationed in non-english speaking countries, which forces them to use translators which don’t always work as intended.
Keep a lookout for low resolution branding images. When images are stolen for signatures in emails, they are usually low resolution screenshots that are simply re-pasted into the email. While this doesn’t always mean an email is a phish, it should raise a red flag for you to investigate the email further.
How to report phishing emails
If you’ve received a scam email or may have had your information stolen from a phishing attack, you can report this incident to the Federal Trade Commission.
If a scam email was sent to your inbox, you can forward it directly to the FTC Anti-Phishing Working Group at [email protected]. If the message was a text message you can forward it to SPAM (7726).
You can also file a report of the attack by visiting http://ftc.gov/complaint.
How to protect against phishing emails
Unfortunately you can’t simply download a program and be safe from email-based attacks. You need a complete phishing response and defense system in place. Since these attacks are constantly evolving you’ll not only need to ensure that your email servers are configured correctly, but that your staff is kept up to date with the latest email threats and company policies.
Two factor authentication can be paired with threat detection to help stop compromised information from being accessed outside of the organization. Two factor authentication relies on combining what a user knows, with something that user has, such as their cell phone. Even if credentials are stolen, the attacker will need the user’s cell phone in order to login.
The Agari Advantage
Agari offers a turnkey solution to combat phishing email attacks through automatic response, remediation, and containment. The system utilizes both signature-based security as well as behavioral analysis to stop malicious files and bad actors at the same time.
If you’re looking to learn how to keep your business safe from email-based attacks, see how Agari Phishing Defense works in action and sign up for our newsletter for the latest in email security. | <urn:uuid:17b1f80a-dc63-4a54-af51-29d74643e4b0> | CC-MAIN-2022-40 | https://www.agari.com/blog/common-phishing-email-attacks-examples-descriptions | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00317.warc.gz | en | 0.941126 | 2,166 | 3.078125 | 3 |
With AI-powered adaptive signaling technology at nearly 150 city intersections, Pittsburgh plans to improve traffic flow and reduce idling times for buses.
By revamping close to 150 city intersections with adaptive signaling technology, Pittsburgh plans to improve traffic flow and decrease idling times for city buses.
The initiative will incorporate technology from Rapid Flow Technologies’ Scalable Urban Traffic Control program (Surtrac), an artificially intelligent adaptive signal control system first deployed in 2012, into eight high-priority traffic corridors, or “Smart Spines,” throughout Pittsburgh.
Surtrac uses cameras, sensors and radar technology to first capture real-time traffic conditions at each intersection. With that data, it creates an optimization plan for moving traffic through the intersection, which it then sends to the signal controllers in a specific intersection, to nearby signals and to connected vehicles.
“The original application was to decrease congestion and idling time in the neighborhood of East Liberty” where a number of redevelopment projects were already in progress, said Stan Caldwell, executive director of Carnegie Mellon University’s Traffic21 Institute.
The biggest takeaway from the decentralized adaptive signaling pilot was that adaptive signaling worked, even when circumstances unexpectedly changed.
At one point during the pilot, the nearby Highland Avenue bridge was shut down for close to six months, and “the signals never had to be retimed,” Caldwell said. “The signals changed themselves for the time period that the bridge was out, and the day that it was fully operational again, the signals readjusted themselves automatically.”
This demonstrated that the technology could also adapt to changes in population or land use patterns. Furthermore, the solution likely would save money because resource-strapped municipalities would not have to field engineers to periodically retime signals.
The project is now expanding to support integration of bus traffic.
By connecting buses to Surtrac, the system will have more data to analyze to improve traffic efficiency, Caldwell said.
“The idea is to have individual units [like buses] communicating with each other and informing decisions being made in real time,” CMU Metro21 Smart Cities Institute Executive Director Karen Lightman said. “The main thing is to give priority to high-occupancy vehicles so that instead of having a dedicated space on the roadway for [public transport], the system can improve traffic flow to the point where we don’t need a physical bus lane.” | <urn:uuid:761d563f-2675-428b-9731-7822ec22d9e7> | CC-MAIN-2022-40 | https://gcn.com/cloud-infrastructure/2022/05/signals-along-smart-spines-optimize-traffic-flow/366767/?oref=gcn-next-story | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00517.warc.gz | en | 0.947778 | 498 | 2.796875 | 3 |
What is stranded capacity?
Capacity is a term we often use to measure the available power, cooling, and space within a data centre. Stranded capacity is installed capacity (cooling, power, space) that cannot be used to support the critical load. When assessed individually each of these parameters (cooling, power, space) do little to depict the actual capacity of the data centre. As each parameter is interrelated, they must be assessed as a whole to ensure a balance is maintained between each. Stranded capacity occurs when an imbalance occurs between the 3 parameters that restrict the overall power efficiency of the data centre.
Power and cooling capacity become stranded when predicted IT power densities never materialize. For example, many sites that were designed for 100 W/sq. ft or more continue to operate at 50 W/sq. ft or less. In this case whilst there is limited physical space available, 50% of the rack power is still available resulting in stranded power as there is no space to install additional infrastructure at the current rack density.
As racks are not operating at full power density, the cooling capacity installed to support the unused 50% of the load is also stranded. Vice versa, available power can become stranded due to inability to install cooling infrastructure where it is needed. When it comes to stranded space capacity, this typically comes down to footprint of infrastructure, the larger the footprint the less W/sq. ft of available power.
The bottom line is that stranded capacity reduces the operating efficiency of the data centre and should be carefully monitored to ensure data centres are operating with an optimum power, cooling and space balance.
The Problem with Traditional Data Centre Capacity Planning
As previously highlighted, the key to avoiding stranded capacity is striking an efficient balance between the available space, power, and cooling capacities. The actual capacity of the site will be limited by the most restrictive parameter of the 3.
However, accurately planning data centre capacity is easier said than done, with the constant balancing act between power, space and cooling presenting a challenge in the design and modification of data centre facilities.
Traditionally data centre capacity planning relied on this formula:
Future Resources = Current Usage x (1 + Normal Growth + Planned Growth) + Headroom
Although the use of a standard formula would seem to simplify data centre capacity planning, it does quite the opposite as it fails to account for the variability of load on the servers. Granted this formula is fine for standard office buildings and other static loads but data centres require a more advanced method of planning. Critical load requirements in the data centre can be very unpredictable due to rapid changes in demand and as the demand for data increases, data centre infrastructure must evolve to facilitate higher power capacities and cooling requirements. If a static formula is relied upon for capacity planning when demand is so volatile, stranded capacity can almost be guaranteed as power requirements continue to grow and the balance between power, space and cooling become mismatched. It is important to note that data centre capacity planning is not a one-time task, but instead it is a continuous process of monitoring and modification to strike the optimal balance to support efficient data centre operations.
Avoiding stranded capacity
To avoid stranded capacity you must first identify the limiting factor and modify the capacity of the remaining 2 elements to rebalance each of the 3 defining parameters.
Space as limiting factor
When physical space within a data centre facility has been exhausted, modular e-houses are an efficient solution to facilitate continuous upscaling of power capacity. Additionally, the introduction of low footprint infrastructure within the data centre can help to optimise the white space and eliminate wasted square footage.
Power as limiting factor
Overbuilding of data centre capacity is a major issue and is largely due to the notion that data centres need to be armed with enough capacity to meet unforeseen demand. In reality, this can lead to excessive stranded capacity which can be very costly. It is important to right size your data centre to support optimal operating efficiency where cooling, power and space are in balance.
Cooling as limiting factor
If cooling capacity is not able to efficiently cool the power load, the data centre’s PUE will suffer; systems will become overheated and some of the available power will be wasted as heat output.
Underfloor cabling with limited space for cooling can contribute to stranded power capacity.
Power is critical to the reliability and availability of IT services. As a result, data centre owners are under mounting pressure to continuously monitor their infrastructure to plan for future capacity requirements and ensure system health to mitigate downtime. Data centres are very complex entities and therefore manual procedures and limiting calculations must not be relied upon to manage infrastructure requirements. It is vital that data centre owners have the tools required to better leverage their data to make informed capacity planning decisions that support data centre efficiency
Thanks to DCIM software, data centre capacity planning has been revolutionized enabling the automation of system monitoring and data analysis to facilitate accurate and fluid planning where capacity is optimized to the needs of IT and business services.
Alongside DCIM software, successful and efficient capacity planning can be done in the following steps:
- Outline all of the required components for new equipment to be implemented and installed in the facility.
- Measure the current usage level of the required component. Are they fully utilised or do they have additional capacity?
- Create a list of capacity requirements
- Build a plan, outlining how you will provision for the new equipment
- Issue work orders to physically provision the new equipment
- Audit the provisions and update the DCIM database to show the level of capacity utilization. | <urn:uuid:f168c272-e98b-415c-8902-546b7d08666c> | CC-MAIN-2022-40 | https://blog.e-i-eng.com/data-center-capacity-planning-what-is-stranded-capacity-and-how-to-avoid-it | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00517.warc.gz | en | 0.924111 | 1,131 | 3.171875 | 3 |
Counterintuitively, open-source OSs are often more secure than closed-source versions. The same worldwide high profile that helps a closed-source OS like Windows gain broad market acceptance also makes it a prime target for cyberattacks. And while the closed-source embedded OSs used for PLCs are relatively obscure, the emergence of Stuxnet almost a decade ago demonstrated that commercially available industrial control platforms are viable targets for cyberattacks.
A benefit of open source is its crowd-sourced nature. The large number of developers involved, far more than a traditional closed-source industrial controller manufacturer can employ, allows vulnerabilities to be identified and addressed quickly.
When open-source OSs are used for industrial applications, they should be custom-built for a specific device and include only the packages the device requires. This streamlining reduces the number of ways the OS can be attacked, a practice known as reducing the attack vectors, or the attack surface.
In addition, the purpose-built OS for an industrial controller should be cryptographically signed by the manufacturer. Only vendor-approved OS builds should be accepted by the controller, guaranteeing the build’s origin and precluding unauthorized OS code alteration.
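The accept-only-vendor-builds rule amounts to a verification step at boot or update time. Real controllers use asymmetric signatures, where the vendor signs with a private key it never ships and the controller carries only the public key; the sketch below substitutes a keyed HMAC from Python's standard library so it stays self-contained, and the key and image values are invented for illustration.

```python
import hashlib
import hmac

def verify_os_image(image: bytes, tag: bytes, vendor_key: bytes) -> bool:
    """Accept an OS build only if its authentication tag checks out.

    A production controller would verify an RSA or Ed25519 signature
    instead; HMAC-SHA256 stands in here so the example runs with the
    standard library alone.
    """
    expected = hmac.new(vendor_key, image, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(expected, tag)

key = b"vendor-signing-key"          # hypothetical key material
image = b"controller-os-build-4.2"   # hypothetical OS image bytes
good_tag = hmac.new(key, image, hashlib.sha256).digest()

assert verify_os_image(image, good_tag, key)             # vendor build: boot it
assert not verify_os_image(image + b"x", good_tag, key)  # tampered image: reject
```

Because an altered image (or a tag produced without the vendor key) fails the check, unauthorized OS code never runs on the controller.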
Modern industrial controllers rely heavily on commercial Ethernet, although many specialized industrial fieldbuses remain in use. Ethernet for industrial controllers is provided by physically wired network interfaces. Wi-Fi devices may also provide connectivity.
For any networking, it is important to understand the concept of a trusted network as opposed to an untrusted network. A trusted network is usually within a private facility and may be an IT-managed network, where all users with access are known. An untrusted network is any network where those who can access it are unknown, like the Internet.
A router is a network device that can be configured to route traffic between any two networks. Many people are familiar with home routers, which move data between the Internet and the devices on the home network.
Industrial applications, on the other hand, call for controllers with multiple independent network interfaces, so that trusted and untrusted networks can be kept separate. One network interface can be assigned to the local trusted network, and another to the external untrusted network. These interfaces must be non-routable, so that no external attacker can connect to the trusted network from the untrusted network (Figure 1).
Another crucial networking concept is a firewall, which provides security by preventing unsolicited traffic from accessing the network, device or host. Typically, local device-originated outbound connections are considered trustworthy and therefore allowed, as are the associated inbound responses. However, other inbound connection attempts from outside are rejected, although the firewall may be configured for certain specific ports to be opened and allow inbound traffic.
An industrial controller should have its own firewall and provide the means to configure it. For industrial applications, the trusted network interface will need the ports associated with control logic, I/O connections or other industrial protocols opened. But the untrusted network interface should typically have all ports blocked except for a secure port allowing authenticated users to access the controller over an encrypted connection. Best practice is to open only the ports specifically needed and to block all other ports, and for the untrusted network interface to block all ports by default (Figure 2).
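On a Linux-based controller, a policy along these lines might be expressed with iptables. This is a configuration sketch only: the interface names (eth0 for the trusted network, eth1 for the untrusted one) and the port numbers are assumptions for this example, not vendor defaults.

```shell
# Block all inbound traffic by default; allow replies to outbound connections.
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Trusted side: open only the ports the industrial protocols need
# (44818/tcp and 2222/udp are EtherNet/IP; substitute your protocols).
iptables -A INPUT -i eth0 -p tcp --dport 44818 -j ACCEPT
iptables -A INPUT -i eth0 -p udp --dport 2222  -j ACCEPT

# Untrusted side: a single secure port for authenticated, encrypted access.
iptables -A INPUT -i eth1 -p tcp --dport 443 -j ACCEPT
```

A controller with a built-in firewall would expose the same decisions through its own configuration interface rather than a shell, but the principle is identical: everything closed except what is explicitly needed.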
Proper network configuration is essential, and it goes hand-in-hand with carefully assigning user access and privileges.
A key feature of modern digital computing ─ whether used on mobile devices, PCs or industrial controllers ─ is the concept of user accounts with assigned access and privileges. Typically, an administrator account must be created initially; it has global privileges. This account must be carefully protected by the owner.
An industrial controller should not offer a default username or password on any account, but instead should require that the administrator select unique credentials upon account creation. Default credentials can be easily obtained and used by anyone, whereas a unique administrator account better protects the controller from nefarious actors. If administrator credentials are lost, the account should not be recoverable, and instead would require a reset of the controller to factory defaults.
The administrator account is used to create user accounts. For an industrial controller, authorized users may be people, but they may also be software services.
Best practices are to create accounts only for the necessary users, grant them only the essential privileges, assign strong passwords and always require authentication for system use. Careful user account management gives the administrator complete and granular control of who and what can access the system, and therefore of who and what cannot (Figure 3).
A common requirement is for offsite users to connect with a controller via the Internet. A secure port that encrypts all data communications and allows only authenticated users to connect can meet this requirement.
Another way is with a separate device on the network capable of creating a secure virtual private network (VPN) tunnel to outside clients or servers. However, setting up a VPN can require extensive involvement and coordination with IT personnel.
A better option is to select a controller with built-in secure VPN tunnel capabilities, giving OT personnel complete control of VPN connections to securely match their needs (Figure 4).
Whether using onboard controller features or site networking devices, best practice for remote connections via any form of untrusted network is to always use a properly configured secure VPN tunnel and to disable it when not needed.
The whole point of equipping controllers with network interfaces is to provide data communication connections. However, the main thrust of this article has been how to prevent connections, at least from unauthorized entities. Communications on the trusted side of a controller are relatively simple, and as discussed, inbound connections over an untrusted network should go through a VPN or be blocked. So how do you communicate data if a VPN is difficult to set up? For example, how does an OEM get data needed from machines at customer sites for billing or maintenance?
The answer is to use outbound, device-originated data communication protocols. One such protocol is MQTT, which uses a publish/subscribe model (Figure 5). As pointed out above, outbound connections are generally allowed through a firewall because they are trusted. The controller is configured to publish data of interest to an external central broker using an outbound connection. Remote users connect and subscribe to the central broker in a similar manner. Because the connection originates from a trusted source, it is allowed through the firewall, and responses are allowed in return, safely permitting two-way data flow. All connections are authenticated and encrypted.
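MQTT itself needs a broker and a client library, but the publish/subscribe flow it relies on can be sketched in-process. The toy broker below illustrates only the pattern (topics, publishers, subscribers), not the MQTT protocol, TLS or authentication:

```python
from collections import defaultdict

class ToyBroker:
    """In-process stand-in for an MQTT-style central broker."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        # In the real deployment, both the controller's publish and the
        # remote user's subscribe are device-originated outbound
        # connections, so firewalls blocking unsolicited inbound
        # traffic do not get in the way.
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = ToyBroker()
received = []

# A remote user subscribes to a machine's telemetry topic (topic name
# is made up for this example).
broker.subscribe("site1/machine7/temperature", lambda t, p: received.append((t, p)))

# The controller publishes a reading of interest.
broker.publish("site1/machine7/temperature", 72.4)
print(received)  # [('site1/machine7/temperature', 72.4)]
```

The pattern decouples the machine from its consumers: the OEM's billing or maintenance systems subscribe at the broker without ever opening an inbound connection to the customer's site.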
Another key feature to look for in an industrial controller is built-in security certificate management. A security certificate basically verifies a machine’s identity to another machine, so an originating machine can be sure it is connecting to the proper destination machine and not an imposter. Certificates can be implemented in various ways and can be generated by the end user or registered through a certificate authority (Figure 6). Industrial controllers should use industry-standard certificate practices, similar to banking and e-commerce sites.
Even with these security provisions in place, there are other best practices to consider.
Other best practices
So far, we have looked at configuration and best practices for system design. However, there also are procedural best practices for improving security.
- Minimum interfaces: The most cyber-secure system is air-gapped and has no interfaces. This isn’t usually practical, but it may be possible if the controller offers an onboard interface. In any case, always remove unnecessary network connections and block unused ports.
- Minimum access: Assign users the lowest possible privileges consistent with what they need to see and do, and require them to log out when inactive – especially administrators. This advice extends to all control system elements, including HMIs, which should be run in read-only kiosk mode whenever possible.
- Development versus production: Restrictions are sometimes relaxed for testing and prototyping. Make sure the controller is completely protected after testing and before being placed into production. Some Linux-based controllers allow users to take advantage of secure shell access (SSH) for developing custom applications. Once development is complete, make sure shell access is disabled.
A secure approach
The best practices outlined in this article are a solid starting point for embarking on any new industrial automation project, or for revisiting one already in service. Good security must be carefully implemented at many levels and is most effective when security provisions are built into automation products, not bolted on. Built-in security features help you implement security quickly with minimal expense.
Every situation is different, and you are most familiar with your applications and network architectures. Built-in network security features for industrial controllers can help you design and maintain a secure system, but ultimately you are responsible for using them wisely in your application and as part of your overall security strategy.
This article appears in the IIoT for Engineers supplement for Control Engineering and Plant Engineering. See other articles from the supplement below.
See additional cybersecurity strategy stories including:
Protect PLCs and PACs from cybersecurity threats
Compensating controls in ICS cybersecurity
Context-focused giving is a method through which corporate philanthropy and strategy are combined to achieve both social and economic gains. The basic idea behind context-focused giving is giving that benefits the environment in which a company operates and, thus, that company's competitive advantage. More specifically, context-focused giving considers the contextual conditions most important to a company’s strategies and industry, and targets their philanthropy toward improving one of these contexts so that the community and the company both reap rewards from the efforts. When companies are able to clearly identify how their philanthropic initiatives are not only creating good for society, but also for the company, charitable expenditures will not suffer from lack of justification in terms of bottom-line benefit.
In their Harvard Business Review publication, "The Competitive Advantage of Corporate Philanthropy," Michael E. Porter and Mark R. Kramer dispel the "myth of strategic philanthropy" in cause-related marketing efforts. Cause-related marketing, or corporate giving campaigns that often include a vague link between a corporation and a non-profit campaign, are largely intended to benefit the corporation's public image, acting as forms of publicity and marketing to generate goodwill. They argue that most corporate giving programs lack any solid connection to a company's strategy, and that "the acid test of good corporate philanthropy is whether the desired social change is so beneficial to the company that the organization would pursue the change even if no one ever knew about it" (Porter and Kramer 8).
Context-focused giving involves careful research and analysis into how Corporate Social Responsibility initiatives create both social benefits and benefits to one or more areas of a company's competitive context: factor conditions, demand conditions, context for strategy and rivalry, and related and supporting industries. Companies can engage in successful context-focused giving by identifying the contextual conditions most important to their strategy and the health of their industry, and developing a giving program that improves this context, creating social and economic benefits. Just as individuals are impacted and shaped by their environment, the same is true for corporations. Context-focused giving provides an avenue through which to benefit both the individual and the company.
Factor Conditions refers to the size, quality and nature of the specialized inputs necessary for a company to operate. This includes a company's capital resources, its physical, administrative, information, scientific and technological infrastructure, and the availability of adequately trained employees along with natural resources. DreamWorks SKG implemented a successful context-focused giving strategy geared toward improving education and training for low-income students in Los Angeles. Partnering with Los Angeles Community College District and local schools, DreamWorks created a multifaceted program that combined classroom learning, mentoring and internships to provide low-income students in the area with the knowledge and skills necessary to work in the entertainment industry. The program had the social benefit of improved education and better employment opportunities in the community (context), as well as the economic benefit of expanding DreamWorks' availability of specially trained workers. Even for the specially trained graduates who did not go on to work for DreamWorks and instead worked for other companies, including competitors, DreamWorks could still count on the benefit of their project in improving the entertainment industry as a whole. DreamWorks is a part of an entertainment cluster, or "a geographic concentration of interconnected companies, suppliers, related industries, and specialized institutions in a particular field..." (Porter and Kramer, 4).
Corporations may choose to focus on the context of demand conditions when developing corporate strategic philanthropy: conditions related to the size of the local market, customer sophistication, and potential areas of growth and change in customer demands and needs, both locally and globally. One area corporations have targeted is improving the sophistication of customers, and thus their demand for more sophisticated products and services. Apple Computer has targeted customer sophistication as part of a long-standing context-focused corporate giving program that provides schools with Apple products. This creates the social benefit of improved education and access to learning products in low-income areas while also expanding Apple's customer base.
Stay tuned for part 2! | <urn:uuid:664daf14-e4ae-4414-b40d-54f1322b481e> | CC-MAIN-2022-40 | https://www.givainc.com/blog/index.cfm/2015/1/15/contextfocused-giving-part-1 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00717.warc.gz | en | 0.951573 | 824 | 2.671875 | 3 |
One of the world’s most crucial and selfless acts is still simply washing your hands.
It’s a familiar situation in a public restroom: You’re on your way in, and someone else is leaving without washing their hands. They see you, and wheel around toward the sink. They start whistling, as if to seem casual, and then give their hands a quick spritz with water.
Even among people who will never see each other again, there's a compulsion to perform a tiny baptism of the fingertips: not enough scrubbing or soap to actually remove a virus, just enough to signal civility. Accordingly, many Americans' standard of what constitutes a washing of the hands is abysmal. Studies have put the average hand-washing time at about six seconds, less than half of what is recommended by global-health guidelines. Only around 5 percent of us regularly wash long and thoroughly enough.
Our failures feel newly relevant as, for the past month, panic has gripped parts of the world over how to stop the spread of a deadly strain of coronavirus—a variant of the common-cold virus. So far, the virus is known to have killed at least 500 people and infected some 25,000 more, primarily in China, where the outbreak began. In response to the crisis, the country has enacted a historically unprecedented quarantine. Streets in the urban heart of Wuhan are seen empty, and people caught outside are berated by drones.
The U.S. government dipped its toe into similar waters on Sunday, ordering a mandatory two-week quarantine of all travelers inbound from Hubei province. Two-thirds of Americans feel that the virus is a “real threat,” according to an NPR poll released yesterday, and a sense of need for forcible action is pervasive. Scientists at the National Institutes of Health have mobilized to work on an emergency vaccine. Face masks have sold out in many places, despite little evidence that they are helpful outside of specific situations.
Amid so much concern and resource allocation, many people remain dismissive of the most widely accepted, simple advice to slow the spread of most viruses. The Centers for Disease Control and Prevention and other agencies around the world have one clear, concise, definitive recommendation: Wash your hands for at least 20 seconds.
Last week on The Daily Show, Trevor Noah captured the standard response to that advice when he joked, “Wash your hands? Scientists always warn us about some new, weird death virus, and then when we say, ‘What’s the plan?,’ they’re like, ‘Uh, wash your hands.’” The audience laughed. “That’s not a plan!”
Hand-washing does seem extremely obvious—which may be the problem. Those of us who’ve lived our entire lives removed from epidemics of cholera and other deadly hygiene-related outbreaks haven’t witnessed the power of hand-washing and take it for granted. But it may be the single most important thing any given person can do to help stop and prevent outbreaks.
Respiratory infections are diseases we very often give to ourselves. People are told to cover their coughs and sneezes, but studies show a vast majority don’t wash their hands after doing so. Someone carrying the pathogenic microbes might shake your hand, or touch a doorknob or desk that you later touch. Once you pick them up, if you touch your face, the circle is complete.
It’s impossible to know exactly how much people have changed their hand-washing habits since the outbreak first made headlines a month ago; comprehensive studies have not yet been published. But America’s general history of focusing less on evidence-based preventive behaviors than on billable treatments does not bode well, nor does our health-care system’s tendency to prize newer, marketable products over the cheap and obvious ones.
To get some vague sense of whether the long-standing 20-second guideline is suddenly resonating widely, I asked people on Twitter whether their hand-washing length has changed in recent weeks. A few people told me that they’re becoming more conscious of others’ behavior—and that they’re especially grossed out when witnessing the three-second spritzes or performative soapless washes. But no one said Yes, I’ve started to actually wash my hands properly. I never really used to do it. While that’s likely not something people are eager to admit, suboptimal standards seem common even among those who you’d think would be most meticulous. “Sometimes researchers who work in labs with viruses don't take that much caution in washing their hands,” Robert Lawrence, a biochemist and science writer, told me.
Following outbreaks always makes me conscious of my own habits, and those of everyone around me, too. I haven’t noticed any changes in the bathrooms I frequent. Subtle shifts could be happening, but I assumed that our HR department wouldn’t let me put a video camera in our office restroom to get a proper sample size.
What would make people want to change? At what point does “I’m really freaked out by this virus” become “I’m so freaked out by this virus that I’m going to regularly wash my hands for at least 20 full seconds”? Even if you have zero fear of the flu or coronavirus, or death at all, there is good reason to spend 20 seconds. Guys have said to me: I didn’t pee on my hands, so why should I wash them? To which I say: Man, the point isn’t to get pee off of your hands. The act is, truly, a selfless one. Hand-washing could help prevent the millions of cases of cold, flu, and gastrointestinal disease that spread around the world each year. In the U.S., we apparently believe we’re too important to spare 20 seconds to play our part in not contaminating others.
Instead of shaming hand-hygiene negligence, it may be more productive to celebrate hygienic awakenings. Part of the solution is developing a routine that everyone enjoys and looks forward to. If washing our hands feels like penance, we will never keep it up. One way is to kill time by singing. This is, no joke, one of the official CDC recommendations: “Need a timer? Hum the ‘Happy Birthday’ song from beginning to end twice.”
Since humming that song as you loom over a sink makes you sound unhinged anyway, you might as well sing. If “Happy Birthday” isn’t feasible because of the melodic range, feel free to try a cooler song I made up: “I’m washing these hands, oh yes I am, yes I am,” to the tune of “The Wheels on the Bus.” You’ll know you’ve sung long enough when the person next to you has sung “Happy Birthday” twice. Then you’re supposed to dry your hands, which I find can be done just by putting your arms out to your sides and spinning around a few times. | <urn:uuid:ef384130-dea5-4cc8-a422-0abda01a378d> | CC-MAIN-2022-40 | https://www.nextgov.com/ideas/2020/02/20-seconds-optimize-hand-wellness/162979/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00717.warc.gz | en | 0.95371 | 1,522 | 2.59375 | 3 |
The AI industry that was once optimistic about data abundance now understands its unsuspected side-effects
Increasingly data-driven enterprises, with their expensive AI models, might have some troubling times ahead. As they chase ever-higher data accuracy, the rise of an infodemic, in which data turns inaccurate, is inevitable. Although data is considered an asset for any business's growth, the growing overuse of computational AI can cause data to become a liability.

Organizations are investing heavily in AI to stay ahead of their competitors. They feel compelled to invest in hopes of better discovery mechanisms and authentication processes. But while the rising financial and environmental costs of AI consume businesses, the data-heavy approach will cause a steady decline in productivity in the long run.
OpenAI research indicates that even if AI has been efficient in data science goals, the community states that they require more compute to achieve success. Furthermore, an MIT study suggests that deep learning will reach its computational limit as it heavily relies on increased compute by tweaking existing techniques or discovering new computational methods.
To build such AI models, trial and error during training will require ever more compute resources. If three years of algorithmic improvement is equivalent to only a 10x increase in computing power, an imminent threat looms over enterprises.

Experts believe that even if the community creates a state-of-the-art AI model, it cannot guarantee beneficial and successful results. The model might focus on misleading variable correlations rather than identifying the hidden correlations that could actually provide insight.

Data practitioners must realize that their efforts are only expanding AI capabilities that already exist and already produce efficient results. A growing number of companies invest in expanding algorithmic efficacy by experimenting with new technology and innovations. The issue, however, is that these algorithms cater only to specific tasks. Linear progress, rather than genuinely transformative advancement, is one of the key reasons behind the infodemic.
To find a better working solution that can augment AI resources, experts suggest the integration of artificial intelligence and the human touch. AI should be implemented as a rule-based algorithm that hard codes human judgment. It can work wonders. Such models could be used on security applications that would need less training data. Many security vendors have already begun to favor AI-driven solutions that enhance human judgment.
While it is still early to evaluate the AI-human judgment combination success rate, AI experts urge the implementation of hardware and purpose-built cloud instances for AI computation. Some hardware and software are tailor-made for AI applications and are capable of unparalleled computations, processing of graphs, and matrix multiplications.
Conventional beliefs about maximum data providing maximum results may not hold true for very long. Industry experts are beginning to identify the dark side of data-hungry brands. Enterprises can find themselves to be more productive, actionable, and cost-effective by being wary of data abundance. | <urn:uuid:d3ab9e59-2925-4a18-82f1-32393a705428> | CC-MAIN-2022-40 | https://enterprisetalk.com/featured/the-future-of-ai-abundance-of-data-can-be-detrimental/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00717.warc.gz | en | 0.939057 | 572 | 2.6875 | 3 |
Automation and AI tech changing the way the world moves
It all started with the invention of the wheel in the 4th millennium BC – human ingenuity transformed the way we move around our world and transport our goods. It was a watershed moment that enabled a revolution in human progress spanning construction, farming and migration. Today, we are on the cusp of another watershed moment in transport, propelled by the convergence of emerging digital technologies – and the maturing of the infrastructural services and skills to master them.
How we move around the world and transport our goods is ever more critical today, for people and planet. According to the UN, nearly 70% of the world’s population will live in cities by 2050. This makes the performance of modern networks to support connectivity as we move through these environments even more critical – particularly as we look for greater efficiencies, sustainability and intelligence across transport networks and supply chains.
But today’s progress faces evolving challenges. Exacerbated by the pandemic, global conflicts and just-in-time resourcing, the supply chain crisis continues on – and is expected to linger well into next year. Meanwhile, citizens and governments are prepping for the rise of autonomous vehicles, as businesses adapt to the new normal and its implications for demand of increasingly intelligent transport networks. It is clear that digital technologies are poised to not only have their day in the transport industry, but to save the day with smart solutions. But it’s not necessarily going to be an easy drive.
Connected cars, virtual assistants and autonomous tech to dominate future of auto industry
The technologies that propel the quest forwards towards autonomous vehicles are progressing through the Gartner Hype Cycle, with some far closer to the slope of enlightenment than others. The transformational technologies with near-term maturity include connected car platforms, virtual assistants and embedded SIM (eSIM). The majority of high-benefit technologies, including 5G, automotive real-time data and over-the-air software updates will also mature in usage over the next five years. Longer term, electric vehicle technologies and autonomous vehicle technologies will dominate the transformation of the auto industry, according to Gartner.
The pace of transformation might feel sluggish for those in the front seat – but it is fast and furious compared to pre-digital industrial revolutions. It's estimated that by 2025, when 100% of new vehicles brought to market will be connected, there will be more than 400 million connected passenger vehicles alone. But it's not only consumers who will benefit from slicker connected transport systems.
The logistics industry will be a major beneficiary of our quest towards greater automation within the transport industry, going some way to ease supply chain bottlenecks. From the automated cranes and robotic machinery already functioning in ports and warehouses, to automated delivery vehicles like the Autonomous Vehicle trucks trialled recently in the US – the progress we are seeing will transform future supply chains and solve current sore points like driver shortages and efficiency challenges.
The bottom line is that more connected cars require connected vehicle ecosystems in smart cities to facilitate the shift – with well-crafted edge infrastructures set up to absorb the data deluge. Governments are already working to build the necessary ecosystems and prepare citizens for the changes ahead. In the UK for example, the first autonomous bus scheme is rolling out and the government has just updated the highway code in preparation for autonomous vehicles. It is the back-end data infrastructure that forms the foundations for these technologies – with autonomous vehicles producing 300TB a year, optimised data management is mission critical.
Investing in resilient, cyber secure and agile cloud computing strategies, that utilise powerful compute and ignite real-time analysis and decision making from edge devices is absolutely crucial. It’s critical now, not tomorrow. We are already driving partially automated, intelligent cars – but they are about to get smarter. Building networks and ecosystems that can handle this data – as much as 40 terabytes of data an hour from cameras, radar, and other sensors from driverless cars – will determine the success, the safety, and the experience of autonomous driving.
Partnering with trusted technology specialists, to build and manage these infrastructures is the final piece of the puzzle. It will ensure that the full ecosystem is finely tuned, prepared for a symphony of data. Germany recently approved its first level-3 autonomous vehicle, while highway codes are being adapted across the continent in preparation for a new era of travel and logistics. Staying ahead of the curve is possible now, if we lay the groundwork together we can truly move the world.
To read another article from Jordan MacPherson click here. | <urn:uuid:6acb1362-2f00-406a-b50c-56bea5aeda84> | CC-MAIN-2022-40 | https://aimagazine.com/articles/automation-and-ai-tech-changing-the-way-the-world-moves | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00717.warc.gz | en | 0.923416 | 950 | 2.609375 | 3 |
Imagine scrolling an online news article by merely staring at the bottom of the webpage. How about reorganizing your desktop files by dragging them around with your gaze?
For years we’ve been using mice and keyboard (and later touch screens) as the main tools to control and send commands to our computers and devices.
But 2016 proved that things are headed for a change. With great leaps in artificial intelligence and machine learning, we saw a new array of highly efficient assistants and devices that can be controlled with voice commands.
The start of 2017 gave a hint at what the next breakthrough might be.
If you’re following tech publications, eye tracking has made the rounds quite a bit lately. The Facebook-owned Oculus acquired eye tracking startup Eye Tribe; Acer announced a new monitor that tracks eye movement at CES 2017; and again, at CES, Tobii announced a new line of eye tracking initiatives for the coming year.
Here are the key points if you’re wondering what is eye tracking technology and what it can do.
What is eye tracking?
Eye tracking is about understanding the state and activity of the eye. This includes tracking your point of gaze, the duration of your stare at any given point, when you blink and how your pupils react to different visual stimuli.
But it’s also about where you’re not looking, what you’re ignoring, what gets you distracted and so forth.
The information gathered by eye tracking technology can be used to facilitate a number of tasks that were previously cumbersome, and also opens up possibilities that were inconceivable before.
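For example, "the duration of your stare at any given point" is usually derived by grouping raw gaze samples into fixations. Below is a minimal dispersion-threshold sketch in the spirit of the classic I-DT algorithm; the thresholds and sample data are illustrative:

```python
def dispersion(points):
    """Spread of a cluster of gaze points: x-range plus y-range."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=30.0, min_duration=3):
    """Group gaze samples into fixations.

    samples: list of (x, y) gaze points at a fixed sample rate.
    max_dispersion: largest spread (in pixels) still counted as one fixation.
    min_duration: minimum number of consecutive samples in a fixation.
    Returns a list of (start_index, end_index_exclusive) pairs.
    """
    fixations, start = [], 0
    while start < len(samples):
        end = start + min_duration
        if end > len(samples):
            break
        if dispersion(samples[start:end]) <= max_dispersion:
            # Grow the window while the points stay tightly clustered.
            while end < len(samples) and dispersion(samples[start:end + 1]) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

gaze = [(100, 100), (102, 101), (101, 99), (103, 100),  # steady stare
        (300, 250),                                      # eyes land elsewhere...
        (301, 251), (299, 250), (300, 252)]              # ...second stare
print(detect_fixations(gaze))  # [(0, 4), (4, 8)]
```

Fixation duration then falls out directly: the length of each index range divided by the tracker's sample rate.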
While the concept might sound simple, the technology behind it is quite complex and has been made possible thanks to advances in sensors technology as well as image analysis and recognition.
Eye tracking devices
Based on task requirements, eye tracking gear is usually head-mounted or remote. Head-mounted or mobile units, such as eye tracking glasses, are more suitable for settings where you're moving around, such as performing tasks in real-life or virtual environments. Remote devices, now reduced to the size of very small panels, offer a less intrusive experience and are convenient when you're sitting behind your computer and gazing at the monitor.
Most common eye tracking devices usually involve two main components: an infrared or near-infrared light source and a camera. The light is directed toward the eye, and the camera picks up the reflections to calculate rotation of the eyes and direction of the gaze. Eye tracking devices also pick up other activity such as blink frequency and changes in eye pupil diameter.
The collected data is then fed to algorithms and software, which discover details in the user’s eyes and reflection patterns, and interpret the image stream to calculate the user’s eyes and gaze point on a device screen.
Use cases of eye tracking technology
We use our eyes constantly for different tasks including reading magazines, gazing at posters and ads, playing games and whatnot. Virtually anything that involves a visual component can become the subject of eye tracking and the data collected by eye tracking devices can be leveraged to glean insights and understand human behavior.
Here are some of the more popular use cases.
Eye tracking in gaming
One of the most obvious uses of eye tracking is improving gaming experience. There are a wide range of areas where eye tracking can make it easier for users to interact with the user interface of games, as they can replace mouse navigation and scrolling.
They can also be used to analyze the eye interaction with the interface. This can help players improve their gaming by giving them insights on what details they’re ignoring.
In rendering, the technology can be used to prioritize rendering for the gaze area and make more efficient use of computer resources.
The technology can also be used to improve the gaming environment, such as having the game characters react when the user is staring directly at them. Imagine an RPG where characters in a tavern will get mad if you look at their purse or an FPS where you can tip off AI allies about enemies sneaking up on them by looking in their direction.
Games will become a whole lot easier to play (though I’m not sure if it’s a good thing).
Eye tracking in advertising and market research
Knowing where customers and users look—and where they don’t—can be invaluable for both online, TV and print advertising. Eye trackers on monitors and kiosks can glean insights into how many users see key messages and component of ads, while mobile gear can be used to weigh customer reaction to print material, posters and product packages.
Eye tracking devices can also help store owners research customer behavior and navigation patterns in order to better understand how customers look at products on shelves, which sections of the store get more attention from customers, and how they can make better use of their store space.
Eye tracking in UI and environment testing
Eye tracking can give a huge leg up to A/B testing, the method used to measure efficiency of variations to user interface.
Software and web developers can use eye tracking to better understand what’s good and not so good about the user interfaces of their applications and websites. Eye tracking will let you know what areas of the screen are getting more attention and focus, and how you can reorient and restructure user interfaces to improve user engagement.
Software and game developers can better understand which features of their applications are going unnoticed. VR environments can be tested to see how much attention is directed to each of the areas.
Eye tracking and accessibility
Eye tracking will make it possible for users with physical difficulties in performing mouse navigation. Eye tracking can help users with disabilities move the cursor as efficiently as anyone.
Eye tracking and driving safety
Distracted driving and drowsiness are two of the prominent causes of road incidents. Eye tracking technology can help track the driver’s attention and state of awareness and issue warnings.
Combined with other innovative technologies such as smart sensors and image analysis software, eye tracking can help direct drivers’ attention to where it most matters and prevent incidents from happening.
And much more
This is just the beginning. There are a lot of other fields where eye tracking can be useful, including medicine, education, simulation and neuroscience, and probably many more areas that we will soon find out as the technology further matures and goes mainstream.
Will there be a dark side to it? Time will tell. For the moment, we know that companies will be able to collect much more information about us, and that usually does come with some privacy tradeoffs. But it’s still too early to tell whether this is a bad thing or not.
The following infographic by iMotions sums up eye tracking pretty well. | <urn:uuid:a91b8ecb-55d2-47f8-a793-a084c017dc2a> | CC-MAIN-2022-40 | https://bdtechtalks.com/2017/01/05/what-is-eye-tracking-technology/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00717.warc.gz | en | 0.943925 | 1,373 | 3.078125 | 3 |
DDoS attacks are considered as one of the most popular cyber-attacks and they have the ability to make systems go down for a very long time. Read more to learn how they work and how you can stop them.
What is a DDoS attack?
DDoS attack (also known as the distributed denial of service attack) is a dangerous and common type of cyber-attacks. It aims to overwhelm the target through disrupting the regular traffic of a service, network or a server.
The perpetrator aims to make a machine or a network source unavailable to its users either temporarily or permanently through the DDoS attack. In order to achieve their goal, the attacker makes use of the Internet to flood the targeted resource or machine with excessive requests. In other words, DDoS attacks create a massive amount of artificial requests and overloads the systems. As a result, the intended users cannot use their machines or systems due to the increased traffic.
One of the most prominent features of the DDoS attacks is the fact that the requests come from many different sources at the same time. As a result, it becomes very difficult to stop the flood of requests since blocking a single source will not stop the other requests from remaining sources will keep coming.
If this explanation confused you a bit, let’s try an analogy. Imagine that the targeted system is a shop. The attackers create an increased traffic at the doors of this shop. Due to the crowd gathered at the doors, the actual customers of the shop cannot go in and buy what they need to buy from there. This increased traffic can cause the shop to close permanently or temporarily in accordance with the severity of damage they cause.
Cyber criminals perform DDoS attacks for various reasons including getting revenge and blackmailing the owners of a machine or system.
What can be done to prevent DDoS attacks?
In order to stop a DDoS attack, many techniques can be employed including attack detection tools, traffic classification tools and immediate response tools.
Attack detection tools allow the cyber security professionals to detect an attack attempt very early. As a result, they can take the necessary preventive measures before the attempt turns into a full blown attack.
Traffic classification tools aim to provide insight and background information on the traffic regarding a network source or a machine, so that the cyber security professionals can distinguish increased traffic caused by DDoS attacks from actual traffic caused by the users.
Immediate response tools come in handy during the DDoS attacks. They help cyber security professionals to block the sources of artificial and increased traffic triggered by an attacker or a hacker. Blackhole routing and DNS sinkholes are two most popular examples of such tools.
Blackhole routing sends all the traffic to a non-existent server also known as a black hole. This way, the traffic caused by a DDoS attack cannot overwhelm the target.
DNS sinkholes serve to route the increased traffic to another valid IP address where requests are analyzed and bad packets are rejected. | <urn:uuid:4f11a8ad-2bb5-4a4e-bf78-fd592c868ce5> | CC-MAIN-2022-40 | https://www.logsign.com/blog/how-do-ddos-attacks-work/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00717.warc.gz | en | 0.948491 | 600 | 3 | 3 |
Now that the Remote Workforce is more or a less a permanent fixture in the American Workplace, many employees not only choose to work from home, but they also work from other places such as restaurants, cafes, virtual offices, and co-working spaces. A common denominator among these places is that Internet access is often gained via the use of Wi-Fi Access points (WAPs).
Wi-Fi Access Point devices can store varying levels of information about the people who use them, and unfortunately, securing them is one of the last priorities. In this article, we examine some of the information that is stored by WAPs, and how you can take action to protect Wi-Fi Access Points from falling prey to a Cyberattack.
What is Stored in Wi-Fi Access Points?
The following are examples of what is collected and stored by the Wi-Fi points:
1) The Operating System: Just like your smartphone or computer, Wi-Fi Access Points also have their own Operating Systems. This is what makes them run in the same manner as your other devices. They too have their own set of vulnerabilities, such as backdoors. This is where the Cyberattacker can covertly enter and stay in for long periods of time to deploy their malicious payloads in spots where they cannot be easily discovered.
The Fix: Make sure that you download and apply the latest software upgrades/patches, and firmware onto your Wi-Fi Access Points. Also, always run a scan to make sure that there are no existing threats residing on it.
2) Passwords: Wi-Fi Access Points also contain the login credentials of the network administrators that deploy and configure them. Unless a long and complex password is used, the statistical odds of a Cyberattacker stealing it are fairly high, and the stolen password will eventually find its way to various forums on the Dark Web.
The Fix: Make use of a Password Manager software application to create long and complex passwords and have the Password Manager reset them automatically at differing timetables. Many of thes packages are free or inexpensive.
3) Routing Table Information: In simple terms, a Wi-Fi Access Point does not give you direct access to an Internet connection. Rather it forwards your network requests (such as trying to visit a particular website) to other nodes along that Internet highway that will eventually guide you to where you want to go. But to keep things running efficiently, Wi-Fi Access Points often make use of Routing Tables. They are used to determine the most optimal path for your requests to take. In technical terms, these are also known as “Data Packets,” and the information they contain are also stored in the Wi-Fi Access Points. In these instances, it is quite easy for a Cyberattacker to use a simple network sniffer to capture the data packets and use them maliciously against you when you least expect it.
The Fix: Make use of strong levels of encryption to secure the network line of communication from your device to the Wi-Fi Access Point, and vice versa. That way, if any of the data packets are intercepted, they will remain in a totally useless, garbled state until they are decrypted.
4) Point to Point Protocol over Ethernet (PPPoE) Credentials:
Point to Point Protocol over Ethernet (or PPPoE) is, in layman’s terms, the login username and password that your Internet Service Provider gives you after they have set up the Wi-Fi Access Points in either your home or business. So if, for example, you were to subscribe to Internet services through Comcast, the technician would provide you with a network name (which is technically known as the “Service Set Identifier” or “SSID”), and a password. This is what is used by Comcast to recognize your device whenever you use their Internet service. This is too stored in the Wi-Fi Access Point, and very often the technician will set them up in a such a way that it’s easy for you to remember. This setup contains the same set of weaknesses that other types of passwords do, making it very easy for a Cyberattacker to hijack them. It is important to note that the SSID is also broadcast to the public as well. So, for example, when you find your network name to log into, it also appears with others that are geographically close to your device. This can be a grave security vulnerability.
The Fix: After you received the initial SSID and password from your Internet Service Provider, immediately reset them, especially the password. When it comes to the SSID, disable its public viewing functionality.
5) Web surfing history: Wi-Fi Access Points are notorious for keeping a log history of the all the website requests that come through it. While this is useful for a network administrator in order sniff out any trends in unusual behavior, if it were to be accessed by a Cyberattacker, it would be quite easy for them to build a profile of their intended targets, as they can associate with the Data Packets that have been captured.
The Fix: On a regular basis, delete the log history that is stored, but make sure to create backup copies and store them in a secure location (such as the Cloud) before you start the actual deletion process.
6) Media Access Control Addresses (MACs): Every network card that is installed onto a wireless device or computer comes with what is known as a Media Access Control Address, or MAC. This is merely a string of numbers and letters that identifies your device from the rest of the crowd in your network neighborhood. If this is not hidden, a Cyberattacker can easily locate your device, remotely scan it, and deploy any kind malicious payload they want to, ranging from Trojan Horses to the much deadlier malware that can launch and execute Ransomware attacks against your particular device.
The Fix: Make use of the whitelisting functionality. Not only will this mask the MAC addresses that are allowed to access the Wi-Fi Access Points, it will also keep rogue and malicious devices from gaining access to it.
It should be noted that if an employee is using a company-owned Wi-Fi Access Point, then it is up to the employer to upgrade and protect it. However, if the employee owns the WAP, then it is his or her responsibility to upgrade it.
The fixing steps outlined in this article are quick and should be easy to implement. By taking these steps now, you will greatly mitigate the risk of becoming the next victim of a Cyberattack.
Ravi Das is a Cybersecurity Consultant and Business Development Specialist. He also does Cybersecurity Consulting through his private practice, RaviDas Tech, Inc. He is also studying for his Certificate In Cybersecurity through the ISC2. | <urn:uuid:d157c215-d7b9-4983-95ad-45461f2c34bb> | CC-MAIN-2022-40 | https://platform.keesingtechnologies.com/how-to-mitigate-the-security-risks-of-wi-fi-access-points/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337889.44/warc/CC-MAIN-20221006222634-20221007012634-00717.warc.gz | en | 0.957406 | 1,409 | 2.6875 | 3 |
A remarkable concept that could support power-hungry data centres and reduce power-wasting and environmentally dangerous gas flaring looks like gaining some traction in the Middle East after two recently announced investment deals.
Oman’s sovereign wealth fund the Oman Investment Authority (OIA) and Abu Dhabi’s sovereign investor Mubadala Investment Company have agreed to invest in a process called digital flare mitigation from US company Crusoe Energy, a self-described pioneer of clean computing infrastructure.
The idea is to site containerized data centres next to oil wells. They would then be powered by natural gas, a by-product of the oil production process.
When petroleum crude oil is extracted and produced from oil wells, raw natural gas is brought to the surface as well. Vast amounts of such associated gas are commonly flared as waste or unusable gas. The Crusoe Energy process means the waste gas is fully combusted, so methane is not released, although it appears that CO2 still is.
This process has enjoyed some success in the US. Crusoe says its 98 digital flare mitigation data centres have prevented an estimated 2.5 billion cubic feet from flaring and achieved up to 99.89% elimination of methane emissions – emissions estimated at 650,000 metric tons per year, comparable, says Crusoe, to removing approximately 140,000 cars from the road.
While some would argue that oil production itself is a problem, Crusoe’s point that the world’s appetite for computation, energy and progress will never stop growing is undoubtedly true. Until oil production itself ceases therefore, this may be a workable approach to feeding that appetite from an otherwise wasted resource. | <urn:uuid:dbd029ff-6349-4521-8278-6cc0ad23b162> | CC-MAIN-2022-40 | https://user.developingtelecoms.com/telecom-technology/energy-sustainability/13597-could-data-centres-prevent-gas-flaring-in-the-middle-east.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00117.warc.gz | en | 0.954441 | 341 | 2.53125 | 3 |
The concept of having access to vast amounts of information began in the early 1940’s when a Wesleyan University Librarian, Fremont Rider, published The Scholar and the Future of the Research Library. He estimated that university libraries were doubling in size every sixteen years. He elaborated by speculating that the Yale Library, in the year 2040, will have approximately 200,000,000 volumes, which will occupy over 6,000 miles of shelves. Today, the focus isn’t on how much data, in aggregate, is available for consumption as much as its focus is on how we can leverage it efficiently to make better decisions about every aspect of our lives and how to effectively manage never-before-seen data volume.
Click here to read an informative article on the history of big data entitled, “A Very Short History of Big Data”. This will give you a nice look into the past before we take you on the adventure that is the future of big data and the many ways it will impact our personal and professional lives.
Just yesterday, Gartner released its top predictions for 2016, with its take on the landscape of the digital future; “an algorithmic and smart machine-driven world where people and machines must define harmonious relationships”.
- By 2018, 20 percent of business content will be authored by machines.
- By 2018, six billion connected things will be requesting support.
- By 2020, autonomous software agents outside of human control will participate in five percent of all economic transactions.
- By 2018, more than 3 million workers globally will be supervised by a “robo-boss”.
- By year-end 2018, 20 percent of smart buildings will have suffered from digital vandalism.
- By 2018, 45 percent of the fastest-growing companies will have fewer employees than instances of smart machines.
- By year-end 2018, customer digital assistant will recognize individuals by face and voice across channels and partners.
- By 2018, two million employees will be required to wear health and fitness tracking devices as a condition of employment.
- By 2020, smart agents will facilitate 40 percent of mobile interactions, and the post-app era will begin to dominate.
- Through 2020, 95 percent of cloud security failures will be the customer’s fault.
We have a wide variety of customers, many of whom are Fortune 500 companies. These global enterprise operations are looking to gain a competitive edge by leveraging big data in new and innovative ways. That’s a pretty big challenge in and of itself. However, one of the oftentimes overlooked challenges that they bring to our attention is how they will manage the data itself.
- How will all of this data impact the performance of our applications?
- Is it better to leverage 3rd party APIs to access data and reduce our data footprint or is it better to house it internally so that we have complete control? What are the pros/cons of each approach?
- With this massive amount of data flowing in, what should our data archiving strategy look like?
- What kind of data disposition plan should we put in place?
- Are there any data discipline best practices we should be taking into account on an ongoing basis?
Contact Auritas today to speak with one of our big data experts. We’d be happy to speak with you about the challenges your company faces as it relates to big data, or any other data challenge for that matter
To read more details about Gartner’s predictions, please click here to view their full release.
Candid look at Big Data Feeling concerned because you do not know if you have Big Data, don’t know where it is or how to
Welcome to our new three-part series on our predictions for Big Data, Information Lifecycle Management (ILM), and Enterprise Content Management (ECM). If you’re interested in
We’re pretty sure most CIOs are rather tired of hearing, “big data”, “analytics”, & “omnichannel”. There is so much hype around big data that it | <urn:uuid:f937c861-d8fc-462f-8b45-281dc88df642> | CC-MAIN-2022-40 | https://www.auritas.com/big-data-past-present-future-with-gartner-predictions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334514.38/warc/CC-MAIN-20220925035541-20220925065541-00117.warc.gz | en | 0.932058 | 872 | 2.546875 | 3 |
Although it sounds futuristic, the technology used for facial and touch recognition
has existed for some time now and is constantly being refined. Depending on what devices a company or customer uses, the technology may already be in the palm of a person’s hand.
According to the Federal Bureau of Investigation, facial recognition technology is still a new concept, even though it was developed in the 1960s. There are two main components of face recognition, including geometric and photometric. Geometric studies features while photometric depends on how it is viewed. Even though the technology exists and continues to advance, there are accuracy issues. However, the government and companies that want to use the technology have the power to advance it.
Touch recognition is more accessible than face recognition to companies and consumers. Apple is an example of a company that is using the technology. If a consumer has an iPhone 5 or newer, an iPad Air 2 or iPad Mini 3, it is possible to implement the finger recognition on a device. Choosing a passcode is still required in setting up the fingerprint recognition on an Apple device though.
The safeguarding capability of fingerprint technology is much higher. According to Apple, since fingerprints are unique it is rare that even a small part of different fingerprints have enough similarity to Touch ID match. The combination of a fingerprint and passcode increases protection against theft.
“The probability of this happening is 1 in 50,000 for one enrolled finger,” Apple.com stated. “This is much better than the 1 in 10,000 odds of guessing a typical 4-digit passcode.”
While facial and touch recognition opens up new possibilities for ongoing user authentication, these applications can not currently serve for identity verification (since there are currently no trusted sources to reference data points against). It will be important for companies to remember that ongoing authentication is just component of preventing identity fraud, and incorporate such authentication into a broader fraud prevention plan that includes initial verification of identity. | <urn:uuid:a7b8aa06-1afd-4ed9-8f26-457aed375b22> | CC-MAIN-2022-40 | https://www.electronicverificationsystems.com/blog/facial-and-touch-recognition-for-user-verification | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00117.warc.gz | en | 0.953849 | 399 | 3.0625 | 3 |
Continuing its forward progress of building ultra-powerful machines for scientific organizations, IBM Corp.
picked up a $290 million contract to build two supercomputers for the U.S. Department of Energy (DOE) that sport a combined peak speed of 460 trillion floating operations per second, or teraflops.The deal, announced at the Supercomputing 2002 conference in Baltimore Tuesday, would see IBM build the two fastest supercomputers ever, according to what is currently listed on the Top500 List of Supercomputers.
The first supercomputer, dubbed ASCI Purple, will be used for simulation in the US nuclear weapons mission. It will be capable of calculating data at 100 teraflops, or almost three times faster than the current leader — NEC’s Earth Simulator, which has been clocked at 35 teraflops. ASCI Purple will consist of a cluster of IBM’s POWER chip-based eServer systems and storage systems.
ASCI Purple will serve as the primary supercomputer in the DOE’s Advanced Simulation and Computing Initiative, or ASCI. The DOE’s National Nuclear Security Administration’s (NNSA) Stockpile Stewardship Program will use on ASCI Purple to simulate the aging and operation of U.S. nuclear weapons, to make sure they are dependable and safe, without underground testing.
Boasting 50 terabytes of memory and two petabytes of storage, ASCI Purple will be powered by 12,544 of IBM’s forthcoming POWER5 microprocessors, which features more than 10GB per second memory bandwidth, contained in 196 individual computers and interconnected via a data mainline that exchanges information at 100 GB per second. Armed with autonomic features for self-management, ASCI Purple will run IBM’s AIXL operating system.
The new machine will be installed in a dedicated building known as the Terascale Simulation Facility, currently under construction at Lawrence Livermore National Laboratory in California. IBM has also finished Livermore’s previous most powerful supercomputers, ASCI White, unveiled in August 2001, and ASCI Blue Pacific, unveiled in October 1998. ASCI Purple will be delivered in stages with the first IBM eServer arriving next year.
Giga Information Group Vice President Brad Day said ASCI Purple, due in 2004, is another turning point in the roadmap for next-generation POWER5 servers for IBM.
This is [IBM] trying to get to that same discipline 4 or 8 processor systems in single server framweork build them out to a 128-processor aggregate,” Day told internetnews.com. Day said it’s not much different than HP’s approach with the Itanium2 architecture.
“That announcement suggests the Power [chip architecture] roadmap is a switchitter,” Day said. “It’s strong enough and scalable enough for the commercial workload because it handles CRM and other database stuff, but it is also equally effective at HPC (high-performance computing).”
The second, more powerful system, called Blue Gene/L, will be focused on scientific research, including predicting global climate change (a common use for massive machines) and studying the interaction between atmospheric chemistry and pollution. Developed with the help of new, yet to be disclosed, chip and system architectures, Blue Gene/L will have a peak performance of 360 teraflops with 65,536 computing nodes. Massive machines such as Blue Gene/L can simulate hurricanes for meteorologists.
Mark Seager, assistant director for Advanced Technologies for Livermore’s Computation Directorate, said the power of Blue Gene/L was analogous to “having an electron microscope when all the other scientists have a magnifying glass.”
Blue Gene/L will be used by the three NNSA laboratories, Los Alamos, Sandia and Lawrence Livermore and the ASCI University Alliance collaborators as well as other DOE laboratories in the future. | <urn:uuid:788e037c-7d1b-47dd-b7f7-b278bf00e27c> | CC-MAIN-2022-40 | https://www.datamation.com/erp/ibm-inks-290m-supercomputer-contract/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335469.40/warc/CC-MAIN-20220930113830-20220930143830-00117.warc.gz | en | 0.91482 | 817 | 2.859375 | 3 |
The unprecedented COVID-19 pandemic has raised a thorny question for technologists and lawmakers: how might the location data from our cellphones be used to help contain the spread of the virus?
Two broad use cases have emerged: the first is using location data to monitor compliance with quarantine. And the second is contact tracing - using location data to track down people that have come into contact with a person that tests positive to the virus.
The team at Risky Biz discussed both in a livestream this week with regular co-host and Insomnia Security founder Adam Boileau, adjunct professor at Stanford University’s Center for International Security Alex Stamos, and Crowdstrike founder and former CTO Dmitri Alperovitch.
Monitoring quarantine compliance
In an ideal world, people that have tested positive to a deadly and contagious disease would dutifully self-isolate to prevent further infection, and those that they’ve recently come in contact with would dutifully quarantine before their test results come in.
In Western democracies, the use of monitoring for such a purpose requires legislative change and a dramatic suspension of social norms.
In the United States, governments do not have the legal authority to tap cell phone records or social media data for the purpose of enforcing quarantine compliance. The United States is struggling to even make the case for using geofencing data to convict a suspect of a bank robbery.
Emergency powers are gradually being put into place as clusters of infections emerge. Airlines, for example, are now required under US law to submit to the Centers for Disease Control and Prevention (CDC) data about all incoming passengers for the purpose of enforcing quarantine. And the White House is now in discussion with US tech giants such as Facebook and Google about how their location data might also be put to use.
Today, anonymised data from mobile networks and apps is already made available to researchers for the purpose of tracking the spread of disease. Users of IoT thermometers, for example, can already opt-in to share their data for use in the aggregate.
But the prospect of using the data at the individual level for purposes that could be deemed punitive is ethically and legally complex.
Albert Gidari, Director of Privacy at the Center for Internet & Society at Stanford Law School, notes that the US Stored Communications Act would not permit compelled disclosure. "Any system devised to take advantage of location history would have to be consent-based and rely on voluntary cooperation of providers," he told Risky.Biz.
Compelled disclosure might also prove ineffective. The Electronic Frontier Foundation argues that the threat of having your movements monitored could create a perverse disincentive: people who feel unwell - but not so unwell as to present for testing - may choose to avoid being tested in order to avoid the surveillance. And if such a system offered no agency or benefit to those being monitored, what is to stop them from simply leaving their mobile device at home?
“We can’t expect that people who choose to be non-compliant are going to use an app voluntarily,” Boileau notes. “So at that point, [authorities] are left with using the phone infrastructure - or other companies that have location data. In New Zealand, for example, the telcos have the data for emergency call location - and in an emergency, a whole bunch of the usual rules don’t apply.”
There are potential benefits for users - measuring compliance with quarantine would be an important input into determining “how long we should be in lockdown”, he said. In other words - put up with surveillance now, and lives can return to normal much sooner.
But that’s a very difficult sell - what’s acceptable to a person in New Zealand or Scandinavia might not fly in Germany or the United States.
Contact tracing

Using mobile location data for contact tracing presents many of the same legal and ethical challenges as monitoring compliance with quarantine. But it offers far more palatable use cases for countries seeking to balance containment of the disease with preserving civil rights in the longer term.
Gidari posits the concept of a system whereby individuals that test positive may voluntarily disclose their mobile phone number or online account identifier to healthcare agencies. The government could then use existing lawful arrangements with tech companies to request rapid emergency access to the user’s location history.
The agency could also request aggregate geofencing data to have the provider alert other users who were in close proximity to the person during their illness. If protected by privacy-preserving caveats - such as limiting which agency can access the data and how long they can retain or use the data - it might be something privacy advocates can live with.
“We don’t need a Korea-style approach to this problem to get actionable data in the hands of the CDC or other health care providers,” Gidari said. “We can protect privacy too.”
Stamos - who has previously been an expert witness on cases that involve location-based data - isn’t confident that cell tower data is precise enough for contact tracing without generating an unacceptable number of false positives. But data from Bluetooth beacons and WiFi SSIDs might do.
The government of Singapore used Bluetooth as part of its efforts to contain the virus. Citizens were encouraged to voluntarily download the ‘TraceTogether’ app, provide the Ministry of Health their mobile phone number and turn Bluetooth on permanently. The app asks for user consent to log any other user of the app that spends more than 30 minutes within 2m of the person. The data is then acted upon if any of the users return a positive test.
Over 600,000 Singaporeans have already volunteered to download the app, perhaps motivated by the sense of national solidarity pervasive in Singapore, or perhaps by the assumption that using a government-issued app will fast-track access to testing when it becomes necessary.
In any case, the app has its limitations. The iOS app has to run permanently in the foreground to be effective, and the Android version must be manually configured to run in the background. Users are unlikely to be so diligent that they remember to turn it on every time they are in a public place - well in advance of getting sick - limiting the use case to people already on high alert, such as those that came into contact with a person waiting for test results. Developers may improve TraceTogether now that Singapore plans to release the app’s source code.
Other efforts to convince users to voluntarily download a privacy-preserving app - such as Cambridge University’s ‘FluPhone’ app in 2011 and MIT’s new ‘PrivateKit’ app - haven’t driven enough user interest to make a meaningful impact.
Stamos sees a faster way to enrol users in a privacy-preserving system. Any time Google or Facebook offer features like ‘People You May Know’, he notes, they are effectively already performing a function similar to contact tracing. And both of those platforms have in excess of 2.5 billion users.
“Contact tracing is a technique already proven in the field by Google and Facebook,” Stamos said. “This is why sometimes when you go into a store, you end up getting related ads in your feed - because Bluetooth beacons placed in the store have recorded your interest for future advertising.”
He envisions a system under which any Facebook or Android user who tests positive for coronavirus could - at the push of a button in an app they are familiar with - give permission for Facebook or Google to contact any other account holders that have been in range of the same Bluetooth beacon or WiFi network (SSID) for more than 30 minutes.
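The matching step in the system Stamos envisions is, at its core, an interval-overlap problem. The sketch below is an illustration only: the 30-minute threshold comes from the article, but the data layout (per-user logs of `(beacon_id, (start, end))` sightings, with times in minutes) is an assumption made for the example.

```python
from collections import defaultdict

def overlap(a: tuple[int, int], b: tuple[int, int]) -> int:
    """Minutes that two (start, end) intervals overlap."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))

def exposed_users(patient_log, other_logs, threshold: int = 30) -> set[str]:
    """Users who shared a beacon/SSID with the patient for >= threshold minutes.

    patient_log: list of (beacon_id, (start, end)) sightings for the patient.
    other_logs:  dict mapping user -> list of the same sighting tuples.
    """
    exposed = set()
    for user, log in other_logs.items():
        minutes = defaultdict(int)
        for beacon, span in log:
            for p_beacon, p_span in patient_log:
                if beacon == p_beacon:
                    minutes[beacon] += overlap(span, p_span)
        if any(m >= threshold for m in minutes.values()):
            exposed.add(user)
    return exposed
```

A deployed system would of course work over hashed, rotating identifiers rather than raw SSIDs, and notify counterparties through the mediating app rather than expose the match directly.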
Stamos recommends the tech giants get on the front foot and build this capability voluntarily for US users, lest they be compelled by governments to build a compromised solution.
“If I tested positive, I’d much prefer to hit a button and have Google and Facebook inform everyone that I’ve been in contact with, warning them to go get tested,” he said. “And that data doesn’t necessarily have to go to the government. It could be a relationship between me and counterparties, mediated by an app we use in common.”
As long as the app is opt-in, consent is provided, and the app brokers the tracing and notification (rather than the user or another human operator), it could be rolled out in the United States without the need for legislative change, he said.
“All the infrastructure is there to do it,” he said. “It would use the same [geofencing] mechanisms these companies use today, which we know to be legal.”
The same wouldn’t apply for Europe, where GDPR and other regulations would likely prove too prohibitive.
Even the most diehard privacy advocates say they would be willing to make a compromise in such an emergency.
But contact tracing apps will only help, Alperovich notes, if there is enough testing capacity available to help the population know if they are infected or have been in contact with somebody infected. That’s not available in the US today.
“It won’t do anything to trace people if we can’t actually test them,” he said. “But maybe when we get to the point of re-opening this country, and we want to make sure we don’t have new outbreaks, it’s something to consider.”
Speaking as a person that has opted out of platforms that track his location data, he remains cautious.
“I would want full transparency,” he said. “I’d want the source code of the app published by the government. I’d want strict oversight on how the data is used and I’d want mandatory purging of that data every so many days.”
“If it can be effective, and if the user volunteers to submit data on social networks they already use, then with the right safeguards - I’m a tentative yes.”
Even Boileau, who often quips that commercial surveillance is the “cyberpunk dystopia” we always dreaded, is in reluctant agreement.
“The voluntary approach has some real benefits,” he said. “It’s an emergency. We’ve got the data and we should use it. Privacy can just suck it for a while.”
Microsoft reveals an ambitious goal and a new plan to reduce and ultimately remove its carbon footprint, making it ‘carbon negative’ by 2030.
By 2050, Microsoft aims to remove from the environment all the carbon the company has emitted either directly or by electrical consumption since it was founded in 1975.
“This is a bold bet — a moonshot — for Microsoft. And it will need to become a moonshot for the world,” says Microsoft president Brad Smith.
“While the world will need to reach net zero, those of us who can afford to move faster and go further should do so,” he states.
Smith, together with CEO Satya Nadella, CFO Amy Hood, and chief environmental officer Lucas Joppa, announced the company’s plans at an event at Microsoft’s Redmond campus.
“Reducing carbon is where the world needs to go, and we recognise that it’s what our customers and employees are asking us to pursue,” says Smith in a blog detailing the company’s new plans.
In connection with this, Microsoft revealed a new programme to help its suppliers and customers reduce their own carbon footprints.
Microsoft says it is also investing US$1 billion over the next four years for a climate innovation fund to accelerate the global development of carbon reduction, capture and removal technologies.
“We understand that this is just a fraction of the investment needed, but our hope is that it spurs more governments and companies to invest in new ways as well,” says Smith.
Beyond ‘carbon neutral’
As Smith explains in his blog, Microsoft has worked hard to be “carbon neutral” since 2012.
But while the terms “carbon neutral” and “carbon negative” sound similar, they mean different things.
Companies have typically said they are “carbon neutral” if they offset their emissions with payments either to avoid a reduction in emissions or remove carbon from the atmosphere.
This is good, he writes, but it essentially pays someone not to do something that would have a negative impact. “It doesn’t lead to planting more trees that would have a positive impact by removing carbon.”
In contrast, “net zero” means that a company actually removes as much carbon as it emits.
“The reason the phrase is ‘net zero’ and not just ‘zero’ is because there are still carbon emissions, but these are equal to carbon removal. And ‘carbon negative’ means that a company is removing more carbon than it emits each year.”
He says the challenge Microsoft is taking on will not be easy, but it is the right goal, and with the right commitment, can be achievable. | <urn:uuid:9b6b94a2-dd0d-41a3-a81e-aab9d93748bb> | CC-MAIN-2022-40 | https://www.cio.com/article/201656/microsoft-s-moonshot-to-become-carbon-negative-by-2030.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337971.74/warc/CC-MAIN-20221007045521-20221007075521-00117.warc.gz | en | 0.956637 | 562 | 2.890625 | 3 |
Identity and Access Management (IAM) is the management of individuals and their access within an IT infrastructure. It makes sure the right individuals have access to the right (IT) resources at the right time.
These resources can include systems, applications, files and networks. An IAM solution is crucial these days because it makes access more secure, more efficient and easier to manage. Discover all the different solutions and their advantages below.
Identity management (IDM) provides a central point to manage each user account, the identity, their access to systems and the appropriate rights on these systems and data. The identity management system defines the rights and rules for obtaining access to systems and data. More on IDM
Access management (AM) will enforce the rules set forth by, ideally, the identity management and identity governance systems. More on AM
Identity governance & Administration (IGA) provides tools for managing roles. Where identity management focuses on the lifecycle of a user, roles (technical to business) also have a lifecycle to manage. The roles associated with a person evolve over time and it is important to review assigned roles on a regular basis. Left unmanaged, accounts continue to gather entitlements which leads to accounts with access to multiple resources which they might no longer need (privilege creep). More on IGA
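At its core, the role model that IGA governs is a small data structure: users map to roles, and roles map to entitlements. The sketch below illustrates the idea; the role and permission names are invented for the example.

```python
# Roles bundle entitlements; users are assigned roles, not raw permissions.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "hr_admin": {"payroll:read", "payroll:write"},
    "employee": {"payroll:read"},
}

USER_ROLES: dict[str, set[str]] = {
    "alice": {"hr_admin"},
    "bob": {"employee"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Access check: does any of the user's roles grant the permission?"""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

def revoke_role(user: str, role: str) -> None:
    """Periodic access reviews counter privilege creep by pruning stale roles."""
    USER_ROLES.get(user, set()).discard(role)
```

The regular role reviews described above amount to calling something like `revoke_role` whenever an entitlement no longer matches a person's current job.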
Privileged Account Management (PAM) is a solution that helps secure, control, manage and monitor privileged access to critical assets. More on PAM
Single Sign-On (SSO) is an authentication process that allows a user to access multiple applications with one set of login credentials, reducing user friction by lowering the number of credential prompts and ensuring productivity. More on SSO
Multi-factor authentication (MFA) typically uses two or more independent access methods like passwords, security tokens, and biometric verification. This creates a defence of multiple layers. More on MFA | <urn:uuid:93b8a137-e725-4f45-b2f1-809872cb5d0e> | CC-MAIN-2022-40 | https://www.is4u.be/en/identity-and-access-management | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00317.warc.gz | en | 0.913383 | 382 | 2.578125 | 3 |
Today’s city living is falling short of citizens’ increased expectations in the digital age. This is according to a report from the Capgemini Research Institute that explored responses from 10,000 citizens and over 300 city officials across 10 countries and 58 cities. It found that many citizens are frustrated with the current set up of the city in which they live and are prepared to show their opinion by leaving for a more digitally advanced city. On average, 40 percent of residents may leave their city in the future due to a variety of pain points including digital frustrations.
The report “Street Smart: Putting the citizen at the center of Smart City initiatives” reveals that more than half of citizens (58 percent) perceive smart cities as sustainable and that they provide a better quality of urban services (57 percent). That explains why more than a third of them (36 percent) are willing to pay more for this enriched urban existence. However, serious challenges to implementation exist, particularly in terms of data and funding.
Capgemini has found that only one in ten city officials say they are in the advanced stages of implementing a smart city vision, and less than a quarter (only 22 percent) have begun implementing smart city initiatives – a particular challenge as two-thirds of the world’s population is expected to live in a city by 2050, with the number of megacities set to rise from 33 today to 43 by 2030. Moreover, there is a considerable global desire for smart cities among citizens, meaning an accelerated approach would be well received.
The key to unlocking an improved urban life
According to the Capgemini report, sustainability is of increased importance for urbanites. Citizens find challenges such as pollution (42 percent) and a lack of sustainability initiatives (36 percent) a major concern and may leave their city as a result. However, over the past three years, 42 percent of city officials say that sustainability initiatives have lagged, and 41 percent say their cities becoming unsustainable over the next 5 to 10 years is one of the top five consequences of not adopting digital technology.
While smart city initiatives can lead to improvements across urban services, Capgemini has found that perception is key, and that the benefits aren’t just limited to tangible outcomes. Citizens using smart city initiatives are happier with the quality of their city life. For example, 73 percent say they are happier with their quality of life in terms of health factors, such as air quality. However, this drops sharply to 56 percent among those who have not used a smart city initiative. More than a third of citizens are willing to pay to live in a smart city. This figure rises for younger and richer citizens: 44 percent among millennials, 41 percent among Gen Z respondents and 43 percent among those earning more than $80,000.
Data and funding are critical implementation challenges
Although smart cities can solve some of the traditional pain points experienced in cities, such as public transport and security, serious challenges to implementation exist. Data is central to smart city optimization, yet 63 percent of global citizens say the privacy of their personal data is more important than superior urban services. Meanwhile, almost 70 percent of city officials say that funding their budget is a major challenge, and 68 percent of officials say they struggle to access and build the digital platforms needed to develop smart city initiatives. From a citizens’ perspective, 54 percent think BigTech firms would provide better urban services than those currently in place. | <urn:uuid:01cc481c-d99f-45a5-badb-da5a0bda9985> | CC-MAIN-2022-40 | https://e3zine.com/capgemini-global-citizens-favor-smart-cities/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00317.warc.gz | en | 0.963431 | 696 | 2.890625 | 3 |
How to Protect Your Intellectual Property
You have spent a significant amount of time and creativity in writing your killer application but you are worried about a competitor or hacker reverse-engineering your software and stealing your intellectual property. What is the best way to protect it?
Your intellectual property (IP) can include your coding algorithms and proprietary data. First we need to understand how your IP can be stolen. There are a number of techniques that a competitor or hacker can use to reverse-engineer your code.
Decompiling and Disassembling
Many popular languages such as C#, VB.NET, Python and Java compile to an intermediate representation known as byte code, which is then executed by a platform-specific virtual machine. The advantage of this is that the same binaries can be executed on many different platforms. However, the disadvantage is that your application can be easily decompiled back to readable source code. There are many freely available tools that can achieve this.
Other more traditional languages such as C, C++ and Delphi are compiled to native binaries. Reverse engineering these files is much more difficult but still possible for experienced hackers using the right tools and techniques.
Another method of reverse engineering involves stepping through code using a debugger. This helps a hacker to understand the program flow and can help identify crucial algorithms in your code, which can then be analysed or decompiled. Debugging also enables a hacker to modify your code. For example, it can be used to remove primitive attempts at software protection such as checking the computer date.
One method to combat decompilation is to use an obfuscator. An obfuscator will not prevent decompilation but makes the decompiled code very difficult to read and understand. Normally an obfuscator will work by modifying your source code to make it less understandable. It is then compiled like normal. However, an obfuscator can also work on byte code and also in rare cases on native binaries.
Software Protection Systems
It is not so well known that software protection systems, in addition to copy-protecting your software, can also offer superior anti reverse-engineering techniques that go beyond the level of obfuscation.
Automatic software protection using shell wrappers can encrypt code and data in your software. This makes decompilation and disassembly impossible and also protects your data. The code encryption can also continue while the program is loaded into memory.
In addition, software protection systems will offer many anti-debug techniques that disrupt the flow of a debugger or make it very difficult to use or even prevent the use of a debugger altogether.
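To give a flavour of what such checks look like, the sketch below shows two cheap heuristics in Python: a trace-hook check and a timing probe. Real protection systems layer many stronger, platform-specific tricks on top of these, and the 0.25-second threshold here is an arbitrary assumption for the example.

```python
import sys
import time

def debugger_suspected() -> bool:
    """Return True if this process shows signs of being traced or single-stepped."""
    # 1. Trace-hook check: Python debuggers install a trace function.
    if sys.gettrace() is not None:
        return True
    # 2. Timing probe: single-stepping makes trivial code run orders of
    #    magnitude slower than normal.
    start = time.perf_counter()
    sum(range(100_000))  # trivial work used purely as a timing probe
    return (time.perf_counter() - start) > 0.25  # assumed threshold
```

A protected application might run checks like these at unpredictable points and quietly degrade or exit when they trip, rather than announce that a debugger was detected.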
Microcosm offers two software protection systems: Dinkey Pro/FD - a hardware dongle, and CopyMinder - a purely software-based key. Both of these systems provide automatic shell-protection, anti-piracy and anti-debug techniques to prevent reverse engineering, debugging and theft of your IP. In addition, the Dinkey Pro/FD software protection system can also be used to encrypt data files that are accessed by your shell-protected software under Windows. | <urn:uuid:603c82e1-b6bf-4082-91bf-3c1b4dd7a828> | CC-MAIN-2022-40 | https://de.microcosm.com/blog/how-to-protect-your-intellectual-property | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00317.warc.gz | en | 0.919911 | 621 | 2.6875 | 3 |
Security questions can add an extra layer of certainty to your authentication process.
Security questions are an alternative way of identifying your consumers when they have forgotten their password, entered the wrong credentials too many times, or tried to log in from an unfamiliar device or location.
So, how do you define a good security question? We have come up with some basic guidelines that will help you create the best ones.
What Makes a Good Security Question
The best security questions make it easy for legitimate consumers to authenticate themselves without worrying about their accounts being infiltrated.
- If a question is too hard to answer due to complexity or changing circumstances, it can end up wasting your consumer’s time—and ultimately, it may keep them locked out of their account.
- If the answer is too quickly researched or there are too few possible answers, it can be easy for an attacker to gain access by guessing correctly.
- If the answer involves favorite foods or colors, it is likely to change over time.
- If the answer involves a birthday, it can be easy for an attacker to find online.
- If the answer involves a school name or location, that information is also easily available to attackers.
You can minimize these risks by creating good security questions.
According to the Good Security Questions website, answers to a good security question should meet these criteria:
- Safe: Cannot be guessed or researched.
- Stable: Does not change over time.
- Memorable: Can be remembered.
- Simple: Is precise, easy, and consistent.
- Many: Has many possible answers.
You can see examples of good security questions from the University of Virginia. Let’s take a look at each of these criteria in more detail.
When choosing security questions, it’s extremely important that the correct answers cannot be guessed or researched over the internet.
Here’s an example of a question that fails to meet these rules:
“In what county were you born?”
This question could be considered unsafe because the information can be found online. Also, this information may be common knowledge to friends and family members.
Aside from these issues, if a hacker was interested in a specific account, it might be easy to brute-force their way past this question since there are only a fixed number of counties in each US state.
A good security question should have a fixed answer, meaning that it won’t change over time.
A good example of a security question with a stable answer:
“What is your oldest cousin’s first name?”
This example works because the answer never changes.
Note: Questions like this one might not apply to all users. Asking about someone’s wedding anniversary or cousins does them no good if they have never been married or have no cousins! It’s important to offer your consumers several questions to choose from to make sure they apply.
Some examples of questions with unstable answers:
“What is the title and artist of your favorite song?”
“What is your work address?”
Both of these examples make for poor security questions because their answers will change for most people over time. Many people change their minds about their favorite things over the course of their lives, and they also may change jobs or move to a different office location.
A good security question should be easily answered by the account holders but not readily obvious to others or quickly researched.
Examples of good memorable questions:
“What is your oldest sibling's middle name?”
Most consumers who have siblings know their middle name off the top of their heads, making this a good example of a memorable security question. This question is also excellent because someone would have to do quite a bit of digging to first find out who the consumer’s oldest sibling is, and then find their middle name in order to crack this question.
“In what city or town did your mother and father meet?”
Most consumers know the answer to a question like this, making it fit the criteria of being memorable. It is also more difficult to guess or research this fact. Best of all, it fits the stability criteria as well.
Some examples of questions with unmemorable answers:
“What is your car’s license plate number?”
Many people don’t have their license plate numbers memorized. Also, it’s relatively simple for potential intruders to do some digging and find this information for themselves.
“What was your favorite elementary school teacher’s name?”
The answer to this question may be quick to recall for someone younger, but for older consumers, things from their childhood can be a lot foggier. So answers to such questions might not come so easily. It’s good practice to try to avoid questions from a consumer’s childhood.
A simple question has a precise answer that doesn’t create confusion.
Some examples of questions with simple answers:
“What was your first car’s make and model? (e.g. Ford Taurus)”
“What month and day is your anniversary? (e.g. January 2)”
These both make for good security questions because the answers are specific. These questions show consumers how to format their answers in a memorable, simple way.
These questions can also be asked in a way that doesn’t give simple, precise answers:
“What was your first car?”
“When is your anniversary?”
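One implementation detail that helps with the “simple” criterion: normalize answers before storing and comparing them, so that “St. Louis”, “st louis” and “St Louis” all match - and hash them like passwords rather than storing plaintext. The sketch below illustrates this; the PBKDF2 parameters are illustrative assumptions, not a recommendation.

```python
import hashlib
import hmac
import os
import re
import unicodedata

def normalize(answer: str) -> str:
    """Case-fold and strip accents, punctuation and extra spaces."""
    text = unicodedata.normalize("NFKD", answer)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    text = re.sub(r"[^a-z0-9 ]", "", text.casefold())
    return " ".join(text.split())

def store_answer(answer: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); never store security answers in plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", normalize(answer).encode(), salt, 100_000)
    return salt, digest

def check_answer(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Constant-time comparison of the normalized candidate against the stored hash."""
    attempt = hashlib.pbkdf2_hmac("sha256", normalize(candidate).encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)
```

Normalizing this way keeps a legitimate consumer from being locked out over capitalization or punctuation, without weakening the question itself.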
A good security question should have many potential answers. This makes guessing the answer much more difficult and will also slow down automated or brute-force attempts at gaining access to the consumer’s account.
An example of a question with many possible answers:
“What is the middle name of your oldest child?”
A question with too few possible answers:
“What is your birth month?”
But wait. Is there any such thing as a good security question?
By their very nature, even so-called good security questions are vulnerable to hackers because they aren’t random—users are meant to answer them in meaningful, memorable ways. And those answers could be obtained through phishing, social engineering, or research.
There’s a scene in the movie [Now You See Me 2](https://en.wikipedia.org/wiki/NowYouSeeMe2) where a magician tricks his target into giving him the answers to his bank security questions. The magician guesses the answers and his target corrects him with the actual information. It’s a fictional example, but the phishing mechanics are real.
Many social media memes tap into the answers to common security questions, such as the name of your first pet or the street you grew up on. So by innocently posting your superhero name or rapper name on Facebook, you’re inadvertently sharing important personal information.
What Authentication Methods are Good Alternatives to Security Questions
Passwords and security questions aren’t the only methods for locking down consumer accounts. A good CIAM solution offers several secure alternatives:
Multi-factor authentication is a much more robust and secure method of consumer authentication that relies on two or more ways of verifying the consumer’s identity. Typically, the consumer will be required to present something that they know, something they possess, and/or something they are. Some examples of these different factors are:
- Something they know: A password, pin code, or an answer to a security question.
- Something they possess: Such as a bank card, key, or key fob.
- Something they are: A scanned fingerprint or retina, voice or face recognition.
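The “something they possess” factor is very often a phone running an authenticator app. Those six-digit codes are just HMAC arithmetic over a shared secret and a counter (HOTP, RFC 4226) or the current time step (TOTP, RFC 6238), which fits in a few lines of standard-library Python:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated to N digits."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, interval: int = 30) -> str:
    """RFC 6238: HOTP with the counter derived from the current 30-second step."""
    return hotp(key, int(time.time()) // interval)
```

The server holds the same secret, computes the code for the current (and usually adjacent) time steps, and compares - so possession of the enrolled device is what is actually being verified.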
As an example, the MBNA bank recently decided that security questions were not doing enough for them and their consumers to keep their accounts safe. To upgrade their security, they decided to go with two-factor authentication instead of security questions in order to verify their consumers’ identities.
Source: MBNA website
In these screenshots, you can see that the transition from security questions to two-factor authentication was fairly seamless for MBNA consumers. They even had the option to choose how often they would be prompted to provide a security code as their second factor.
Source: MBNA website
Strong password rules
By requiring your consumers to follow strong password rules, you minimize the risk of hackers brute-forcing their way into their accounts. Lengthy alphanumeric passwords with special and non-repeating characters are much more difficult for an attacker to guess. It also takes significantly longer for brute force programs to break in.
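Enforcing such rules is straightforward; the sketch below checks length, character classes and repeated runs. The specific rules and the 12-character minimum are illustrative assumptions - tune them to your own policy, and consider screening against breached-password lists as well.

```python
import re

def password_problems(pw: str, min_length: int = 12) -> list[str]:
    """Return the list of rules a candidate password fails (empty = passes)."""
    problems = []
    if len(pw) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", pw):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"\d", pw):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("no special character")
    if re.search(r"(.)\1\1", pw):  # same character three or more times in a row
        problems.append("a character repeated three or more times in a row")
    return problems
```

Returning the full list of failed rules, rather than a bare yes/no, lets the signup form tell consumers exactly what to fix.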
Passwordless Login takes the password right out of the equation. consumers log in with a key fob, a biometric such as a fingerprint, or a magic link. This login method eliminates the issue of consumers forgetting passwords entirely, and it also makes it impossible for hackers to crack their accounts by brute-forcing.
If you’re interested in learning why passwords are slowly becoming a thing of the past, download our e-book The Death of Passwords. There are better authentication methods than passwords and security questions available for your company—and with support from LoginRadius, you can adopt them quickly and easily.
Originally Published at LoginRadius | <urn:uuid:86d460b2-c906-449c-b56d-07db65d5c711> | CC-MAIN-2022-40 | https://guptadeepak.com/best-practices-for-choosing-good-security-questions/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00317.warc.gz | en | 0.945858 | 1,961 | 2.84375 | 3 |
On 6 April 2018, the Indian government reported that its Ministry of Defence website had been hacked. An error message was displayed on the home page instead of regular content, and the appearance of Chinese characters on the page caused a furore on social media. A tweet from Nirmala Sitharaman, the defence minister, confirmed the incident.
On the same day, nine other government websites were reported as inaccessible, displaying the error message: “The requested service is temporarily unavailable. Sorry for Inconvenience. It would be available soon.”
However, the National Informatics Centre (NIC), which develops and manages all Indian government websites, soon clarified that the website had not been hacked and that the issues were the result of a hardware error.
Gulshan Rai, head of the Computer Emergency Response Team (CERT), also refuted the website hack claims. “It is a hardware failure, affecting around 10 government websites […] The websites will be restored soon,” he said.
Although the incident was the result of technical issues rather than malicious intent, the sites still experienced more than eight hours of downtime – something unacceptable in today’s digital age, be it a government website or a business enterprise. It affects customers’ and, in this case, citizens’ perceptions and can lead to reputational damage and financial losses. Fortunately, there are standards available to help ensure the security of your IT infrastructure.
ISO 27001 is the international standard that describes best practice for an information security management system (ISMS). Certifying to the Standard demonstrates that your organisation is following information security best practice, and will help you to identify and tackle security threats and vulnerabilities.
IT Governance offers a range of ISO 27001 products and services to help your organisation, from training courses for staff to ISO 27001 certification. Find out more on our website, or download our free green paper, Information Security & ISO 27001: An introduction. | <urn:uuid:0f32f2ee-ba95-4764-bc69-2327038ddd60> | CC-MAIN-2022-40 | https://blog.itgovernance.asia/blog/hacked-indian-ministry-websites-reveal-chink-in-indias-digital-armour | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00317.warc.gz | en | 0.93296 | 401 | 2.53125 | 3 |
by Florian Riederer
From smartphones and smartwatches to connected cars and networked wearable medical devices, the highly connected world we live in has made normal what still felt almost like science fiction 15 years ago. This wide array of networked devices - the internet of things - has enabled great leaps in efficiency, letting us get more done and have more control than ever before. However, it has also opened up new attack surfaces that can be exploited.
Unfortunately, the early warnings of security experts about the theoretical opportunities for bad actors to infiltrate devices and cause harm have finally become the reality of our threat landscape. Examples range from the high-profile attack that shut down the Colonial Pipeline System this past May to hacks on domestic smart devices such as hijacked thermostats or baby monitors. Even devices that aren’t directly connected to the internet, but access it through mobile data, are at risk, and this risk is expected to increase as the introduction of 5G enables more and better-networked devices.
Some steps that can be taken to ensure the safety of your IoT network are the same precautions we should practice with our internet and computer use in general. Employing password best practices and enabling multi-factor authentication goes a long way toward providing more security. However, IoT devices carry unique risks relative to computers and traditional attack surfaces. Because they are specialized for specific functions and built to keep costs down, many IoT devices have relatively little computing power, and security protections would take up resources that manufacturers are unwilling to allocate.
The threats posed by attacks on IoT devices are also unique. While traditional ransomware attacks extort the victim by preventing access to data, ransomware attacks on IoT devices can pose threats of imminent, physical, and widespread harm, particularly when they target industrial equipment, such as the February attack on a water treatment plant in Oldsmar that nearly released contaminated water to the 15,000 residents served by the plant.
Protecting ourselves, our organizations, and each other from the increasing number and severity of attacks on IoT devices will require clear and efficient network supervision and visibility to identify and respond to unauthorized access. If you want a more detailed overview of the contemporary IoT landscape, I highly recommend this white paper from our partners at Gigamon. If you would like to learn more about how Atlantic Data Security fits into your IoT security needs, please reach out to us at email@example.com. | <urn:uuid:884e47b8-8536-4e5d-886d-6065e815d544> | CC-MAIN-2022-40 | https://atlanticdatasecurity.com/iot-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00317.warc.gz | en | 0.952227 | 502 | 2.625 | 3 |
Driving the Digital Transformation with Augmented Reality
In recent months, augmented reality (AR) has gained much attention across both business and consumer marketplaces. Although AR technologies have been around for at least 15 years, mainstream adoption is a recent phenomenon within the manufacturing industry. New breakthroughs in the affordability and applicability of AR technology have accelerated the rate of adoption. Traditional AR installations involved expensive equipment, a complex rollout, and a high degree of technical expertise. Now, a flood of mobile devices such as smart phones and tablets combined with some innovative software engineering tools have made it possible for developers across industries to achieve affordable AR solutions. Manufacturing companies large and small find themselves in a position to capitalize on AR and to pursue new opportunities that significantly boost operational productivity and enhance competitiveness.
What is augmented reality?
Within the realm of industrial manufacturing, augmented reality is really about two different environments converging or blending in a way that boosts the effectiveness and efficiency of plant operators. One environment is “real” (what you see, unassisted, in front of your own eyes) and the other is “virtual” (not “real”, but computer generated). Both of these environments can be understood in terms of a continuum, with real environments at one end and completely virtual environments at the other. What lies in between is augmented reality, which is, in essence, mixed reality.
For anyone who uses a mobile device for daily activities, AR presents a completely new way of engaging with machine devices and executing tasks. The technology of mobile devices (and the cameras within) is combined with access to new sources of real-time data (usually via a wireless network) and the conversion of that data into visualizations and graphics. This offers operators a blended view that allows them to virtually see “inside” a machine without having to open any doors.
Consider the implication of such capabilities on three areas pertinent to manufacturing:
Product development – Augmented reality applications in IoT can be effective in the product design review phase, when new products require testing and evaluation. AR makes it possible to evaluate 3D virtual models of new products, which can be easily modified, in their real context of use, without having to take the time or bear the cost of producing physical prototypes.
Maintenance – Suppose an operator’s machine breaks down. An AR app can diagnose the problem and visually guide the operator or maintenance person through quick and easy repairs, displaying superimposed instructions for the specific repair on the operator’s tablet.
Safety applications – New AR applications allow the user to “see” the inside of a closed metal cabinet (that houses machine components) and allows the user to diagnose an issue without ever having to physically open the box. This allows internal environmental conditions to be assessed while equipment is still in operation (without humans having to get too close). This increases overall reliability and reduces safety risk involved with digital transformation.
Generating exponential benefits through “end-to-end” integration
Sophisticated AR tools do require a high degree of integration to perform these specific functions. Elements such as the physical environment, data sources, graphical interfaces, product specifications (including software and connectivity compatibility) and artificial intelligence all need to work together. In fact, AR tools perform best when connected with the broader upstream and downstream processes across the entire manufacturing value chain. Naturally, such complex programming should not be the responsibility of the end consumer, and that is why open and inclusive vendor-developed technology architectures are important enablers to large scale deployment of AR applications.
We are just now uncovering the potential for this new generation of AR tools on the plant floor. Although much progress has been made to get to this point, recent advances in easier integration and practical use cases should help speed the adoption of these solutions within the manufacturing space. In fact, 10 years from now we will realize that 2018 was just the beginning.
This article was written by Peter Herweck, EVP Industry Business, Schneider Electric. Peter began his career at Mitsubishi where he served as Software Development Engineer. In 1993, he joined Siemens in the Motion Control for Machine Tools unit where he led various R&D projects. | <urn:uuid:ccd99786-12cc-4d1c-a72d-eb31f7333471> | CC-MAIN-2022-40 | https://www.iiot-world.com/augmented-reality/driving-the-digital-transformation-with-augmented-reality/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337680.35/warc/CC-MAIN-20221005234659-20221006024659-00317.warc.gz | en | 0.940372 | 971 | 2.71875 | 3 |
Today, AI is more accessible than ever. The entire transportation industry, from car makers to parking operators, cruise lines, and beyond, is beginning to embrace AI in its products and services. We’ll likely start to see more AI in everyday transportation: intelligent cars that can detect the driver’s mood and shift it by adjusting the climate and music, commercial airlines that use AI for intelligent maintenance strategies, or the Chinese Smart Panda bus, which operates in 10 cities, driving hundreds of passengers every day by itself.
In this article, we’ll be looking at a few more examples of artificial intelligence within transportation and how this is helping to meet several of the most common and persistent challenges in this area. But before that, let’s quickly review some of the challenges that the transportation industry faces today as it is looking towards AI.
So, let’s jump straight in.
Embracing AI in Transportation
The application of artificial intelligence (AI) in the transportation industry is driving the evolution of the next generation of Intelligent Transportation Systems. AI and its branch, machine learning (ML), are enabling transportation agencies, cities, and private car owners to harness the power of modern computing and communication technologies. These technologies are making mobility a much safer and greener activity.
High-end commercial CPUs, GPUs and IoT communication technologies such as LTE, 5G and LPWAN have created possibilities of several applications of Big Data and Artificial Intelligence in the Transportation sector.
We’ll also start to see more intelligent edge computing technology supported by high-speed connectivity to the cloud. The edge will be powerful enough to handle all AI decision-making at the device itself, without having to connect to a server in the cloud thousands of miles away. And with the help of 5G, these edge devices will also be capable of transmitting large amounts of data to be processed by AI analytics servers in the cloud.
What are the Challenges AI Can Solve in Transportation?
There are several persistent challenges throughout the transportation industry that have plagued the sector ever since its inception. Among them are safety, reliability, and efficiency, with pollution more recently becoming an increasingly important aspect of transportation requiring attention.
Safety is arguably the most important consideration for those working within the travel or transportation industries. In order for services to be successful in any shape or form, passengers and customers need to know that they or their belongings are in safe hands.
Technology has made increasing safety levels much easier over the years and now, with the advent of AI technologies that are becoming increasingly adopted by businesses and enterprises operating within the transportation arena, safety levels could about to reach even higher peaks.
Another top consideration for many businesses and enterprises operating within travel or transportation is the reliability of their services and vehicles. Passengers are much less likely to travel with operators, or in vehicles, that look or are known to be unreliable. The use of artificial intelligence in public transportation to enhance service reliability is one of the key drivers of its adoption within the industry.
Using artificial intelligence technologies, it is hoped that the ability to process and predict data and outcomes in much larger quantities than humans are capable of will allow travel and transport operators, as well as eventually the public themselves, the ability to schedule public and private transportation services in a significantly improved manner.
Being energy efficient is an increasingly important aspect of travel and transport as our journeys and commutes become ever more integrated with technology. While this undoubtedly has its benefits, it also means new technologies will need to manage their power supplies much more efficiently.
Artificial intelligence technologies will undoubtedly enhance the efficiency of the systems it integrates with; however, power will need to be used much more intelligently by all of the systems in play in order to truly utilize the potential of newer technologies.
With a large percentage of the world becoming increasingly environmentally focused as the effects of climate change are seen across the world, drastic reduction of polluting substances within the travel and transportation industries is required in order to secure their long-term sustainability.
Artificial intelligence could play a big role in developing and deploying new and innovative ways in which to deal with pollution as well as helping to enable scientists and engineers to come up with much more environmentally friendly methods to power and run vehicles and machinery for travel and transportation.
Five Examples of AI in Transportation
Currently, there are several ways in which AI is being used within transportation. As artificial intelligence in the transportation industry evolves and matures, the number of roles that AI can occupy and manage is almost certain to increase exponentially.
Some of the most common examples of artificial intelligence in transportation today are:
Autonomous vehicles are some of the most exciting new innovations to become a reality within transportation and could very well be the first step into a new future of autonomous transport. Artificial intelligence is vital within these driverless vehicles due to their processing, control and optimization capabilities.
Within autonomous vehicles, real-time data transmission and processing is a vital function and any disruptions to these processes could prove catastrophic in a real-life scenario. An AI’s ability to manage the transmission and processing of received data as well as optimize connectivity to ensure the best connection is always used will help make autonomous vehicles safer and much more widespread.
The video below shows what it was like to use a self-driving taxi from Waymo in 2019.
Nowadays, there really is an app for everything. This includes AI-powered real-time traffic updates through services such as Google Maps or Waze. By using location data collected from users’ smartphones, these apps are able to predict and analyse traffic conditions in your local area so as to better inform your travel plans.
These apps may not be around for long though, as they may soon face direct competition from autonomous vehicles themselves.
Who needs to plan for traffic on their smartphone when the car is already on the job?
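Under the hood, traffic services like these aggregate anonymized speed samples per road segment and compare them to typical free-flow speeds. The toy sketch below is purely illustrative: the segment names, thresholds, and classification logic are assumptions about the general approach, not Google's or Waze's actual method.

```python
from statistics import mean

# Anonymized speed samples (km/h) reported by phones on each road segment.
samples = {
    "segment-a": [62, 58, 65, 60],
    "segment-b": [12, 9, 15, 11],   # heavy congestion
}

# Typical uncongested speed for each segment (hypothetical values).
FREE_FLOW = {"segment-a": 60, "segment-b": 50}

def congestion_level(segment: str) -> str:
    """Classify a segment by comparing average reported speed to free flow."""
    ratio = mean(samples[segment]) / FREE_FLOW[segment]
    if ratio >= 0.8:
        return "clear"
    if ratio >= 0.4:
        return "slow"
    return "jammed"

# congestion_level("segment-b") == "jammed"
```

A real service would of course weight samples by recency, filter outliers, and feed the result into routing, but the core idea of comparing observed speeds to a baseline is the same.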
Traffic Management Solutions
Another way in which artificial intelligence technologies are used within transportation is in traffic management systems. Again, thanks to its processing, control and optimization capabilities, artificial intelligence can be applied to traffic management and decision-making systems to enhance and streamline traffic management and make our roads smarter.
The predictive abilities of AI are also of huge benefit to traffic management systems as they are able to recognize the physical and environmental conditions that can lead to or be the result of heavier traffic flow and congestion.
As an example, in India, Siemens Mobility is testing a prototype monitoring system that uses AI in traffic lights to finally put an end to the dreaded traffic jams.
Yes, this amazing technology could help ease Indian traffic jams.
Artificial intelligence is also now being used in law enforcement and is helping to identify and catch those who drink and/or text while driving. This can often be a challenge for human officers due to the speeds at which vehicles and passengers come into and out of view; with artificial intelligence, however, this is no longer an issue.
By using advanced analytical and data processing capabilities, AI could help to detect and identify when a driver is drinking or texting behind the wheel and alert any officers within the local area to intercept them.
An example of this in real life is the new radio from Motorola Solutions that brings an AI voice assistant to law enforcement vehicles. Police can now simply say a license plate, and the assistant will look up that information and reply within a few seconds.
An in-vehicle gateway such as Lanner’s V3S and V6S can provide the intelligence needed. It comes with a rugged casing, wide operating-temperature tolerance, wireless connectivity, and a high-performance processing unit.
Self-driving airplanes have been around for a while.
People are often surprised to find out that one of the earliest adoptions of artificial intelligence within transportation was, in fact, the autopilot system used in almost every commercial aircraft in service. While today it may not sound as futuristic as other applications in which AI is being tested, it is still an essential part of modern air travel.
Interestingly, the New York Times reports that only seven minutes of an average Boeing flight are controlled by a human, with the rest being handled by a computer. Unsurprisingly, the human-controlled parts of the flight are mostly take-off and landing.
AI and ML are becoming more than futuristic high tech. They are part of our daily lives; we use them every day without even noticing. We can find AI in our mobile apps, in the way Grammarly or Google Docs corrects our grammar mistakes, and in voice recognition, social media feeds, and more.
The transportation industry has already been using AI for a while, in aircraft autopilots and in the smartphone apps that can predict traffic jams. But now we are likely to see a dramatic increase in use cases of AI within transportation.
AI is getting stronger and more sophisticated with every passing day. Its wider adoption in transportation is only a matter of time.
Teachers today have more ways than ever to engage students while also streamlining their classroom operations. It is no surprise, then, that two of the world’s computer powerhouses — Google and Microsoft — offer tools to make it easier for teachers and students to work together. What follows is an overview of their offerings to help you determine which one is a better fit for your needs.
Google for Education
Google Classroom offers teachers a robust set of tools and resources that puts everything they need in one central location. With just a few clicks, teachers can distribute assignments to all students or just a select few. They can also create classes that are as precise or as broad as needed. Instant access from anywhere in the world allows teachers to send feedback, manipulate goals or distribute assignments — all in a paperless format that makes it easy to stay organized.
Teachers are not the only ones who benefit from using Google Classroom. Students receive reminders about assignments which helps them stay organized. No more lost papers and misplaced homework. Even a smartphone forgotten at home doesn’t mean that a student can’t complete assignments. Students can sign into their digital classroom on any computer or mobile device with their unique log-in code. Google Classroom facilitates communication between student and teacher so that any questions or issues can be handled in real time.
Microsoft for Education
Microsoft Classroom is the central location that holds all the classroom tools and resources a teacher needs to organize, motivate and collaborate with students. Easily organize multiple class sections, offer students feedback, generate assignments and collaborate with fellow teachers. With the versatility of a OneNote Class Notebook built right into Microsoft Classroom, teachers can enjoy features like School Data Sync where groups and log-ins for Office 365 are automatically created so students have access to the services they need to complete assignments and receive feedback and assistance.
Another feature of Microsoft Classroom is the ability to produce interactive lessons and presentations that are filled with dynamic elements, offering fresh, new ways of engaging students. With OneNote Class Notebook, students can collaborate with each other, attach pictures, sketch diagrams and more.
If your school isn’t using Google Classroom or Microsoft Classroom, you and your fellow teachers are missing out on an intuitive set of tools that can make working with students easier, more engaging and more streamlined. Looking for a reliable IT support partner in Ottawa that will provide your small business or school with the infrastructure and resources it needs to be more productive? We can help! Give Fuelled Networks a call at (613) 828-1280 or drop us an email at email@example.com.
Published On: 28th April 2016 by Ernie Sherman. | <urn:uuid:470efb50-b360-43f4-8bfc-fc0226e94c5f> | CC-MAIN-2022-40 | https://www.fuellednetworks.com/google-versus-microsoft-how-do-their-education-offerings-stack-up/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00517.warc.gz | en | 0.947046 | 554 | 2.703125 | 3 |
Smart cities or intelligent cities are not only about technology improving city services, but they are about improving the community experience as you live, work, and play. Yes, much has changed over the past 18 months, but city projects are moving forward and with a boost of energy because of the pandemic and new funding sources. The industry as a whole is finding new project opportunities centered around automation, remote operations, contactless services, public health and safety, and new ways to deliver legacy services to avoid the face-to-face interaction for safety purposes. A few key technologies directly aiding in smart city initiatives include Internet of Things (sensors, connecting assets, tracking assets, real-time alerting or intelligence), mobile applications, augmented or virtual reality, artificial intelligence, and machine learning.
Historically, smart city projects have centered around traffic management, smart lighting, and city asset management, and while those areas are expected to continue to be areas of focus, new use cases are coming into the mix. Under the American Rescue Plan and Coronavirus Relief Fund (CARES ACT), cities and public schools are receiving emergency funding to support in projects related to safety, healthcare, and administering city services in new and safe ways.
While researching IPv6, I decided it would be a good exercise to tell the short, but interesting story about IPv5. Now the Internet Protocol (IP) was not originally designed as a method of managing addresses on networks; it was intended as a technology to split the original network stack with Transmission Control Protocol (TCP) at layer four and IP at layer three. At the time, the design for TCP was struggling to solve two problems at the same time: how do we package data, and how do we send that data some place? That’s how we got to IPv4.
The History of TCP

TCP version 1 was designed in 1973. This was documented through RFC 675. TCP version 2 was documented in March 1977. In August 1977, Jon Postel realized they were going the wrong direction with the protocol. “We are screwing up in our design of internet protocols by violating the principle of layering. Specifically we are trying to use TCP to do two things: serve as a host level end to end protocol, and to serve as an internet packaging and routing protocol. These two things should be provided in a layered and modular way. I suggest that a new distinct internetwork protocol is needed, and that TCP be used strictly as a host level end to end protocol.”
At this point, TCP and IP were split, with both versioned as number 3 in the spring of 1978. Stability was added in the fourth revision, and that is how we got to IPv4. What happened to IPv5? It was a failed attempt to extend IP and solve some of IPv4's problems. IPv5 was built to support efficient delivery of streams of packets to either single or multiple destinations, with guaranteed data rates and controlled delay. In other words, it was attempting to solve quality-of-service issues in the original Internet Protocol, which was designed at a time when routers were not required to maintain state information. As the idea of streaming video and other new media became a reality, RFC 1190 was submitted as a formal specification of IPv5. Apple, Sun, IBM and a few others attempted to implement IPv5, but ultimately, general improvements in bandwidth, applications and compression allowed the modern network to grow around IPv4's problems.
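These version numbers survive today as the four-bit version field at the start of every IP packet header, with 5 still assigned to the experimental Internet Stream Protocol (ST-II, RFC 1190). As a minimal illustrative sketch (not taken from any of the RFCs), the field can be read like this:

```python
# The IP version is carried in the high nibble of the first header byte.
KNOWN_VERSIONS = {
    4: "IPv4 (RFC 791)",
    5: "ST-II / 'IPv5' (RFC 1190, experimental)",
    6: "IPv6",
}

def ip_version(packet: bytes) -> int:
    """Return the IP version number from a raw packet's first header byte."""
    if not packet:
        raise ValueError("empty packet")
    return packet[0] >> 4

# A minimal IPv4 header starts with 0x45: version 4, header length 5 words.
assert ip_version(b"\x45\x00\x00\x14") == 4
```

This is why IPv6 followed IPv4 directly on the wire: the value 5 was already spoken for by the stream protocol experiment.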
In the first quarter of 2012 alone, six million new malware samples were created, following the trend of increasingly prevalent malware statistics of previous years, according to PandaLabs.
Trojans set a new record as the preferred category of cybercriminals for carrying out information theft, representing 80 percent of all new malware.
In 2011, Trojans ‘only’ accounted for 73 percent of all malware; worms took second place, comprising 9.30 percent of samples; followed by viruses at 6.43 percent. Interestingly in 2012, worms and viruses swapped positions from the 2011 Annual Report, where viruses stood at 14.25 percent and worms at 8 percent of all circulating malware.
When it comes to the number of infections caused by each malware category, the ranking supports the hierarchy of new samples in circulation with Trojans, worms and viruses occupying the top three spots. Interestingly, worms caused only 8 percent of all infections despite accounting for more than 9 percent of all new malware. This is quite noteworthy as worms typically caused many more infections due to their ability to propagate in an automated fashion.
The figures corroborate what is well known: massive worm epidemics have become a thing of the past and have been replaced by an increasing avalanche of silent Trojans, cyber-criminals’ weapon of choice for their attacks.
The average number of infected PCs across the globe stands at 35.51 percent, down more than three percentage points compared to 2011, according to Panda Security’s Collective Intelligence data. China once again led this ranking (54.25 percent of infected PCs), followed by Taiwan and Turkey.
The list of least infected countries is dominated by European countries with nine out of the first ten places being occupied by them, the top three being Sweden, Switzerland and Norway. Japan is the only non-European country among the top ten nations with fewer than 30 percent of computers infected. | <urn:uuid:6dea60ef-448d-48f2-a424-77dda79166cc> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2012/05/07/ransomware-increases-in-prevalence-as-cyber-criminal-tactic/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00717.warc.gz | en | 0.942773 | 386 | 2.5625 | 3 |
Cybercrime is rising worldwide, across industries. In October 2021 a global cloud communications company reported losses exceeding $9 million due to a distributed-denial-of-service (DDoS) attack. In that same time period, the manufacturing sector saw a 641% increase in application-layer DDoS attacks over the previous quarter.
While some data breaches are caused by weaknesses in an organization’s virtual perimeter that allow hackers to exploit software vulnerabilities, a growing number sneak through connected IoT devices.
Security cameras, access control readers, and other physical security devices are often overlooked as a source of vulnerability, since they fall into the realm of the security team and not the IT department. Traditionally, physical security devices, like perimeter fences and door locks, were simply installed and left to do their jobs. Even as data centers began implementing IP-based technology and IoT devices, they didn’t always think about how these assets might make their networks vulnerable.
But physical security and information security are linked. There’s no difference in the impact whether a hacker accesses an organization’s server rooms physically or through a video surveillance camera, a piece of HVAC equipment, or an employee’s laptop. As cyber threats grow, physical security and IT must work together to safeguard network infrastructure.
Unifying physical and cybersecurity
A unified IT and physical security team can develop a comprehensive security program based on a common understanding of risk, responsibilities, strategies, and practices. First, the team should conduct a current posture assessment to identify devices of concern.
- Create an inventory of all network-connected cameras, door controllers, and associated management systems; identify their functions; and confirm their role/relevance.
- Perform a vulnerability assessment of all connected physical security devices to identify models and manufacturers of concern.
- Consolidate/maintain detailed information about each physical security device, including connectivity, firmware version, and configuration.
- Improve network design as needed to segment older devices and reduce crossover attack potential.
- Document all users who have knowledge of physical security devices and systems.
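A first pass at the inventory and documentation steps above can be sketched as a structured record per device. The field names and sample data below are illustrative assumptions, not taken from any standard or vendor tool:

```python
from dataclasses import dataclass

@dataclass
class SecurityDevice:
    name: str
    kind: str             # e.g. "camera", "door-controller"
    ip: str
    firmware: str         # installed firmware version
    latest_firmware: str  # current vendor release
    users: list           # people with knowledge of the device

def needs_attention(device: SecurityDevice) -> bool:
    """Flag devices whose firmware lags the vendor's current release."""
    return device.firmware != device.latest_firmware

inventory = [
    SecurityDevice("lobby-cam-01", "camera", "10.0.8.11",
                   "2.1.0", "2.3.2", ["j.smith"]),
    SecurityDevice("door-ctl-03", "door-controller", "10.0.8.42",
                   "5.4.1", "5.4.1", ["ops-team"]),
]

stale = [d.name for d in inventory if needs_attention(d)]
# stale == ["lobby-cam-01"]
```

Even a simple record like this gives the unified team a shared, queryable view of which physical security devices need reassessment.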
Closing the gaps
Recommended improvements should cover individual devices as well as the entire system. These can include ensuring all network-connected devices are managed by IT network and security monitoring tools as well as implementing end-to-end encryption to protect video streams and data in transit and storage. Devising and implementing a schedule of ongoing testing and reassessment of risk associated with all inventoried devices is an important part of managing and mitigating risk.
Existing configurations and management practices for physical security devices can be improved by using secure protocols to connect devices to the network, disabling access methods that don’t support adequate security protection, verifying configurations of security features and alerts, and replacing defaults with new passwords that must be changed regularly.
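Those configuration practices can also be encoded as automated checks. The sketch below is hypothetical: the configuration fields and rule wording are assumptions for illustration, not any vendor's actual API.

```python
# Hypothetical device configuration pulled from a management system.
config = {
    "protocol": "https",        # secure protocols only
    "telnet_enabled": False,    # legacy access methods disabled
    "alerts_enabled": True,
    "password": "Tr0ub4dor&3",
    "default_password": "admin",
}

def audit(cfg: dict) -> list:
    """Return a list of human-readable findings for a device config."""
    findings = []
    if cfg.get("protocol") not in ("https", "tls"):
        findings.append("insecure connection protocol")
    if cfg.get("telnet_enabled"):
        findings.append("legacy access method (telnet) still enabled")
    if not cfg.get("alerts_enabled"):
        findings.append("security alerts disabled")
    if cfg.get("password") == cfg.get("default_password"):
        findings.append("factory default password in use")
    return findings

assert audit(config) == []  # a hardened config produces no findings
```

Run on a regular schedule, checks like these turn one-time hardening into an ongoing control.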
Another best practice for protecting network security is to implement a layered strategy that includes multifactor access authentication and defined user authorizations. Organizations can also improve update management by defining who is responsible for tracking update availability and for vetting, deploying, and documenting updates on all eligible systems and devices.
Developing a product replacement strategy
A posture assessment can help determine which devices and systems should be replaced because they present a high cyber risk. When developing replacement programs, organizations should prioritize strategies that support modernization for both physical and cybersecurity. One effective approach is to unify physical and cybersecurity devices and software on a single, open architecture platform with centralized management tools and views.
Replacement programs should also focus on cybersecurity features, including data encryption and anonymization, that are built into a device’s firmware and management software. Another important consideration is looking at a vendor’s capabilities to support a solution life cycle of up to 10 years, including ongoing availability of updates for firmware and management system software. Vendors should conduct their own penetration tests on a recurring basis to catch any vulnerabilities that could have been missed during product development and guard against new forms of cyberattack.
With cyberattacks increasing, organizations must implement effective measures. An important step toward reducing risks to the IT network associated with physical security devices is to integrate physical security and IT and develop a coordinated strategy for hardening systems. Vigilance is key, and it should extend to every partner in the chain of your physical security system and devices. | <urn:uuid:f37fad36-553a-49a5-833d-954b09ed228b> | CC-MAIN-2022-40 | https://www.missioncriticalmagazine.com/articles/94297-cybersecurity-risks-hide-in-physical-security-systems | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00717.warc.gz | en | 0.934011 | 872 | 2.625 | 3 |
Financial losses, data breaches and reputational damage are just some of the ways a cyber-attack can hit an organisation hard.
The Petya and WannaCry cyber-attacks in May and June are two of the biggest in history and impacted the finances of companies throughout the globe. A recent report by the insurers Lloyd’s of London said a major cyber-attack has the potential to cost as much as a natural disaster.
WannaCry, which affected numerous organisations, including the NHS, spread to 150 countries and is estimated to have cost the global economy £6bn.
Petya caused problems with shipping and invoicing for Neurofen manufacturers Reckitt Benckiser, who are expecting to make losses of about £100m as a result of the attack. Some of the world’s largest organisations including Cadburys and Oreo cookies manufacturer Mondelez were also affected by Petya.
A cyber-attack can also lead to a fine for a data breach - a prospect that will become even more real when the new General Data Protection Regulation is introduced in May 2018.
How WannaCry and Petya worked
To begin with, both attacks were referred to as ransomware attacks because they locked people out of their computers and demanded payment to let them back in.
Some cybersecurity experts now believe Petya was not a ransomware attack because it was incredibly difficult to pay the hackers. Ransomware attacks usually make it very easy to make payment. They sometimes even offer step by step guidance and a help centre.
Instead, they believe the malware which they are now calling NotPetya, was designed to spread damage rather than collect money. They have suggested the attack may have been disguised as ransomware to make it appear to be criminal led when it may have been state sponsored.
The malware initially spread through an accounting program used by organisations working with the Ukrainian government. It affected several parts of the country’s infrastructure including banks, airports and railways. It then spread globally through phishing emails, which are disguised as legitimate communications but ask for sensitive information like passwords.
WannaCry and Petya both exploited the same vulnerability in the legacy Microsoft operating system Windows XP and Windows Server 2003. Legacy systems rarely have the necessary security updates, issued in the form of patches, to protect them from the latest threats. Attackers tend to exploit these shortcomings.
Due to the extent of the WannaCry attack, Microsoft did issue a patch for both platforms but some organisations delayed implementing it before Petya. In addition, up to date systems which hadn’t implemented a patch from March 2017 were also vulnerable to the attacks.
How to protect yourself
Petya and WannaCry reinforced the need to take two crucial protective measures - updating legacy systems and using patches to protect against new threats.
The importance of doing more to educate users on how to prevent malware spreading was also evident. All employees should be taught how to recognise suspicious emails. Ransomware usually needs users to carry out actions like clicking on a link, or downloading an infected attachment.
It is also impossible to overstate the importance of backing up data in case you are hit. You can’t be held ransom for data you can access elsewhere.
To protect yourself effectively, or at least lessen the impact if you are hit, you need a layer of cyber security (opens in new tab) measures. The most fundamental ones are:
● Anti-virus software which needs to be kept up to date. Cloud based software is a good option because it’s always current.
● Anti-spam software to filter or block junk email, which is often used to instigate computer infections.
● Firewalls to prevent unauthorised access to your networks.
● Up to date systems which are kept protected using patches.
● An additional DNS layer to protect all devices, including phones and tablets, from malicious activities.
● Unified Threat Management which combines a range of applications to carry out several security functions from one system.
Best practice around using passwords should also be followed. This includes changing entire passwords regularly - not just one or two characters. Bots can try millions of combinations per minute to crack passwords. Don’t reuse the same password either because if it’s leaked hackers can use it to get access to your other accounts too.
Simple housekeeping measures like deleting old user accounts will also help you keep on top of your cyber security. Users should be restricted from having access to areas of your network which they don’t need. This will help prevent infections from spreading.
The best time to protect yourself is now
This means you need to take every step you can to protect yourself and you need to do it today. No one can predict when they might be attacked, so being prepared at all times is the best approach.
WannaCry and Petya may have mainly affected large organisations but businesses of all sizes should protect themselves.
A study by the Federation of Small Businesses reported that small businesses are bearing the brunt of cyber crime. They found that 19,000 cyber crimes are committed against small businesses in the UK every day. Although many small businesses are taking steps to protect themselves, security standards vary and more can be done.
Cyber criminals are getting increasingly sophisticated and attacks are generally automated now. Bots can be used to scan operating systems for vulnerabilities so a mass attack that catches as many people as possible can be deployed.
Sam Reed, Chief Technology Officer at Air IT (opens in new tab)
Image source: Shutterstock/Martial Red | <urn:uuid:9a74720a-2dea-406b-b6bf-4585a240bffa> | CC-MAIN-2022-40 | https://www.itproportal.com/features/what-you-need-to-know-about-the-petya-and-wannacry-cyber-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00717.warc.gz | en | 0.962146 | 1,144 | 2.6875 | 3 |
Phishing is one of the oldest tricks in the attack book. It is a simple social engineering attack that is cheap and easy to carry out, and has never failed to deliver results for attackers. With more and more interactions moving online, opportunities for attacks have seen explosive growth, so it is no wonder that phishing remains one of the highest concerns for defenders.
What are phishing attacks?
Phishing is a common tactic used by online scammers and hackers to trick users into sharing their online credentials or other sensitive information. It is a type of “Social Engineering” that is usually carried out by sending a genuine and trustworthy looking message (E-mail, SMS, social media etc.) containing a link to a deceptive website. Once there, users are asked to provide their authentication credentials to log in, without suspecting they’re actually proving an attacker with their precious password.
Once the attacker has the credentials in hand, it can immediately be used to login to the real service and easily steal data or funds, damage online assets, impersonate the victim and so on. Since this “hack” is done without ever employing sophisticated cyber attacks against the breached system, it can take a while to detect and by the time it is, irreparable damage has been done to the user and the organization.
In order to increase their success rates, attackers try to perfectly imitate the appearance and user experience of the real service. More sophisticated schemes are required as users get more educated about the dangers of phishing, but attack methods are constantly adapting and continue to remain effective. Online scammers have a growing range of tools to imitate email addresses, web domains and also SMS and phone calls that are used as a second factor of authentication.
In many cases it can be very hard to distinguish between a phishing message and a genuine one. Moreover, scammers tend to design and phrase these messages in a way that will prompt an immediate action by the user, typically asking users to perform a “periodical password change” or “security audit”. It is not unusual to see fraudulent messages threatening users with account lockout or deletion if immediate action is not taken, hurrying them to act without sufficient attention. The same can be said about the web domains used in many attacks – both email addresses and destination URLs can be very similar to the official versions and trick even the keenest eye.
How do you protect against phishing attacks?
There are several phishing protection approaches available. Without going into the specific nuances of each technology, they can be roughly classified into four categories:
- Monitor links sent across all possible communication channels and user devices, and block all the malicious ones. This is very much a ‘mission impossible’ and therefore generally accepted as something that’s done on a best efforts basis.
- Prevent users from sending sensitive data to unknown or suspicious sites using a Data Leakage Prevention (DLP) solution. While this may work well when users are on the corporate network, once they are off it, it is next to impossible to monitor all their digital interactions and block sensitive information from being sent. It is also highly intrusive on privacy.
- Enforce policies that require users to frequently reset/refresh their passwords, so if they are phished, the attacker has a shorter window of opportunity in which to operate. This measure has only limited efficacy, but more importantly, it is very onerous on users that are asked to perform frequent password updates.
- Educate employees to make them harder to fool. There are various tools in-market that routinely send users simulated phishing emails to see how they respond, and educate them if they fail to respond appropriately to phishing emails.
How do you prevent phishing attacks?
There are many solutions that help businesses and individuals protect against phishing attacks. But the best way to avoid phishing altogether is to use authentication credentials that are hard to phish. Passwords are easy to phish. OTP codes are also easy to phish, even if the attacker is not in possession of the device needed to generate a code.
Going passwordless is highly effective in preventing phishing attacks. There is nothing the user knows that he can be fooled into disclosing to an attacker. The effort required to defeat a passwordless authentication solution depends on the authמentication methods used, but in almost every case it is extremely difficult. For example, using a fingerprint captured by a sensor embedded in a registered mobile device requires the attacker to gain possession of the registered device and then be able to spoof the user’s finger print – both are considered very difficult. | <urn:uuid:41b43c69-fccc-42b0-a627-c7086ab1348d> | CC-MAIN-2022-40 | https://doubleoctopus.com/octocampus/the-ultimate-phishing-prevention-solution/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00717.warc.gz | en | 0.949977 | 950 | 3.46875 | 3 |
Identity security describes the proactive approach to safely controlling user and system information that is used to authenticate and authorize user identities for access to secure resources. It is an essential aspect of the identity and access management (IAM) space and serves as the cornerstone for security in any organization.
In house identity security
Identity security has always been an important consideration for IT admins, which is why we have directory services like Active Directory® and OpenLDAP in the first place. Without an authoritative identity provider (IdP), each end user would be responsible for making sure their credentials are secure. Not only is this inefficient, but it would require that each user be adept in the best practices for securing identities. When you consider the most common passwords for 2016 were ‘12345’ and ‘password’, according to Time magazine, it’s easy to see how this approach is far less secure than extensively tested and proven solutions managed by security experts. Additionally, there would be no way to authenticate user identities to authorize access to the domain.
Active Directory solved this problem by implementing the concept of a domain controller — essentially the bouncer for your domain. Back when everything was on-prem, a server dedicated to authenticating and authorizing user identities and requests for access made a lot of sense. However, many admins are beginning to discover the limitations of traditional approaches to identity security as more and more infrastructure transitions to the cloud.
Why is identity security important?
Experience has taught us that identity security is the foundation for a secure IT infrastructure. The challenge is controlling the flow of information to allow for frictionless access for the right people while minimizing the risk from potential attackers. Shifting identities to the cloud has only added complexity to this balance.
As any admin will tell you, a compromised user identity can be devastating — especially when you consider the modern user identity is spread across a huge variety of resources. As a result, the thought of hosted identities makes a lot of admins uneasy. Yet, that won’t stop the world moving to the cloud. So what is an IT admin to do in this new, uncharted territory?
Identity security in the cloud
Fortunately, the IAM market is exploding with new solutions aimed at securing cloud identities. What they don’t tell you is that most of them still require an on-prem directory service instance, typically Active Directory, to act as the authoritative IdP.
Directory-as-a-Service® is unique in that it serves as a comprehensive cloud replacement for your on-prem directory service with the power to centralize control throughout your domain. Think of it as the directory service for the cloud era, which provides platform agnostic management for users and resources both on-prem and in the cloud.
Directory-as-a-Service utilizes multiple cryptographic functions to ensure that user credentials are entered and stored using the latest one-way hashing and salting techniques, and never stored or transmitted as plain text. Further, all data is encrypted at rest and in transit. We also encourage lengthy, complex passwords in conjunction with multi-factor authentication (MFA) to add additional layers of security at login. Admins can also utilize SSH keys as an alternative or in addition to customized security settings. Directory-as-a-Service provides these options so admins can apply various levels of security that are appropriate for different roles, groups, and their organization as a whole.
Admins can leverage secure identities to provision or restrict access to resources at an individual or group level, configure custom password complexity settings, run commands against individual or groups of systems, and much more. Users can then federate their core JumpCloud credentials to any number of resources like systems, applications, email, RADIUS, and much more — thus, providing True Single Sign-On™ to any of their provisioned resources.
If you would like to learn more about how Directory-as-a-Service can secure your cloud identities, drop us a note. Alternatively, sign-up for a free IDaaS account and see what a true cloud directory could be for you. Your first 10 users are free forever. | <urn:uuid:12b9a5bf-ae08-40f9-81df-293bf54b2900> | CC-MAIN-2022-40 | https://jumpcloud.com/blog/what-is-identity-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00717.warc.gz | en | 0.938789 | 844 | 2.515625 | 3 |
The operating habits of users render passwords as the primary mechanism to secure any application or desktop of little real value. We live today in a world where it is possible to create credential-stealing malware from “do it yourself” kits available on the Internet and where high profile organisations are routinely targeted by criminals. These developments coupled with the fact that users also appear happy to trade passwords for a bar of chocolate in railway station surveys, begs the question, is the era of password authentication finally coming to a close?
From the first time systems administrators looked to restrict access to applications, the default authentication mechanism has been the “password”. Since then, passwords have been proven time and again to be far from foolproof. Everyone recognises that short passwords based on easily guessed names etc., which sometimes appear to be all that many users can memorise, are easily cracked. Where administrators have policies to weed out weak passwords by insisting on minimum lengths, character requirements and password lifetimes, it is by no means unusual for users to employ the ‘ultra-secure’ yellow sticky note to keep the latest login credentials handy. In fact, it is straightforward to argue that long passwords which are regularly changed lead to more security problems than a reasonable password that is kept confidential.
Considering the computers that users routinely use in everyday business, it is now common for business laptops and desktop machines to include fingerprint scanners. This provides users with either a supplement to or an alternative to traditional password authentication systems. Indeed, fingerprint recognition is now often found even on machines targeting the consumer market, and not only PCs but many tablets as well. In addition several vendors now provide capabilities to employ smartcard authentication mechanisms.
Given that alternatives to password authentication on computers are widely available in large parts of the installed base, why is it that the active use of any additional form of authentication appears to be sporadic? Part of the reason could well be that few enterprise systems management tools are easily able to exploit such technologies in the central authentication repositories, although this is changing.
A more likely explanation is that there is little discussion or co-ordination inside many organisations around how robust the security of their desktop and laptop machines needs to be, or what form it should take. In companies where security is essential, it is becoming more common, for at least certain categories of users, to employ some form of additional authentication beyond the password, quite often by utilising a one-time-password system, such as a key-fob display or by sending one time passwords to mobile phones via SMS.
This does raise the question of why such systems are not more widely deployed, especially as recognition is growing of the need to protect better any sensitive data held on systems. It is clear that few users are happy to employ additional authentication protocols, as many believe these to add complexity to their logon processes.
This is a problem, as research we have carried out over the years highlights that few people outside of IT and compliance / auditor functions understand the business requirements to secure their systems robustly. Educating all users on the importance of IT security, and the steps they should adopt in their daily use of computers, has clear benefits by raising understanding, which ultimately helps improve all aspects of data security. Even with better authentication systems, inappropriate usage of systems remains a major risk to organisations.
It remains to be seen whether the increasing raft of privacy regulations and other compliance and governance requirements will force organisations to ramp up PC security authentication. This could trigger the adoption of secondary authentication mechanisms and anyone that does not take steps may be placing the organisation at risk. Should they do so, we are likely to see a rapid take up of secondary authentication put in place alongside more robust password requirements. One obvious place where regulatory pressure may actually force organisations to implement something they arguably should be doing anyway is the requirement to better protect their business assets. Such moves would also help protect and the company’s brand reputation, a clear priority for many organisations.
Tony is an IT operations guru. As an ex-IT manager with an insatiable thirst for knowledge, his extensive vendor briefing agenda makes him one of the most well informed analysts in the industry, particularly on the diversity of solutions and approaches available to tackle key operational requirements. If you are a vendor talking about a new offering, be very careful about describing it to Tony as ‘unique’, because if it isn’t, he’ll probably know. | <urn:uuid:84322d5b-602e-4309-ab71-a692ff6af27d> | CC-MAIN-2022-40 | https://www.freeformdynamics.com/it-risk-management/the-evolution-of-desktop-security/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00117.warc.gz | en | 0.954591 | 901 | 2.65625 | 3 |
Security threats were once visible and easily identifiable, today’s cyber threats are invisible and anonymous. Where once warfare had clear rules and boundaries, modern cyber warfare is largely anarchic and without borders. As a result, governments and corporations alike are struggling to identify threats, let alone combat them effectively. This calls for an entirely new security discourse.
Cyberwarfare is Internet-based conflict involving politically motivated attacks on information and information systems. Cyberwarfare attacks can disable official websites and networks, disrupt or disable essential services, steal or alter classified data, and cripple financial systems Cyber warfare has been defined as “actions by a nation-state to penetrate another nation’s computers or networks for the purposes of causing damage or disruption”, but other definitions also include non-state actors, such as terrorist groups, companies, political or ideological extremist groups, hacktivists, and transnational criminal organization
Examples of Cyberwarfare-
• In 1998, the United States hacked into Serbia’s air defence system to compromise air traffic control and facilitate the bombing of Serbian targets.
• In 2007, in Estonia, a botnet of over a million computers brought down government, business and media websites across the country. The attack was suspected to have originated in Russia, motivated by political tension between the two countries.
• Also in 2007, an unknown foreign party hacked into high tech and military agencies in the United States and downloaded terabytes of information.
• In 2009, a cyber spy network called “GhostNet” accessed confidential information belonging to both governmental and private organizations in over 100 countries around the world. GhostNet was reported to originate in China, although that country denied responsibility.
Espionage: Espionage is basically taking information that wasn’t meant for you. In the case of cyber warfare, you’re going to be stealing tactical and strategic information: information about troop movements, the strengths and weaknesses of weapon systems, the dispositions of various and anything else about sensitive (read: necessary to wage war) resources that might be important to know.
Sabotage: Also called “direct action,” this is when we take an active role and go out there and do something. In cyber warfare sabotage can be something as benign as dropping a government’s website to causing a nuclear meltdown at a nuclear plant. It’s a pretty broad phrase, but just remember it means “do something” whereas espionage here means “learn something.”
Hillary Clinton– “We need a military that is ready and agile so it can meet the full range of threats and operate on short notice on every domain, not just land, sea, air, and space, but also cyber space,”.
Barrack Obama – Look, we’re moving into a new era here where a number of countries have significant capacities,” Obama said. “But our goal is not to suddenly, in the cyber arena, duplicate a cycle of escalation that we saw when it comes to other arms races in the past, but rather to start instituting some norms so everybody’s acting responsibly.”
Cyber security is not simply a clear-cut technical issue. It is a strategic, political, and social phenomenon with all the accompanying messy nuances. Therefore, cyber reality must be examined with a scientific rigour by all disciplines, enabling an informed public debate. It is both morally essential and rationally effective for the responses to be formulated through a democratic process. | <urn:uuid:c3e3b20b-57cb-4e91-953d-720c634aef36> | CC-MAIN-2022-40 | https://cybersecurityhive.com/cyberwarfare-security-threat/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00117.warc.gz | en | 0.941138 | 740 | 3.296875 | 3 |
What is WORM?
WORM (Write Once Read Many) is the practice of saving data in a form that cannot be modified or deleted until some period of time has elapsed. It's intended largely to help organizations meet regulatory and legal retention requirements on their datasets.
My experience with WORM.
The first WORM device I ever recall seeing in person was a SCSI-attached CD Library; full of discs that were all shiny and gold, and that’s how I’ll forever think of WORM. Once data is laid down on a track, it’s read-only, forever. Not long after that I saw a presentation on Dell EMC™’s Centera® for the first time, which was described by my sales rep at the time as a black-hole for your compliance data, where you can save data that you must keep, but almost never read back. It’s fairly likely that this is how many of you first encountered WORM as well. When I first started using Rainfinity FMA and Centera as a customer a few years later, we came to the conclusion that infinite retention was best, because nobody could make up their minds otherwise, and disk was cheap. In recent years I’ve seen the industry shift towards companies becoming more concerned with not just keeping data long enough, but also getting rid of it when it no longer must be kept. Datadobi of course has a unique connection there, as our four founders were part of Dell EMC’s Centera engineering division. Although we have tons of migration experience in the Centera to NAS and Centera to ECS space, this post focuses mainly on migrations of file servers (NAS) with WORM attributes.
How does WORM work on NAS?
Ok, now that we’ve established what WORM is and why you might be legally obligated to use it, what does that really mean in your data center? It means that there must be some mechanism for setting a retention date, and then some separate mechanism to commit the data (basically make it immutable once that date is set). The most common NAS protocols on the market, SMB (including CIFS) and NFS, have no specific mechanism to support WORM attributes. As a result, WORM attributes have been stored in a fairly common manner across most platforms:
- Use the file's access time (atime) as the retention time. The only distinction is that these access times end up in the future, which would never normally be the case for any file, since the access time ordinarily records when a file was last read.
- To commit the file, you make it no longer writeable. On NFS this effectively means a 'chmod -w' (removing the write permission from all parties with access to the data). On SMB it means setting the file's 'read-only' attribute.
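As a rough illustration, the two conventions above can be mimicked with ordinary filesystem calls. This is plain Python against a local filesystem, and the function name is my own; nothing here is enforced the way a WORM-enabled filer enforces it:

```python
import os
import stat
import time

def set_retention_and_commit(path: str, retain_until: float) -> None:
    """Encode the retention date as a future atime, then 'commit' the
    file by stripping every write bit. NOTE: a plain chmod has none of
    the enforcement a real filer (SnapLock, FLR, SmartLock) provides;
    this only mimics the on-disk convention."""
    st = os.stat(path)
    # Keep mtime as-is; push atime into the future to hold the retention date.
    os.utime(path, (retain_until, st.st_mtime))
    # Commit: the NFS 'chmod -w' / SMB read-only equivalent.
    mode = stat.S_IMODE(st.st_mode)
    os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

# e.g. retain a file for one year from now:
# set_retention_and_commit("records/report.pdf", time.time() + 365 * 86400)
```

On a real WORM-enabled volume, the array itself would reject any later write or delete until that future atime has passed.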
Storage vendors like to use branded terms to give you a sense of security when storing data like this. NetApp® has SnapLock®, Dell EMC VNX® and Celerra® have FLR (File Level Retention), and Dell EMC Isilon® has SmartLock®. Most also throw around 'compliance' or 'governance' terms, which imply hardware-enforced retention.
In all NAS implementations that I've seen thus far (quite a few), this is basically how it's done. The directory, file system, or volume that is to contain WORM data must be created in a special manner before any data is placed within it, so that the system knows that once data is committed, it cannot be modified or deleted until the retention date is met. The only exception to this is a so-called privileged delete, which is not something a user could ever do; only a storage administrator can, and only on a system that is not in a governance or compliance mode.
If you think about it for a while, you might come up with the idea: well, I can just change the time on the storage system to the future, and then I can delete the data today (just trick it). On compliance WORM systems this is not possible either, because those systems have a separate clock, called a compliance clock, that only counts up.
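A toy model shows why winding the system clock forward or backward doesn't help. Real arrays persist this state in hardware or firmware, so this in-memory Python version is purely illustrative:

```python
import time

class ComplianceClock:
    """Toy model of a WORM 'compliance clock': it only ever moves
    forward, so turning the system clock ahead (or back) cannot be
    used to expire retention early. Purely illustrative; real arrays
    keep this state in the filer itself."""

    def __init__(self) -> None:
        self._last = time.time()

    def now(self) -> float:
        # Never return a value earlier than one already handed out,
        # even if the system clock has since been turned back.
        self._last = max(self._last, time.time())
        return self._last

    def may_delete(self, retain_until: float) -> bool:
        # Deletion is only allowed once the compliance clock itself
        # has counted past the retention date.
        return self.now() >= retain_until
```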
Migration of WORM data?
So you have WORM data on some old systems and you need to move it to some new system. How do you do it?
The magic is all in the order of operations.
I’ll start with an analogy.
I saw this random Facebook post (one of those viral ones that everybody shares) the other day and it basically stated a simple math question and asked people for their answer. It looked something like this:
1 + 4(2+3)=?
For those of us who remember some middle school math, you’ll get 21. But why? Because you have to do things in the proper sequence to get the correct result.
If you did this problem in the wrong order, you might get 25, or perhaps something else entirely.
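The analogy checks out in Python, where operator precedence encodes the same ordering:

```python
# Order of operations decides the result, just like the migration steps.
assert 1 + 4 * (2 + 3) == 21   # correct precedence: multiply before adding
assert (1 + 4) * (2 + 3) == 25  # doing the steps out of order gives 25
```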
This same premise holds true for migrating WORM data because you must:
- Copy the data and ensure absolute integrity.
- Set the retention date on the target to match the date on the source.
- Read back that retention date on the target to ensure it was set correctly.
- Only once you’re sure the data and timestamps are correct, set the proper permissions.
- Commit the file so that it’s immutable.
- Provide a chain-of-custody to be able to prove that all the data made it from the source to the target correctly with the same attributes.
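Purely for illustration, and with hypothetical names (this is not DobiMigrate's implementation), the ordering above might be sketched as:

```python
import hashlib
import os
import shutil
import stat

def md5_of(path: str) -> str:
    """Content hash used to prove the copy is byte-identical."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def migrate_worm_file(src: str, dst: str) -> dict:
    """Hypothetical sketch of the WORM migration order of operations.
    The retention date travels as the source file's (future) atime."""
    # Capture source attributes first; reading the file later could
    # disturb its access time on some mounts.
    s = os.stat(src)
    # 1. Copy the data and ensure absolute integrity.
    shutil.copyfile(src, dst)
    dst_md5 = md5_of(dst)
    if md5_of(src) != dst_md5:
        raise IOError("hash mismatch for %s" % src)
    # 2. Set the retention date on the target to match the source.
    os.utime(dst, (s.st_atime, s.st_mtime))
    # 3. Read the retention date back to ensure it was set correctly.
    if int(os.stat(dst).st_atime) != int(s.st_atime):
        raise IOError("retention date not applied to %s" % dst)
    # 4. + 5. Set the proper permissions, then commit (strip write bits).
    mode = stat.S_IMODE(s.st_mode)
    os.chmod(dst, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
    # 6. Return a chain-of-custody record for the final report.
    return {"source": src, "target": dst,
            "md5": dst_md5, "retain_until": int(s.st_atime)}
```

The critical point is the sequence: set and verify the retention timestamp before committing, because once the file is immutable there is no second chance.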
DobiMigrate can do this, and here’s how using the same steps above:
- DobiMigrate performs an MD5 hash of the source file and compares it with the MD5 hash of the target file after it's copied. If the hashes don't match, it's not a valid copy. Incremental copies are performed automatically (usually every hour) up until you're ready to perform a cutover. Once the scheduled cutover window begins, DobiMigrate makes the source shares or exports read-only automatically (on many systems). DobiMigrate then performs a final incremental copy and creates the target shares and exports to match the source as closely as possible.
- After the cutover, for a WORM migration, DobiMigrate adds a final step of 'Copy WORM'. During this step the MD5 hashes that have been calculated are compared and, if they are correct, DobiMigrate will set the access time of the file in the future to match the value held on the source (this is the retention date, i.e. how long the data has to be kept around).
- It will now read back that date to ensure that it's accurate.
- Now that the data is correct and the timestamps are correct DobiMigrate opens the file handle once and sets the proper permissions.
- With the file handle still open, DobiMigrate will make the file immutable and then close the file.
- Lastly, a final report is provided for both the cutover details, but also a CSV final report of all the data on the source, the matching copy on the target, and all the proper attributes to give a chain-of-custody level of detail.
What platforms do you support?
See, that's the great thing about being protocol-based; the easy answer here is that anything that speaks SMB/CIFS or NFSv3 is supported. We do of course have API integration with most of the big NAS platforms on the market, and more are being added as we speak. This API integration is a nice-to-have, however, and not a necessity. To date, in the WORM space we've tested compliance migrations with Dell EMC VNX (FLR), Dell EMC Isilon (SmartLock, including compliance mode), and NetApp SnapLock. That doesn't, however, mean that other platforms won't work.
I have a guy that knows [tool_name_here].
Why can’t I just use that?
I’ve been asked this many times by customers and partners alike:
The short answer.
The importance of an organization’s unstructured data today, especially WORM data, cannot be overstated. There is a reason that you must keep this data around and use technology to enforce and prove that you are keeping it. Old CLI-based tools are of no practical use for a WORM migration whatsoever. Are they going to give you the validity checking, the ability to set the atime and commit separately, or the easy error checking (like when unexpected min/max/default retentions have been set and shouldn’t be)? No, but DobiMigrate can.
The long answer.
Host-based tools, like RoboCopy, DellEMCopy, or rsync:
RoboCopy, Xcopy, RichCopy, DellEMCopy, and rsync have historically been the usual go-to file migration tools in a storage administrator’s toolbox. Fundamentally, there is a lot wrong with them. Most of the data moves across just fine, but there is no guarantee. And if we say that 99.9% of the data made it, is that enough? That is losing one file in every thousand. How about 99.99%? That is still losing one file in every ten thousand. Data corruption is fairly common in file migrations, but unless you have validation of the data that you’ve moved, rather than verification by exception (an error or failure in a log file), all you’ve proved is what didn’t make it, not what did. Perhaps ignorance is bliss, but that’s not what I would want to say to a regulatory compliance officer.
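To make those percentages concrete, here is a quick back-of-the-envelope calculation for a hypothetical migration of 10 million files (the file count is an assumption for illustration):

```python
def expected_lost_files(total_files, per_file_success_rate):
    """Expected number of files lost or corrupted at a given per-file success rate."""
    return total_files * (1 - per_file_success_rate)


# For a hypothetical 10-million-file migration:
for rate in (0.999, 0.9999):
    lost = round(expected_lost_files(10_000_000, rate))
    print(f"{rate:.4%} success rate -> ~{lost} files lost")
```

Even at "four nines" of per-file reliability, a large migration silently drops on the order of a thousand files, which is exactly why per-file validation matters.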
With any scripted tool, you also introduce the opportunity for significant human error.
If you take 10 sysadmins and give them the same tool, same source, and same target, you’ll likely get 10 different scripts based on their own past experience and, let’s be honest, a few last-minute Google searches. If that idea doesn’t make you nervous, it should, especially when it comes to regulated data sets like PCI (credit-card processing) or PHI (Protected Health Information).
NDMP-based tools:
For the uninitiated, NDMP is the protocol used on most NAS systems to perform backups. Because a backup is just another copy of the data somewhere else, several attempts have been made to use this protocol to perform migrations. I once explained to an account team that using an NDMP-based copy mechanism to do a compliance migration was like assembling a china cabinet with a sledgehammer. While in theory it might work, the odds that you break something or mess up are really high, and with compliance data there is sometimes very little you can do to fix it.
NDMP-based copies have a few primary challenges:
- You have no ability to control the order of operations, which, as I stipulated above, is critical in a WORM migration.
- You consume NDMP threads on the source, which may mean that normal backups cannot be taken, or take twice as long to complete, during the migration. This should scare you. Missing backups in most industries today is not acceptable, ever. And the argument of “but we were doing a migration” isn’t going to get very far when data is lost and unrecoverable.
- There is no ability to throttle the copy or otherwise limit its impact.
Change your perspective and reset your expectations.
(Use DobiMigrate instead)
So here is where I get biased (these are important points, but, to be fair, the same ones I’d give you in a sales pitch).
- DobiMigrate is faster than anything else on the market.
- Performance Testing done by Dell EMC: (disclaimer, I did most of these tests in my previous role at Dell EMC) https://community.emc.com/community/products/isilon/blog/2016/04/27/accelerating-your-journey-to-the-data-lake-with-dobiminer-from-datadobi
- An independent performance test conducted by PassMark: https://www.passmark.com/ftp/Datadobi_N2N_Migration_Benchmark_Testing_April_2016_Edition_1.pdf
- Your source and target are usually heavily mismatched: the target in a refresh is brand new with more SSDs and faster CPUs, while the source is old, slow, and in production. Throttling the load on that source so that you limit the impact during business hours is critical. DobiMigrate can do this for you, and very easily.
- Proving what data did make it, including MD5 hashes, rather than piecing together what didn’t from hundreds of log files, is of immense value. No other tool can do it as fast as DobiMigrate can.
- Scripted migration methodologies, while they can work when you have the best-trained personnel with plenty of time to dedicate to a project, should no longer be used. The value of an organization’s unstructured data is too high, and in these days of doing more with less, staff time is too scarce; with those conflicting priorities you are introducing risk to the business.
- Performing a file migration is about more than just moving data, it’s about:
- Moving permissions (while staying flexible: changing them if needed, or removing orphaned security identifiers (SIDs))
- Creating Shares and Exports to match the source
- Timing cutover events accurately to help plan outage windows.
- Being there to support you when things go sideways because of a strange configuration issue or unique dataset (everybody seems to configure their NAS devices just a bit differently).
- Emailing you status reports, so that the tool does the work for you, not you working to manage a tool with a hundred CLI switches.
See it for yourself.
But rather than bore you with step-by-step screenshots, this is a case where a video is far more appropriate:
For more information.
Additional Sources for WORM Regulation information:
| Resource | Link |
|---|---|
| Data Retention Regulations that pertain to Medicare and Medicaid | https://www.cms.gov/Outreach-and-Education/Medicare-Learning-Network-MLN/MLNMattersArticles/downloads/SE1022.pdf |
| Search for Data Retention Regulation information for additional countries | http://us.practicallaw.com/2-502-1510 |
| Iron Mountain European Document Retention Guide (a wonderful document; however, it requires registration to download) | http://www.ironmountain.co.uk/Knowledge-Center/Reference-Library/View-by-Document-Type/Best-Practices/E/European-Retention-Guide.aspx |
What is a Data Management Strategy?
Data management is an administrative and governance process for acquiring, validating, storing, protecting, and processing organizational data. With the growth of Big Data, enterprises of all sizes are generating and consuming vast quantities of data to create business insights into trends, customer behavior, and new opportunities. Data management strategies help companies avoid many of the pitfalls associated with data handling, including duplicate or missing data, poorly documented data sources, and low business value, resource-intensive processes. An enterprise data management strategy can help organizations perform better within the markets they serve.
Although the strategy drives the organization, tactics, projects, and operational goals should also be established based on the strategic plan. Data itself gets transformed into information, information into knowledge, and knowledge into decisions by and for the organization. It is important to remember these stages of transformation as organizations adopt technology, practices, and processes and enable people to support the organization and its customers. Data management as an organizational capability is a business team effort and should not be siloed into just an IT function.
Data management strategy and master data management strategy should follow data science best practices. These practices can be seen in and underpin data engineering, data analytics, machine learning, deep learning, and artificial intelligence disciplines.
Data Management Strategy is enabled by people, processes, technology, and partners and is supported by the business strategy goals and IT strategy goals. The strategy itself is a plan or roadmap with an overall budget that supports the organization’s other strategic projects.
Data Management Strategy Approach
Examples of data management goals and objectives include enabling better decision-making, reducing operational friction, protecting data stakeholders, and taking a best-practice, common approach to data issues. These goals and objectives are enabled by a data management strategy. Approaching any strategic initiative can be challenging, especially data management.
The following are steps for defining a good strategy approach:
- Identify business objectives by performing a business assessment to understand business strategy, business technological direction, operating model, and policies and procedures
- Decide how IT supports business objectives with data strategy by doing a strategic assessment
- Understand the demand for data management
- Understand strengths, weaknesses, opportunities, and threats
- Define the market spaces a data management strategy is needed for
- Identify any strategic industry factors
- Identify all organizational gaps in capability and resources for success
- Generate the strategy
- Establish priorities
- Establish goals
- Establish objectives
- Form a position
- Craft a strategic plan
- Decide measurements and critical success factors
- Define expected return on investment (ROI) and total cost of ownership (TCO)
- Execute strategy
- Create strategic plan
- Create risk assessment
- Get business buy-in, alignments, and integrations to the plan
- Deploy IT assets – people, technology, process, partners
- Decide and define governance and compliance capability
- Hire a Chief Data Officer to be accountable for strategy execution
- Create standards, policies, and procedures
- Support execution of plan across organizational, functional units
- Monitor strategy and evaluate for continuous improvement
- Obtain feedback from data management architecture and design
- Find the right technology that supports the budget and objectives
- Establish data management practice
- Establish data governance
- Design and transition strategic and tactical plan for operations
- Prepare and execute organizational change management (OCM) for data management
- Train and educate employees and other stakeholders
- Get feedback
- Deliver and Operate data management strategy with feedback
Following these steps will help ensure the success of your data management strategy for the organization. The strategic plan is followed by plans, projects, and then operational implementation. The strategy supports operations, and operations should support the strategy of the organization’s data management initiatives.
Data Management Strategy Benefits
The benefits of a data management strategy are many. Data is a critical asset within all organizations as a capability for decision support and as a resource for all organizational activities. The organization’s data as an asset strategy is synonymous with a data management strategy. Assets are all capabilities and resources for an organization, and data is critical in each category.
Some benefits of having a data strategy:
- Overall better decision-making across the organization.
- A better understanding of organizational strengths, weaknesses, opportunities, and threats in all business areas.
- Reduced bad data – inconsistent, incompatible, duplicate, missing, etc. – for decision support across the organization.
- Improved value chains between teams, projects, and practices in the organization with organizational data.
- Better contribution to business outcomes, IT outcomes, and all supporting outcomes focused on the organization and its customers: overall productivity and higher performance of the business.
- Cost management and efficiency. Enablement of the business to spend its budget better on needed capabilities and resources to support customer outcomes.
- Better data governance, compliance, and master data management strategy. Critical data is managed better.
- Improvement in running the business, innovating the business, and addressing customer fixes and wishes
These benefits can overall increase business value to the business and its customers. All data management strategies and projects should focus on benefits and value to the organization.
Data Management Strategy Capabilities
To support the data management strategy, the organization should be aware of critical needs. Some of the essential requirements and capabilities for a data management strategy are:
- Data Management Platform supports ease of use, technology integrations, performance needs, availability needs, capacity needs, and security needs. Price supports value for the organization.
- People with specialized capabilities, including the leadership of a Chief Data Officer. An organizational change management initiative for people training, awareness, and compliance with data management policies, practices, and processes. The overall organizational culture and tolerance for data management have to be addressed.
- Reliable partners to support all needs relative to organizational gaps for success with data management strategy and initiatives.
- Effective, efficient practices, processes, procedures, and work instructions to support people’s behavior. These include governance, risk, and compliance needs for the organization.
Within the data management platform, there should also be a metadata strategy. Metadata is data that describes or gives information about other data. Deciding this strategy can help with organization and usage of data. Decisions on how metadata would be used, governance, structured, and traced should be made as a strategy to support effective long-term usage of data.
Organizations should also review and adopt best practices for data management. The DAMA Data Management Body of Knowledge (DAMA-DMBOK2) is a comprehensive body of data management practices and standards for data governance. This framework can help create an overall data management strategy framework.
The DAMA-DMBOK2 reference data strategy practice includes the following information on 11 functions of data management:
- Data Governance
- Data Modeling and Design
- Data Storage and operations
- Data Security
- Data Integration and Interoperability
- Document and Content Management
- Reference and Master data
- Data Warehousing and Business Intelligence
- Metadata Management
- Data Quality
- Data Architecture
Data Governance leads the other ten functional areas. Data governance helps define the strategy for the execution of data management as an organizational capability. Data governance includes people who govern, standards, policies, and a plan. The data governance target operating model overall underpins the functional areas of the DAMA practice.
Data Management Strategy Challenges
There can be many challenges in having and executing a data management strategy. Some of the biggest challenges are:
- Alignment with business needs is not understood.
- Weak strategy and goal setting – The strategy has to support business outcomes and be followed by data management mission, goals, and objectives for success, including measurements.
- Organizational change – People adoption is key to success, and if people don’t follow the strategy, including plans, procedures, and work instructions, the strategy will not work. Training is essential, and organizations should formally do this and not expect that their people are “smart” and will just pick up all the needed skills.
- Lack of resources – The organization’s budget has to support strategy, and the organization has to enable all resources and capabilities.
- Ineffective communication – Communication plans have to be created and executed. People have to clearly understand what the strategy and plans are.
- Lack of follow-through – Metrics have to be reviewed, and continuous improvement plans must be implemented. Metrics have to be attainable, and you have to be able to track progress and roadblocks.
The benefits of a data management strategy, when it is executed effectively, will outweigh the challenges. There are always challenges with any business initiative, but staying focused on the outcomes and the overall benefits is the key.
Data management strategies help with data governance, metadata management, data quality, data integration, data security, and many other data challenges and issues. Data management has to be a strategic capability of all organizations today. With digital transformation initiatives, machine learning, artificial intelligence, and other emerging technologies and practices, mastery of organization data is a must. Data has to be controlled strategically and not as an afterthought in all the engagements with customers and technologies that we use today. | <urn:uuid:a1475220-df63-42c9-89c5-e587d9cf969d> | CC-MAIN-2022-40 | https://www.actian.com/what-is-a-data-management-strategy/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00318.warc.gz | en | 0.915596 | 1,893 | 2.71875 | 3 |
A researcher into the link between mobile phones and cancer yesterday warned of a "brain tumour pandemic".
Presenting the shock findings, Lloyd Morgan, Senior Research Fellow at the US Environmental Health Trust, told delegates at the Annual Bioelectromagnetics Society meeting in Seoul, South Korea, that the risk of brain cancer posed by mobile phones has been underestimated by at least 25 per cent.
Electronic engineer Morgan and his team took a fresh look at data published in a number of papers by the £15 million, decade-long German Interphone study, backed by the World Health Organisation. The research included a paper released in May that claimed mobile phones might actually protect users against the risk of brain tumours.
Criticising design flaws in the earlier research, Morgan said: "What we have discovered indicates there is going to be one hell of a brain tumour pandemic unless people are warned and encouraged to change current cell phone use behaviours.
"Governments should not soft-pedal this critical public health issue but instead rapidly educate citizens on the risks," Morgan warned. "People should hear the message clearly that cell phones should be kept away from one’s head and body at all times.”
However, UK charity Cancer Research dismissed Morgan's findings. Spokesman Ed Yong told the Daily Telegraph: "The majority of studies in people have found no link between mobile phones and cancer, national brain cancer rates have not increased in proportion to skyrocketing phone use and there are still no good consistent explanations for how mobile phones could cause cancer."
The warning over mobile phone use comes as authorities in California agreed legislation that could see retailers forced to display radiation warnings to customers buying mobile phones.
San Francisco's Board of Supervisors yesterday voted 10-1 to give preliminary approval to a local law requiring shops to provide information each phone's "specific absorption rate" - a measure of the radiation emissions.
Mayor Gavin Newsom is expected to sign the legislation into law after a ten-day comment period, the San Francisco Chronicle reports.
CloudFormation is an AWS service that allows you to define your AWS infrastructure as code (IaC). Using CloudFormation you can create, update, and delete AWS resources. Benefits include fast deployment of infrastructure, consistency across deployments, and automation of infrastructure creation.
The two primary concepts used in CloudFormation are templates and stacks. Templates are JSON or YAML files that describe the AWS resources you will deploy. Stacks are the set of resources that are created and managed together when a template is run in CloudFormation. Stacks can be deployed from templates using the AWS console, AWS command-line interface (CLI) or via AWS APIs.
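To make the template concept concrete, here is a minimal template in the JSON form CloudFormation accepts, built as a Python dict and serialized. The single S3 bucket resource is a placeholder for illustration and is not part of the vNIOS templates discussed below:

```python
import json

# A minimal CloudFormation template: one resource, no parameters or mappings.
minimal_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example stack: a single S3 bucket",
    "Resources": {
        "DemoBucket": {
            "Type": "AWS::S3::Bucket",
        }
    },
}

# This string is what you would upload or pass as the template body.
template_body = json.dumps(minimal_template, indent=2)
```

Running this template through CloudFormation would create a stack containing exactly one resource; deleting the stack deletes the bucket with it.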
Using CloudFormation templates, you can automate the deployment of Infoblox vNIOS appliances in AWS. This is useful for deploying vNIOS with identical configurations across multiple regions or for quickly deploying and tearing down a test environment. Two sample templates can be found at the end of this blog. The first deploys a new VPC, subnets, internet gateway, routes, public IP, security group and a vNIOS instance. The second uses an existing VPC and subnets for a new vNIOS instance. These templates can be used as a baseline to customize deployment in your AWS environment.
Infoblox vNIOS Instance Template
We’ll take a look at some components of the first sample template, which deploys a new VPC along with the vNIOS instance.
The template utilizes two mappings, tables which provide values based on input. The first mapping will select the appropriate vNIOS 8.5 Amazon Machine Image (AMI) based on which region the template is used in. This map has entries for many AWS regions and other specific regions can be added as needed.
The second mapping selects appropriate temporary license and instance size based on a vNIOS model parameter input.
NOTE: Not all instance types are available in every region. If you are modifying the templates to deploy in other regions, verify which instance types are available. To find recommended instance types for vNIOS instances in your region, refer to the vNIOS for AWS Installation Guide on the Infoblox support site: https://docs.infoblox.com.
These mappings are referenced in the template when creating the vNIOS instance, to provide specific values for “ImageId” and “InstanceType” properties.
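The way a mapping resolves at deploy time can be mimicked in a few lines. This mirrors CloudFormation's Fn::FindInMap lookup (mapping name, then top-level key, then second-level key); the region keys and AMI IDs below are placeholders, not the real vNIOS 8.5 AMI IDs:

```python
# Hypothetical region-to-AMI mapping, shaped like a template "Mappings" section.
REGION_MAP = {
    "us-east-1": {"AMI": "ami-0123456789abcdef0"},
    "eu-west-1": {"AMI": "ami-0fedcba9876543210"},
}


def find_in_map(mapping, top_level_key, second_level_key):
    """Resolve a value the way Fn::FindInMap does at stack-creation time."""
    return mapping[top_level_key][second_level_key]


# In a template this would be: {"Fn::FindInMap": ["RegionMap", {"Ref": "AWS::Region"}, "AMI"]}
image_id = find_in_map(REGION_MAP, "us-east-1", "AMI")
```

Because the lookup happens when the stack is created, the same template selects the right AMI in whichever region it is deployed.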
The “UserData” property in the template allows you to pass some initial configuration to the vNIOS appliance. In this template, it is used to allow SSH access, set the admin password, and apply temporary licenses to the instance. Licenses for the specific model are set based on the mapping shown earlier. For further information on working with User Data fields in AWS, refer to the vNIOS for AWS Installation Guide on the Infoblox support site: https://docs.infoblox.com.
For documentation on other resources and sections in the templates, refer to AWS CloudFormation documentation: https://aws.amazon.com/cloudformation.
Deploy vNIOS Instance Template
To deploy this template using the AWS browser console, follow these steps:
In the AWS console, use the Services dropdown menu to navigate to CloudFormation and create a new stack. In Step 1 of the create stack wizard, select Template is ready and Upload a template file. Click on Choose file and select the deployVNIOSv1.json file you downloaded. Click Next.
On Step 2, enter a name for your stack. Set the parameters to your desired values or leave the defaults. The VPCCIDR parameter will only accept a /16 CIDR. Click Next.
On Step 3, add tags for your resources if desired. You can leave defaults for all other settings on this step. Click Next.
On Step 4, review the details for your stack deployment. Click Edit on any section to make changes if needed. Once everything looks correct for deployment, click on Create stack.
You can monitor the progress of your deployment in the Events tab of the stack.
Once you see CREATE_COMPLETE for the stack, access your vNIOS instance and other resources from their respective AWS console pages.
When you no longer need the resources in this stack, you can terminate them using the Delete button on the stack page. This will remove all resources created by this deployment.
Deploy Templates from AWS CLI
To deploy the sample CloudFormation templates using the AWS CLI, use the following commands.
For Template 1:
aws cloudformation deploy \
    --template-file ./deployVNIOSv1.json \
    --stack-name new-stack1 \
    --parameter-overrides VPCName=demo-vpc VPCCIDR=10.17.0.0/16
Replace each value in the parameter overrides section with your desired value.
For Template 2:
aws cloudformation deploy \
    --template-file ./deployVNIOS_existingVPC.json \
    --stack-name new-stack1 \
    --parameter-overrides VpcId=vpc-1234abcd SubnetLAN1=subnet-1234abcd \
        SubnetMGMT=subnet-5678efgh InstanceName=demo-vnios NIOSmodel=TE-V1425
Replace each value in the parameter overrides section with your desired value. Values for VPC and Subnet IDs are required. SubnetLAN1 and SubnetMGMT can be the same subnet or 2 subnets in the same VPC and availability zone.
AWS CloudFormation allows fast and consistent automated deployment and management of your Infoblox DDI infrastructure in AWS. Templates are highly customizable and can be used for most deployment scenarios. To try out the templates featured in this blog, download them below.
Template1: New VPC, subnets, gateway, routes, security group, and vNIOS instance.
Template 2: vNIOS instance and security group. Uses existing VPC. | <urn:uuid:e32882b9-323d-4413-bb0e-f112ebdfa34b> | CC-MAIN-2022-40 | https://blogs.infoblox.com/community/deploying-vnios-for-aws-with-cloudformation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00318.warc.gz | en | 0.780261 | 1,296 | 2.515625 | 3 |
By: A. Dahiya, B. B. Gupta
IoT is all about connecting smart devices to the Internet to enhance things people already use in their everyday lives. This article covers how the IoT is making DDoS attacks more dangerous than ever before, and how to prepare your business against these potentially devastating attacks today.
What is a DDoS attack?
A Distributed Denial of Service (DDoS) attack [1-3] is a type of cyber-attack in which many devices attack a single server. This is usually done by overloading the server’s connection and preventing it from receiving any more data. The devices that launch this type of attack can be computers, servers, or even personal gadgets such as smartphones, but they all have one thing in common: they must be connected to the internet to participate in the attack. DDoS attacks are so common that Cloudflare, one of the largest companies in this industry, had to upgrade its service, add several new features, and add an extra layer of protection to cope with the growing demand. The overview of the DDoS attack is shown in figure 1.
Why are these attacks more dangerous?
These attacks are made more dangerous because perpetrators can now use the Internet of things (IoT) to make them more severe [4-6]. They can do this by exploiting known vulnerabilities in-home devices like Wi-Fi routers, security cameras, and smart TVs. Assailants may utilize these susceptible devices to deliver a flood of traffic to select sites, disabling their servers. These attacks also pose a more serious threat to individuals whose devices are used in botnets. Many of these botnet victims do not even know their devices are being used in this way, leaving them vulnerable to identity theft or even physical harm.
There are steps you can take to guard against this type of attack. Since most IoT devices have little or no security, updating their firmware is critical. This will reduce the chances that you will be part of a botnet. You should also make sure your devices are always up to date with the latest security patches for their operating systems. There are also applications you can run on your computers and smartphones that will prevent them from being used in botnets. Finally, avoid clicking on links that appear in unsolicited emails. These kinds of attacks on IoT devices are on the rise. Some experts estimate that by 2022, there will be 20 billion IoT devices in use. If you’re not careful, you could end up part of a botnet and your device will be used against you [7, 8].
How does an IoT device make DDoS attacks worse?
If you are not already familiar with DDoS attacks, they are a form of cyber-attack in which the perpetrator latches on to a single connection point and uses it to send multiple messages or requests for information. An IoT device, such as a baby monitor or thermostat, may have an internet connection that is not secured with a firewall. The hacker would then focus on this device and use it to unleash a DDoS attack on your company’s website or another target. If your company’s website is down, it can have a disastrous impact on your business. The best defense against IoT DDoS attacks is to take steps to secure all of your devices [9, 10].
If an IoT device is faulty or causes harm, it may be possible to sue whoever created it. An IoT/botnet victim may sue the botnet owner, although this is more complicated than it seems. Botnets may be created by numerous persons, and their owners may not be in the same nation as the victims.
Each country has its own laws regarding who is responsible for an attack.
- In the UK, the government is considering a bill that would make manufacturers responsible for recalled products.
- In the United States, anyone who spreads malware should be held liable, even if they did not develop it. Also, a business that fails to maintain adequate security may be held accountable for malware attacks.
- In Europe, if you disseminate malware, you are personally liable for its consequences. However, the business cannot be held liable if it took reasonable steps to safeguard its customers.
What type of IoT devices are most at risk?
Cameras are among the most popular IoT devices targeted by malicious actors. Hackers will search for any publicly accessible IP address with a camera, then use it to launch their DDoS attack. This is what happened in one of the largest DDoS attacks ever observed, with an estimated size of 1.7 terabits per second.
How can I protect my IoT devices?
The best way to protect your IoT devices is to set up a firewall to restrict inbound and outbound traffic. It is also important that you use strong passwords and, in particular, change the device’s default login credentials. To avoid these issues, you can block all unsolicited inbound and outbound traffic to your device. Besides this, you can use a virtual private network (VPN) to secure your internet traffic and revoke access to your device from all outside networks.
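The "restrict inbound traffic" advice usually boils down to per-source rate limiting at the firewall. Here is a minimal token-bucket sketch of that idea; the rate and capacity thresholds are hypothetical, and the caller supplies timestamps explicitly (which also makes the logic easy to test):

```python
class TokenBucket:
    """Allow at most `rate` requests/second per source, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.state = {}  # source_ip -> (tokens, last_seen_timestamp)

    def allow(self, source_ip, now):
        tokens, last = self.state.get(source_ip, (self.capacity, now))
        # Refill tokens in proportion to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        allowed = tokens >= 1
        if allowed:
            tokens -= 1
        self.state[source_ip] = (tokens, now)
        return allowed
```

A flood from a single source quickly drains that source's bucket and gets dropped, while normal traffic from other sources is unaffected; real firewalls and CDN edges apply the same principle at much larger scale.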
In summary, it’s been shown that IoT is an integral part of many DDoS attacks. It’s been shown that they can be used for reconnaissance and to conduct a distributed attack. In the future, these types of attacks will only become more dangerous as the number of IoT devices increases exponentially.
- Zargar, S. T., Joshi, J., & Tipper, D. (2013). A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks. IEEE communications surveys & tutorials, 15(4), 2046-2069.
- Yan, Q., et al. (2015). Software-defined networking (SDN) and distributed denial of service (DDoS) attacks in cloud computing environments: A survey, some research issues, and challenges. IEEE communications surveys & tutorials, 18(1), 602-622.
- Tripathi, S., et al. (2013). Hadoop based defense solution to handle distributed denial of service (DDoS) attacks. Journal of Information Security, Vol. 4, No. 3, Article ID 34629, 15 pages. DOI: 10.4236/jis.2013.43018.
- De Donno, M., Dragoni, N., Giaretta, A., & Spognardi, A. (2017, September). Analysis of DDoS-capable IoT malwares. In 2017 Federated Conference on Computer Science and Information Systems (FedCSIS) (pp. 807-816). IEEE.
- Adat, V., et al. (2018, January). Economic incentive based solution against distributed denial of service attacks for IoT customers. In 2018 IEEE international conference on consumer electronics (ICCE) (pp. 1-5). IEEE.
- Jia, Y., Zhong, F., Alrawais, A., Gong, B., & Cheng, X. (2020). Flowguard: An intelligent edge defense mechanism against IoT DDoS attacks. IEEE Internet of Things Journal, 7(10), 9552-9562.
- Alieyan, K., Almomani, A., Anbar, et. al. (2021). DNS rule-based schema to botnet detection. Enterprise Information Systems, 15(4), 545-564.
- Hoque, N., Bhattacharyya, D. K., & Kalita, J. K. (2015). Botnet in DDoS attacks: trends and challenges. IEEE Communications Surveys & Tutorials, 17(4), 2242-2270.
- Cvitić, I., et al. (2021). Boosting-based DDoS Detection in Internet of Things Systems. IEEE Internet of Things Journal.
- Dhananjay Singh (2021) Captcha Improvement: Security from DDoS Attack, Insights2Techinfo, pp.1
Cite this article as:
A. Dahiya, B. B. Gupta (2021) How IoT is Making DDoS Attacks More Dangerous?, Insights2Techinfo, pp.1 | <urn:uuid:941a81a8-3d51-42c6-b1cb-b39c0d11b494> | CC-MAIN-2022-40 | https://insights2techinfo.com/how-iot-is-making-ddos-attacks-more-dangerous/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00318.warc.gz | en | 0.921711 | 1,700 | 2.734375 | 3 |
A process map template can speed up business process management, design, and reengineering – but it is often better to create your own template than to use pre-made templates.
To learn why, keep reading.
What Is Process Mapping?
Process mapping, process modeling, or business process mapping, refers to visually diagramming a business process in a series of action steps.
In most cases, process mapping involves the use of tools, notation, or a “language” dedicated to business process management.
A few common process mapping tools include:
- Swimlane diagrams
- Value stream mapping
Each of these tools has its use cases, pros, and cons.
Swimlanes, for instance, distinguish job responsibilities in addition to the actual step-by-step workflow.
A flowchart, on the other hand, does not distinguish between job roles and responsibilities. This is not to say that one is better than the other – if job responsibilities are mutable, for example, another diagram may be the better choice.
Regardless of which process mapping tool one uses, a template can be a useful way to speed up the diagramming process.
What Is a Process Map Template?
The process map template is simply a roadmap used for a specific workflow, which can be further fleshed out and customized to suit the specific needs of one’s department. Any process that can be mapped can have its own template.
For instance, HR professionals can use templates to map out the basics of every aspect of the employee life cycle, including:
- Employee onboarding
- Employee training
- Performance improvement
- Career development
This is just one example drawn from a single business unit, however. Any business unit can create a template for and map out their processes, from IT to finance to customer service.
Why Use a Process Map Template?
One reason to use a process map template is to save time.
If a manager or business leader needs to create multiple workflow templates for a team or a set of teams, templates can save time, since workflows don’t need to be designed from scratch.
Another reason to use it template is standardization – that is, when the organization, the manager, or business leaders require certain elements to be included in a business process, they can require the use of a template.
This can ensure that employees adhere to that template’s workflow and, as a consequence, also remain compliant with, for instance, policies and procedural guidelines.
Why You Should Design Your Own Templates
While there are templates available online for a wide variety of business processes, it is usually better to design your own templates.
Online templates can be used as starters, and those not used to business process mapping may learn something from them.
However, these tools are typically very generic and not always relevant to one’s own needs.
Therefore, it is important to design your own templates that can then be implemented across your organization and across different business units.
Which Software Should You Use for Process Mapping?
There are a wide range of tools available for process mapping.
Some are dedicated specifically to process mapping.
Others, such as Excel, are generic tools that can be used to build process maps.
Here are a few examples of process mapping tools:
- Flowchart software
- Process mapping software
- BPMN software
Typically, however, any organization that wants to develop detailed process maps to standardize their own processes should use specialized software.
Flowchart software, for instance, often includes a number of templates that are ready-made. These, as mentioned above, can be further customized to meet one’s own business needs.
While flowchart software, business process mapping software, or even tools such as Excel can be useful, in today’s digital workplace it is important to keep up with current trends, including those that affect the way process mapping is performed.
Process Mapping in the Digital Workplace
Digitally mature organizations are using advanced tools that go beyond traditional process mapping software.
For example, digital adoption platforms (DAPs) can be used not only to map out processes but also for employee training.
These tools can:
- Map out a process from beginning to end
- Automate workflows
- Provide contextual support through software walkthroughs
- Standardize business processes without the need for documentation
In fact, agile organizations may even be able to use DAPs instead of traditional business mapping software. when used in this way, these tools can save even more time on process mapping and process improvement.
Final Thoughts: The Importance of Agile Process Mapping
Agility has become a top concern for many businesses, especially after the COVID-19 pandemic.
According to McKinsey, online searches for “agile transformation” are skyrocketing, and many businesses are engaged in transformation efforts.
They also showed that agile transformations deliver major benefits, including 30% gains in efficiency, customer satisfaction, and operational performance, among other things.
Many companies, however, recognize that agility is not just advantageous, it may be crucial for success in the next normal.
Notably, since agile transformations involve the adoption of new business processes, they also involve business process redesign and process mapping. Using templates and cutting-edge tools such as DAPs, therefore, can save significant time and energy when engaging in any type of business transformation, agile or not. | <urn:uuid:0583a302-e04a-4f3f-8b2b-b843b8e235ee> | CC-MAIN-2022-40 | https://www.digital-adoption.com/process-map-template/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00318.warc.gz | en | 0.92361 | 1,148 | 2.78125 | 3 |
The four stages of the technology adoption life cycle can be infinitely helpful in supporting team members at every level. But how can team members be categorized into groups? What does each group mean? And what can be done to help each team member based on their group and place on the adoption curve?
Looking closely at the technology adoption life cycle will help us answer these questions. Technology adoption can then be successfully implemented to the advantage of the company, increasing success in profits, output, and employee experience.
What Is The Technology Adoption Lifecycle?
Rogers developed the technology life cycle in 1962 based on the work of a soviet social scientist to understand how people use technology. Due to its usefulness, the life cycle is still used today by any company wanting to track the mass consumer market’s adoption of a new product.
The life cycle represents how different people, based on age, income, and other demographic and psychological characteristics, begin adopting a new piece of technology. In time, such new technology reaches its potential, diffusing among all society members until it becomes second nature.
There are five stages of new technology customer engagement, called the five adopter groups. These five adopter categories are Innovators, early adopters, early majority, late majority, and laggard.
The smallest group, innovators, are the youngest and the first to adopt new technology. Innovators are also the highest in the social class of all the groups and have the highest level of sociability toward others. In addition, innovators have the most heightened financial lucidity.
The other two major factors of the innovator’s group are that they are in close contact with scientific resources, giving them the edge in knowledge and understanding of new technologies. Crucially, innovators also have a high tolerance to risk, which means they are likely to purchase an item despite knowing it may not become successful.
- Youngest group
- The first group on the adoption curve
- Highest financial lucidity (understand the value of money)
- Crucially, high-risk tolerance.
- Early Adopters
Early Adopters are young but slightly older than innovators and more youthful than any remaining groups. Early adopters are the second on the adoption curve, making them the second to adopt new technologies.
Early adopters are the opinion leaders, meaning they inform the opinions of others around them the most out of all five groups. They are opinion leaders because they are the most sociable of all the groups and communicate their opinions and ideas more effectively, frequently, and to larger audiences.
Contrasted to innovators, early adopters have even higher social status and usually have a higher education level than all the groups. As a result of this education, early adopters are more careful and take time to research the best products for them. This careful consideration ensures their central communication position.
- Young but slightly older than innovators
- The second group on the adoption curve
- Early adopters are the opinion leaders
- Most social of all the groups
- Early Majority
The early majority group takes significantly longer to adopt an innovation than innovators and early adopters, but less than the remaining groups. The amount of time taken to adopt new technology is longer too.
The early majority group typically has above-average social status and has contact with early adopters. But because of their later adoption, early majority group members rarely fulfill an opinion leadership function.
- Adopt an innovation after a varying degree of time
- The time for adoption is significantly longer
- Above-average social status
- Late Majority
The late majority is not far behind the early majority, adopting new technology just after the average consumer. With this adoption, a high degree of skepticism remains about the potential of the new technology, even after the majority of society formed from the previous three groups has already adopted it.
Members of the late majority usually have below-average social status and very little financial lucidity. Late majority members discover new technologies through contact with the late majority and early majority, but not usually with innovators. As a result of this and low sociability, the late majority has a very low opinion of leadership.
- Adopt an innovation after the average member of the society
- Approach an innovation with a high degree of skepticism after the majority of society has adopted the innovation
- Below the average social status
The laggard group is the final group to adopt any new technological innovation. Laggards tend to be opposed to change agents and are usually the oldest group and most focused on traditions.
Laggards are likely to have the lowest social status and the lowest financial lucidity. As a result of being in contact only with family and close friends, laggards usually have little to no opinion on leadership.
- Last to adopt an innovation
- Usually older age group than other categories
- Lowest social status
Technology Adoption Lifecycle Example: Bitcoin
Cryptocurrencies such as Bitcoin are fascinating to view in the context of the technology adoption life cycle and the technology adoption curve. In the early days of Bitcoin, around 2008 onwards, users of social media sites such as Reddit (Innovators) adopted Bitcoin and began purchasing it in increasingly large amounts. At this time, it had a meager value, purchased at pennies or cents per Bitcoin.
Within a few years, more prominent social media influencers (Early Adopters) on YouTube and Twitter began to take note of Bitcoin and speak about it as a wise financial investment. Not long after this, working-age adults became more interested in Bitcoin (Early Majority), causing its value to skyrocket as it gathered strength. Some banks and online retailers also began acknowledging Bitcoin as an acceptable payment.
In 2020 Bitcoin became mainstream. Working-age consumers approaching retirement (Late Majority) with an interest in finance began buying and using Bitcoin. And finally, retired consumers of advanced age (Laggards) started to seek out an awareness of Bitcoin. Some from this group began investing in the future of their close friends or family members.
Bitcoin is a fascinating example of a product or currency within the technology life cycle because it will not be popular forever. Despite its current popularity, Gartner predicts that “by 2024, 20% of large enterprises will use digital currencies for payment, stored value or collateral, even though 84% of surveyed financial executives say it is a risk”.
Bitcoin is considered a financial risk due to its volatility, but the fact remains that it has brought cryptocurrency into the mainstream market. As a result, Stablecoin will likely replace Bitcoin as the dominant.
Because of this risk, but because Bitcoin has made cryptocurrencies mainstream, what will likely come in place of Bitcoin is Stablecoin. When Stablecoin becomes mainstream, it will have seen the exact structure of popularity as Bitcoin, being adopted by the different groups and then eventually falling out of use almost wholly.
This example shows how the life cycle of any technology, no matter how popular, is finite. Companies can learn a lot about their new products by tracking the progress of new technologies using the technology life cycle.
What Are The Four Stages Of The Technology Adoption Lifecycle?
The technology life cycle shows how long a product lasts in the mainstream market. Investment leads to the product becoming popular. Eventually, its popularity dies, and the substitution stage depends on whatever new technology replaces it. The technology life cycle is much longer for established products like cement manufacturing, as the building industry will always need cement.
The four stages of the technology adoption life cycle represent changes in a product life cycle based on how the public views and how much they buy, use and understand the product. These also fit with the bell curve of the technology life cycle and its categories. However, the life cycle is much shorter for technology products such as smartphones or tablets, as newer models replace products.
The technology adoption lifecycle is an s curve shape and contains two ways of describing each stage. The stages and phases of the technology life cycle go hand in hand, and the first of these four stages is research and development.
The four stages elaborate on the public view of a product and how this affects its popularity. The two ways of describing the stages exist because two social scientists, Soviet Kondratiev in 1925 and Rogers in 1962, contributed to developing and refining the technology life cycle.
- Research & Development
The first stage of any new product is research and development. New ideas are produced all the time by companies eager to create the next new technology for people to adopt quickly and from which companies can extract large profits. Innovative products must also be practical and easy to use. An important aspect is that the research accurately reflects user need based on current and predicted future behaviors informed by current trends used to discover potential users.
The point at which the Research and development stage occurs on the chart is part of the s curve, which falls below the line of profit. A company cannot make a profit; despite the investment, there is an initial loss. This stage is a hugely high-risk, high-cost time for a company, as it is unknown whether the investment will make a return. There are many innovation stages in this part of the process, where experimentation is encouraged. This stage is called the bleeding edge.
The stage where the product becomes known and creates interest with the public is the leading-edge stage or the ascent phase. Advertising budgets are high to let the public know the details of the product. This stage is also known as the syndication stage; a new effect is demonstrated and commercialized for use by the public with the hope of it becoming a market leader. Any new app can be an excellent example of this.
The first two adopter groups, innovators and early adopters, begin purchasing and using the product within the ascent phase. Users from these two groups begin to disseminate information about the products to those of the other three groups. The ascent phase is also when a company can recover investment costs.
The maturity stage, also known as the diffusion phase, is when the product reaches its zenith, hitting its peak and becoming as popular as it will ever become. Many consumers are discussing and purchasing the product. Diffusion refers to consumers communicating with each other to popularize the product as it diffuses throughout the mainstream market.
The decline or substitution stage is when the product is used less and shifts fewer units. Profits go down, as does interest in the product. The substitution is the current product for a newer or alternative brand or model, which is the final stage in the technology life cycle. The recent app example applies here for apps with specific niche uses not used for very long.
What Is A Technology Adoption Curve?
It all begins with an innovation trigger, which comes from innovation theory. The technology adoption curve visually represents the path technology diffusion takes as a new product gains popularity. How this can look will change based on the product’s success or how long it takes to become popular.
The trigger is when a technological breakthrough sparks new technology use. At this point, the usable product does not exist and whether it is financially viable for the mainstream market is unknown. The media shares conceptual designs, which spark public interest.
The chasm is a significant part of the graph, representing the space between innovators and early adopters. A product cannot become popular in the mainstream market as users in all five stages must use it to reach its full potential when this chasm is unfilled. Crossing the chasm is therefore crucial to success. Crossing the chasm has also been written extensively by influential business writer Geoffrey Moore who Gartner often cites.
The peak of inflated expectations
Early media engagement has several successes, but many failures often follow these. Some companies act on these failures, but a lot do not.
Trough of disillusionment
Interest goes down as experiments and attempts to prove the product’s worth do not deliver on promises. Those who created the new technology abandon it or fail. Investors only maintain confidence if the designing company can make the product reach the expectations of early adopters.
Slope of enlightenment
Examples of why and how the new technology has the potential to be an advantage to the enterprise company become clear and easily understood by media and consumers. Products of the second and third generations are released. More enterprises invest in future pilot schemes, yet some companies remain cautious of investment.
Plateau of productivity
Adoption by the mainstream market increases hugely. The new technology has much clearer viability, accessibility, and relevance within the market, showing visible returns on investment.
How Can Businesses Leverage The Technology Adoption Life Cycle?
The technology adoption life cycle has a broad array of business applications, and any business can utilize the life cycle to their advantage. The life cycle tracks the success of a product from the product development phase to the new technology becoming part of the status quo for everyone or most people in the mass consumer market as the product reaches market saturation.
Many entrepreneurs use the technology adoption life cycle to inform technology development based on data collected from customers at different stages for previous technologies. Crossing the chasm is a large part of meeting these aims.
Businesses can leverage the technology life cycle in one of two ways. The life cycle plots the diffusion of innovation stages of a new product for consumers. However, the life cycle plan the diffusion of the innovation stage of new technology for team members as part of a technology adoption strategy or a transformation strategy.
If used as part of a transformation or technology adoption strategy, the CEO
Companies can use the technology adoption lifecycle to become a market leader in a specific technology based on tracking the progress of rival companies’ technologies.
Secret elements are one of many factors that influence popularity. If a company can build a base of loyal customers, that can be hugely advantageous, as there is always a group fighting for the corner of the product, while there may be fierce critics. The harsh critics then become the product fans when its popularity peaks.
Many innovations do not reach their potential because companies do not acknowledge that the curve goes from left to right and that this process takes time.
Technology products have a typical life cycle, which companies must be mindful of when designing products and budgeting funding for research and development.
Making The Technology Life Cycle Work For You
Many examples exist of products and services presented in proof-of-concept in the media for an extended period, increased anticipation, and then for an extended period did not hit these expectations. Virtual Reality (VR) is one example of this. VR has come and gone for several decades, promising much and delivering little.
The late success of this technology may be due to many reasons related to the constraints of the technology itself. VR may have been seen as a niche item and not for many consumers’ benefit. However, non-technical factors influence a product within the new technology life cycle. For example, cost, public perception, or even fear of this new VR technology and how it might change society could have come into play.
The VR headset is now becoming increasingly popular. Perhaps this is due to the technology catching up with the device’s needs to feel immersive. It could also be the proliferation of geek culture and society’s acceptance and dependence on technology.
When looking at such examples, companies can also apply the technology life cycle principles to team members. The life cycle model can be used on a company population, especially in larger enterprises, to see where certain team members are on the technology adoption curve as part of a digital transformation strategy. Meetings and feedback exchanges are useful to discover why and remedy such obstacles if adoption is not as fast as expected as part of a change management plan.
There are many uses internally and externally for companies of many sizes for the technology life cycle. It is a diverse tool that can measure consumers’ and team members’ diffusion of new technology to increase profits and improve user experience. | <urn:uuid:901d418f-4d1c-4d3f-83c9-42bfa5e0f9cc> | CC-MAIN-2022-40 | https://www.digital-adoption.com/technology-adoption-life-cycle/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00318.warc.gz | en | 0.946814 | 3,289 | 2.671875 | 3 |
Criminals, government officials and privacy conscious citizens share a fondness for secure communications.
Users of secure messaging apps may be plotting election campaigns, planning terrorist acts, conspiring in other illegal activities, or simply seeking to exercise their human right to privacy online, with assurance of confidentiality in their lawful and well-intended communications.
Secure messaging apps, also known as encrypted messengers, have grown in popularity. Demand increased tremendously following the Snowden revelations in 2013, fuelled by concerns over government mass surveillance programs, state-sponsored espionage, snooping by ISPs, intelligence and law enforcement agencies, and simply by a rightful appetite for privacy.
Conversely, governments face significant obstacles to the lawful access of communications by law enforcement and national security agencies. They fight back against the challenges posed by encryption with regulation, such as the Assistance and Access Bill 2018 in Australia, which is designed to require both domestic and foreign companies supplying services to Australia to provide greater assistance to agencies endeavouring to reveal communications of interest.
Secure messaging apps
Secure messaging apps typically take the form of mobile apps that provide a means to communicate with instant messages, voice or video, encrypted end-to-end. End-to-end encryption means that only the legitimate parties to a conversation can decrypt the messages intended for them. No other party, not even the developer of the app, should be able to eavesdrop on communications. Some apps provide additional controls, such as message expiry and progressive text reveal, to further improve message secrecy.
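The end-to-end principle can be sketched in a few lines of code. The following is a deliberately toy illustration, assuming a simplified Diffie-Hellman key exchange over a small prime and a hash-derived XOR keystream; it is not real cryptography, and production apps use vetted protocols (for example, the Signal protocol with X25519 and the Double Ratchet) instead. The point it shows is that the server only ever relays ciphertext it cannot read, because the shared key exists only at the two endpoints.

```python
import hashlib
from secrets import randbelow

# Toy parameters: a Mersenne prime and small generator. Real protocols use
# vetted elliptic-curve groups; this is for illustration only.
P = 2**127 - 1
G = 3

def keypair():
    """Generate a private exponent and the matching public value G^priv mod P."""
    priv = randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both sides compute G^(ab) mod P and hash it into a 32-byte key."""
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_encrypt(key, message):
    """Toy stream cipher: XOR with a SHA-256-derived keystream (involutive)."""
    stream = hashlib.sha256(key + b"stream").digest() * (len(message) // 32 + 1)
    return bytes(m ^ s for m, s in zip(message, stream))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each endpoint derives the same key from its own private key + the peer's
# public key; the relaying server never sees either private key.
k_alice = shared_key(alice_priv, bob_pub)
k_bob = shared_key(bob_priv, alice_pub)
assert k_alice == k_bob

ciphertext = xor_encrypt(k_alice, b"meet at noon")  # what the server relays
print(xor_encrypt(k_bob, ciphertext))               # → b'meet at noon'
```

Because XOR with the same keystream is its own inverse, decryption is the same operation as encryption; only a party holding the shared key, i.e. one of the two endpoints, can recover the plaintext.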
There is a wide and growing range of secure messaging apps available, such as Signal, Wickr, Confide, WhatsApp, SudoApp, ChatSecure and Telegram, just to name a few. Most of them are free.
Encrypted political turmoil
Some top government officials fervently use secure messengers amongst themselves, at the risk of breaching regulations and arguably of putting their nations at risk. Yet some of those fervent users also, ironically, lobby for a crackdown on encrypted communications.
The French government no longer finds appeal in free and foreign encrypted messenger apps such as WhatsApp, owned by Facebook, whose privacy practices have been severely exposed and questioned (e.g. the Cambridge Analytica case), and Telegram, an app based on a proprietary encryption service created by a Russian entrepreneur who is reported to be under pressure from Russian government entities.
The French government is resisting, and it is taking the lead in protecting the communication of its officials and public servants.
“We need to find a way to have an encrypted messaging service that is not encrypted by the United States or Russia.”
French government spokesperson
Reuters reported that the French government has identified a key risk to the protection of its communications: none of the world's major encrypted messaging apps are based in France, which raises the risk of data breaches abroad. Privacy concerns have grown, and security tools installed on French officials' work smartphones now prevent the use of apps such as WhatsApp or Telegram.
The French government still acknowledges the need for secure communications, but within its risk tolerance. It is now building its own encrypted messenger service and app, to ease fears that foreign entities could spy on private conversations between top officials.
The app is designed by an anonymous state-employed developer and based on open source technology, with the aim of mandating its use across the whole French government by the summer of 2018. The app may later be made available to all French citizens.
French government officials are not the only politicians to have grown fond of encrypted messengers, or to be scrutinised for it.
In the land Down Under, also known for its ambiguous definition of communication metadata and where the laws of cryptographic mathematics would not prevail, the Australian government has been grilled over the subject.
“Turnbull government risking national security, cabinet material by using WhatsApp.”
In March 2015, the ABC reported that Turnbull, at the time Australian Communications Minister, had confirmed he used secret messaging apps including Wickr and WhatsApp, describing them as "superior over-the-top messaging platforms". Anecdotally, Business Insider Australia reported a few months later that Wickr downloads had increased by 700% following news that Turnbull was using it.
However, in October 2016, Mark Dreyfus alleged that the Turnbull government was risking national security by using WhatsApp for communications supposedly involving cabinet material. The government was reportedly grilled on the subject, with Dreyfus adding that it was "treating security with contempt".
The Office of the Australian Information Commissioner (OAIC) had also warned federal ministers that their smartphone app messages could be released publicly under Freedom of Information (FOI): "All communications or records of a minister which relate to his or her duties are potentially subject to FOI", adding that this applied regardless of whether the communications were transmitted via a government or non-government server.
How such an FOI request would be enforced against encrypted messengers certainly remains to be seen.
In the U.S., the Wall Street Journal podcast 'An App All the Rage Among Hack-Fearing Politician' reported in January 2017 a similar trend, with the mobile app Signal used by top US politicians for the same reasons as in Australia and France. Trump and his aides were mentioned in the podcast. Downloads of the Signal app were also reported to have increased by 400% during the 2016 US presidential election, boosted further following the DNC email server hack. In addition, Wired reported in February 2017 that Confide was also a popular encryption app amongst White House staffers, and that the app could facilitate leaks and break the law.
Top Australian, French and U.S. government officials can rightly be scrutinised for using non-government-vetted secure messaging apps to communicate amongst themselves to mitigate the risks of eavesdropping and politically damaging leaks. It would certainly seem a better option for them than email (a lesson learnt from the DNC hack).
However, the issues include:
- Complying with relevant government data protection requirements, especially for cabinet material and other classified data; and
- Complying with Freedom of Information requirements.
While avid users of secret instant messaging for themselves and their teams, and at the risk of breaching their own countries' data protection regulations, Turnbull, Macron and other government leaders worldwide are lobbying for a crackdown on encrypted messengers to prevent terrorists and criminals from evading intelligence and law enforcement monitoring.
For example, the idea of selectively banning secure messengers is making headway in Australia. In New South Wales, bikie gangs have been reported to be using apps such as Snapchat for encrypted communications to evade law enforcement monitoring. In a developing case, landmark crime prevention orders against 10 bikies would include a provision forbidding them from using any encryption in communications.
In addition, the Australian government is pressing for regulation to "engage with domestic and international communication providers" so that law enforcement can effectively investigate serious crime.
The enforcement of such provisions would present challenges. After all, the laws of mathematics are universal and mainly open source, even in Australia.
How private are secure messaging apps?
In my opinion, privacy is a dark side of secure messengers.
When we eagerly install and use secure communication apps, we may feel as if we are entering a privileged, super-private zone where we can communicate freely and carelessly, for free.
I would argue that there is no such thing as free privacy, and it is always a good idea to read an app’s privacy policy.
All private communication apps have their own specificities: in features, in how they secure messages, and in whether they are open source, for example. They all, however, have something in common. While the apps’ providers may not be able to read the content of the encrypted messages you exchange, they do gather information about their users, including for example:
Your personal information on registration (email address, phone number, etc.);
Who you communicate with and additional metadata on your communications (e.g. date & time);
Your address book (everybody in your contacts list);
“When you access and use the Service, you will be asked to grant us the right to collect the data stored in the address book on the Device from which you are accessing and using the Service…” – The policy then mentions the information is stored in an anonymised form, but it does not stipulate any controls or constraints on further processing the data; and
“Like most organizations, we rely on automatic data collection … when you visit our Website or use our Service. These technologies may collect information on our behalf such as IP address…information about your device…” and a long list of other data.
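Even without message content, the metadata in the list above is revealing. The toy sketch below (with entirely hypothetical names and timestamps, not data from any real app) shows how a provider holding only sender, recipient and time records can reconstruct a user's social graph:

```python
from collections import Counter

# Hypothetical metadata records: (sender, recipient, timestamp).
# No message content at all, yet patterns emerge immediately.
events = [
    ("alice", "bob", "2017-02-01T09:00"),
    ("alice", "bob", "2017-02-01T22:30"),
    ("alice", "carol", "2017-02-02T09:05"),
    ("bob", "carol", "2017-02-02T09:06"),
    ("alice", "bob", "2017-02-03T23:10"),
]

# Count how often each unordered pair communicates:
# the social graph falls out of the metadata alone.
pairs = Counter(frozenset((sender, recipient)) for sender, recipient, _ in events)
closest = pairs.most_common(1)[0]
print(closest)  # the alice-bob edge dominates
```

Real traffic-analysis techniques go much further (timing correlation, group inference), but even this minimal counting exercise shows why "we only collect metadata" is not a reassuring claim.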
How secure are secure messaging apps?
Erica Portnoy from the Electronic Frontier Foundation (EFF) argues that it would be challenging to achieve consensus on what a “secure” messenger must provide, because different people’s and communities’ security needs are different. She also adds that “a messenger that’s perfectly secure for every single person is unlikely to exist”.
While the EFF does not provide any app benchmarking (which would be great), Portnoy identifies the following key criteria:
User experience.
“There’s a big difference between the theoretical and practical security messengers provide.”
Erica Portnoy, EFF
Portnoy argues that encryption is the easy part, because most algorithms are standardised and using one key algorithm rather than another would not make much of a difference. The other criteria, however, are hard to perfect: programmers may make mistakes when translating the encryption maths into actual code for secure messaging apps.
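A classic example of such an implementation mistake, sketched below with a simplified XOR keystream standing in for a real stream cipher, is reusing the same key and nonce for two messages. The mathematics of the cipher is fine; the bug is purely in how the code uses it, and it lets an eavesdropper cancel the keystream out entirely:

```python
import os

def xor_stream(keystream: bytes, msg: bytes) -> bytes:
    """Encrypt/decrypt by XORing the message with a keystream (stream-cipher model)."""
    return bytes(k ^ m for k, m in zip(keystream, msg))

keystream = os.urandom(16)  # stands in for a cipher's keystream for one (key, nonce) pair

# Bug: the same keystream (same key + nonce) is reused for two messages.
c1 = xor_stream(keystream, b"attack at dawn..")
c2 = xor_stream(keystream, b"retreat at once!")

# XORing the two ciphertexts removes the keystream and leaks m1 XOR m2,
# from which known-plaintext and frequency attacks can recover both messages.
leak = bytes(a ^ b for a, b in zip(c1, c2))
expected = bytes(a ^ b for a, b in zip(b"attack at dawn..", b"retreat at once!"))
assert leak == expected
```

Bugs of exactly this shape have appeared in deployed software, which is why Portnoy's distinction between theoretical and practical security matters.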
In addition, Portnoy refers to examples of poor practices that may completely bypass the good security of encryption algorithms, such as apps storing conversation history unencrypted in the Cloud or not having secure auto-updating to patch vulnerabilities.
She also advocates for apps with:
Alias, and not phone numbers, as user identifier;
Indicators of compromise; and
Fingerprint verification to get assurance on the other person.
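The last point can be made concrete. Apps such as Signal display a "safety number" derived from both parties' public keys; if Alice and Bob compare the number out of band (in person, over a phone call) and it matches, they gain assurance that no intermediary has swapped in its own keys. The sketch below is a simplified illustration of the idea, not any particular app's actual scheme, and the keys are placeholder byte strings:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Short, human-readable digest, grouped for reading aloud."""
    short = hashlib.sha256(data).hexdigest()[:32]
    return " ".join(short[i:i + 4] for i in range(0, len(short), 4))

def safety_number(my_key: bytes, their_key: bytes) -> str:
    """Order-independent combined fingerprint, so both parties see the same value."""
    combined = b"".join(sorted([my_key, their_key]))
    return fingerprint(combined)

# Placeholder public keys; real apps use the keys from the key-agreement protocol.
alice_view = safety_number(b"alice-public-key", b"bob-public-key")
bob_view = safety_number(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view  # both users compare this value out of band
```

If an attacker substituted either key in transit, the two displayed numbers would no longer match, which is precisely the compromise the verification step is designed to expose.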
Whether you are a president, a prime minister, a criminal or a law-abiding citizen valuing your right to privacy online, secure messengers can provide you with a great means of communicating securely and privately, but only to a degree. | <urn:uuid:2c9e0908-c4a9-4ca0-a51e-f305aed2b9e9> | CC-MAIN-2022-40 | https://www.cpomagazine.com/data-privacy/privacy-dilemmas-of-insecure-messaging-apps/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00318.warc.gz | en | 0.95155 | 2,221 | 3.03125 | 3 |