Physics Solver-AI-Powered Physics Solver
AI-Powered Physics Solutions for Everyone
Physics Oracle, the world's most powerful physics tool, V2.3
Physics Problem Solver
Updated: 04/12/2024 - attempting to compensate for GPT-4 updates. Let me know where it is failing.
Powerful physics problem solver for mechanics, electricity and magnetism. Easy to understand with step-by-step explanations. Powered by Solvely.
Educational, detailed Physics guide and homework solver.
🔷 #1 Personalized Physics Assistant 🔷
A program for solving university-level physics problems
Introduction to Physics Solver
Physics Solver is a specialized AI tool designed to assist with solving a wide range of physics problems. It leverages its extensive knowledge base from classic physics textbooks and problem sets, such as Irodov's and Savelev's collections, to provide detailed, step-by-step solutions. The primary goal is to enhance the understanding of physics concepts and problem-solving techniques for students and enthusiasts. For example, if a student is struggling with a mechanics problem involving Newton's laws, Physics Solver can guide them through the process of identifying forces, setting up equations of motion, and solving for the desired quantities.
Main Functions of Physics Solver
A student needs to solve a problem involving the conservation of momentum. Physics Solver can break down the problem into smaller steps, explain the principles involved, and show how to apply the conservation laws to find the final velocities of colliding bodies.
During a physics exam preparation, a student encounters a challenging problem on elastic collisions. By inputting the problem into Physics Solver, they receive a clear, methodical solution that improves their comprehension and confidence.
A user is confused about the concept of electric fields. Physics Solver provides a detailed explanation, supplemented with examples and visual aids, to clarify how electric fields are generated and how they interact with charges.
In a physics class, a teacher uses Physics Solver to demonstrate the concept of electric fields to students. By providing real-world examples and interactive explanations, the tool helps students grasp the topic more effectively.
A high school student is working on their physics homework and gets stuck on a problem related to thermodynamics. Physics Solver offers step-by-step assistance, ensuring the student understands each part of the solution.
A student working on their homework late at night can use Physics Solver to receive immediate help with difficult problems, ensuring they complete their assignments correctly and on time.
Ideal Users of Physics Solver
High School Students
High school students studying physics can greatly benefit from Physics Solver. It helps them understand complex concepts, solve challenging problems, and prepare for exams. The step-by-step solutions and detailed explanations make learning more accessible and less daunting.
University Students
University students, especially those majoring in physics or engineering, will find Physics Solver invaluable for tackling advanced problems. The tool supports their coursework by providing detailed solutions and clarifying difficult topics, which is particularly useful during exam preparation and project work.
Guidelines to Use Physics Solver
Visit aichatonline.org for a free trial; no login or ChatGPT Plus subscription is required.
Access the service directly through the website to start solving physics problems.
Upload relevant physics documents.
Ensure you have all necessary documents related to the problem you need to solve.
Specify the physics problem clearly.
Provide a detailed description of the problem including any known parameters and desired outcomes.
Select the appropriate tool for the problem.
Choose the specific area of physics (mechanics, electromagnetism, etc.) that your problem relates to.
Review and apply the provided solutions.
Carefully go through the solution steps and understand the methodology used to solve your problem.
- Problem Solving
- Homework Help
- Exam Preparation
- Research Support
- Concept Clarification
Q&A about Physics Solver
What types of physics problems can Physics Solver handle?
Physics Solver can handle a wide range of physics problems including mechanics, electromagnetism, thermodynamics, and quantum physics.
How can I upload a document for analysis?
You can upload documents directly through the Physics Solver interface by navigating to the upload section and selecting your file.
What is the typical response time for a solution?
The typical response time varies depending on the complexity of the problem but usually ranges from a few seconds to a couple of minutes.
Are there any prerequisites for using Physics Solver?
Basic knowledge of physics principles and familiarity with the problem at hand are helpful, but not required. The solver provides detailed step-by-step solutions.
Can Physics Solver be used for academic research?
Yes, Physics Solver can assist in academic research by providing detailed solutions and explanations that can be used to support theoretical studies and experiments. | <urn:uuid:21908852-6591-4d74-b2e5-90a6e38e1dbe> | CC-MAIN-2024-38 | https://theee.ai/tools/Physics-Solver-2OToA03Rfm | 2024-09-16T05:44:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00199.warc.gz | en | 0.897699 | 1,282 | 2.78125 | 3 |
Web scraping’s prevalence, sophistication and industry have expanded alongside the internet’s growth, according to a Distil Networks study.
Through analysis of top web scraping platforms and services, the report outlines how the democratisation of web scraping allows users to effortlessly steal sensitive information on the web.
Web scraping is a computer software technique for extracting information from websites, and often includes transforming unstructured website data into a database for analysis or repurposing content into the web scraper’s own website and business operations.
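As a simple illustration of the technique, here is a minimal sketch in Python, assuming the third-party requests and beautifulsoup4 packages; the URL and CSS selectors are hypothetical placeholders.

```python
# Minimal web-scraping sketch: fetch a page, parse the HTML, and turn
# unstructured markup into structured records. The target URL and the
# "div.listing" selectors are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/listings", timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Extract each listing's title and price into a structured record,
# ready to be loaded into a database or spreadsheet.
records = []
for item in soup.select("div.listing"):
    records.append({
        "title": item.select_one("h2").get_text(strip=True),
        "price": item.select_one("span.price").get_text(strip=True),
    })

print(records)
```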
While much of this is not illegal, it sits in a grey area where legality and morality can be debated.
In most cases, bots, which make up 46% of web traffic, are implemented by individuals to perform web scraping at a much faster rate than humans alone.
38% of companies who engage in web scraping do so to obtain content, while it is also used for research, contact scraping, price comparison, weather data monitoring, and website change detection.
The top industries affected by web scraping that the study identified were (in order): real estate, digital publishing, e-commerce, directories and classifieds, and airlines and travel.
Currently, according to the report, around 2% of online revenues can be lost through misuse of this online content.
This is not the only issue: web scraping can also expose 'private' information that is posted online, which could lead to significant fines in a world of stricter regulations.
Diverse actors leverage web scraping bots, including nefarious competitors, internet upstarts, hedge funds, fraudsters, hackers, and spammers, to effortlessly steal whatever pieces of content they are programmed to find, and often mimic regular user behavior, making them hard to detect and even harder to block.
“If your content can be viewed on the web, it can be scraped,” said Rami Essaid, CEO and co-founder of Distil Networks.
“Not only does web scraping pose a critical challenge to a website’s brand, it can threaten sales and conversions, lower SEO rankings, or undermine the integrity of content that took considerable time and resources to produce.”
“Understanding the pervasive nature of today’s web scraping economy not only raises awareness about this growing challenge, it also allows website owners to take action in the protection of their proprietary information.”
As we become more dependent on the internet in the Internet of Things era, the impact of content being stolen and re-used will increase.
At the same time, the cost of web scraping services has reduced dramatically – services can be had for as little as $3.33 an hour.
The average web scraper makes $58,000 annually, and when working for a large company specialising in web scraping this can reach $128,000 per year.
Web scraping is becoming increasingly desirable and easy to carry out, with the risks posed to businesses and individuals rising significantly. | <urn:uuid:cc94275a-4eea-4a82-bd1c-b562ac69b21d> | CC-MAIN-2024-38 | https://www.information-age.com/dangers-web-scraping-2478/ | 2024-09-16T05:17:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00199.warc.gz | en | 0.940082 | 612 | 2.609375 | 3 |
Understanding the distinction between antivirus and internet security is crucial to protect your devices and personal data from online dangers.
Although a VPN offers one level of security, antivirus software provides a different form of protection. Moreover, additional security measures safeguard against ransomware, identity theft, and other threats. In this article, we’ll delve into the details of internet security software, clarify some common misconceptions, and answer frequently asked questions.
The difference between antivirus and internet security
In short, antivirus protects you from threats that exist on your device, while internet security protects you from threats on the internet.
Antivirus software focuses on detecting and removing malicious software on your device. Internet security, on the other hand, encompasses a broader range of protection, including antivirus capabilities plus additional layers like firewalls, privacy controls, VPNs, and protection against phishing and hacking.
Internet security vs. antivirus: Which is better?
Internet security and antivirus protect against different threats, so one is not better than the other. They work in tandem.
For basic protection, antivirus may suffice. However, internet security suites offer more extensive safeguards for comprehensive defense against various threats, especially when frequently using the internet.
Surfshark One: The best internet security provider
Surfshark One is our top pick among internet security providers for 2024. It provides an all-inclusive solution that safeguards against a range of digital threats. It combines traditional antivirus functions with advanced features such as a VPN for online privacy, ID theft protection, and a mechanism to alert users of personal data breaches. Surfshark One ensures comprehensive protection across all devices, making it an excellent choice for users seeking extensive security measures.
How to remove a virus using Surfshark One
To remove a virus using Surfshark One, you’ll need to follow a series of straightforward steps tailored to your specific device. Surfshark’s antivirus feature is designed to be user-friendly and effective across various platforms, including Windows, macOS, and Android. Here’s a condensed guide to help you get started:
For Windows users:
- Start by opening your device’s Surfshark application and navigating to the Antivirus tab.
- Click on the ‘Install Antivirus’ option to begin the installation of the antivirus component.
- After installation, press ‘Continue’ to start your first scan, allowing Surfshark to search for any threats on your device.
- Once the scan is complete, you can see the number of files scanned and whether any threats were found.
- Remove any threats immediately through the Surfshark app.
For macOS and Android Users:
The process for Mac and Android is similar to Windows. You must download and install Surfshark, select the antivirus feature, and scan to remove threats.
Remember, to use Surfshark Antivirus, you must have a Surfshark One or Surfshark One+ package, which you can purchase from their pricing page. This comprehensive approach ensures that your devices are protected against a wide range of digital threats, maintaining your privacy and security online.
THE BEST INTERNET SECURITY SOLUTION
Understanding viruses and protection
How do computer viruses spread?
Computer viruses spread through infected files, email attachments, malicious websites, and unsecured network connections. They can replicate and transmit themselves to other systems, compromising security and functionality.
What is virus protection software?
Virus protection software is designed to detect, prevent, and remove malware from computers and networks. It monitors system activities for suspicious behavior, offering real-time protection against threats. But it might not offer on-demand scans of your entire device.
How does antivirus work?
Antivirus software scans files and programs, comparing them against a database of known threats. It can also employ heuristic analysis to detect unknown malware based on behavior. Detected threats are then quarantined or deleted to prevent damage.
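As a toy illustration of the signature-matching half of that process, here is a minimal Python sketch; real engines add byte-pattern signatures, emulation, and heuristic analysis, and the hash database here is a hypothetical placeholder.

```python
# Toy signature-based scanner: hash each file and compare it against a
# database of known-malware hashes. Real antivirus engines are far more
# sophisticated; the signature set below is a hypothetical placeholder.
import hashlib
from pathlib import Path

KNOWN_MALWARE_SHA256 = {
    "0" * 64,  # placeholder hash of a known malicious sample
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(directory: str) -> list[Path]:
    """Return files whose hash matches a known-malware signature."""
    return [
        p for p in Path(directory).rglob("*")
        if p.is_file() and sha256_of(p) in KNOWN_MALWARE_SHA256
    ]

print(scan("./downloads"))
```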
- Do I need antivirus for Windows 10? Yes, despite Windows 10 having built-in protection, additional antivirus software strengthens defense against sophisticated malware.
- Do tablets need virus protection? Yes. Like any device, tablets can benefit from virus protection to guard against malware and security breaches.
- Do Android phones get viruses? Yes, Android phones can get viruses, mainly through malicious apps or websites.
- Does iPhone need antivirus or malware protection? No, iPhones don’t require antivirus or malware protection. While iOS is known for its robust security, being cautious and informed about potential vulnerabilities is wise. Official app stores and avoiding suspicious links remain best practices.
FAQs about internet security software
Can internet security software slow down my computer?
Modern internet security software is designed to be lightweight, minimizing the impact on system performance. However, you might notice a slowdown during antivirus scans.
How often should I update my internet security software?
Keep your software updated at all times. Most programs offer automatic updates to protect against the latest threats.
Can I use free internet security software?
While free options exist, paid solutions offer more comprehensive protection and support.
How do I choose the best internet security software?
Consider factors like feature set, performance in independent tests, user reviews, and device compatibility.
How many different Microsoft Windows file types can be infected with a virus?
Virtually any file type can be infected, but commonly targeted file types include executables (.exe), scripts, and MS Office document macros. | <urn:uuid:0d925ab4-ab0e-4090-b29d-4ef772ad899e> | CC-MAIN-2024-38 | https://www.comparitech.com/antivirus/antivirus-and-security-software/ | 2024-09-17T12:07:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00099.warc.gz | en | 0.905023 | 1,125 | 2.53125 | 3 |
Estimates suggest that enterprise technology accounts for 3-5% of global power consumption and 1-2% of carbon emissions. Although technology systems are becoming more power efficient, optimizing power consumption is a key priority for enterprises to reduce their carbon footprints and build sustainable businesses. Cloud modernization can play an effective part in this journey if done right.
Current practices are not aligned to sustainable technology
The way systems are designed, built, and run impacts enterprises’ electricity consumption and CO2 emissions. Let’s take a look at the three big segments:
- Architecture: IT workloads, by design, are built for failover and recovery. Though businesses need these backups in case the main systems go down, the duplication results in significant electricity consumption. Most IT systems were built for the “age of deficiency,” wherein underlying infrastructure assets were costly, rationed, and difficult to provision. Every major system has a massive back-up to counter failure events, essentially multiplying electricity consumption.
- Build: Consider that for each large ERP production system there are 6 to 10 non-production systems across development, testing, and staging. Developers, QA, security, and pre-production ended up building their own environments. Yet, whenever systems were built, the entire infrastructure needed to be configured despite the team needing only 10-20% of it. Thus, most of the electricity consumption ended up powering capacity that wasn’t needed at all.
- Run: Operations teams have to make do with what the upstream teams have given them. They can’t take down systems to save power on their own, as the systems weren’t designed to work that way. So, the run teams ensure every IT system is up and running. Their KPIs are tied to availability and uptime, meaning they are incentivized to make systems “over available” even when they aren’t being used. The run teams didn’t – and still don’t – have real-time insights into the operational KPIs of their systems landscape to dynamically decide which systems to shut off to save power consumption.
The role of cloud modernization in building a sustainable technology ecosystem
In February 2020, an article published in the journal Science suggested that, despite digital services from large data center and cloud vendors growing sixfold between 2010 and 2018, energy consumption grew by only 6%. I discussed power consumption as an important element of “Ethical Cloud” in a blog I wrote earlier this year.
Many cynics say that cloud just shifts power consumption from the enterprise to the cloud vendor. There’s a grain of truth to that. But I’m addressing a different aspect of cloud: using cloud services to modernize the technology environment and envision newer practices to create a sustainable technology landscape, regardless of whether the cloud services are vendor-delivered or client-owned.
Cloud 1.0 and 2.0: By now, many architects have used cloud’s runtime access to underlying infrastructure, which can definitely address the issues around over-provisioning. Virtual servers on the cloud can be switched on or off as needed, and doing so reduces carbon emissions. Moreover, as cloud instances can be provisioned quickly, they are – by design – fault tolerant, so they don’t rely on excessive back-up systems. They can be designed to go down, and their back-up turns on immediately without being forever online. The development, test, and operations teams can provision infrastructure as and when needed. And they can shut it down when their work is completed.
Cloud 3.0: In the next wave of cloud services, with enabling technologies such as containers, functions, and event-driven applications, enterprises can amplify their sustainable technology initiatives. Enterprise architects will design workloads keeping failure as an essential element that needs to be tackled through orchestration of run time cloud resources, instead of relying on traditional failover methods that promote over consumption. They can modernize existing workloads that need “always on” infrastructure and underlying services to an event-driven model. The application code and infrastructure lay idle and come online only when needed. A while back I wrote a blog that talks about how AI can be used to compose an application at run time instead of always being available.
Server virtualization played an important role in reducing power consumption. However, now, by using containers, which are significantly more efficient than virtual machines, enterprises can further reduce their power consumption and carbon emissions. Though cloud sprawl is stretching the operations teams, newer automated monitoring tools are becoming effective in providing a real-time view of the technology landscape. This view helps them optimize asset uptime. They can also build infrastructure code within development to make an application aware of when it can let go of IT assets and kill zombie instances, which enables the operations team to focus on automating and optimizing, instead of managing systems that are always on.
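As a concrete illustration of the "kill zombie instances" idea, here is a minimal sketch assuming AWS with the boto3 SDK; the seven-day window, 2% CPU threshold, and environment tags are hypothetical policy choices, not a prescription.

```python
# Sketch of an idle-instance reaper: stop non-production EC2 instances
# that have sat nearly idle for a week. Thresholds and tags are
# hypothetical policy choices.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def avg_cpu(instance_id: str, days: int = 7) -> float:
    """Average CPU utilization over the past `days` days."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:env", "Values": ["dev", "test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

for r in reservations:
    for inst in r["Instances"]:
        if avg_cpu(inst["InstanceId"]) < 2.0:  # nearly idle all week
            ec2.stop_instances(InstanceIds=[inst["InstanceId"]])
```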
Moreover, because the entire cloud migration process is getting optimized and automated, power consumption is further reduced. Newer cloud-native workloads are being built in the above model. However, enterprises have large legacy technology landscapes that need to move to an on-demand cloud-led model if they are serious about their sustainability initiatives. Though the business case for legacy modernization does consider power consumption, it mostly focuses on movement of workload from on-premises to the cloud. And it doesn’t usually consider architectural changes that can reduce power consumption, even if it’s a client-owned cloud platform.
When considering next-generation cloud services, enterprises should rethink their modernization journeys beyond a data center exit to building a sustainable technology landscape. They should consider leveraging cloud-led enabling technologies to fundamentally change the way their workloads are architected, built, and run. However, enterprises can only think of building a sustainable business through sustainable technology when they’ve adopted cloud modernization as a potent force to reduce power and carbon emission.
This is a complex topic to solve for, but we all have to start somewhere. And there are certainly other options to consider, like greater reliance on renewable energy, reduction in travel, etc. I’d love to hear what you’re doing, whether it’s using cloud modernization to reduce carbon emission, just shifting your emissions to a cloud vendor, or another approach. Please write to me at [email protected]. | <urn:uuid:d7ea393f-aef4-4bba-8dac-9cdadd492aae> | CC-MAIN-2024-38 | https://www.everestgrp.com/2020-12-https-www-everestgrp-com-2020-12-sustainable-business-needs-sustainable-technology-can-cloud-modernization-help-blog-html-.html | 2024-09-18T15:33:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00899.warc.gz | en | 0.951704 | 1,302 | 2.578125 | 3 |
A security researcher has disclosed a new Internet Explorer zero-day vulnerability, along with a proof-of-concept, that allows hackers to steal files from Windows computers.
Internet Explorer is one of the widely used web browsers developed by Microsoft and included in the Microsoft Windows line of operating systems, starting in 1995.
An XML External Entity (XXE) injection vulnerability affecting the current version of Microsoft Internet Explorer 11 lets remote attackers compromise Windows machines, exfiltrate local files, and conduct remote reconnaissance on locally installed program version data.
Based on a browser market share report, Internet Explorer is the second-largest web browser and is used by millions of people around the world, including within corporate networks.
How Does This Internet Explorer Zero-day Works
Since Internet Explorer is vulnerable to XML External Entity injection, an attacker can exploit the targeted system if a user opens a specially crafted .MHT file locally.
Let’s assume a victim opens the malicious .MHT file locally via Internet Explorer. If the user then performs an interaction such as duplicating the tab (Ctrl+K), or uses the right-click “Print Preview” or “Print” commands on the web page, the vulnerability is triggered and the system is exploited.
The flaw was reported by security researcher John Page (aka hyp3rlinx).
How to Exploit this Vulnerability
1) Use the provided script to create the “datatears.xml” XML file and the XXE-embedded “msie-xxe-0day.mht” MHT file.
2) python -m SimpleHTTPServer
3) Place the generated “datatears.xml” file in the Python server’s web root.
4) Open the generated “msie-xxe-0day.mht” file and watch your files be exfiltrated.
However, Microsoft said: “We determined that a fix for this issue will be considered in a future version of this product or service. At this time, we will not be providing ongoing updates of the status of the fix for this issue.”
What Does a Picture Archiving and Communication System Consist of?
A PACS system consists of four main components:
- An imaging modality (type of imaging). Imaging modalities include, for example, magnetic resonance imaging (MRI), ultrasound, x-rays, and computed tomography (CT) scanners.
- A secure network, through which to transmit patient information.
- Workstations through which images can be interpreted and reviewed.
- Archives to store and retrieve images and reports.
The PACS system allows for sharing of the images. Under the HIPAA Privacy Rule, images can be shared among providers, or within a healthcare organization, for treatment purposes. Since a PACS system acts as a digital storage medium, the need to manually file, retrieve, access, or transport film jackets containing images as physical documents is eliminated.
How Are a Picture Archiving and Communication System (PACS) and HIPAA Related?
Medical images such as X-rays, CT scans, and MRI scans constitute electronic protected health information (ePHI); as such, covered entities that use Picture Archiving and Communication System technology are subject to the requirements of the HIPAA Security Rule.
The HIPAA Security Rule requires covered entities to:
- Ensure the confidentiality, integrity, and availability of all ePHI they create, receive, maintain, or transmit;
- Identify and protect against reasonably anticipated threats to the security or integrity of the information;
- Protect against impermissible uses or disclosures of ePHI that are reasonably anticipated; and
- Ensure compliance by their workforce.
To satisfy the Security Rule requirements, covered entities must develop administrative, technical, and physical safeguards to ensure the confidentiality, integrity, and availability of ePHI.
Technical safeguards play a particularly important role in ensuring PACS data and its transmission is kept secure. Technical safeguards include, among other items:
- Access Controls: Implementing technical policies and procedures that allow only authorized persons to access ePHI.
- Transmission Security: Implementing technical security measures that guard against unauthorized access to ePHI that is transmitted over an electronic network.
Having proper access controls can ensure that PACS-equipped workstations are not accessed by unauthorized persons. Having proper transmission security can ensure patient information is transmitted over a secure network that is protected from unauthorized access to ePHI. | <urn:uuid:ef42aa8a-8659-4f5c-9d9f-fd6e97a5b7e5> | CC-MAIN-2024-38 | https://compliancy-group.com/what-is-a-picture-archiving-and-communication-system-pacs/ | 2024-09-08T23:25:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00899.warc.gz | en | 0.911822 | 485 | 2.640625 | 3 |
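As a toy illustration of the access-control safeguard, here is a minimal Python sketch; the roles, study record, and in-memory archive are hypothetical stand-ins for a real PACS integration, which would also need authentication, encryption in transit, and durable audit storage.

```python
# Illustrative access-control check: only authorized roles may retrieve
# a study, and every attempt is logged for auditing. All names here are
# hypothetical stand-ins for a real PACS integration.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ephi.audit")

AUTHORIZED_ROLES = {"radiologist", "attending_physician"}

@dataclass
class Study:
    study_id: str
    patient_id: str

ARCHIVE = {"ST-1001": Study("ST-1001", "P-42")}  # toy in-memory archive

def retrieve_study(user: str, role: str, study_id: str) -> Study:
    if role not in AUTHORIZED_ROLES:
        audit_log.warning("DENIED %s (%s) -> %s", user, role, study_id)
        raise PermissionError("not authorized to access ePHI")
    audit_log.info("GRANTED %s (%s) -> %s", user, role, study_id)
    return ARCHIVE[study_id]

retrieve_study("dr.lee", "radiologist", "ST-1001")
```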
Let’s run an experiment involving two couples, one old and the other young.
The first couple has dated a few times while the second couple is married with children and grandchildren.
Both the women ask their partners to select a pastry for them.
Which guy would have the greater chance of getting his choice right: the younger guy who has just met the girl, or the older man who has been living with his wife for fifty years?
The grandfather would likely ace the test. He knows his wife’s favorite type of pastry. He will walk into a bakery, place the order and leave immediately. The younger guy would pick a pastry and hope his choice was right.
Solving the communication gap in software development
Software development has a problem similar to what the younger guy faces.
Too often there is a huge communication gap between business and engineers. The former expects too many features without having a firm grasp of the technical limitations and the latter has trouble understanding what business wants.
This results in waste of time and overwork as both the business side and the engineering side are often at cross purposes, and the final product that emerges is complex and barely functional.
Behavior Driven Development is an Agile development methodology that aims to change this state of affairs.
There are several benefits of BDD:
- The entire software development process is grounded in business logic
- Software development is focused on the user needs
- Business critical features are always delivered on priority
- All stakeholders understand the nitty gritty of the process and communicate throughout because of the use of simple language
- High-quality code means that maintenance costs and project risk are low
The guts of Behavior Driven Development
In the words of Dan North, BDD’s originator, it is “an ‘outside-in’ methodology. It starts at the outside by identifying business outcomes, and then drills down into the feature set that will achieve those outcomes. Each feature is captured as a ‘story’, which defines the scope of the feature along with its acceptance criteria.”
There are two parts to BDD:
- Use examples written in easy to understand language that describes how users interact with the features
- Automated tests designed on the basis of these examples to ensure that the system behaves as specified by business.
A typical BDD project would have the following elements in it:
1. Setting up SMART goals
Most software development processes hit snags when business outcomes like “increase in revenue and decrease in operating costs” are not specific enough to be of help in writing software.
BDD solves this problem by having a business goal which has to be SMART (Specific, Measurable, Attainable, Relevant and Time Bound).
2. Impact Mapping
After SMART goals, the Impact Map is designed which helps the team lay out all the ways in which the specific goal can be reached.
An Impact Map is a mindmap that has four levels: Why, Who, How, and What.
Impact Map is an Agile requirement gathering technique that provides a visual overview of what steps should be taken to achieve a goal.
“Why” is the goal, and “Who” are the actors (customers or internal teams) who can help achieve the goal. The “How” denotes the ways in which their specific behavior can be impacted, and the “What” lists the steps that the stakeholders need to implement to have the desired impact.
3. Setting priorities
In every project it’s imperative to set priorities so that business gets the most critical features first. This is decided in BDD by using two techniques called value analysis and complexity analysis.
Value analysis lets the stakeholders pick low-cost, high-value features, and complexity analysis is used for picking the right development and collaboration approach for the entire project as well as for individual features.
These techniques can be implemented using a number of different frameworks.
4. Using stories in planning
Because Agile has a very short development cycle, there is always the danger that engineers might build something that business doesn’t want, and their entire work might be wasted.
BDD uses stories in ubiquitous language to make everything unambiguous. Every story has a fixed template and looks like this:
Title (one line describing the story)

Narrative:
As a [role]
I want [feature]
So that [benefit]

Acceptance Criteria: (presented as Scenarios)

Scenario 1: Title
Given [context]
And [some more context]...
When [event]
Then [outcome]
And [another outcome]...

Scenario 2: ...
Here is an example of stories in action and how they can be used directly in development:

Story: Account Holder withdraws cash

As an Account Holder
I want to withdraw cash from an ATM
So that I can get money when the bank is closed

Scenario 1: Account has sufficient funds
Given the account balance is $100
And the card is valid
And the machine contains enough money
When the Account Holder requests $20
Then the ATM should dispense $20
And the account balance should be $80
And the card should be returned

Scenario 2: Account has insufficient funds
Given the account balance is $10
And the card is valid
And the machine contains enough money
When the Account Holder requests $20
Then the ATM should not dispense any money
And the ATM should say there are insufficient funds
And the account balance should be $10
And the card should be returned

Scenario 3: …

Business can now write stories around features and have the assurance that there would be no ambiguity.
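To show how the second half of BDD (automated tests driven by these examples) might look in practice, here is a minimal sketch assuming the Python behave framework; the Account and ATM classes are hypothetical stand-ins for real code, and step definitions for the remaining clauses follow the same pattern.

```python
# steps/atm_steps.py -- binds the ATM scenarios above to automated
# tests via behave. Account and ATM are hypothetical stand-ins; steps
# for "the card is valid" etc. follow the same pattern.
from behave import given, when, then

class Account:
    def __init__(self, balance):
        self.balance = balance

class ATM:
    def __init__(self, cash):
        self.cash = cash
        self.message = None

    def withdraw(self, account, amount):
        if account.balance < amount:
            self.message = "insufficient funds"
            return 0
        account.balance -= amount
        self.cash -= amount
        return amount

@given("the account balance is ${balance:d}")
def given_balance(context, balance):
    context.account = Account(balance)

@given("the machine contains enough money")
def given_machine(context):
    context.atm = ATM(cash=10_000)

@when("the Account Holder requests ${amount:d}")
def when_request(context, amount):
    context.dispensed = context.atm.withdraw(context.account, amount)

@then("the ATM should dispense ${amount:d}")
def then_dispensed(context, amount):
    assert context.dispensed == amount

@then("the account balance should be ${balance:d}")
def then_balance(context, balance):
    assert context.account.balance == balance
```

Because the step text in the feature file is matched verbatim against these definitions, the plain-language stories that business writes become executable acceptance tests without translation.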
BDD is a methodology that’s highly collaborative in nature. It assumes that no single person has the answers to all the problems, and it seeks to develop a framework that can streamline collaboration in fast paced Agile environments.
BDD has a simple role: getting business analysts, developers, and testers on the same page so that the features business needs can be delivered with a minimum of delay.
This edition of the passkey primer is going to take a look at the options for accessing your accounts via passkeys from different devices, as well as your backup options. We covered the background in our earlier newsletters:
- An overview of passkeys
- The problems with passwords
- The cryptography behind passkeys
- How passkeys authentication works
Passkeys are fairly adaptable and can be implemented in a range of different ways. The flow that we outlined in the previous newsletter was just one of the simplest examples. If you were really paying attention, you may have picked up a pretty significant flaw in it: The keys were just stored locally on the device.
In the current tech environment, we are frequently switching between laptops, phones and other devices, so only having an account’s keys on a single device would mean that you can’t access the account from other devices. This would be a serious limitation, making it hard to work between phones, tablets and laptops. It would also be a disaster if you lost the device, because you would be locked out of the account.
But don’t worry, the tech community wasn't that short-sighted, and there are a few solutions.
Authentication via Bluetooth
Let’s say you want to access your Google account on your laptop, but your passkeys are stored on your phone. This would be a big problem, except you can actually use your phone to sign in on your laptop, via Bluetooth. First, you would go to Google’s login page on your laptop, and you would see a button that says something along the lines of “Use another device to sign in”.
If you clicked on it, the two devices were physically close, and Bluetooth was on, you would then see a notification on your phone. In essence, it would say that “You are trying to sign in to Google on your nearby computer. Here are the accounts.”
You would then choose the correct account, and your phone would prompt you for your PIN, pattern or biometrics to sign in. Once you successfully enter it on your phone, you would be able to access the account on your computer.
Hardware security tokens
Another option is to store your passkeys on a hardware security token like a YubiKey. You could use it for authentication across devices via NFC or USB—you just have to connect it, then enter your PIN or biometrics. While it’s a great option for security, it does have usability issues. You won’t be able to access the accounts via passkeys unless you have the hardware security token. You would want to have alternative plans in place that allow you to maintain access to your account, even if you lose the hardware token.
Cloud syncing

It’s also possible to sync your passkeys in the cloud via solutions like Google Password Manager and iCloud Keychain. If you log in to your Google account across any of your Google devices, you can access all of the passkeys stored in Google Password Manager. Apple devices provide the same flexibility.
While these options are great for usability and they allow users to access their keys even if their device is lost or stolen, they also come with problems. When passkeys are stored in the cloud, any attacker that can access the cloud account can access the passkeys as well. Users need to have 2FA on any cloud account that stores their passkeys, otherwise they would have a security catastrophe waiting to happen.
When users switch platforms, for example from iOS to Android, they can use the passkey from their old device to sign in to their account on the new device. After logging in, they can set up a new passkey on the new device. They could also use a hardware security key to authenticate on the new device. If neither option is available because the device was lost or stolen, they will have to go through account recovery procedures.
The WebAuthn recommendation
The WebAuthn specification recommends that users should register separate passkeys for each frequently used device for a given account. As an example, your phone and laptop could use different passkeys to access the same account. This gives you access across devices and provides redundancy, without you having to backup and share the keys themselves.
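To make this concrete, here is a minimal sketch of the server-side bookkeeping the recommendation implies, in Python; the class and field names are hypothetical stand-ins, and a real deployment would create and verify credentials with a WebAuthn library rather than by hand.

```python
# Sketch of per-device passkey bookkeeping: the server keeps several
# registered credentials (one per device) for the same account, so
# losing one device does not lock the user out. Field names are
# illustrative only.
from dataclasses import dataclass, field

@dataclass
class PasskeyCredential:
    credential_id: bytes   # chosen by the authenticator at registration
    public_key: bytes      # the credential's public key, stored verbatim
    device_label: str      # e.g. "Pixel 8", "Work laptop"
    sign_count: int = 0    # updated on every successful assertion

@dataclass
class Account:
    username: str
    credentials: list[PasskeyCredential] = field(default_factory=list)

    def register(self, cred: PasskeyCredential) -> None:
        self.credentials.append(cred)

    def find(self, credential_id: bytes) -> PasskeyCredential | None:
        return next(
            (c for c in self.credentials if c.credential_id == credential_id),
            None,
        )

alice = Account("alice")
alice.register(PasskeyCredential(b"\x01", b"<public-key-1>", "Pixel 8"))
alice.register(PasskeyCredential(b"\x02", b"<public-key-2>", "MacBook"))
```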
The challenges of access across devices
Seamless passkey authentication across devices is something that the tech community is still working on. Especially in these early days, users will be responsible for ensuring that they can access their accounts when they need them, and that they also have recovery options in place.
While these aren’t issues that we should ignore, it’s also worth considering that passwords face their own set of problems. Users need to set up their own unique passwords for each account, and sync them across devices, which is often achieved through a password manager. But this means that they have to trust a third party to secure the passwords and make them available. On top of this, there’s also issues like vendor lock-in.
While passkeys aren’t perfect yet, passwords aren’t either. | <urn:uuid:7bb30ca8-44da-4885-abdd-e5591839550b> | CC-MAIN-2024-38 | https://destcert.com/resources/passkey-primer-pt-5-flexible-account-access/ | 2024-09-10T03:59:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00799.warc.gz | en | 0.957228 | 1,046 | 2.640625 | 3 |
The primary purpose of cryptography is to make it difficult for an unauthorized third party to access and understand private communication between two parties. It is not always possible to restrict all unauthorized access to data, but private data can be made unintelligible to unauthorized parties through the process of encryption. Encryption uses complex algorithms to convert the original message (cleartext) to an encoded message (ciphertext). The algorithms used to encrypt and decrypt data that is transferred over a network typically come in two categories: secret-key cryptography and public-key cryptography.
Both secret-key cryptography and public-key cryptography depend on the use of an agreed-upon cryptographic key or pair of keys. A key is a string of bits that is used by the cryptographic algorithm or algorithms during the process of encrypting and decrypting the data. A cryptographic key is like a key for a lock; only with the correct key can you open the lock.
Safely transmitting a key between two communicating parties is not a trivial matter. A public key certificate enables a party to safely transmit its public key, while providing assurance to the receiver of the authenticity of the public key. See Public Key Certificates.
The descriptions of the cryptographic processes in secret-key cryptography and public-key cryptography follow conventions widely used by the security community: the two communicating parties are labeled with the names Alice and Bob. The unauthorized third party, also known as the attacker, is named Charlie. | <urn:uuid:6e498310-21e7-477e-9991-dfdc72f70cde> | CC-MAIN-2024-38 | https://www.ibm.com/docs/en/cloud-paks/z-modernization-stack/2023.4?topic=works-cryptographic-processes | 2024-09-10T03:31:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00799.warc.gz | en | 0.926619 | 292 | 4.09375 | 4 |
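To make the shared-key idea concrete, here is a minimal sketch of secret-key (symmetric) encryption, assuming Python with the third-party cryptography package; public-key cryptography instead uses a pair of keys, as described above.

```python
# Secret-key (symmetric) encryption sketch using the "cryptography"
# package: Alice and Bob share a single key that both encrypts and
# decrypts; without it, Charlie cannot recover the cleartext.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()  # must reach Bob over a safe channel
cipher = Fernet(shared_key)

ciphertext = cipher.encrypt(b"meet at noon")  # Alice encodes cleartext
cleartext = cipher.decrypt(ciphertext)        # Bob recovers it

assert cleartext == b"meet at noon"
```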
What is ePrivacy Regulation? The ePrivacy Regulation (also known as the ePVO) is intended to regulate the protection of the fundamental rights and freedoms of natural and legal persons in the provision and use of electronic communications services in the European Union. The ePVO is designed as a special law within EU data protection law. The legislative process for the ePVO has not yet been completed.
In an era where electronic communications dominate our daily lives, safeguarding personal data has become paramount. Enter the ePrivacy Regulation, a proposed legislation by the European Union that aims to enhance privacy protection in electronic communications.
In this blog, we delve into the key provisions of this regulation, its impact on businesses, and user rights. Discover how companies can prepare for compliance, explore the debates surrounding its effectiveness, and uncover the potential implications for digital marketing.
Stay informed and empowered as we navigate the ever-evolving landscape of data privacy in the digital age.
- What is ePrivacy Regulation?
- Relationship between ePrivacy and GDPR
- Key Provisions of ePrivacy Regulation
- ePrivacy Regulation vs. Cookie Law
- How ePrivacy Regulation Enhances Data Protection
- Impact on Businesses
- Preparing for the ePrivacy Regulation
- ePrivacy Regulation and User Rights
- ePrivacy Regulation and Technology Companies
- ePrivacy Regulation in the Global Context
- The Role of Data Protection Authorities
- Addressing Challenges and Concerns
- Criticisms and Controversies
- ePrivacy Regulation and Future of Privacy
- Frequently Asked Questions
- What is the main purpose of the ePrivacy Regulation?
- Does the ePrivacy Regulation apply to all businesses?
- How does ePrivacy Regulation relate to the General Data Protection Regulation (GDPR)?
- What are the penalties for non-compliance with ePrivacy Regulation?
- Does the ePrivacy Regulation cover cookies and tracking technologies?
- How can companies prepare for compliance with ePrivacy Regulation?
- What rights do users have under the ePrivacy Regulation?
- Does ePrivacy Regulation affect digital marketing practices?
- Can ePrivacy Regulation stifle innovation in the tech industry?
- Are there any expected changes or updates to the ePrivacy Regulation in the future?
What is ePrivacy Regulation?
The ePrivacy Regulation, also known as the “Regulation concerning the respect for private life and the protection of personal data in electronic communications,” is a proposed regulation by the European Union (EU) that aims to safeguard the privacy of individuals in electronic communications. It is intended to replace the ePrivacy Directive (Directive 2002/58/EC), which was implemented in 2002 and has been amended several times since.
The ePrivacy Regulation is designed to complement the General Data Protection Regulation (GDPR) and focuses specifically on the protection of personal data in electronic communications, such as emails, text messages, internet telephony, and other online communication services.
Its primary objective is to ensure the confidentiality of communications and to protect users’ privacy and data online.
Importance of Data Privacy in the Digital Age
- Personal Security: Protecting personal data is essential to safeguard individuals from various forms of cybercrime, identity theft, and fraud.
- Trust and Reputation: Businesses that prioritize data privacy build trust with their customers, leading to a positive reputation and increased customer loyalty.
- Data Breach Prevention: Strong data privacy measures help prevent data breaches, minimizing the risk of exposing sensitive information to unauthorized parties.
- Individual Rights: Respecting data privacy rights empowers individuals to control their personal information, giving them the right to know how their data is used and to provide informed consent.
- Compliance and Legal Obligations: Many countries and regions, including the EU with the GDPR, have established data protection laws, and organizations must comply with these regulations to avoid legal consequences and financial penalties.
- Ethical Responsibility: Respecting data privacy is an ethical obligation for businesses and organizations that handle personal data.
Background of ePrivacy Regulation
The ePrivacy Regulation has been in the works for several years and is part of the EU’s effort to update and strengthen data protection laws in the digital age. Its original proposal was made in January 2017, and it has undergone various revisions and discussions since then.
The regulation aims to address new challenges brought about by technological advancements and changes in communication habits while also aligning with the principles and requirements of the GDPR.
Relationship between ePrivacy and GDPR
The ePrivacy Regulation and GDPR are two distinct but interconnected regulations within the EU’s data protection framework:
- Scope: While the GDPR applies to the general protection of personal data across all sectors, the ePrivacy Regulation specifically addresses privacy and data protection in electronic communications.
- Complementarity: The ePrivacy Regulation complements the GDPR by providing additional and more specific rules for electronic communications. It includes provisions related to cookies, direct marketing, confidentiality of communications, and electronic marketing.
- Penalties: Both regulations impose fines for non-compliance, with penalties that can be significant for businesses found in breach of the rules.
- Interaction: The ePrivacy Regulation and GDPR work in tandem to ensure a comprehensive and consistent approach to data protection and privacy in the EU. Businesses and organizations that process personal data and engage in electronic communications must comply with both regulations.
Key Provisions of ePrivacy Regulation
Scope and Applicability
The ePrivacy Regulation aims to protect the privacy of individuals in electronic communications. It covers various forms of electronic communication services, such as emails, text messages, internet telephony, and instant messaging apps. It applies to both private and public communication providers within the European Union.
Consent Requirements for Electronic Communication
The ePrivacy Regulation emphasizes the importance of obtaining valid consent before processing electronic communications data. Consent must be freely given, specific, informed, and unambiguous.
It should be obtained before initiating any communication and before storing or accessing information on users’ devices, such as using cookies or similar tracking technologies.
Rules on Cookies and Tracking Technologies
Websites and apps must provide clear information about the purposes of data processing and enable users to easily withdraw their consent.
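As an illustration of what such a consent mechanism might look like, here is a minimal sketch assuming Python with the Flask framework; the cookie names and the analytics cookie are hypothetical placeholders.

```python
# Consent-gated cookies sketch: strictly necessary cookies need no
# consent, but the (hypothetical) analytics cookie is set only after
# the user has explicitly opted in, and consent can be withdrawn.
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response(
        "<a href='/consent?choice=yes'>Accept analytics cookies</a>"
    )
    # Only set the non-essential cookie if consent was already given.
    if request.cookies.get("cookie_consent") == "yes":
        resp.set_cookie("analytics_id", "abc123", max_age=30 * 24 * 3600)
    return resp

@app.route("/consent")
def consent():
    resp = make_response("Preference saved")
    if request.args.get("choice") == "yes":
        resp.set_cookie("cookie_consent", "yes", max_age=180 * 24 * 3600)
    else:
        # Withdrawal: clear the consent flag and any analytics cookies.
        resp.delete_cookie("cookie_consent")
        resp.delete_cookie("analytics_id")
    return resp
```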
ePrivacy Regulation vs. Cookie Law
| Aspect | ePrivacy Regulation | Cookie Law (ePrivacy Directive) |
|---|---|---|
| Legal Nature | Regulation (direct legal effect) | Directive (required national implementation) |
| Scope | Broader; covers electronic communications and data privacy | Primarily focused on cookies and tracking technologies |
| Applicability | EU-wide, no national implementation needed | Required each EU member state to implement into national law |
| Consent Requirements | Stricter; requires explicit, informed, and unambiguous consent for electronic communications and cookies | Required informed consent specifically for non-essential cookies |
| Harmonization | Aims to harmonize rules across EU member states | Led to variations in implementation and interpretation |
| Enforcement and Penalties | Sets EU-wide enforcement and penalties for non-compliance | Penalties varied depending on each member state’s implementation |
| Data Protection Enhancement | Enhances data protection in electronic communications and online services | Focused on cookie-related data protection |
The main differences between the ePrivacy Regulation and the Cookie Law are as follows:
Legal Nature: The ePrivacy Regulation, once adopted, will have a direct legal effect in all EU member states as a regulation, which means it will not require national implementation. On the other hand, the Cookie Law was a directive, which required each member state to implement it into their national legislation, leading to variations in its application across the EU.
Expanded Scope: The ePrivacy Regulation has a broader scope than the Cookie Law. It covers not only cookies but also various other forms of electronic communication, ensuring privacy protection for various digital communication services.
Harmonization: The ePrivacy Regulation aims to harmonize the rules related to electronic communications and data privacy across all EU member states, reducing discrepancies in the implementation and interpretation of the law.
How ePrivacy Regulation Enhances Data Protection
The ePrivacy Regulation enhances data protection in several ways:
- Stronger Consent Requirements: The regulation sets stricter rules for obtaining consent, ensuring that users have a clear understanding of how their data will be used and giving them more control over their personal information.
- Privacy by Design: The ePrivacy Regulation promotes privacy by design and default, encouraging service providers to incorporate data protection principles into their systems and processes from the outset.
- Improved User Rights: The regulation strengthens users’ rights, such as the right to be informed, the right to access their data, and the right to withdraw consent. This empowers individuals to exercise greater control over their personal information.
- Uniformity: The harmonization of rules across the EU ensures a consistent level of data protection for individuals, regardless of where they reside or access electronic communication services within the EU.
The ePrivacy Regulation complements the GDPR and enhances data protection in the digital age by addressing specific issues related to electronic communications and ensuring that individuals’ privacy rights are respected in the online environment.
Impact on Businesses
The ePrivacy Regulation, once enacted, will have significant implications for businesses operating within the European Union.
Compliance Challenges for Companies
Businesses will need to ensure they comply with the new rules and requirements set forth in the ePrivacy Regulation. This may involve adapting their data processing practices, obtaining explicit consent from users for electronic communications and cookies, and implementing privacy-by-design principles.
Penalties for Non-Compliance
Non-compliance with the ePrivacy Regulation can result in substantial fines, which may be up to 4% of a company’s global annual turnover, similar to the penalties under the GDPR. This places a significant financial burden on businesses that fail to adhere to the regulation’s provisions.
Impact on Online Advertising and Marketing
The regulation’s stricter consent requirements for cookies and tracking technologies may have a significant impact on online advertising and marketing practices. Companies will need to rethink their cookie policies and explore alternative ways of reaching their target audience while respecting user privacy.
Consent Management

Companies will need to implement mechanisms to collect and manage user consent effectively. This may involve adjustments to their websites, mobile apps, and other communication channels to ensure compliance.
Cross-Border Data Transfers
The ePrivacy Regulation may also impact cross-border data transfers within the EU and to third countries. Businesses will need to ensure that data transfers comply with the regulation’s requirements.
Preparing for the ePrivacy Regulation
- Conduct a Data Audit: Review and understand the types of personal data processed in electronic communications and assess the associated risks and data flows.
- Update Privacy Policies: Revise privacy policies to include specific information about electronic communications data processing and cookie usage, and clearly explain how user consent will be obtained.
- Obtain Consent: Implement mechanisms to obtain explicit consent from users for electronic communications and cookies. Ensure that users have a clear and easy way to provide or withdraw consent.
- Train Staff: Educate employees about the ePrivacy Regulation and its impact on the organization’s data processing practices to ensure everyone is aware of their roles and responsibilities.
- Review Data Processing Practices: Assess and update data processing practices to align with the regulation’s requirements, including the principles of privacy by design and data minimization.
ePrivacy Regulation and User Rights
The ePrivacy Regulation aims to strengthen user control over personal data and safeguard the confidentiality of communications.
- Consent Requirements: The regulation sets strict requirements for obtaining user consent before processing electronic communications data or using cookies and tracking technologies. This gives users more control over how their data is used and empowers them to make informed decisions.
- Enhanced Confidentiality: The ePrivacy Regulation ensures the confidentiality of electronic communications, prohibiting interception and surveillance without proper legal grounds and consent.
- Privacy-by-Default: The regulation encourages privacy by design, meaning that services must be designed with privacy considerations from the outset, making it more likely that user data will be protected by default.
- Right to Withdraw Consent: Users have the right to withdraw their consent at any time, giving them the ability to stop further processing of their data.
The ePrivacy Regulation aims to protect user rights and privacy in electronic communications, fostering trust between users and businesses while adapting data protection to the challenges of the digital age.
ePrivacy Regulation and Technology Companies
Obligations for Online Service Providers
Under the ePrivacy Regulation, online service providers, such as websites, mobile apps, and communication platforms, have several key obligations:
- Consent Requirements: Online service providers must obtain explicit and informed consent from users before processing their electronic communications data or using cookies and tracking technologies. This includes providing clear information about the purposes of data processing and obtaining consent for each specific purpose.
- Cookie Management: Companies must implement mechanisms to collect and manage user consent effectively, particularly for non-essential cookies. Users must have the option to reject or withdraw their consent at any time.
- Data Breach Notification: Like the GDPR, the ePrivacy Regulation requires online service providers to promptly notify users and data protection authorities in the event of a data breach that may result in a risk to users’ rights and freedoms.
- Confidentiality of Communications: Providers are obligated to ensure the confidentiality of electronic communications and protect users from unauthorized interception or surveillance.
- Privacy by Design: The regulation encourages companies to adopt privacy-by-design principles, integrating data protection measures into their services and systems from the outset.
Balancing Data Usage and User Privacy
The ePrivacy Regulation aims to strike a balance between data usage for legitimate purposes and protecting user privacy. It acknowledges that data-driven technologies are crucial for the development of innovative services but insists that users’ fundamental rights and freedoms, such as privacy and confidentiality, should be respected.
ePrivacy Regulation in the Global Context
Comparison with Similar Regulations Worldwide
Various countries and regions have implemented or proposed similar data protection and privacy regulations. The most notable comparison can be made with the General Data Protection Regulation (GDPR) in the European Union, which serves as a foundational model for the ePrivacy Regulation.
Both regulations emphasize user rights, data protection principles, and strict consent requirements.
In other parts of the world, countries like Canada, Australia, Brazil, India, and South Africa have adopted or proposed data protection laws with varying degrees of similarity to the GDPR and ePrivacy Regulation.
Each country’s data protection framework may have unique provisions and requirements, but the overall goal is to protect individual privacy and establish a balance between data usage and data protection.
Implications for International Businesses
The ePrivacy Regulation’s impact extends beyond the borders of the European Union, as it applies to companies that offer goods or services to EU residents or monitor their behavior, regardless of where the company is located.
This extraterritorial reach means that technology companies around the world may be subject to the regulation’s requirements if they interact with EU users.
International businesses must understand and comply with the ePrivacy Regulation to avoid penalties and maintain a positive reputation. This may involve adjusting data processing practices, implementing user consent mechanisms, and aligning with the regulation’s privacy principles.
The ePrivacy Regulation also influences data transfers between the EU and other countries. Companies outside the EU seeking to process data of EU users will need to ensure that they meet the regulation’s data protection requirements when transferring data across borders.
ePrivacy Regulation has implications not only for technology companies within the EU but also for international businesses that interact with EU users. It underlines the global importance of data privacy and encourages companies worldwide to adopt responsible data practices and prioritize user privacy rights.
The Role of Data Protection Authorities
Supervision and Enforcement of the Regulation
Data Protection Authorities (DPAs) in each EU member state are responsible for supervising and enforcing the ePrivacy Regulation. They play a crucial role in ensuring that companies and organizations comply with the regulation’s provisions related to electronic communications and data privacy.
DPAs have the power to investigate complaints, conduct audits, and impose fines for non-compliance.
Handling of ePrivacy-Related Complaints
DPAs receive and handle complaints from individuals and organizations regarding potential violations of the ePrivacy Regulation. They investigate these complaints and take appropriate actions, which may include issuing warnings, reprimands, ordering data processing to be halted, or imposing fines.
Addressing Challenges and Concerns
Balancing Privacy and Innovation
One of the key challenges is striking a balance between preserving user privacy and fostering technological innovation. While the ePrivacy Regulation aims to enhance data protection, it should also allow room for businesses to innovate and develop new services that rely on data to some extent.
Ensuring that the regulation’s requirements do not stifle innovation is a delicate task for policymakers and DPAs.
Potential Impact on Digital Marketing
The ePrivacy Regulation’s strict consent requirements for cookies and tracking technologies can significantly impact digital marketing practices. Companies may face challenges in gathering user consent, which could limit their ability to target and personalize advertisements effectively.
Digital marketers will need to explore alternative approaches to reach their target audience while respecting the regulation’s consent requirements.
To address these concerns:
- Clear Guidance: Data Protection Authorities can provide clear and practical guidance to businesses on how to comply with the ePrivacy Regulation while still fostering innovation. This guidance can help companies understand the regulation’s requirements and implement them effectively.
- Collaboration and Dialogue: Policymakers, businesses, and privacy advocates should engage in open dialogue to find solutions that strike the right balance between privacy protection and innovation. Regular consultations with stakeholders can help address challenges proactively.
- Technological Solutions: Companies can invest in privacy-enhancing technologies that enable them to process data while respecting user privacy. These solutions can help ensure compliance with the ePrivacy Regulation while allowing for innovative data-driven services.
- Education and Awareness: Educating users about the importance of data privacy and their rights under the ePrivacy Regulation can foster a culture of privacy-conscious consumers. Businesses can play a role in promoting privacy awareness and transparency in data processing practices.
Data Protection Authorities play a vital role in enforcing the ePrivacy Regulation, and their guidance and enforcement actions can influence how businesses approach data privacy and innovation.
Striking the right balance between privacy and innovation, as well as addressing the potential impact on digital marketing, requires a collaborative effort among regulators, businesses, and consumers.
Criticisms and Controversies
Public Opinion on ePrivacy Regulation
Public opinion on the ePrivacy Regulation is diverse and often influenced by various factors, including individual privacy concerns, business interests, and political perspectives. Some of the common criticisms and controversies surrounding the regulation include:
- Stricter Cookie Consent Requirements: Some argue that the ePrivacy Regulation’s strict consent requirements for cookies may result in a higher number of consent pop-ups, potentially leading to “consent fatigue” among users. This could impact user experience and hinder seamless interactions with websites and apps.
- Impact on Digital Advertising: The regulation’s implications for digital marketing and advertising practices have sparked debates within the advertising industry. Businesses relying heavily on targeted ads may express concerns about the potential impact on revenue and their ability to reach specific audiences.
- Complexity and Implementation Challenges: Critics often highlight the complexity of the regulation, especially when it comes to its application in various sectors and for different types of online services. Smaller businesses, in particular, may face challenges in understanding and implementing the requirements effectively.
Debates on its Effectiveness
The effectiveness of the ePrivacy Regulation is a subject of ongoing debate. Some argue that the regulation’s emphasis on user consent and privacy-by-design principles strengthens individual rights and provides a more robust framework for data protection in electronic communications. They see it as a necessary step in addressing privacy challenges in the digital age.
However, critics may question its practical impact, pointing to concerns like the potential for inconsistent implementation across EU member states, its impact on digital innovation, and the potential burden it places on businesses.
Some argue that existing data protection laws, like the GDPR, already cover many aspects addressed by the ePrivacy Regulation, and further regulation might not necessarily lead to more effective protection.
ePrivacy Regulation and Future of Privacy
Predictions and Prospects for Data Protection
The ePrivacy Regulation, if effectively implemented and enforced, has the potential to strengthen data protection and user privacy in electronic communications. It reflects the growing awareness of privacy rights and the need to address the challenges posed by rapid technological advancements.
It may lead to increased transparency and accountability in data processing practices, further empowering users to control their personal data.
As technology continues to evolve, the future of privacy will likely be shaped by a combination of regulatory measures, technological innovations, and user awareness.
Policymakers may continue to refine and adapt data protection regulations to address emerging privacy concerns related to new technologies and communication methods.
Potential Amendments and Adaptations
Regulations like the ePrivacy Regulation are subject to revision and updates over time. As technologies and communication methods continue to evolve, policymakers may revisit and adapt the regulation to address new challenges.
They may consider feedback from stakeholders, public consultation, and ongoing developments in the digital landscape.
Future amendments could aim to strike a better balance between privacy protection and innovative data-driven services, as well as to clarify certain provisions that have caused controversies or implementation challenges.
ePrivacy Regulation has generated both support and criticism, and its effectiveness will depend on its implementation, enforcement, and adaptability to future developments in technology and user expectations.
Privacy concerns will likely remain at the forefront of regulatory discussions, and policymakers may continue to refine data protection measures to safeguard individuals’ rights in the digital age.
Frequently Asked Questions
What is the main purpose of the ePrivacy Regulation?
The main purpose of the ePrivacy Regulation is to protect the privacy of individuals in electronic communications. It aims to ensure the confidentiality of communications and the protection of personal data in electronic communication services, such as emails, text messages, internet telephony, and other online communication methods.
Does the ePrivacy Regulation apply to all businesses?
Yes, the ePrivacy Regulation applies to all businesses and organizations that provide electronic communication services or use electronic communications data within the European Union. It also applies to businesses outside the EU that offer services to EU residents or monitor their behavior.
How does ePrivacy Regulation relate to the General Data Protection Regulation (GDPR)?
The ePrivacy Regulation complements the General Data Protection Regulation (GDPR). While the GDPR provides a comprehensive framework for data protection across all sectors, the ePrivacy Regulation specifically focuses on data protection in electronic communications.
What are the penalties for non-compliance with ePrivacy Regulation?
Non-compliance with the ePrivacy Regulation can lead to significant financial penalties: up to 4% of a company’s global annual turnover or 20 million euros, whichever is higher. The specific amount depends on the severity and nature of the violation.
Does the ePrivacy Regulation cover cookies and tracking technologies?
Yes, the ePrivacy Regulation covers cookies and similar tracking technologies used on users’ devices. It requires explicit consent from users before using these technologies, except for essential cookies necessary for the functioning of the service (e.g., session cookies).
Websites and apps must provide clear information about the purposes of data processing and enable users to easily withdraw their consent.
How can companies prepare for compliance with ePrivacy Regulation?
To prepare for compliance with the ePrivacy Regulation, companies can take several steps:
- Conduct a data audit to understand the types of personal data processed in electronic communications.
- Update privacy policies to include specific information about electronic communications data processing and cookie usage.
- Implement mechanisms to obtain explicit consent from users for electronic communications and cookies.
- Train staff to understand the regulation’s requirements and their roles in compliance.
- Review data processing practices to align with the regulation’s principles.
What rights do users have under the ePrivacy Regulation?
Under the ePrivacy Regulation, users have the right to privacy in their electronic communications. They have the right to give or withhold consent for electronic communications and cookies, the right to be informed about data processing, and the right to withdraw consent at any time. The regulation also ensures the confidentiality of communications, protecting users from unauthorized interception or surveillance.
Does ePrivacy Regulation affect digital marketing practices?
Yes. As discussed above, the regulation’s strict consent requirements for cookies and tracking technologies directly affect digital marketing — particularly targeted and personalized advertising — so marketers may need to rely more on consent-based and contextual approaches to reach their audiences.
Can ePrivacy Regulation stifle innovation in the tech industry?
There are concerns that the strict requirements of the ePrivacy Regulation may create challenges for innovation in the tech industry. Some argue that the regulation could hinder the development of new data-driven services that rely on user data. Striking a balance between privacy protection and fostering innovation is an ongoing challenge for policymakers.
Are there any expected changes or updates to the ePrivacy Regulation in the future?
As of September 2021, the ePrivacy Regulation was still a proposal and had not been finalized; it had undergone several revisions and discussions. Further changes or updates to the regulation are possible. Businesses and stakeholders should closely monitor developments and consult the latest legal texts to stay informed about any changes.
In conclusion, the ePrivacy Regulation is a crucial piece of legislation designed to protect individuals’ privacy in the digital age. It complements the GDPR, focusing specifically on electronic communications and online tracking practices.
With strict consent requirements, rules for cookies, and privacy safeguards for service providers, the ePrivacy Regulation seeks to enhance data protection and restore consumer trust in the digital ecosystem.
Businesses must ensure compliance to avoid hefty fines and maintain their reputation, while consumers can benefit from improved transparency and control over their personal data. Understanding and adhering to the ePrivacy Regulation is essential for all stakeholders in the digital realm.
Telecommunications companies are constantly researching ways to make their networks more efficient and improve their processes.
That is why new technologies and digital transformation play an essential role. Telecommunication operators are currently improving their networks with the benefits of 5G, which is driving this transformation and making equipment and infrastructure more automated. These 5G networks will be key to the expansion of the RAN.
But what is the Radio Access Network?
A radio access network (RAN) is ”part of a mobile telecommunications system that implements a radio access technology which resides between a device such as a mobile phone, a computer or any remotely controlled machine and provides a connection to its central network”. Other technologies such as Open RAN and Virtual RAN appeared with the emergence of RAN.
Open RAN, or open radio access networks, refers to ”a new paradigm in which cellular radio networks, consisting of hardware and software equipment from multiple vendors, operating over network interfaces that are truly open and interoperable”. This allows units from one supplier to interoperate with units from other suppliers — a practice known as RAN sharing.
Virtual RAN (vRAN) is ”the virtualization of the baseband unit so that it runs as software on generic hardware platforms”. The whole concept of RAN sharing revolves around saving on network elements — in this case, the radiating equipment — by sharing them between several operators. They all use the same hardware, but software separates each operator’s traffic so that each can serve its own clients.
Currently, MNOs are working to progress the design, development, optimisation, testing and industrialisation of Open RAN technologies. Thanks to this software-based sharing of networks, companies can save costs on equipment, making their operations more profitable and flexible. The cost of implementing 5G networks is expected to be reduced by 50% thanks to Open RAN networks.
RAN will lead to the automation of telecommunication tower networks.
According to recent research by Analysys Mason, “By 2025 almost 80% expect to have automated 40% or more of their network operations.” The Open RAN architecture will make it easier to incorporate the intelligence needed to maximize network automation and optimization — key for the latest generation of networks.
At Atrebo, we are aware of the importance of the sustainability and efficiency of our customers’ processes. We have developed a specific module in TREE, our automation and infrastructure management platform, to help our customers manage their RAN sharing: TREE.RANSharing. This module is designed to handle spectrum sharing requests and processes. It controls all the processes related to sharing the capacity of radiant systems between several operators, making it possible to:
- Monitor those processes, as well as add new ones
- Carry out rapid audits of current and ongoing processes relating to the sharing of radiant resources.
All these functions are integrated into the module of our TREE platform which, together with the Sharing module, facilitates the integration of both space and network equipment sharing. | <urn:uuid:28ba4e11-21e7-479a-97bc-9c301333a3fb> | CC-MAIN-2024-38 | https://www.atrebo.com/en/the-role-of-5g-a-key-factor-in-the-automation-of-rans/ | 2024-09-16T10:29:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00299.warc.gz | en | 0.944303 | 646 | 2.765625 | 3 |
As technology evolves, the manufacturing industry is one of the first to embrace novel technologies and automation. The same holds true in today’s data-driven market ecosystem, with manufacturers embracing IoT, AI, and advanced analytics capabilities to optimise manufacturing facilities in real-time with smart factories. But what exactly does this entail?
Smart manufacturing is a union of physical and digital processes within the manufacturing facilities to optimise processes in real-time, optimise production, and reduce costs for effective manufacturing management. It integrates various sensors, robotic processes, IoT principles, real-time data processing, and control techniques to improve agility, decision quality and efficiency. Although the contemporary smart manufacturing systems primarily focus on highly specialised tasks – requiring precision in processes, position, and physical conditions – they have tremendous potential to optimise supply and demand requirements, labour costs, and overall productivity within the manufacturing industry.
A lot of technical jargon in smart manufacturing may be a bit difficult for manufacturing professionals to understand, mystifying the overall concept of smart manufacturing. Here are some of the common terms in smart manufacturing that can help you understand it better:
These are modular software applications that can perform one or more functions when it comes to manufacturing operations management. They connect with other systems and applications for seamless performance and operations management.
It is a set of tools and applications that help collect, analyse, and distribute data automatically, facilitating data-driven decisions and optimisation of processes in real time. Leveraging information modelling, AI, and IIoT technologies, these platforms facilitate data integration across manufacturing enterprises and supply chains for enhanced performance efficiency across the board.
Big data is a collection of structured and unstructured data from sensors, equipment, and processes across the manufacturing facility. Analysing it can reveal patterns and trends in process efficiency and help optimise productivity.
A combination of hardware and software tools, cloud computing (or the cloud) allows manufacturing facilities to store, process, and access data remotely with internet connectivity. As it relies on remote servers, it can provide unlimited storage and processing capabilities without significant investment in hardware assets.
These systems synchronise physical processes with counterpart virtual objects to monitor and control industrial processes.
HMI is a user interface that allows users to interact with machines or devices to monitor and control their performance. It can range from a physical control panel to a graphical user interface and everything in between.
IoT is the integration of computing devices and sensors into everyday devices, making them capable of sending and receiving data via standard internet communication.
IIoT is the network of industrial sensors, instruments, equipment, and other manufacturing devices connected to an embedded computing system that allows them to exchange data. Such connectivity can help automate processes, improve decision-making, and enhance performance efficiency with minimal human intervention.
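To make the IIoT idea concrete, the sketch below shows a plant-floor device publishing a sensor reading over MQTT, a lightweight messaging protocol widely used in industrial settings. The broker address, topic, and sensor values are placeholders, and the example assumes the paho-mqtt Python package; treat it as an illustration of the pattern rather than a reference deployment.

```python
# Illustrative IIoT sketch: a device publishes a temperature reading to an
# MQTT broker so dashboards and analytics systems can subscribe to it.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 1.x style constructor
client.connect("broker.example.local", 1883)  # hypothetical plant broker

reading = {
    "sensor_id": "furnace-07-temp",
    "celsius": 412.5,
    "timestamp": time.time(),
}

# Subscribers receive the reading in near real time.
client.publish("factory/line1/furnace07/temperature", json.dumps(reading))
client.disconnect()
```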
A key requirement for smart manufacturing systems is interoperability — the ability of different software and hardware tools to communicate and exchange data. Achieving interoperability can require integrating additional computing devices that translate between different types of data so that different machines and processes can communicate with each other.
No matter the size of your operation or business, if you can afford it, a smart manufacturing system is a good investment. Embracing the principles of Industry 4.0 and leveraging manufacturing outsourcing services to build smart facilities can help you optimise production with real-time data insights for increased efficiency, precision, and return on investment. So, if you have the necessary resources, smart manufacturing can help you build a competitive edge and compete with big players in the manufacturing industry.
The FBI’s Internet Crime Complaint Center (IC3) received an average of 2,000 cybercrime complaints per day with reported losses topping $4.1 billion in 2020. $216.51 million of these losses were the result of email spoofing. So, what is email spoofing? Let’s learn more about it.
One of the most common tactics cybercriminals use to trick or manipulate people is email spoofing. Spoofing means presenting something or someone as another legitimate entity to establish authority and gain leverage. The eventual goal of spoofing is often to dupe victims for financial gain. Of course, spoofing can occur through multiple means: emails, phone calls, SMS text messages, domain spoofing, and even app spoofing. But the one we’re going to focus on here today is email spoofing specifically.
The number of email spoofing attacks is increasing every year, causing irreparable damage to victims. The IC3 observed that spoofed emails — made to look like they come from a CFO, CEO, lawyer, or vendor — are frequently used to target business enterprises. It’s a tactic that’s commonly used in business email compromise (BEC) scams. Data from the IC3’s 2020 Internet Crime Report shows that BEC scams had a huge impact, with 19,369 complaints resulting in $1.8 billion in total adjusted losses.
Considering these numbers and how fraudulent emails can affect businesses, it’s crucial that you understand email spoofing and take appropriate steps to prevent this tactic from being successfully used against your organization. Let’s break it all down.
What Is Email Spoofing?
When someone uses email to fraudulently represent themselves as another legitimate entity, this is an example of email spoofing. In a more technical sense, email spoofing is about fabricating false email sender information to trick people into believing fraudulent emails are authentic.
On February 10, 2021, the IRS (Internal Revenue Service) released an official warning to alert tax professionals about a scam targeting them. The spoofed emails were supposedly sent from “IRS Tax E-Filing” and carried the subject line “Verifying your EFIN before e-filing.” The IRS also warned recipients not to take any of the steps mentioned in the email, and especially not to respond to it.
Here’s an excerpt from one of these dodgy emails:
“In order to help protect both you and your clients from unauthorized/fraudulent activities, the IRS requires that you verify all authorized e-file originators prior to transmitting returns through our system. That means we need your EFIN (e-file identification number) verification and Driver’s license before you e-file.
Please have a current PDF copy or image of your EFIN acceptance letter (5880C Letter dated within the last 12 months) or a copy of your IRS EFIN Application Summary, found at your e-Services account at IRS.gov, and Front and Back of Driver’s License emailed in order to complete the verification process. Email: (fake email address)
If your EFIN is not verified by our system, your ability to e-file will be disabled until you provide documentation showing your credentials are in good standing to e-file with the IRS.”
This is a textbook example of a phishing email. Some of the red flags that tell you the email is fraudulent are:
- The email address of the sender is spoofed.
- It uses urgent language to push you to take rash actions.
- The “reply to” email address is different from the sender’s email address.
- It threatens you with penalties if you do not take immediate action.
- The email claims to be from IRS but asks for information (and sometimes copies of documents) that the IRS would already possess.
Of course, we’ve already written an article that covers how to tell if an email is fake or real and invite you to check that one out as well for additional information.
How Does Email Spoofing Work?
There are multiple ways that cybercriminals can spoof emails.
1. Spoofing the Sender’s Display Name
This is the most basic and most common form of email spoofing. It requires the sender to merely change their email display name. On a cursory glance, the recipient will believe that the email is from a legitimate sender. However, if they check the sender’s email address, the scam will fall apart as the email address won’t match the sender’s name or company.
This type of email spoofing is super easy and doesn’t require the attacker to know any kind of computer programming. Its popularity is rising because it’s so cheap and simple to do, and attackers need only a few victims to fall for the ruse to make it worthwhile.
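To make this concrete, here is a fabricated example of what the visible headers of a display-name-spoofed message might look like. Every name and address below is invented for illustration:

```
From: "Acme Bank Security" <acmebank.alerts.team@gmail.com>
Reply-To: account-review@mailhost-example.net
Subject: Action required: verify your account within 24 hours
```

The friendly display name says one thing, but the actual mailbox is a free webmail address, and the Reply-To points somewhere else entirely — both classic giveaways.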
2. Spoofing the Domain Name:
Domain name spoofing involves scammers creating email addresses that are associated with domains that are similar to that of the organization they’re impersonating. Much like typosquatting tactics, cybercriminals use basic tricks to make email addresses look legitimate to people who aren’t paying attention or are rushing. A few examples include:
- Swapping “rn” in place of the letter “m,”
- Using “1” instead of “l,”
- Replacing the number “0” in place of the letter “o,” or
- Adding extra numbers, characters, or words to email domains.
Suppose, for example, the name of a legitimate courier agency is Safe Express, and their domain name is safeexpress.com. If bad guys want to use email spoofing to impersonate the company to scam their clients, they can create a dodgy domain safexpress.com that looks incredibly similar and use it to send out phishing emails.
Here’s an example of domain spoofing using an email from Nextdoor:
The first image (left) shows how the email appears when you receive the email if you don’t click the arrow to expand the sender’s email information. The second screenshot (middle) is an example of a legitimate email from Nextdoor — notice how the email comes from an address that ends in “@hs.email.nextdoor.com.” The third screenshot (right) is an example of a spoofed domain that looks very convincing. There’s an extra “r” at the end of “nextdoor” before the “.com.”
3. Creating an Email Using a Genuine Domain
Despite being a less common form of spoofing, it is perhaps the most terrifying one. The email looks like it has come from a genuine person because the domain name in the sender’s address is legitimate. Most companies now close this vulnerability by publishing Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) records in their DNS settings to prevent unauthorized parties from using their domain name for spoofing. These protocols are explained later in the article.
4. Email Spoofing for BEC Scams
Business email compromise, or BEC, is usually done by spoofing the email sender’s information to look like the email has come from the CEO or the CFO of the company. This type of email scam will often involve directing the recipient to transfer a huge amount to a bank account belonging to the attacker. As the email looks like it is from the victim’s boss, the employee may comply with the directions in the email without asking many questions.
Some scammers have also managed to impersonate the CEOs of enterprises to ask employees to donate to a charity. Needless to say, the said “charity” here is to the bank account of the attacker.
What Makes Emails Vulnerable to Spoofing?
The principal vulnerability that makes email spoofing possible is the lack of authentication in Simple Mail Transfer Protocol (SMTP). Although authentication protocols to prevent mail spoofing exist, they’re not widely adopted. According to the results of a 2018 academic study, only 40% of Alexa’s top 1 million domains had SPF and only 1% had DMARC. This leads to an increased risk of cyber attacks, including:
- Phishing attacks,
- Malware infiltration in their IT systems, and
- Ransomware attacks.
How to Prevent an Email Spoofing Attack
If the spoofing attacks are so hazardous, there should be something we can do to put a check on them, right? Email service providers like Google’s Gmail and Microsoft’s Outlook have built-in systems that help to prevent email spam and junk emails from coming through to your inbox. The recipient is alerted to receiving potential spam or a spoofed email.
Everyone should always be vigilant before opening any emails marked as spam. Although some legitimate emails might not pass the security test and end up in the spam folder, in most cases, the email service providers are right in their threat detection.
But having said that, relying on your email service provider’s security measures alone is not enough. They’re not perfect, after all, and spoofed emails might find a way into your inbox without their knowledge.
That being said, certain protocols exist that you can use to prevent email spoofing attacks from using your domain. And if you use these protocols as part of your email security protections, then you can curb these attacks and prevent someone from sending phishing emails on behalf of your brand and domain.
In this section, we’ll cover three email protocols you can implement now. We’ll also share two other things you can do to add additional layers to your email security defenses. Of course, it’s important to mention that they must be properly implemented and configured for these protections to do you any good. We’re not going to get into the technical “how-to” or implementation aspect of these tools. But what we will cover is what each of these email security methods is and how it improves email security for your organization and its external recipients, too.
Sender Policy Framework (SPF)
SPF is a protocol designed to communicate which servers or IP addresses (both internal and external) are authorized to send emails on behalf of a particular domain. This is done using domain name system (DNS) records, which basically lets recipients’ email clients know the email came from you.
So as long as an email originates from one of the IP addresses included in the DNS record, it’ll be viewed as OK. If it comes from an IP address that isn’t in the record, it’ll be blocked.
As the owner of your company’s domain, you can enable SPF by creating one or more DNS TXT records. This allows you to authorize certain IP addresses to send emails on behalf of your domain while prohibiting anyone else from doing so. If a scammer sends an email from your domain name, SPF will identify the unauthorized IP address and warn the recipient’s email server of a possible scam.
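For illustration, a minimal SPF record published as a DNS TXT record might look like the following; the domain, IP range, and include host are placeholders rather than recommendations:

```
example.com.   IN   TXT   "v=spf1 ip4:203.0.113.0/24 include:_spf.example.com -all"
```

Here, `-all` tells receiving servers to fail mail from any source not listed, while a softer `~all` would mark such mail as suspicious instead of rejecting it outright.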
DomainKeys Identified Mail (DKIM)
In the simplest sense, DKIM is all about helping your domain establish trust with your recipients’ email servers. DKIM helps to prevent spoofing by applying a digital signature to the email headers of all outgoing messages on a domain. This allows recipients’ mail servers to detect whether messages coming from that domain are from one of its legitimate users or whether the sender’s information has been faked.
What DKIM doesn’t do, though, is encrypt email data. However, it does ensure message integrity. It does this by using a checksum to prove to a recipient’s email server that the message hasn’t been altered after it was sent.
Although DKIM does not filter emails, it certainly helps to reduce your email domain’s spam score. If the DKIM signature cannot be verified, the email can be sent to spam to warn the recipient.
To implement DKIM, you need to modify your server as the sender. The sender creates cryptographic public and private keys, installs them on their server, and creates a DNS TXT record that contains the public key. The outgoing messages are signed by using the private key. The recipient of the email can use the public key to verify the authenticity of the email.
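As a sketch, the published half of a DKIM setup is a DNS TXT record that holds the public key under a chosen selector. The selector name and the (truncated) key below are placeholders:

```
s1._domainkey.example.com.   IN   TXT   "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQ...IDAQAB"
```

The sending server signs each outgoing message with the matching private key, adding a DKIM-Signature header that recipients verify against this record.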
Domain-Based Message Authentication, Reporting, and Conformance (DMARC)
DMARC is a protocol that tells recipients’ mail servers that emails from a domain are authenticated with SPF, DKIM, or both, to help them determine whether messages are legitimate. If authentication passes, the email should be legitimate and the user is good to trust it; if it fails, DMARC tells the recipient’s server to junk or reject the message according to the sender’s published policy. An example record is shown after the list below.
Something else that DMARC does is let the recipient’s server know what the sender recommends in the event of failed authentication. As the sender, for example, you can specify if you want the recipient to:
- Give no special treatment to the emails that fail authentication;
- Send non-authenticated emails to the spam folder;
- Reject such emails before they reach the recipient’s client; and/or
- Send an email to the sender about passed or failed DMARC authentication.
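Putting that together, a simple illustrative DMARC record — placeholder domain and reporting address — could look like this:

```
_dmarc.example.com.   IN   TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

The `p=` tag encodes the policy options listed above (`none` for no special treatment, `quarantine` for the spam folder, `reject` to refuse delivery), and `rua=` requests aggregate reports about passed and failed authentication.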
Email Signing Certificates
Email signing certificates, also known as S/MIME certificates, are what you as a sender can use to digitally sign your emails. This type of X.509 digital certificate allows your recipients to verify that the email was sent by you (not an imposter) and that it hasn’t been altered in any way since you sent it. It also enables encryption of messages shared between two S/MIME certificate users. (You just have to get a copy of the recipient’s public key before you can start sending encrypted emails.) A minimal command-line signing sketch follows the list below.
The fundamental purpose of an email signing certificate is to:
- Authenticate the email sender,
- Encrypt email messages (when corresponding with other S/MIME certificate users), and
- Ensure message integrity.
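As a rough sketch of the signing step, a message can be S/MIME-signed from the command line with OpenSSL. The file names are placeholders, and exact options can vary between OpenSSL versions:

```
openssl smime -sign -in message.txt -text \
    -signer my_cert.pem -inkey my_key.pem \
    -out signed_message.txt
```

The recipient’s mail client then verifies the signature against the certificate to confirm the sender’s identity and the message’s integrity.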
Increasing Your Organization’s Cyber Hygiene Through Awareness Training
Is it enough to employ all the above measures for a fool-proof system? The answer is no. Every day, cybercriminals come up with new spins for old attack methods as well as entirely new attack methods to try to breach our defenses. As such, we must be proactive and mindful of every task we carry out to keep them at bay.
Training your employees in cyber hygiene is critical to supporting your overall cybersecurity efforts and increasing their knowledge. After all, it only takes one wrong click from an employee to trigger a full-fledged cyber attack or data breach. Some important topics that all cyber awareness training should cover include:
- Common phishing scams and tactics (including examples of email spoofing and social engineering),
- Other types of cyber attacks,
- Account and password security methods,
- General cyber security best practices, and
- What they should do when they experience or suspect a cyber attack or breach.
Always remember, training is not a one-and-done deal. Refresher courses on cyber security awareness must be carried out regularly to ensure that employees are aware of the most current threats. Recent cyber attacks on other companies should be discussed with employees so that they understand how the attacks were carried out and how they could have been prevented.
Cyber security-themed quizzes, puzzles, and online games are also fun and engaging ways to increase your employees’ cyber awareness. Building your defenses against cybercriminals to protect yourself should be your priority.
Final Words On Email Spoofing
Now that you understand what email spoofing is, you can see that the best way to prevent such attacks is to raise awareness among your employees and staff members. Not opening suspicious emails, not clicking on their attachments or links, and not replying to such emails can go a long way toward protecting you against email spoofing attacks.
You also need to be willing to take technical steps to prevent someone from using your domain as part of their email spoofing campaigns. As you discovered, this means using DNS records in conjunction with protocols like SPF, DKIM and DMARC to your full advantage. | <urn:uuid:622749b6-fe2e-4c57-a3f3-0133e2649d79> | CC-MAIN-2024-38 | https://cheapsslsecurity.com/blog/what-is-email-spoofing-an-explanation/ | 2024-09-18T21:57:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00099.warc.gz | en | 0.93768 | 3,482 | 2.609375 | 3 |
Today, data volumes are exploding. More data has been created in the past two years than in the entire previous history of the human race. By the year 2020, about 1.7 MBs of new information will be created every second for every human being on the planet.
By then, our accumulated digital universe of data will grow from 4.4 zettabytes today to around 44 zettabytes, or 44 trillion gigabytes – a ten-fold increase in just four years.
Big data is also helping to make the world a better place, and there’s no better example than the uses being found for it in healthcare. Intelligent and creative uses of big data in healthcare are helping to predict epidemics, find cures for disease, improve quality of life and avoid preventable deaths.
With the world’s population increasing and everyone living longer, models of treatment delivery are rapidly changing, and many of the decisions behind those changes are being driven by data.
The drive now is to understand as much about a patient as possible, as early in their life as possible – hopefully picking up warning signs of serious illness at an early enough stage that treatment is far more simple (and less expensive) than if it had not been spotted until later.
While big data is positively driving advances in healthcare, its storage and management are causing significant issues for IT managers, both because of the need to store, archive and preserve large volumes of data for future research, and because of the security and compliance challenges associated with it.
So what’s the best way for healthcare organisations to store and archive these huge volumes of big data safely, securely and cost-effectively?
Pathology is an example of an area in the NHS that is undergoing disruptive change, where new digital processes are challenging old ways of working. However, in doing so, these new processes are also introducing fantastic new opportunities brought about by big data.
Managers of digital pathology labs are benefiting from digital workflows that are fostering innovation in how pathology practices transform patient care. Even a modest pathology lab with one small slide scanner will generate over 15 TB of data per year, and the responsibility for storing, managing and securing this volume of data over decade-long timescales — while meeting the compliance, security, cost and data integrity requirements associated with its storage — falls on IT departments.
IT managers face their own challenges arising from this, not least procuring and managing new infrastructure and absorbing significant volumes of new data into the backup window.
Looking at numbers like these and considering what is required to store this data successfully, it quickly becomes clear that pathology laboratories, as well as hospitals and other medical research institutions, have a significant task on their hands.
The good news is that there are in fact specialist managed data storage services that are positively disrupting how big data is being secured and stored. And they’re reducing costs, meeting NHS and healthcare compliance requirements and delivering the long-term efficiency benefits that enable digital workflows to flourish.
Healthcare organisations should consider bringing in one of these specialist providers of long-term data archiving, such as Arkivum, who can implement a managed service that has been specifically designed from the ground up to provide ultra secure storage for large volumes of data for extended periods of time.
And it is not just digital pathologists who can employ digital archiving services. Healthcare organisations across the board are all looking for ways to manage their data. Hospitals, genetics laboratories, fertility clinics and digital pathologists alike need to consider the most effective way to ensure that all this big data is properly managed.
If you want to start taking positive steps towards managing your data for the long term, here are six top tips to help you on your way:
- Save costs by moving infrequently used data off expensive primary storage.
- Get static data into archive storage to ensure it’s safe on decade-level timescales and to save backup costs.
- Make sure you have an archiving policy and regular archiving activity to keep your house in order.
- Make sure that your archive has online copies for easy access and offline/off-site copies for disaster recovery.
- Ensure that there is a designated individual with data archiving responsibility.
- Regularly review the policy, process and data storage solution against compliance requirements.
Alternatively, if you are interested in finding out more you can download a copy of Arkivum's paper: Managing Big Data – Reaping the Rewards. In this paper we detail the issues and challenges that healthcare organisations face when it comes to managing large volumes of data. We also highlight what a digital data storage strategy needs to include for success.
Sourced from Nik Stanbridge, VP Marketing, Arkivum | <urn:uuid:97945542-7d12-4df5-8174-597dde904bfd> | CC-MAIN-2024-38 | https://www.information-age.com/how-take-long-term-view-big-data-healthcare-1553/ | 2024-09-18T21:31:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00099.warc.gz | en | 0.958021 | 976 | 2.875 | 3 |
The resource kit was recently launched at the Learning Disabilities Association of Alberta conference in Calgary by the province’s Minister of Education, Ron Liepert.
There are approximately five multimedia resources available to students, according to Rose Prefontaine, project coordinator with the Ministry’s learner services branch.
The resources are designed for students from Grades 9 to 12 in the hopes of inspiring and motivating them to consider post-secondary education, said Prefontaine.
“The reason for that is because half the population, according to Statistics Canada, have a post-secondary education, and only 36 per cent of those with disabilities have a post-secondary education,” she said.
The resources include a transition planning guide for students with disabilities and their families, and a success story DVD, according to Prefontaine.
Alberta’s Minister of Advanced Education and Technology, Doug Horner, said in a statement that the new guide will bring more students with disabilities into the province’s colleges and universities.
Prefontaine said the 15-minute DVD, included in the resource kit, was created to inspire and motivate students by demonstrating success stories of five different students with disabilities from various educational institutions.
“For example, there’s a student who was told they were the most disabled student that the University of Alberta had ever seen, who was reading at the Grade 4 level,” said Prefontaine. “She graduated with a masters degree in learning disabilities studies from the University of Calgary…I think she’ll actually achieve her PhD.”
The province has also been encouraging the provision of assistive technology to students with disabilities. Prefontaine said there’s a number of software packages available for this purpose, in addition to funding assistance for students to have access to the necessary technology.
For students with learning disabilities such as autism or Asperger’s syndrome, the University of Alberta has come up with a creative way to help them learn lecture material, said Prefontaine.
“They’re using closed captioning, which is using the services of a court reporter to deal with the sensory overload that the individual is having,” she said.
In addition to inspiring students with disabilities to pursue post-secondary education, the multimedia resources are also a product of community and learning consultations.
“We received feedback from 142 students that were attending various vocational programming at the technical schools as well as the universities,” said Prefontaine.
“These consultations provided us with the hard data to say this is what the experience has been, and what we can do to improve and address some of the gaps that the students and the delivery partners have identified.” | <urn:uuid:0879b103-efac-467c-a44a-027338c7d9db> | CC-MAIN-2024-38 | https://www.itworldcanada.com/article/alberta-launches-post-secondary-prep-kit-for-students-with-disabilities/42 | 2024-09-18T20:50:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00099.warc.gz | en | 0.971387 | 563 | 2.5625 | 3 |
We’re seeing a change in cybercrime and the way cyberattacks are performed. A recent set of attacks against critical infrastructure entities — oil and gas pipeline operators, utilities, and even some city and state governments — exposed a new approach to cybercrime and revealed the following new methods and motives.
Attackers were not out to steal data but were looking to disrupt services, and they used an attack vector not previously seen: instead of attacking primary targets directly, they zeroed in on less secure vendors of those targets. We will look at how they did this, along with how it can be prevented.
Step One – Reconnaissance
Before launching an attack, hackers first identify a vulnerable target and explore the best ways to exploit it. The initial target can be anyone in an organization. The attackers simply need a single point of entry to get started. Targeted phishing emails are common in this step as an effective method of distributing malware.
The whole point of this phase is getting to know the target. The questions hackers are answering at this stage are:
- Who are the important people in the company? They discover this information by looking at the company website or LinkedIn.
- Who do they do business with? For this, they may be able to use social engineering by making a few “sales calls” to the company. Another way is good old-fashioned dumpster diving.
- What public data is available about the company? Hackers collect IP address information and run scans to determine what hardware and software are being used. They also commonly check the Internet Corporation for Assigned Names and Numbers (ICANN) web registry database.
The more time hackers spend researching and gaining information about the people and systems at the company they’re targeting, the more successful the hacking attempt will be.
Step Two – Weaponization
In this phase, the hacker uses the information they gathered in the previous phase to create what they need to get into the network. This could be creating believable spear-phishing emails. These would look like emails employees of the targeted company could potentially receive from a known vendor or other business contact.
The next step is creating watering holes, or fake web pages. These web pages will look identical to a vendor’s web page or even a bank’s web page. The sole purpose of this step is to capture your username and password, or to offer you a free download of a document or something else of interest.
The final thing the attacker will do in this stage is collect the tools they plan to use once they gain access to the network so that they can successfully exploit any vulnerabilities they find.
Step Three – Delivery
Now the attack starts. Phishing emails are sent, watering hole web pages are posted to the Internet and the attacker waits for all the data they need to start rolling in. If the phishing email contains a weaponized attachment, then the attacker waits for someone to open the attachment and the subsequent malware to call home.
Step Four – Exploitation
Now the fun begins for the hacker. As usernames and passwords arrive, the hacker tries them against web-based email systems or virtual private network (VPN) connections to the target company network. If malware-laced attachments were sent, then the attacker remotely accesses the infected computers. The attacker explores the network and gains a better idea of the traffic flow, what systems are connected and how they can be exploited.
Step Five – Installation
In this phase, the attacker makes sure they continue to have access to the network. They will install a persistent backdoor, create admin accounts on the network, disable firewall rules and perhaps even activate remote desktop access on servers and other systems on the network. The intent at this point is to make sure the attacker can stay in the system for as long as they need to.
Step Six – Command and Control
Now the attacker has access to the network and administrator accounts, and all the needed tools are in place. They have unfettered access to the entire network. They can look at anything, impersonate any user on the network and even send emails from the CEO to all employees. At this point, they are in control. They can lock you out of your entire network if they want to.
Step Seven – Achieve the End Goal
Now that they have total control, they can achieve their objectives or end goal. This could be stealing information on employees, customers, product designs, etc. Or they can start interfering with the operations of the company. Remember, not all hackers are after monetizable data. Some hackers are out to just mess things up.
If you take online orders, they could shut down your order-taking system or delete orders from the system. They could even create orders and have them shipped to your customers. If you have an industrial control system and they gain access to it, they could shut down equipment, enter new set points and disable alarms. Not all hackers want to steal your money, sell your information or post your incriminating emails on WikiLeaks. Some hackers just want to cause you pain.
So, what now?
What can you do to protect your network, your company and even your reputation? You need to prepare for an attack. Let’s face it: sooner or later, hackers WILL come for you. It’s just a matter of when and how. Don’t let yourself think you don’t have anything they want. Trust us, you do.
Original content can be found at www.veltatech.com. | <urn:uuid:f496ef57-1d1d-4fe1-b565-ec72b9dd096a> | CC-MAIN-2024-38 | https://www.industrialcybersecuritypulse.com/networks/seven-steps-to-a-cyberattack-with-multiple-points-of-entry-and-attackers-only-need-one/ | 2024-09-08T00:48:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00199.warc.gz | en | 0.944564 | 1,136 | 2.640625 | 3 |
How can we calculate the final pressure inside a scuba tank after it cools down?
Given initial pressure: 130.0 atm
Initial temperature: 500°C
Final temperature: 25.5°C
Gas constant: 0.08206 L·atm/mol·K
Assuming volume remains constant at 11.1 L
Calculating the Final Pressure Inside the Scuba Tank
When it comes to cooling down a scuba tank, the pressure inside will change. But fear not, we can use the ideal gas law to find the final pressure!
First, we need to convert the initial temperature from Celsius to Kelvin: T1 = 500°C + 273.15 = 773.15 K
Next, we can calculate the number of moles of gas in the tank using the formula n = P1V/(RT1), where n is the number of moles, P1 is the initial pressure, V is the volume, R is the gas constant, and T1 is the initial temperature.
Assuming the volume remains constant, we can then find the final pressure using the final temperature and volume.
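Plugging in the given values (temperatures in kelvin, volume constant at 11.1 L), the worked calculation looks like this:

$$n = \frac{P_1 V}{R T_1} = \frac{(130.0\ \text{atm})(11.1\ \text{L})}{(0.08206\ \text{L·atm/mol·K})(773.15\ \text{K})} \approx 22.7\ \text{mol}$$

$$P_2 = \frac{n R T_2}{V} = P_1 \cdot \frac{T_2}{T_1} = 130.0\ \text{atm} \times \frac{298.65\ \text{K}}{773.15\ \text{K}} \approx 50.2\ \text{atm}$$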
Therefore, the final pressure inside the scuba tank after cooling from 500°C to 25.5°C is approximately 50.2 atm.
Diving into the Details of Scuba Tank Pressure Calculation
When a scuba tank cools down, the pressure inside will decrease due to the change in temperature. By applying the ideal gas law, we can determine the final pressure inside the tank after it cools from 500°C to 25.5°C.
First, we convert the initial temperature from Celsius to Kelvin to ensure consistency in our calculations. Then, we calculate the number of moles of gas in the tank using the ideal gas law formula. Assuming the volume remains constant, we can find the final pressure by plugging in the final temperature and volume values.
Scuba diving enthusiasts can rest assured that even with changes in temperature, the pressure inside their tanks can be calculated accurately using fundamental gas laws. So, dive on and explore the depths with confidence! | <urn:uuid:4867eeaf-66ec-4305-bf89-7f9e80369264> | CC-MAIN-2024-38 | https://bsimm2.com/chemistry/let-s-dive-deeper-into-scuba-tank-pressure.html | 2024-09-09T05:58:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00099.warc.gz | en | 0.853575 | 417 | 3.453125 | 3 |
Ping - common reasons for failure
A successful ping involves an echo request successfully traveling from the source to the destination, and an echo reply successfully traveling from the destination back to the source of the original ping. A failure in a ping could take place at any location along that path.
Now the source and/or destination devices may be any network-connected device, such as a PC, mobile phone, router, switch, IP camera or any other such device. For any of these devices, some of the most common failures are listed below.
Failures originating in the source device may be due to:
- lack of a local route to the intended destination
- local access list blocking this specific traffic (for Cisco IOS devices, take a look at this note)
Failures originating on the network between source and destination devices may be due to:
- routing between the source and destination may not be sufficient to allow for the echo request to reach the destination
- Even if the echo request does reach the destination, routing from the destination back to the source may not be sufficient for the echo reply to reach the original source. Take a look at Routing in both directions.
Failures originating in the destination device may be due to:
- lack of a local route to the source device
- local access list blocking this specific traffic
It is important to keep in mind that a failed ping may actually make it to the destination, and it is the echo reply that has failed. To determine more precisely where the problem lies, use additional tools such as Traceroute.
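As a rough illustration of that workflow, the Python sketch below sends a single echo request and falls back to traceroute on failure. It assumes a Unix-like system with the standard ping and traceroute binaries on the PATH (flags differ on Windows, where ping uses -n and the tool is tracert), and 192.0.2.10 is just a documentation placeholder address.

import subprocess

def reachable(host: str) -> bool:
    # One echo request with a short timeout; return code 0 means a reply arrived.
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True, text=True)
    return result.returncode == 0

host = "192.0.2.10"
if not reachable(host):
    # See how far along the path the probes actually get.
    subprocess.run(["traceroute", host])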
For a more detailed look at the various responses and their meanings, take a look at Ping - possible ping responses. Also, take a look at Ping - debug commands to help in more deeply troubleshooting ping responses.
For more information about ping, take a look at Ping - troubleshooting concepts | <urn:uuid:28a6bc81-d442-4d49-b27e-77b8391ede98> | CC-MAIN-2024-38 | https://notes.networklessons.com/ping-common-reasons-for-failure | 2024-09-10T08:26:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00899.warc.gz | en | 0.942558 | 379 | 2.828125 | 3 |
AI-powered Predictive Analytics: Predicting the Unpredictable
April 12, 2024
5 Min Read
In this digitally fast world, imagine anticipating customer churn before they disappear, predicting equipment failures before they disrupt the entire production line, or even forecasting market trends with uncanny accuracy. This isn’t science fiction; it’s the power of AI-powered predictive analytics. This article explores how AI-powered predictive analytics can empower businesses to navigate uncertainty and make data-driven decisions.
What is Predictive Analytics?
Predictive analytics is the process of using historical data, statistical modeling, and machine learning algorithms to uncover hidden patterns and forecast future events. By identifying trends and anticipating potential outcomes, businesses can make proactive decisions, optimize processes, and mitigate risks.
The Rise of AI in Predictive Analytics
Traditional statistical methods struggle to handle the vast amount of data generated by modern businesses. This is where Machine Learning (ML) solutions come in. ML algorithms can learn from massive datasets, identify complex relationships between variables, and continuously improve their predictive accuracy.
There are two main types of ML techniques used in AI-powered predictive analytics:
Supervised Learning: Imagine a teacher showing students labeled examples (e.g., pictures of cats and dogs). Supervised learning algorithms work similarly, learning from data that’s already been classified (e.g., customers who churned vs. loyal customers) to predict future outcomes. This is helpful for tasks like customer churn prediction (a toy sketch follows this list).
Unsupervised Learning: Unlike supervised learning, unsupervised algorithms don’t have a teacher or labeled data. Instead, they discover hidden patterns in unlabeled data, like customer purchase history. This is useful for tasks like market segmentation, where you can identify groups of customers with similar behaviors.
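To make the supervised case concrete, here is a toy churn-prediction sketch using the scikit-learn library. The six labeled “customers” and their features are invented purely for illustration; a real model would be trained on historical customer data.

from sklearn.ensemble import RandomForestClassifier

# Each row: [monthly_spend, support_tickets, months_as_customer]
X = [[120, 0, 36], [45, 5, 3], [80, 1, 24], [30, 7, 2], [95, 2, 18], [25, 6, 4]]
y = [0, 1, 0, 1, 0, 1]  # labels: 1 = churned, 0 = stayed

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_customer = [[40, 4, 5]]
print(model.predict(new_customer))        # predicted churn label
print(model.predict_proba(new_customer))  # churn probability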
Applications of AI-Powered Predictive Analytics in Businesses
AI-powered predictive analytics is revolutionizing various industries:
Supply Chain Management: AI can analyze historical sales data, social media trends, and even weather patterns to anticipate demand fluctuations. This allows businesses to optimize inventory levels, prevent stockouts, and ensure on-time deliveries.
Finance & Risk Management: Assessing creditworthiness, identifying fraudulent transactions, and anticipating market volatility are crucial tasks in the financial sector. AI can analyze vast amounts of financial data to predict loan defaults, detect fraudulent activities in real-time, and even forecast potential market downturns.
Manufacturing & Maintenance: AI can analyze sensor data from machines to identify early warning signs of potential breakdowns. This allows for proactive maintenance, minimizing downtime and maximizing production efficiency.
Marketing & Sales: Identifying high-potential leads, personalizing customer experiences, and optimizing marketing campaigns are essential for B2B sales success. AI can analyze customer data, buying behaviors, and past interactions to predict which leads are most likely to convert and what type of content will resonate with them. This enables businesses to personalize marketing messages, tailor sales pitches, and optimize their marketing spend for better ROI.
Human Resources: Predicting employee turnover, identifying skills gaps, and improving talent retention are critical challenges for HR departments. AI can analyze employee data, performance reviews, and even social media sentiment to predict which employees might be at risk of leaving. This allows HR to implement targeted retention strategies and invest in upskilling programs to address potential skills gaps.
Challenges, Considerations, and the Future of AI Predictive Analytics
While AI-powered predictive analytics offers tremendous benefits, there are challenges to consider:
Data Quality and Availability: The saying “garbage in, garbage out” applies here. The accuracy of AI models heavily relies on the quality and relevance of the data they are trained on. Businesses need to ensure they have access to clean and high-quality data to generate reliable predictions.
Model Explainability and Bias: Understanding how AI models arrive at their conclusions is crucial for building trust and mitigating bias. Businesses need to invest in robust data science tools and expertise to ensure their AI models are transparent and unbiased in their predictions.
Talent and Expertise: Building and maintaining AI solutions requires specialized skills in data analytics, machine learning, and software engineering. Businesses may need to invest in talent acquisition or partner with a trusted offshore AI company to leverage the expertise needed for successful AI implementation.
The Future of AI-Powered Predictive Analytics
The future of AI-powered predictive analytics is bright, with several exciting trends on the horizon:
Integration with IoT (Internet of Things): Imagine real-time data collection and analysis from sensors embedded in everything from factory equipment to logistics vehicles. This will provide even richer data sets for AI models, leading to more accurate and dynamic predictions.
Advancements in Deep Learning Algorithms: Deep learning algorithms, inspired by the structure and function of the human brain, are becoming increasingly sophisticated. This will enable AI to handle even more complex data sets and make even more nuanced and accurate predictions.
Democratization of AI Tools: Advancements in technology are making AI tools more accessible and affordable for businesses of all sizes. This will unlock the power of predictive analytics for a wider range of companies, driving innovation and growth across industries.
AI-powered predictive analytics is a transformative technology that empowers B2B businesses to make data-driven decisions, mitigate risks, and seize new opportunities. By embracing AI and overcoming the challenges, businesses can gain a significant competitive advantage in today’s dynamic market.
Unlock the power of AI-powered predictive analytics for your business with Futurism Technologies and start your AI journey on the right foot. Get a comprehensive suite of AI solutions, from data science to predictive analytics, and from visual computing to Generative AI, Knowledge Virtualization, and Machine Learning.
mysqld is widely known as one of the main executable files related to MySQL and its functionality. mysqld is the MySQL daemon – the main functions related to the database management system are accomplished through it. In this blog post, we will tell you all about it.
The MySQL D(a)emon
mysqld, as already noted, translates to MySQL daemon (not demon). The daemon allows database administrators and other kinds of developers to complete all kinds of operations relevant to MySQL, including the ability to start, stop, and pause the beast.
However, starting, stopping, and pausing MySQL-related operations is not everything that the MySQL daemon is used for: this program (mysqld is an executable file) has many options that can be used. To figure those out, start it with the --help option (adding --verbose might also help with formatting):

mysqld --help [--verbose]

Use such a command and you will instantly see that MySQL comes back with a lot of useful information.
As you can see, MySQL will tell you which files it reads and in what order, and which options can be specified when mysqld is invoked. Scroll down a little and you will also be able to observe the available variables.
Understanding the basic options and variables that mysqld provides is an absolutely essential task for every developer and database administrator working with MySQL or any of its flavors – since the usage of mysqld is inevitable, knowing at least some of the commands MySQL gives us can be very helpful.
Some of the basic options that mysqld provides include:
- The ability to specify a default file from which MySQL reads information via the --defaults-file parameter, or to specify a file that is read after all of the default files via the --defaults-extra-file parameter.
parameter. - Options related to certain storage engines available inside of MySQL (InnoDB being the main one): developers can change the default directory that InnoDB stores files in by specifying the
parameter, files can be stored in another location specified inside of the--innodb-data-file-path
location, etc. - The ability to set when certain operations (think opening ports, connections, etc.) would time out.
- The ability to log all changes relevant to a specific storage engine into a file (the option is called --log-isam=file_name, where file_name is the name of the file; it is only relevant to the MyISAM storage engine inside of MySQL).
- The ability to display the default list of options and exit.
- The daemon also comes with operating-system-specific options, which are displayed at the top of the help output.
Of course, there are a lot of other options that can be specified and used, but you get the idea by now. The majority of developers and engineers working with MySQL or any of its flavors aren’t too fussed about the option list provided by the daemon because they wouldn’t be able to remember them all anyway; rather, people just pick the option that solves their specific problem and use it. Here’s the problem, though – with so many available options, how do you know which option is the most suitable for your use case?
Choosing Suitable Options
We just said that the majority of developers working with MySQL and its flavors don’t worry too much about mysqld – it’s because they know what options they need and roll with them! Here’s what they will keep in mind when working with the daemon and scrolling through the available options:
The use case, and the factors the storage engine is used together with, will determine which options mysqld is invoked with. If our storage engine is used for testing purposes and we want to “lock down” our entire workstation (bear in mind it should be running InnoDB in this case) for one reason or another, we might enable read-only mode by setting the --innodb-read-only parameter to 1. If we want to change the location of the slow query log available in MySQL, we can point the slow-query-log-file parameter at a different file path. Should we want to dive deeper, we can even toggle deadlock detection (use the innodb-deadlock-detect parameter), change the format rows are stored in by passing a value to the innodb-default-row-format parameter, and so on.
The MySQL daemon will even let us change how tables are stored (e.g., whether or not they are stored in a file-per-table format) and perform a wide variety of other tasks, but as always, keep in mind that the tasks performed will generally depend on your specific use case. In this blog, we have provided a list of options relevant to some of the most widely used use cases across the MySQL world, but we won’t bore you with the entire list – you already see that certain use cases have specific options of interest, and you can probably sense that the daemon can set essentially every option that is also available inside of my.cnf: and you’re not wrong! The reason people set options using the daemon rather than my.cnf, though, has to do with persistence: as soon as the daemon (MySQL) is restarted, command-line options are nullified (in other words, MySQL restarts and reads the options available in my.cnf rather than the options previously passed to the daemon). Such behavior can be incredibly useful when you need to solve a specific problem on the go.
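As a quick illustration, the Python sketch below shells out to the daemon’s help output described earlier and filters it for InnoDB-related entries. It assumes mysqld is installed and on the PATH; the exact output varies by MySQL version and platform.

import subprocess

# Ask the daemon for its full option/variable listing, then keep only
# the lines that mention InnoDB.
out = subprocess.run(["mysqld", "--verbose", "--help"],
                     capture_output=True, text=True, check=False).stdout
for line in out.splitlines():
    if "innodb" in line.lower():
        print(line)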
MySQL and Data Breaches
If you are a developer who has been working with the daemon for quite some time, you will know that performance, availability, and capacity are not the only things mysqld lets you optimize. MySQL is also a frequent target of data breaches – and MySQL developers know that very well. Thankfully, MySQL can be secured by following a couple of basic security practices:
- All developers having MySQL as their database of choice should follow basic input sanitization procedures.
- Developers should familiarize themselves with the “defense in depth” principle: the more security layers protect their web applications, the harder it gets for a hacker to penetrate them.
- Developers who want to take the security of their web applications up a notch should consider using information security services such as web application firewalls, which protect web applications from attacks like SQL injection and cross-site scripting, or data breach API services, which protect the employees of companies from identity theft and credential stuffing attacks. One does work without the other; however, protecting your web applications does you little good if you don’t protect your online well-being at the same time.
- Developers familiar with security measures should also familiarize themselves with the OWASP Top 10 list – the OWASP Top 10 list outlines all of the most popular flaws targeting web applications, and you can bet the attackers are well versed in all of them. Familiarize yourself with those principles, then protect your web applications accordingly.
mysqld stands for the MySQL daemon, and it’s one of the most popular tools in the toolset of a modern developer or DBA – most developers and database administrators know that to improve the performance, availability, or capacity of their database instances, they should look into what the daemon can offer via my.cnf on Linux or my.ini on Windows. However, performance, availability, and capacity advancements are not the only things this file can be used for – combine everything mentioned in this article with a properly built web application firewall and the information security services provided by BreachDirectory to protect yourself and your team from identity theft attacks both now and in the future, and you will be golden.
If you’ve read this article to the end, we have something to offer you – ping us over email, and both you and the entire company you represent will receive the unlimited version of the BreachDirectory API to use for 3 months – at absolutely no cost. Sounds good? Once you’ve finished protecting your applications, shoot us an email, and within 24 hours you can start protecting the identities of your team members. It doesn’t get better than that!
Be safe, and we will see you in the next blog. | <urn:uuid:c950373d-e0ac-4aa4-9c74-779343e436af> | CC-MAIN-2024-38 | https://breachdirectory.com/blog/the-mysql-server-mysqld/ | 2024-09-12T19:56:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00699.warc.gz | en | 0.918917 | 1,813 | 3.0625 | 3 |
A Command Line Interface (CLI), not to be confused with the Common Language Infrastructure (CLI), stands out as a unique, powerful tool in computing technology. This article sheds light on what a CLI is, its benefits, and how it differs from a Graphical User Interface (GUI). Additionally, it will explore some useful commands and best practices for working with a CLI.
What is a command line interface (CLI)?
A CLI is a user interface that allows users to interact with a computer system by typing text-based commands. Unlike other interfaces where graphical icons represent actions, a CLI relies solely on textual input and output. It is a direct communication line between the user and the system, allowing for more detailed control and administration.
CLI vs GUI
Comparing CLI with a Graphical User Interface (GUI) brings to light distinct differences. A GUI is visually intuitive, making it easier for beginners to navigate and operate. However, a CLI provides a higher degree of control and precision, albeit at the cost of a steeper learning curve. While a GUI might involve several steps to perform an action, a CLI can accomplish the same task with a single command, making it more efficient for advanced tasks.
Benefits of a CLI
The benefits of using a Command Line Interface (CLI) include:
Efficiency
CLI commands can perform complex tasks quickly, often with a single line of code. This efficiency saves time, especially when dealing with large volumes of data.
Control
CLI offers granular control over the system, allowing users to execute specific and advanced tasks that may not be possible with a GUI.
Automation
Routine tasks can be easily automated using scripts in a CLI, which is invaluable for system administrators and developers.
Less Resource Intensive
CLI uses fewer system resources than a GUI, enhancing performance especially on older systems.
Remote access
CLI allows for remote system management, a critical feature in today’s internet-driven world.
4 useful CLI commands
- ls: The `ls` command is a fundamental command to list the files and directories within the current directory. It’s often used to quickly view the contents of the current location in the file system.
- cd: Standing for ‘change directory’, the `cd` command is used to navigate through the file system. By typing `cd` followed by the path of a directory, users can easily switch to that directory.
- mkdir: The `mkdir` command is used for creating new directories. By typing `mkdir` followed by the name of the new directory, a new directory will be created in the current location.
- rm: The `rm` command is used to delete files and directories. Be cautious when using this command, as it permanently removes files and directories, and they cannot be recovered.
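For readers who script rather than type, here is a small sketch of the same four operations driven from Python’s standard library; it works inside a scratch directory, so nothing important is touched.

import os
import shutil
import tempfile

print(os.listdir("."))                 # ls: list the current directory
work = tempfile.mkdtemp()              # make a scratch directory to play in
os.chdir(work)                         # cd: change into it
os.makedirs("demo", exist_ok=True)     # mkdir: create a new directory
shutil.rmtree("demo")                  # rm -r: delete it, permanently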
CLI best practices
Regularly update and upgrade
Ensuring your system is up-to-date is crucial to maintain its security and performance. Regularly use commands like `apt-get update` and `apt-get upgrade` (for Debian-based systems) or `yum update` (for RPM-based systems) to update your system’s package lists and upgrade all of your software.
Use command aliases
If there are certain commands that you use frequently, consider setting up aliases. This can save time and reduce typing errors. For example, you could set up an alias so that `up` runs the `apt-get update` and `apt-get upgrade` commands.
Understand before execution
The power of CLI comes with responsibility. Before running a command, make sure you understand what the command does. A command like `rm -rf /` can wipe your entire filesystem if run as a superuser. Always double-check your commands, especially when performing operations that modify or delete files or directories.
Embracing the CLI
A CLI is a powerful tool in the realm of computing. Despite requiring a greater learning curve than a GUI, its control, efficiency, and resource management benefits make it an invaluable skill for any IT professional. With the right knowledge and adherence to best practices, one can harness the full potential of a CLI. | <urn:uuid:1de084e4-1029-4905-a3ee-7117e1e30675> | CC-MAIN-2024-38 | https://www.ninjaone.com/it-hub/it-service-management/what-is-a-command-line-interface-cli/ | 2024-09-14T01:38:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00599.warc.gz | en | 0.915903 | 855 | 3.484375 | 3 |
In the vast landscape of the Internet, where every click, search, and connection is facilitated seamlessly, there lies a silent, yet indispensable force known as DNS—Domain Name Services. Often overlooked but integral to our online experience, DNS plays a pivotal role in navigating the digital realm.
What Does DNS Actually Do?
Breaking Down DNS
At its core, DNS serves as the backbone of the internet, akin to a directory service translating human-readable domain names into machine-readable IP addresses. To put it simply, when we type in a website address like Coke.com, DNS ensures that we're connected to the correct server hosting that website, much like how dialing "411" used to connect us to someone in the analog world.
However, the convenience of DNS comes with its own set of concerns, mainly when relying on default DNS servers provided by Internet service providers (ISPs). These servers not only track our browsing habits but also monetize our data by selling it to third-party advertisers, resulting in the onslaught of targeted advertisements tailored to our interests.
Thankfully, there are proactive measures to mitigate these privacy and security risks associated with conventional DNS services. Enter filtered DNS providers, which offer free services aimed at safeguarding users' online experiences.
Which DNS Service Should You Use?
One such notable player in the field is Quad9, a collaborative effort focused on enhancing internet security by filtering out malicious content. By simply configuring your home router or modem with Quad9's DNS addresses (9.9.9.9 and 149.112.112.112), you can shield your network from a plethora of online threats even before they reach your devices.
NextDNS is another viable option, providing users with customizable filtering capabilities to block unwanted content and bolster cybersecurity defenses. Similarly, OpenDNS offers a comprehensive suite of features, including parental controls and malware protection, empowering users to tailor their internet experience according to their preferences and security needs.
Cloudflare, renowned for its robust network infrastructure, also offers DNS solutions accompanied by mobile applications for on-the-go protection of smartphones and IoT devices. These services not only fortify your home network against cyber threats but also serve as a deterrent against data collection by ISPs and other entities.
While adopting filtered DNS services may entail sacrificing targeted advertisements, the trade-off for enhanced privacy and security is undeniably worthwhile, especially in an era where internet-connected devices permeate every aspect of our lives.
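Applications can also query a filtered resolver directly. The sketch below uses the third-party dnspython package (pip install dnspython) to resolve a name through Quad9 rather than the system default; example.com is a neutral test domain.

import dns.resolver

resolver = dns.resolver.Resolver(configure=False)      # ignore the OS resolver config
resolver.nameservers = ["9.9.9.9", "149.112.112.112"]  # Quad9

for record in resolver.resolve("example.com", "A"):
    print(record.address)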
To recap the four services available:
- Quad9: A free service that filters out the bad stuff and provides security.
- NextDNS: A free service that filters traffic and blocks malware and viruses.
- OpenDNS: A service that lets you select what content to block, such as adult content and gambling.
- Cloudflare: A service that provides applications for mobile devices and protects against ransomware.
DNS may operate behind the scenes, but its impact on our online interactions is profound. By leveraging filtered DNS services from reputable providers, users can take proactive steps to safeguard their digital footprint and enjoy a more secure internet experience.
Looking to delve deeper into DNS? Contact iCorps and consult with one of our experts to unlock the full potential of your online infrastructure. Stay tuned for our upcoming videos as we unravel the intricacies of cybersecurity and technology. | <urn:uuid:bb9dd160-35f2-472e-a007-85806ac33333> | CC-MAIN-2024-38 | https://blog.icorps.com/boost-your-online-experience-with-dns | 2024-09-15T06:23:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00499.warc.gz | en | 0.931948 | 675 | 2.953125 | 3 |
Definition of Short Message Service (SMS) in the Network Encyclopedia.
What is Short Message Service (SMS)?
Short Message Service, best known as SMS, is a service for sending short text messages using the Global System for Mobile Communications (GSM) cellular telephone system. An SMS message can carry up to 160 alphanumeric characters.
How SMS Works
SMS works as a store-and-forward service in which messages that are sent are stored at an SMS messaging center until the recipient can connect and receive them. SMS offers an advantage over paging systems in that it notifies the sender when the recipient has received the message. SMS allows messages to be sent or received simultaneously with voice, fax, or data transmission over GSM systems because it uses a separate signaling path instead of a dedicated channel. SMS thus works reliably even during peak usage periods of cellular systems.
Some SMS systems support compression to increase the amount of information that can be included in a message. You can also concatenate messages to create one message from several message fragments.
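The 160-character limit and message concatenation can be illustrated with a short Python helper. It assumes the GSM 7-bit alphabet, where a single message carries 160 characters and each concatenated fragment carries roughly 153, because a few bytes go to reassembly headers.

import math

def sms_segments(text: str) -> int:
    # One segment up to 160 characters; beyond that, ~153 usable per fragment.
    if len(text) <= 160:
        return 1
    return math.ceil(len(text) / 153)

print(sms_segments("Short status update"))  # 1
print(sms_segments("x" * 400))              # 3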
To use SMS, the user needs a subscription to a GSM bearer that supports SMS and a cell phone that supports SMS. The SMS function must be enabled for that user. (A subscription charge usually covers this.) SMS services are most widely deployed in Europe; more than 1 billion messages per month were sent in 1999.
How did the Short Message Service start?
Adding text messaging functionality to mobile devices began in the early 1980s. The first action plan of the CEPT Group GSM was approved in December 1982, requesting that «The services and facilities offered in the public switched telephone networks and public data networks … should be available in the mobile system». This plan included the exchange of text messages either directly between mobile stations or transmitted via message handling systems in use at that time.
The SMS concept was developed in the Franco-German GSM cooperation in 1984 by Friedhelm Hillebrand and Bernard Ghillebaert. The GSM is optimized for telephony since this was identified as its main application. The key idea for SMS was to use this telephone-optimized system, and to transport messages on the signaling paths needed to control the telephone traffic during periods when no signaling traffic existed. In this way, unused resources in the system could be used to transport messages at a minimal cost. However, it was necessary to limit the length of the messages to 128 bytes (later improved to 160 seven-bit characters) so that the messages could fit into the existing signaling formats. Based on his personal observations and on analysis of the typical lengths of postcard and Telex messages, Hillebrand argued that 160 characters was sufficient to express most messages succinctly. | <urn:uuid:adc8e9d6-936c-454e-9856-9529ae46c2ca> | CC-MAIN-2024-38 | https://networkencyclopedia.com/short-message-service-sms/ | 2024-09-19T01:00:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00199.warc.gz | en | 0.94287 | 557 | 3.515625 | 4 |
Google has built some strong security features into Gmail, like two-factor authentication and encryption, making it difficult for cybercriminals to pull off a Gmail password hack.
However, not all users activate 2FA, and even those who do can be at risk. Cybercriminals continue to target Gmail for hacking due to its massive potential to store valuable and sensitive data. The question is, are you at risk?
Can someone hack your Gmail and your Google account?
The short answer is yes: Gmail can be hacked. If someone has hacked your Gmail account, they can access not only your Google account but also the websites and services you use. This means a hacked Gmail account is more serious than simply losing an email address and the emails within it. It is a threat that can spread beyond Gmail.
Cybercriminals can use a hacked Gmail account to scam your contacts and gain access to your Google account, as well as other services.
What happens if someone hacks my Gmail account?
There are different methods hackers use to infiltrate Google Gmail accounts, including the use of already breached accounts familiar to victims, phishing attacks where you click on malicious links, and malicious apps that steal cookies along with your hacked Gmail credentials.
The consequences of a hack will depend on the methods used by the attackers, and on whether they are looking to simply extort you or take it further. Once criminals take over your hacked Gmail, they will block you out and search for sensitive information and financial data. This means facing financial and personal consequences. Additionally, the hacker will try to compromise any other account linked to your Gmail. Any information they get can be used for identity theft, fraud, blackmail, and more.
Cybercriminals may also use your Gmail account to carry out illegal attacks on the people you know.
Once a cybercriminal hacks your Gmail password and Google account, they can:
- Change passwords
- Change verification and notification settings
- Send spam emails
- Steal your data
- Breach your bank accounts or digital wallets
- Sell your personal information on the dark web
- Extort you
- Shut down the account
- Remotely delete devices linked to your Google account
- Steal passwords to other websites
- Hack your social media like Snapchat, Facebook, or TikTok
How to tell if your Gmail has been hacked
Google explains that someone else might be using your Gmail without permission if you notice unfamiliar activity in any of your Google products. A quick way of knowing if another device is using your Gmail account is to check My Account > Security > Your Devices. This section can show you details about all the devices that have logged into your account in the past 28 days.
On the other hand, there are several other signs that serve as clear indicators that your Gmail has been hacked.
1. Your password has been changed
If you try to log in to your account only to discover that your password is not working, it either means you forgot it or someone has hacked your account. Most Gmail hackers seek to shut you out of your account and will change the password. But some criminals prefer that their victims not know their accounts have been hacked, so they will leave the password unchanged.
If your password is changed, take immediate action and start the Gmail recovery process, as explained below.
2. Your Inbox and Sent folder look off
If the first thing you notice when you access your inbox is that something is off, you should trust your instincts and seek to identify what’s wrong.
Hackers will often open unread emails, send spam from hacked accounts, and email friends and contacts to further continue scamming and hacking. You might also notice emails from Gmail or other sites notifying you about security and password changes. This is a clear indication of a manipulated account.
3. Your Settings have been changed
Hackers may change the settings of your Gmail or Google account. Once they gain access, they can forward your emails to another account and change the security questions, 2FAs, and recovery emails.
In other words, it’s not enough to simply check your password after noticing suspicious activity. Hackers may add phone numbers and recovery emails to easily regain access in case you recover the account.
4. You are getting strange security notifications
Often Gmail accounts are linked to cellphones or other emails. If you are getting unusual notifications about login attempts or changes in security settings, it is a red flag that someone is trying to hack your email — or has already hacked it.
Additionally, if your friends, family, or contacts tell you that they have received strange messages or notifications, or are receiving emails from you that you didn’t send, take immediate action to secure your Gmail account.
5. Your other services have been hacked
One of the main reasons cybercriminals hack into a Gmail account is to gain the resources to set new passwords to access other sites, including bank accounts, e-wallets, crypto sites, or work systems. They might also be hacking your Gmail to get to Google documents like online spreadsheets, or to use other Google products linked to your account.
Always be vigilant about any emails from Gmail or other accounts related to password changes you did not request or other security notifications. Additionally, remember that if one of your Google services has been hacked, there is a chance the hacker first hacked your Gmail to access the site.
How to find out who hacked your Gmail account
Unfortunately, unlike catching a thief red-handed, identifying the specific hacker who targeted your Gmail account is extremely difficult, often next to impossible.
While forensic investigators with access to advanced resources might be able to trace some digital footprints, for the average user it’s not a realistic option. However, this doesn’t mean you’re left completely in the dark.
Check devices connected to your Google account
If you have access to your Google account you can check what devices are connected to it. If you notice any device that you do not recognize that is more than likely the device of the hacker.
Unfortunately, the first thing hackers do when they gain access to your account is shut you out by changing the password, security questions, and recovery options. So acting fast is important. If you are blocked from your account, you will not be able to check for unknown devices and disconnect them. You can follow Google’s official device check guide to do that.
Google might have information on who hacked your Gmail account and may share it with you. Because you should report the hack to Google anyway, it is a good idea to ask whether they can share any details on who breached your account. There is a slim chance that you will get the answer you want by contacting Google, but it does not hurt to try.
Go over your past days
Can you pinpoint the exact day and hour your account was hacked? Were you recently using public Wi-Fi? Did you receive a suspicious email or notification? Going over the few days before you were hacked can be a good idea, as it might give you some clues about who is behind it.
Maybe you left your computer unattended in a coffee shop, got a strange notification to share resources, or received a message from someone you have never met. Anything out of the usual can be a valuable starting point to uncover the truth. Remember: hackers often impersonate companies and even government agencies or popular services.
Hire a pro
There are many professional and trusted online security companies who offer digital forensics and for a price can figure out who hacked your account or get some basic information about the incident. If hiring a professional is something you want to do, always stick to a service that is well-known and respected in the industry.
There are many fake or low-quality services out there that are nothing but a scam. Check out customer reviews before hiring a security professional to do the digging and investigation for you. Also note that no legitimate service will offer to recover your account on your behalf, as this is something that only the owner of the account can do.
How to recover your hacked Gmail account in 2024
To recover a hacked Gmail account, you must act fast. Speed is of the essence because the more time you give a hacker, the more control they will have over your account and the less likely you will be able to recover it.
There are three steps to recovering a hacked Gmail:
- Reset your password immediately
- Complete a security checkup
- Follow the security tips
Can I contact Google about a hacked account?
According to Atlas VPN, based on multiple publicly available sources, almost 6 million accounts, Gmail accounts among them, were hacked in 2021. In response to the growing trend of breaches, Google security experts are available to assist users who have been hacked.
It is possible to contact Google security experts about your hacked account through various channels. However, as mentioned above, it is essential to first attempt a password reset and an account recovery. If the official Gmail steps to recover your account fail, you can always contact Gmail Community experts.
To contact Gmail Community Experts:
- Go to Google Account Help and search for questions already answered by the Community.
- If your question isn’t answered there, scroll to the bottom of the page and click “Need more help? / Ask the Help Community.”
- A new page will load where you can ask your question and contact an expert.
Those who use Google accounts through work or school and are Google Workspace administrators can contact Google directly for support.
To contact support for Google Workspace:
- Sign in to your Google Admin console using an administrator account email.
- At the top right of the Admin console, click “Get help.”
- In the Help window, click “Contact support.”
How to recover your hacked Gmail
The first step to recovering your Gmail account, if you believe it has been hacked, is to change your password.
To change your Gmail password:
- Sign in to your Gmail account.
- Click the profile icon and click “Manage your Google Account.”
- On the top left menu, click “Security.”
- Scroll down to “Signing in to Google,” then click “Password.” You will have to enter your password again.
- Click “Next,” and a new window will load where you can set a new password.
Changing the password will lock out anyone who has hacked your Gmail. To ensure that your account is secure, check that 2FA is active, and also check your recovery phone and email. You can find 2-Step Verification in the Security Menu below Password. The recovery phone number and email address are found on the same page in the “Ways that we can verify it’s you” section. If they’re incorrect, follow the instructions to change them.
How to recover a hacked Google account without a password
Things get a bit more complicated if you are late to the show and the hacker has already changed the password. However, just because your password has been changed, it doesn’t mean you will lose access or control of your Gmail account.
Remember — the sooner, the better. Wait too long, and hackers will change all methods you have to verify the account is yours and lock you out.
To start the account recovery process, go to the recovery page.
Keep the following tips in mind for a successful recovery process:
- Answer the questions as best you can.
- Don’t skip any questions. Even when not sure, take your best guess.
- Complete the recovery on a device linked to your account that Google will recognize.
- Do it in a location that Google will recognize and associate with the account (for example, your home or work).
- Be exact with passwords and security question answers. A typo can mean the difference between gaining access or not.
- When asked to enter an email address, use one already linked to your account that you can access.
Once you regain access to your account, you should change your password and check your 2FA and your security settings.
How can I recover my Gmail password without my phone number and email?
If you do not have your phone and have no access to your email, the only way to recover your Google account is by following the steps listed above for Account Recovery. There is no other way.
There are countless services online that promise customers they can recover their Gmail account and often ask you for passwords and other details. However, these services are scams. Do not engage with them, as no third party can recover a Gmail account that belongs to you. Again, this can only be done through the Account Recovery process detailed above.
To summarize the Account Recovery steps:
- Visit the Google Account Recovery Page.
- Type in your Gmail username or ID.
- Choose “Try Another Way to Sign In”.
- Here you can choose different options: Verification Using Another Device, Using Backup Codes, using secondary emails, phone calls, etc. The options you see at this step will depend on what security settings you have enabled in your Google account.
- Wait for the Password Reset Link.
- Reset your password.
How to delete your hacked Gmail account
The only way to delete a Gmail account is to have access to the account. So if a hacker has shut you out, you will have to go through the recovery process to prove to Google that the account belongs to you. Once you do that, you can delete the account.
Deleting your Gmail account will not delete your Google account, nor will this delete other Google products. However, it is an option for those who want to delete a compromised email address. Remember, your emails and mail settings will be lost, and the email address will no longer be available to use.
To delete your Gmail account:
- Before deleting your Gmail service, download your data.
- Go to your Google Account. On the left menu, click “Data and privacy.”
- Scroll down to “Data from apps and services you use.”
- Under “Download or delete your data,” click “Delete a Google service.” You may need to sign in.
- Find Gmail and click “Delete Icon.”
- Enter an existing email address to sign in and click “Send verification email.” (This email can’t be sent to a Gmail address.)
- Until you verify the new email address, Google won’t delete your Gmail address.
Additionally, you have the option to delete your entire Google account, including Gmail. If you choose to do so, download your data first (see the steps above). It is also recommended that if you use your Gmail account to recover passwords or as a login credential for other services like your bank, work, or school sites, change the email on those first.
To delete your Google account:
- Go to the “Data and Privacy section” of your Google Account.
- Scroll to “Your data and privacy options.”
- Select “More options” and then “Delete your Google Account.”
- Follow the instructions to delete your account.
Securing your recovered Google account after a hack
Regaining access to your Gmail account is a victory, but the battle isn’t over. Here are some quick tips to fortify your defenses:
- Change your password. This might seem obvious, but use a strong, unique password (see the sketch after this list) and enable two-factor authentication (2FA) for an extra layer of security. Change the passwords to any other accounts you have, including banks and financial apps, and enable MFA.
- Review recent activity. Check your sent emails, drafts, and trash for any suspicious activity by the hacker.
- Report the hack to Google. Let Google know about the incident to help them improve their security measures.
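As a minimal sketch of the first tip, the snippet below generates a random 20-character password with Python’s secrets module; a reputable password manager achieves the same result with less effort.

import secrets
import string

alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
password = "".join(secrets.choice(alphabet) for _ in range(20))
print(password)  # store it in a password manager, not in plain text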
Google goes to great lengths to keep Gmail accounts safe and secure. However, no account is unbreachable. Fortunately, there are many ways to know if your account has been hacked and several processes to recover it safely. | <urn:uuid:e3b65923-6190-4d6d-804e-691bc9075afa> | CC-MAIN-2024-38 | https://moonlock.com/gmail-hacked | 2024-09-08T03:53:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00299.warc.gz | en | 0.93875 | 3,341 | 2.546875 | 3 |
Last December, Matt Quirion wrote about the Big Three’s (Microsoft, Amazon, Google) push into fog computing. This week, fog computing reached another milestone as the OpenFog Consortium — a group of big-name players including Cisco, Intel, and ARM Holdings — released the executive summary of its massive 162-page reference architecture document. No one has time to read 162 pages of technical documentation, so we’ve summarized the most important points and why you should care here!
What is Fog Computing?
Fog computing was mentioned in Calum McClelland’s “Hey IoT, your head is in the Clouds,” but was only introduced as an alternative to cloud computing. Before we discuss why the two can be mutually beneficial, let’s define fog computing:
Fog computing is a horizontal, system-level architecture that distributes computing, storage, control and networking functions closer to the users along a cloud-to-thing continuum.
If that esoteric definition provided by the OpenFog Consortium doesn’t make sense to you, think of fog computing as an extension of the cloud to the edge — to end nodes and devices. It simplifies IoT applications because it removes the need for constant cloud connectivity and delivers low-latency computation.
Consider an industrial application where you want to use pressure sensors, flow sensors, and control valves to monitor an oil pipeline. The traditional cloud computing model would send all the readings to the cloud, analyze them using machine learning algorithms to detect abnormalities, and send appropriate fixes down to the end devices.
But do we really need to send all the readings to the cloud? At scale, the bandwidth to simply push sensor readings becomes significant, not to mention the cost and the time to send and receive messages. In the time it takes to send an abnormal reading from a sensor, categorize as a potential leak in the cloud, and send a downlink message to notify personnel or stop the flow, the leak might have turned into a major spill.
Contrast that with a fog computing infrastructure: sensors will now send data to inexpensive local fog nodes where abnormality detection can happen locally and send commands to shut off the leaky valves within milliseconds, instead of minutes. This example illustrates how fog nodes can extend the role of the cloud down to a fog level for added benefit.
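A minimal sketch of that pipeline logic is shown below. The threshold, the reading format, and the close_valve/send_to_cloud helpers are hypothetical placeholders, not part of any OpenFog API; the point is simply that the latency-critical decision happens locally while the cloud still receives the data.

PRESSURE_MAX_PSI = 900.0  # assumed safety threshold

def close_valve(valve_id: str) -> None:
    print(f"fog node: closing valve {valve_id} within milliseconds")

def send_to_cloud(reading: dict) -> None:
    print(f"forwarding to cloud for historical analytics: {reading}")

def on_sensor_reading(reading: dict) -> None:
    if reading["psi"] > PRESSURE_MAX_PSI:  # local, low-latency decision
        close_valve(reading["valve_id"])
    send_to_cloud(reading)                 # cloud keeps the long-term record

on_sensor_reading({"valve_id": "V-17", "psi": 955.2})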
So No More Cloud?
While fog computing reduces latency and improves network efficiency in certain applications, this doesn’t mean that fog computing will replace the cloud. Rather, they’re mutually beneficial and can augment the abilities of one another. Most of the decisions and data analytics traditionally computed at the cloud level can now move to the fog nodes to speed up response time. Still, the cloud remains useful for historical data computation for predictive analytics and for sending down commands or updates.
In essence, fog-cloud architecture combines the benefits for both designs: it provides low latency data transfer, while handing off other data for historical analytics. Given these characteristics, OpenFog Consortium summarizes its advantages over other approaches using SCALE (direct quote from the report):
- Security: additional layer of trusted data transfer
- Cognition: awareness of client-centric objectives to enable autonomy
- Agility: rapid innovation and affordable scaling under a common infrastructure
- Latency: real-time processing and cyber-physical system control
- Efficiency: dynamic pooling of local unused resources from participating end-user devices
Why should I care?
Having a standards body benefits both businesses and consumers. The reference architecture will help ensure interoperability of the different fog computing infrastructures. Consumers or developers can expect a more defined process to build systems that make IoT more accessible.
It’s clear that Azure IoT, AWS Greengrass, and Android Things all aim to move several IoT operations onto the fog level. IoT is already plagued with the fragmentation of various connectivity options. Hopefully, OpenFog Consortium will help ensure that these new fog computing moves will abide by these eight basic pillars: | <urn:uuid:73742049-af16-4a53-92bb-77eaec114750> | CC-MAIN-2024-38 | https://www.iotforall.com/openfog-consortium-reference-architecture-executive-summary | 2024-09-09T08:29:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00199.warc.gz | en | 0.89686 | 814 | 2.5625 | 3 |
Before starting the basic configuration of the Ecessa device, it is important to decide which mode will be utilized. There are three main methods for installing an Ecessa device, Routed Mode, Translucent Mode, and NAT Mode. Use the below summaries to determine which mode is best for your company’s needs.
Routed Mode is a semi-transparent option where the network equipment directly behind the Ecessa device continues to have an IP address from the WAN subnet configured on them. Routed Mode usually requires the least amount of configuration changes to the existing network equipment (the existing firewall, etc.) and can minimize the amount of network downtime during the actual installation process. It is also dependent on the below criteria. There are two different ways to implement Routed Mode on the Ecessa. The first is:
- Existing WAN subnet mask is at least 29 bits (/29 or 255.255.255.248)
- Existing WAN has four contiguous addresses that fall within a /30
- The gateway address on the firewall or the actual gateway device address can be changed.
However, these requirements make inefficient use of available IP addresses and create difficulty in completing changes to the WAN subnet mask information in the future (for example, if you change ISPs or receive a new subnet from them).
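The address arithmetic behind these criteria can be checked with Python's standard ipaddress module: a /29 holds eight addresses and always contains two /30 blocks of four contiguous addresses each (203.0.113.0/29 below is a documentation example subnet, not a real assignment).

import ipaddress

wan = ipaddress.ip_network("203.0.113.0/29")
print(wan.num_addresses)                 # 8 addresses in a /29
for block in wan.subnets(new_prefix=30):
    print(block, list(block))            # each /30 holds 4 contiguous addresses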
The second way is:
- The ISP gives you two blocks of addresses: a /30 or a /29, and a second of varying size.
- The ISP uses the /30 or /29 to route the subnet of varying size to an IP that you assign to the Ecessa, and then we configure the subnet of varying size on the LAN of the Ecessa.
Translucent Mode differs from Routed Mode in that only a single IP address from the routed WAN is needed by the Ecessa device. In this configuration both the WAN and LAN are configured with the same IP address. When available, Translucent Mode is preferable to Routed Mode because it uses only one address and can minimize or eliminate firewall and gateway changes (version 8.0 of firmware of later is required to use this mode).
NAT Mode is a technique similar to a traditional firewall. The WAN subnet mask is configured on the outside interface of the Ecessa device and all network equipment directly behind the Ecessa device has an IP address on a private network. NAT Mode does not have any special requirements and lends itself well to ISP changes. However, there are more configuration changes at the time of installation including modifying existing network equipment settings (IP information and rules sets) to reflect the new private network of the Ecessa device’s LAN.
In many networks, all three options would be suitable, and it simply comes down to preference. If there are any questions about which technique should be used, refer to our help page, contact Ecessa Technical Support at email@example.com, or call 763-694-8875.
This chapter provides some information to help you debug Object COBOL applications.
Many of the same tools and techniques used for debugging COBOL applications are applicable to OO Object COBOL applications. You can animate the classes in an application in the same way that you can animate any COBOL program.
There are also some extra facilities to help you with some debugging problems which are unique to OO programs.
Object COBOL comes with the following facilities for debugging OO applications:
- Animator support for classes
- A run-time switch to prevent reallocation of object handles
- Guard pages for trapping memory overwrites (on some UNIX systems)
- A message trace facility
- A way of counting the objects in an application
These facilities are explained in the following sections.
You can animate your Object COBOL classes using Animator. When you query an object reference, the Animator displays the object handle contained in the object reference. This enables you to see whether object references are pointing to the same or different objects. An object handle with value x"0000" always refers to the NilObject. An object handle with the value of x"20202020" is reserved by the RTS. Sending messages to it gives the error message "Invalid object reference".
The run-time system has a debugging option to prevent reallocation of object handles. The default behavior of the run-time system is to reuse the object handle for any object which has been finalized.
An application which sends messages to object handles after the object referenced has been finalized can cause unpredictable behavior. The message may be sent to a new object which has been allocated the old handle.
If the message sent is one the new object does not understand, then the "doesNotUnderstand" exception is raised, alerting you to the fact that something has gone wrong. If the new object does understand the message, then it will execute a method, and your application may fail at some later point, or give unexpected results.
To prevent reallocation of object handles, set environment variable OOSW to +d:
OOSW=+d
export OOSW
Now when a message is sent to an object handle for an object which no longer exists, the run-time system displays the following error message:
RTS 240 Object reference not valid
Note: The +d setting for OOSW is intended for development work only. Do not set +d in a production environment, as the OO RTS could eventually run out of object handles to allocate. The number of object handles the run-time system can allocate before this happens depends on the amount of memory available.
On some UNIX systems guard pages can help you track down some types of memory corruption problems. Check your Object COBOL for UNIX release notes to see whether they are supported on your system. They can help you find errors on two types of memory: Object-Storage data, and memory allocated through the Base class "malloc" method.
You can use guard pages to trap either: writes before the start of an allocation, or writes past the end of an allocation.
These sorts of problems can occur when you are using reference modification to access data, or when you pass Object-Storage data as parameter to a method, which attempts to access it using Linkage Section data items declared the wrong size. If you are using the Base class methods "malloc" and "dealloc" to allocate and free memory, you can also trap attempts to use memory you have freed.
If any of these types of errors occur when you are running with guard pages active, you will get run-time error 114; if you are animating the program execution stops with the statement that caused the problem highlighted.
You can set the guard pages before or after Object-Storage and memory allocations.
To set the guard page before Object-Storage and memory allocations:
OOSW=+g1
export OOSW
To set the guard page after Object-Storage and memory allocations:
OOSW=+g2
export OOSW
Switches +g1 and +g2 are mutually exclusive - you can't set them both at the same time.
Note: Running with guard pages on increases the amount of memory used by your application. Only use it for debugging.
If you are having problems with finding the point at which an application fails or raises an exception, you can switch on a message trace. This can be particularly useful if the error occurs while execution is in the Class Library.
To turn on message tracing, set the OOSW environment variable before you run your application:
OOSW=+t
export OOSW
Every message sent by the application is logged in file trace.log.
Note: Running with trace on slows down application execution as every message sent is written and flushed to the file.
The output from trace is an ASCII file which you can look at with Animator or any ASCII editor. The lists below describe the columns in the trace file.
Type of Resolve | How the target of the INVOKE was resolved; one of three resolve types.
Object reference | The object handle of the receiver of the message.
Message | The message sent by the INVOKE.
Object type | A code indicating the type of the object invoked.
Class of object invoked | The class name of the type of object invoked.
Class of implementor | The class name of the implementor of the method.
Stack level | The depth of the message stack. The stack level becomes one greater each time a method sends a message, and one lower each time a method returns. For example, if method A invokes another method, the stack level becomes one greater; when the second method completes execution, the stack level becomes one smaller.
A memory leak is memory allocated to an object which is no longer in use by your application, but which has not been finalized. One way to track down memory leaks is to watch the number of objects in your application. For example, if adding a record created an extra 24 objects, but deleting it only removed 20, a memory leak is a possibility.
You can find out the number of objects in existence at any time by sending the message "getNumberOfObjects" to the Behavior class. For example:
invoke Behavior "getNumberOfObjects" returning totalNumber
where totalNumber is declared as a pic x(4) comp-5 data item.
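To narrow down a leak, you can bracket a suspect operation with two such counts and compare them. The following sketch illustrates the idea (countBefore, countAfter and add-one-record are illustrative names, declared like totalNumber above; they are not part of the class library):

invoke Behavior "getNumberOfObjects" returning countBefore
perform add-one-record
invoke Behavior "getNumberOfObjects" returning countAfter
subtract countBefore from countAfter
display "objects created by the operation: " countAfter

If the displayed count is consistently larger than the number of objects the operation should create, the operation is a good candidate for a leak.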
This section deals with the following common problems: a class not being found at run time, parameter mismatches between an invoke and the method invoked, and RTS error 119 (Symbol redefined).
If your application tries to use a class which is not available, the RTS gives you error message 173 (Called program not found). To be available, there must be an executable file for the class either in the current directory or in one of the directories the run-time system searches for called programs, as set through the relevant environment variables.
When passing parameters to and from methods, you need to ensure that the data items used in the invoke match those expected by the method. In particular, if a method attempts to move data into a RETURNING parameter, and an invoke statement calling a method does not supply a RETURNING parameter, the application may cause a protection violation or memory exception.
For example, the "currentTime" method below returns the time in a 6-byte group item:
method-id. "currentTime". linkage section. 01 lnkTime. 03 lnkHours pic xx. 03 lnkMins pic xx. 03 lnkSecs pic xx. procedure division returning lnkTime. move timeNow to lnkTime exit method. end method "currentTime".
The method invocation below has no RETURNING parameter, and would probably cause a protection violation or other memory exception error at run-time:
invoke mainClock "currentTime"
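A corrected invocation supplies a data item to receive the result. For example (wsTime is an illustrative Working-Storage name; it must match the 6-byte group item the method returns):

01 wsTime pic x(6).
...
invoke mainClock "currentTime" returning wsTime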
OO programs can cause RTS error 119 (Symbol redefined) for either of the following reasons: two methods within a single class have the same name, or the same class is declared against different filenames in different class-control paragraphs.
The first of these happens when, within a single class, you give two class methods the same name, or two instance methods the same name. You are, however, allowed to give a class method and an instance method the same name; for example, a class program might define an "initialize" method for both the class and the instance object.
The second usually happens when you have a particular class declared in the class-control paragraphs of different programs, against different filenames. Filenames are case-sensitive, so you can get this error even if the names differ only in case.
For example, the class-control paragraph in program A might look like this:
class-control.
    DateClass is class "Date"
    ...
and the class-control paragraph in the DateClass class might look like this:
class-control.
    DateClass is class "date"
    ...
At run-time, when program A tries to invoke DateClass the run-time system raises error 119. The convention used throughout the Class Library and supplied example programs is to enter all filenames as lower case.
The idea of virtual assistants has been around for some time. Although early attempts were not so successful, they did plant the idea, which has grown into a technology that is an integral part of many people’s everyday lives.
To this point, these virtual assistants have appeared primarily in the consumer space: on smartphones, home-control devices, tablets, and even TVs. Reporting the weather forecast, ordering products, playing music, and obtaining quick answers to questions of almost any nature are all delivered through natural language conversational interactions with the AI-based systems.
One big advantage of this technology beginning its life in consumer products is the economies of scale that were leveraged to bring the cost down to something affordable, rather than being limited to very large corporations with multi-million-dollar budgets. It also resulted in enough usage to rapidly improve the quality of the interactions through countless cycles of machine learning.
Using Virtual Assistants in the Manufacturing Industry
It is exciting to watch what other applications these virtual assistants are now being delivered into. One such promising vertical, with many use cases, is manufacturing.
Just as with the virtual assistants for the home, a virtual assistant for manufacturing can provide quick answers to questions on the manufacturing floor. Rather than having to go to an office and ask a person or log into a terminal, a shop floor worker can simply ask the question wherever they are working.
Because cloud-based virtual assistants can be accessed from a wide variety of devices, including smartphones and tablets, they don’t require workers to take valuable time away from the manufacturing process; users can simply speak their questions or, if the area is too noisy, they can tap the question into their mobile device. The virtual assistant can then access a wide range of interconnected systems, including ERP, supply chain, inventory, and product specifications, to get information for making decisions quickly.
The virtual assistants can also provide natural language control of production systems. Again, instead of having to leave their place on a manufacturing line to get to a control panel, workers can simply speak or type the directive, and then make changes to a production order, adjust equipment parameters, or check on system status.
For manufacturing lines that run 24/7, 365 days per year, it is crucial to increase operational efficiencies wherever possible. Minutes saved scale up to huge differences across an organization. Training time is also reduced when natural language processing improves the human-machine interface.
Another area of the manufacturing process that can be enhanced using a virtual assistant is quality control. Often, quality control requires humans to carefully watch production, looking for things that are out of place.
Flaws in the product, the wrong product, and parameters that are out of range all require careful monitoring either directly or using video cameras. Even environmental conditions, such as temperature, can be monitored with infrared cameras.
By coupling an AI-equipped virtual assistant with image processing, an operator can focus more of their attention on other tasks. They can be alerted with natural language to a situation that requires their attention when it is detected by the virtual assistant.
What’s Available Today
These virtual assistants for manufacturing are not simply a futuristic vision. They are available right now from several companies that are finding creative ways to impact manufacturing with virtual assistants.
Check out Vision AI Assistant from AVEVA. “The Vision AI Assistant enables enhanced process optimization, thereby reducing time, wastage, and costs,” says AVEVA Global Head of AI and Advanced Analytics Jim Chappell. “Cameras can also be used in areas that are not suitable for humans for security or safety concerns, freeing up workers to focus on high-value tasks instead of continuously monitoring live camera feeds.”
According to the SmartBots website, SmartBots AI Chatbots for Manufacturing “enable employees to carry out repetitive, tedious, and mundane tasks in an easy conversational manner, which increases the overall productivity and efficiency. SmartBots help you jump start your Conversational AI journey with intelligent and engaging Manufacturing Chatbots accessible across channels, including voice channels.”
BotCore by Acuvate, according to the company website, provides a “digital assistant to equip employees with required data on-the-go…Using BotCore’s Bot building platform, enterprises can now build an intuitive chatbot accessible across any mobile device keeping the employees in reach of information at all times.”
In this sponsored post from One Stop Systems (OSS), Tim Miller, Vice President of Strategic Development, explores how autonomous vehicles will change the transportation landscape, and highlights the role AI has in this shift.
The next decade will see a fundamental change in the way we get from point A to point B in our automobiles. The quest to remove humans from behind the wheel with truly autonomous vehicles will drive billions of dollars in investment by car manufacturers and transportation service providers to develop and acquire the required technology. According to the SAE International classifications for autonomous capabilities, we are only at Level 2, meaning only basic levels of driver-assistance automation are being deployed in commercial vehicles today. However, many of the key players in the industry project that Level 5 vehicles will be on the road by 2028, providing full automation of all dynamic driving tasks under all roadway and environmental conditions. Additionally, it is projected that by 2040 virtually all vehicles on the road will be fully automated, saving thousands of lives a year from automobile accidents and bringing the brief 150-year history of human driving to an end.
To reach this milestone, major car manufacturers and rideshare companies are starting to deploy fleets of development and prototype cars. These fleets are being used to gather the data required to develop and test the artificial intelligence algorithms, which will eventually be deployed in millions of commercial vehicles. The cars in these fleets need to be outfitted with specialized high performance edge computing equipment including high bandwidth data ingest systems tied to the myriad of video, radar and LIDAR sensors in the car, high capacity and low latency storage subsystems and high performance compute engines that can perform the AI machine learning and inference tasks needed to enable the vehicle to see, hear, think and make decisions just like human drivers.
In addition to performance requirements, there is also the need for specialization of this computer equipment in terms of form factor, cooling and ruggedization to meet the uniquely harsh environment of cars driving hundreds of thousands of miles in all road and weather conditions. This combination of requirements is ideally addressed with AI on the Fly technologies, where specialized high-performance accelerated computing resources for deep learning training are deployed in the field near the data source; in this case, inside the vehicles themselves. In typical AI solutions, deep learning training has been a centralized datacenter process, and only inferencing occurs in the field. In contrast, AI on the Fly moves this capability to the edge and allows rapid response to new data with continual reinforcement and transfer learning. This is critical to effectively performing fundamental autonomous vehicle tasks such as obstacle detection and collision avoidance.
AI on the Fly is made of three modular sub-systems: data ingest, data storage and compute engines. These sub-systems support high-speed components, including data capture hardware, NVMe SSD storage, and GPU and FPGA compute accelerators, all with PCI Express interfaces for flexible scaling while maintaining high bandwidth and low latency. The data ingest system must be capable of absorbing the vast amounts of data continually flowing in from the sensors and processing the data for efficient delivery to both the persistent storage and the compute engines. Features in PCIe allow simultaneous multicasting of the data to the multiple sub-systems using RDMA transfers, avoiding the system-memory bottleneck without additional network protocol latency. The compute functions include machine learning tasks using traditional data science tools, data analysis, deep learning training tasks using neural network frameworks, and inference engines for prediction using trained models against newly sourced data. Each of these elements may require specialized GPU resources. AI on the Fly provides all of these elements in flexible building-block components that are easily customized to the specific requirements of the autonomous vehicle developer. (A figure in the original article illustrated example AI on the Fly configurations for autonomous vehicles.)
One Stop Systems is working with some industry leaders to provide technology for their autonomous vehicle development programs. These companies look to OSS as their trusted development partner because of its technical expertise in specialized high-performance edge computing. They rely on OSS's experience in developing scalable PCI Express based systems which tie together high-bandwidth sensor data ingest sub-systems with low-latency NVMe storage and ultra-high-performance multi-GPUs, all packaged in specialized rugged form factors. OSS recently announced a collaborative engineering design win with a major international network transportation company for deployment of AI on the Fly components in its 150-vehicle autonomous driving development fleet.
AI on the Fly is playing a key role in development of fully autonomous driving vehicles and will help to usher in fundamental changes to human transport over the next decade.
Tim Miller is Vice President of Strategic Development at One Stop Systems.
Disclaimer: This article may contain forward-looking statements based on One Stop Systems’ current expectations and assumptions regarding the company’s business and the performance of its products, the economy and other future conditions and forecasts of future events, circumstances and results.
Reduce door-to-balloon time with automatic STEMI code alerts
Consider this: every year, around 735,000 Americans have a heart attack – that’s one heart attack every 43 seconds.1 Of these, nearly 250,000 cases are an often fatal variation, ST Segment Elevation Myocardial Infarction or STEMI.2
Unfortunately, STEMI numbers are on the rise. According to a study for the American College of Cardiology, “patients suffering the most severe types of heart attack have become younger, more obese and more likely to have preventable risk factors, such as smoking,” despite increased understanding of risk factors.3 The harsh reality is that not only are these attacks growing in frequency, but only 25 percent of the hospitals in the United States are equipped to receive and treat STEMI patients, a heavy cardiology caseload to bear.2
Even with the increased number of STEMI patients, hospitals strive to meet and exceed the American Heart Association and the American College of Cardiology’s national guidelines for the “first medical contact to balloon” time (also known as door-to-balloon time) of 90 minutes.2
Door-to-balloon time is the amount of time it takes for a patient to get from the emergency room door to the cardiac catheterization lab and includes opening the blocked artery with balloons and stents.2 A reduction in door-to-balloon reduces mortality rates. A recent study finds, “deaths from a severe type of heart attack rise about 10 percent for every hour of delay between the time the patient calls for an ambulance and the time the patient is treated in the hospital.”4
Reducing door-to-balloon time is not easy. Achieving the recommended 90 minutes requires:
- Plans in place and ready for immediate implementation
- Proper staff training for quick and well-executed patient care
- Collaboration between departments for consults and diagnostics
- A communication channel that can be triggered in seconds
When every minute counts, there is no room for communication mistakes or delays, leading more and more hospitals to implement a critical communication system, for STEMI code alerts.
Critical Communications for STEMI Code Alerts
A critical communication system improves efficiency and productivity throughout the entire STEMI code alert process starting with diagnosis and classification using tele-cardiology capabilities. With a simple video chat, referring hospitals can instantly get in touch with a cardiologist to determine whether or not the patient in question requires immediate attention and whether they need to be transferred. This tele-cardiology functionality is key in reducing door-to-balloon time as 25% of emergency departments report difficulty having cardiologists on call.5 In cases where a patient transfer is necessary, the critical communication system allows for advanced notification of the arrival by EMS. In one particular study, door-to-balloon time was 17% shorter when there was advanced notification from EMS vs. no advanced notification.6
Through the use of a critical communication system, hospitals are able to issue STEMI code alerts to quickly gather the right on-call medical staff (from emergency room personnel to cardiac catheterization laboratory technicians to cardiologists) needed to begin life-saving treatment. The American Heart Association and the American College of Cardiology recommends having staff arrive in the catheterization lab within 20 minutes of being notified. Based on a survey, this 20 minute window reduces door-to-balloon time by a little more than 19 minutes.7 Automatic STEMI code alerts activate care teams fast, improving clinician coordination and productivity. Care teams can also reduce STEMI code alert errors through the use of incident-specific, pre-defined notification procedures.
For more helpful tips on how to reduce door-to-balloon time, check out this article.
Check out Everbridge in action at Renown Health, where the network uses its critical communication solution to keep doctors, nurses and other staff within the healthcare network connected during a STEMI code alert scenario. To learn more about Everbridge for STEMI Code Alerts, please visit our website.
Autonomous Planting Robot Helps Re-Forest the Amazon
ABB’s cobot YuMi can help re-plant the equivalent of two soccer fields each day
An autonomous, seed-planting robot is helping re-forest the Amazon, in a pilot project that hopes to combat ongoing deforestation in the region.
The pilot project, established by ABB Robotics and the non-profit organization Junglekeepers, uses a solar-powered YuMi robot to accelerate the seed planting process. Designed to work alongside humans, the collaborative robot (cobot) features a robotic arm with two “hands” that shift soil and handle seeds.
Using YuMi, the team said they are helping replant an area the equivalent size of two soccer fields every day.
YuMi is situated in a lab in the Peruvian Amazon, while a team in Sweden monitors and optimizes its work via ABB’s RobotStudio Cloud technology. By simulating and refining the programming required for YuMi’s tasks in the jungle remotely, the team said it has created “the world’s most remote robot.”
Junglekeepers’ mission is to preserve 55,000 acres of Amazon rainforest. As retaining staff in the remote area can prove challenging, YuMi is seen as a vital part of this mission.
“ABB’s collaboration with Junglekeepers demonstrates how robotics and Cloud technology can play a central role in fighting deforestation as one of the major contributors to climate change,” said Sami Atiya, ABB Robotics’ president. “Our pilot program with the world’s most remote robot is helping automate highly repetitive tasks, freeing up rangers to undertake more important work out in the rainforest and helping them to conserve the land they live on.”
“As of right now, we have lost 20% of the total area of Amazon rainforest; without using technology today, conservation will be at a standstill,” said Moshin Kazmi, Junglekeepers’ co-founder. “Having YuMi at our base is a great way to expose our rangers to new ways of doing things. It accelerates and expands our operations and advances our mission.”
Malware is on the rise. At the beginning of 2008, our malware collection had 10 million samples. Today we have already surpassed 70 million. Most of the malicious samples are Trojans (backdoors, downloaders, fake alerts), but there are also a lot of viruses, worms, and bots that in a short time can infect many computers without user interaction. Usually the malicious code comes in the form of an executable or DLL, but sometimes malware authors opt to use alternate languages such as VBScript (Visual Basic Scripting Edition), a lightweight Active Scripting language that has been installed by default in most Microsoft Windows versions since Windows 98. One example of this kind of malware is Satanbot: a fully functional VBScript botnet that uses the Remote Desktop Connection to connect to infected systems.
VBScript files are usually in clear text because they are interpreted at runtime rather than compiled beforehand by the author. However, for cases in which the author wants to keep others from viewing or modifying the source code, Microsoft provides a command-line tool, Script Encoder, which encodes the script into a .vbe file. The encoded file is no longer readable as plain text, but it can be decoded back to its original form. Once that file is decoded, we can look at the bot's source code, which is divided into sections. Each section implements a different function of Satanbot, most of which we've already seen in AutoRun worms like Xirtem.
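For reference, the Script Encoder ships as a command-line tool, screnc.exe, and a typical encoding run looks like this (file names are illustrative):

    screnc satan.vbs satan.vbe

The encoding is obfuscation rather than encryption, which is why analysts can mechanically recover the original VBScript from a .vbe sample with freely available decoders. Here is a description of these functions: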
- Enable CMD and REGEDIT: To perform all the changes in the system (modify the registry and execute BAT files), the edition of the registry (regedit) or the use of the command line (cmd) will be enabled by changing the values “DisableRegistryTools” and “DisableCMD” to 0. In addition, one AutoRun feature is configured by creating the value “Update” in the “Run” key with the path of the script, along with hiding files and file extensions in the system.
- Disable UAC: The value “EnableLUA” is checked to verify whether it is necessary to disable the User Account Control in Windows Vista, Windows Server 2008 and Windows 7. If it is enabled, the script will create on the fly another script and a BAT file to disable UAC. Another modification in the registry is done to perform operations that require elevation of privileges without consent or credentials. At the end, all the temporary files used to do the modifications in the system will be deleted.
- Take ownership of folders: The command TAKEOWN (in Windows Vista and 7) runs to take ownership and enable the modification of folders including Application Data, Cookies, and Local Settings
- Self-install and spread: Another BAT file is created in the %TEMP% path. It first changes the icon of .vbe files to the one used by Windows pictures, so the user will think the file is a picture rather than malware. The original .vbe, along with a shortcut file, is also copied to several locations, including network shares and the peer-to-peer shared folders of popular clients like eMule, LimeWire, and Ares. Another spreading vector this malware uses is infecting removable drives by creating autorun.inf files along with a copy of the original .vbe and a shortcut (.lnk) file.
- Worm test: This may seem a confusing name, but it is another spreading method. The original .vbe is copied to other folders, such as Startup and %Userprofile%\Microsoft, under the name "System File [Not Delete]" to trick the user into not deleting it.
- Worm.s@tan: Contains a loop that will trigger the execution of the code every 60 minutes
- Backdoor: Using another temporary BAT file, the malware will enable Remote Desktop Access by making the following changes to the system:
- Allow unsolicited remote assistance and full control
- Allow the use of blank passwords
- Enable multiple concurrent remote desktop connections (with a maximum of five)
- Automatically start the Terminal Service
- Open port 3389 in the Windows firewall
- Add an administrator user to the system
- Start the Remote Desktop Services UserMode port redirector service
- Create a file in the bot’s path with an “OK” inside
- The foregoing commands execute on reboot while the message “Windows repare quelques fichiers, patientez …” (Windows is repairing some files, wait …) appears to the user at the command prompt.
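From a defender's point of view, several of these changes are straightforward to audit. The sketch below (Python, Windows-only, standard winreg module) checks two of the settings the bot flips; the registry paths and value names are the standard Windows locations for the changes listed above, so treat the mapping as our reading of the bot's behavior rather than code taken from the sample:

    import winreg

    # Standard locations of two settings Satanbot modifies:
    # fDenyTSConnections = 0 means Remote Desktop connections are allowed;
    # LimitBlankPasswordUse = 0 means blank passwords are accepted for
    # network logons.
    CHECKS = [
        (r"SYSTEM\CurrentControlSet\Control\Terminal Server",
         "fDenyTSConnections", 0, "Remote Desktop enabled"),
        (r"SYSTEM\CurrentControlSet\Control\Lsa",
         "LimitBlankPasswordUse", 0, "blank network passwords allowed"),
    ]

    for path, name, suspicious, meaning in CHECKS:
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
                value, _ = winreg.QueryValueEx(key, name)
        except OSError:
            print(f"{name}: value not present")
            continue
        if value == suspicious:
            print(f"WARNING: {name} = {value} ({meaning})")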
Another interesting part of the code is the section Compt.Bot, from which the malware sends an HTTP POST request with a specific user agent to the URL of the botnet command server. With that request, the server can get the public IP address of the infected machine, which probably has Remote Desktop Access enabled with the required settings so the bad guys can connect. By opening that URL in the browser, we can see the IP addresses of the machines connected to the control panel and the number of compromised machines, which can grow very quickly. (A 24-hour comparison of the control panel's counters, shown in the original post, illustrated how quickly the numbers climb.)
Other functionalities of the botnet:
- Delete browser and user histories of some common software: Internet Explorer, Firefox, Chrome, Thunderbird, and Skype
- Terminate processes of security software by downloading and executing a batch file that can be easily updated with more processes
- Download an .exe file from another URL (currently offline). We need to examine this file more thoroughly, but one of its purposes seems to be updating the malware by executing a different embedded .vbe.
Even if VBScript is not the best language to hide malicious activities (using encryption, obfuscation, packers, antidebuggers, or anti-virtual-machine features), it is pretty effective when we take into account the rate of infection in just one day. In addition, those scripts can build a botnet of infected machines that can be controlled using a Remote Desktop connection, which allows the attacker to perform any action in the system. The malicious files related to this threat are detected by McAfee products as VBS/Satanbot.
Noted astrophysicist Neil deGrasse Tyson, in a keynote last week, talked up the importance of weeding out bad data and tried to bring calm to discussions about AI, while maintaining the need for oversight.
Tyson, director of the Hayden Planetarium and host of the StarTalk podcast, spoke at Rev 4 in New York, a data science and MLOps conference hosted by Domino Data Lab. The importance of data to science was central to his speech: “In my field, we’re data heavy. We have been at this for decades.”
Science, as a whole, matured after humanity transcended its five traditional physiological senses, Tyson said. Reliance on only taste, touch, hearing, sight, and smell as sensors to measure and understand the world could essentially be regarded as incomplete data. “The universe is under no obligation to make sense to us,” he said.
Tyson noted that even with resources to take measurements, some measurements are not precise while others are. “What are data if not the measurement of things?” he asked. Despite best efforts to take measurements and produce data, he said there are questions that in principle have no answers. “Measurements don’t produce exact information,” he said. “We just have to agree what the approximation is that we’ll accept upon making that measurement.”
The Necessity of Compute Power
Astrophysicists, physicists, the military, and a few other branches of society, Tyson said, were very early in the use of computers to assist their work as they become awash with data. The Gaia space observatory, for example, takes high-precision images of billions of stars in order to create a 3D map of the galaxy. “No human being can sit there and analyze all of that, so it’s all loaded up,” he said. “This is variants of AI that we’ve been engaged in for decades, where computers are making decisions for us once they’re trained to look for things that are interesting -- then they’ll find something that we might have missed, because they’re better at it.”
There can be flaws in the collection of data that emerge, however, even with advanced resources. Tyson said that time sampling needs to be handled properly to avoid becoming susceptible to artifacts. “If you don’t do it right, you can make things that move look like they’re moving backwards or not moving at all,” he said.
For example, a bird flying in front of a security camera might look like it is not flapping its wings because of the camera’s frame rate. “Your data aren’t always telling you reality,” Tyson said. With growing public concerns about racial or cultural bias in data, as well as the possibility of bias in the programmers behind the data, other issues can surface in data collection. “There’s also just data bias,” he said.
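The bird example is ordinary temporal aliasing: any periodic motion faster than half the sampling (frame) rate folds back to a lower apparent frequency. A toy illustration in Python (the frame rate and wingbeat numbers are made up for the example):

    # Apparent frequency of a periodic motion sampled at rate fs:
    # the true frequency folds into the band [-fs/2, fs/2].
    def apparent_frequency(f_true, fs):
        return f_true - fs * round(f_true / fs)

    fs = 30.0                      # camera frame rate, frames per second
    for f in (29.0, 30.0, 31.0):   # wingbeat frequencies, beats per second
        print(f, "->", apparent_frequency(f, fs))
    # 29 -> -1.0  the wings appear to beat backwards, once per second
    # 30 ->  0.0  the wings appear frozen
    # 31 ->  1.0  the wings appear to beat slowly forwards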
When Data Cannot Be Trusted
Personal observations may be significant in the judicial system, but they can be faulty when seen through a scientific lens. “There’s no such thing as eyewitness data,” Tyson said. “It is the lowest form of evidence in the court of science.” He cited a news story where visitors at an ice cream parlor claimed they saw him try several different flavors when he simply ordered his favorite flavor.
Reassessing observed data played roles in the discovery of celestial bodies in our solar system, Tyson said, such as Pluto -- which over time saw its status change from planet to dwarf planet. “The more we learned about Pluto, the smaller it got,” he said.
Data reassessments also ended the search for the mythical Planet X, which was supposedly affecting the orbit of Neptune. After discovering that bad data from an observatory had been relied upon, astronomer E. Myles Standish eliminated it in the 1990s, and other data sources were consulted. “Upon doing so, Neptune landed right on Newton’s laws,” Tyson said. “There was no need for a Planet X.”
AI is Already Part of the Equation
When asked for his perspective on AI, Tyson sought to cool some of the incendiary worries about its use and abuse. “The public now thinks of AI as an enemy of society without really understanding what role it’s already played in society,” he said.
Navigation apps are commonplace now and, as Tyson pointed out, are used with little uproar. “This is not a computer doing something rote,” he said. “It’s a computer figuring stuff out that a human being might have done and would have taken longer. No one’s calling that AI -- why not? It kind of is.”
The furor over AI caught fire after the technology saw wider use in nontechnical professions and communities, Tyson said. “What do you think it’s been doing for the rest of us for the past 60 years? When it beat us at chess, did you say, ‘Oh my gosh, it’s the end of the world?’ No, you didn’t. You were intrigued by this.”
He did suggest guidance should come into play with AI, but eschewed doomsaying over the technology. “AI, I don’t think is uniquely placed to end civilization relative to other powerful tools,” Tyson said, though he acknowledged the presence of fears associated with its unknowns. “We should fear it enough to monitor our actions closely.”
What you can do to reduce your risk from the recently revealed Wi-Fi issue called KRACK.
On October 16, a researcher publicly disclosed a potential Wi-Fi security issue. The issue affects “WPA2,” the method used by most devices and routers to protect Wi-Fi traffic as it travels through the air. This is an industry-wide issue, and it could impact anyone using Wi-Fi. But it does have limitations, and corrective patches are on the way.
How It Works
WPA2 is an industry-standard encryption method used to secure communications between wireless devices and access points. For example, it’s often used when you’re on a laptop, smartphone or tablet, and you’re connected to a business’s Wi-Fi or your home wireless router. KRACK, as the vulnerability is called, may let someone trick the Wi-Fi security into thinking information has been securely encrypted when it has not been.
The Wi-Fi Alliance, which oversees protections and issues related to Wi-Fi, issued a statement and said there was no evidence the vulnerability had been used in a successful attack. Here is the statement from the Alliance: https://www.wi-fi.org/news-events/newsroom/wi-fi-alliance-security-update
Where It Works
To take advantage of the weakness, someone would have to be within physical range of your Wi-Fi signal. The issue does not affect information stored on devices. And it cannot impact information sent over a cellular signal, like LTE, or information flowing through an Ethernet cord. It also cannot affect Wi-Fi communication with a secure website (HTTPS) or a VPN service (virtual private network) because both of those encrypt the information separately.
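The HTTPS point is worth making concrete: TLS negotiates its own encryption end to end, so even if the Wi-Fi layer's protection is stripped, an eavesdropper still sees only TLS ciphertext. A quick sketch using Python's standard ssl module shows that separate encryption layer (the hostname is just an example):

    import socket
    import ssl

    host = "example.com"
    ctx = ssl.create_default_context()      # verifies the server certificate
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # Whatever happens on the Wi-Fi hop, this session's encryption
            # is negotiated between your device and the server.
            print(tls.version())   # e.g. 'TLSv1.3'
            print(tls.cipher())    # (cipher name, protocol, secret bits)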
What Can You Do?
Take care of your devices. You should install software and device updates as soon as you receive them. Device companies are working on patches and updates to block the issue, and some have already been delivered. Please be alert for notifications that updates are ready for your devices and install them as soon as possible.
Make sure any website you visit begins with "https://" or shows a small padlock icon in the address bar.
If you need to share sensitive information, like putting in your credit card number to buy something over the internet, consider turning off Wi-Fi and using the cellular network, or plugging your device into the internet with an Ethernet cord.
The best thing you can do is follow safe internet habits. You can learn more ways to help keep yourself and your information safe on AT&T’s Cyber Aware website.
Patches are on the way, but will take time. WPA2 is currently the most secure Wi-Fi encryption protocol in use. It is widely used by companies, home users, and devices everywhere to connect to the internet. That means many companies are working on many patches to correct the issue. At AT&T, we are working with our vendors to deploy security patches to access points as soon as they're available.
Autonomous Vehicles That Don't Crash
As autonomous vehicles become a reality, we need algorithms that help them drive without crashing into one another, or quitting altogether when safety protocols are breached. It's one of the biggest challenges for roboticists who create behaviors for teams or swarms of robots.
Traditionally, each robot is given an invisible safety bubble that must not be breached; but when enough robots work close together, the bubbles inevitably get breached and the robots shut down.
Well, a team out of Georgia Tech's Institute of Robotics and Intelligent Machines has created new algorithms that allow any number of robots to come within inches of one another, without colliding.
Their bots are using "a set of safe states and barrier certificates to ensure each stays in its own safe set throughout the entire maneuver." Essentially, they shrunk their robots' bubbles.
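To give a flavor of how barrier certificates work, here is a minimal sketch for two robots modeled as single integrators (this illustrates the general technique, not the Georgia Tech code; the gains, distances, and function names are made up). The barrier value h stays nonnegative as long as the robots keep their minimum separation, and the filter adjusts the desired velocities just enough to enforce dh/dt >= -alpha * h, so h can never cross zero:

    import numpy as np

    def safe_velocities(p1, p2, v1, v2, d_min=0.1, alpha=1.0):
        # Barrier: h = ||p1 - p2||^2 - d_min^2, safe while h >= 0.
        # Assumes the robots are not already collocated (p1 != p2).
        dp = p1 - p2
        h = float(dp @ dp) - d_min ** 2
        hdot = 2.0 * float(dp @ (v1 - v2))
        if hdot >= -alpha * h:
            return v1, v2              # desired velocities already safe
        # Minimum correction along the line joining the robots, split
        # evenly, chosen so the new velocities give dh/dt = -alpha * h.
        gap = (-alpha * h - hdot) / 2.0
        n = dp / (2.0 * float(dp @ dp))
        return v1 + gap * n, v2 - gap * n

    # Two robots approaching head-on: the filter slows them smoothly
    # instead of letting them collide or freezing them outright.
    p1, p2 = np.array([0.0, 0.0]), np.array([0.5, 0.0])
    v1, v2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
    print(safe_velocities(p1, p2, v1, v2, d_min=0.2))

A real swarm implementation solves a small quadratic program over all pairwise constraints, but the invariant is the same one the shrunken "bubbles" encode.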
In a recent demo, the researchers proved the theory when a swarm of as many as eight robots worked in close quarters with one another without shutting down.
Now, is this ready for the freeway, or gridlock traffic downtown? Maybe not, but it's a step in the right direction - and it could even be considered for next generation air traffic control, as we find a way to more safely pack those planes flying our friendly skies together.
Van with a Vision
The Vision Van from Mercedes-Benz Vans is a new concept vehicle that was developed with drone-delivery startup Matternet. As you can imagine, it was built to meet the need should the delivery-by-drone business ever take off.
The concept van is sort of a hybrid FedEx truck with a built-in drone helipad. As the delivery driver, who remains human at least in this concept, delivers packages in person, a drone fleet delivers packages to other customers in the area.
According to Daimler, the vehicle merges a number of innovative technologies for last-mile delivery operations. It features a fully automated cargo space, integrated drones for autonomous air deliveries and a state-of-the-art joystick control.
Powered by a 75 kW electric drive system with a range of more than 160 miles, the van would run cleaner and be virtually silent, which would make stealthy late deliveries in residential areas possible to suit same-day delivery.
Malicious Spies Steal 3D-Printed Designs
In March, we saw how a team of researchers at the University of California, Irvine found a way to reverse engineer 3D-printed designs by simply recording the sound the process makes while being built in a 3D printer.
Well, a new study out of the University of Buffalo has proven 3D printers even more vulnerable to malicious spies.
Researchers programmed a smartphone's built-in sensors to measure the electromagnetic energy and acoustic waves that emanate from a 3D printer. The sensors can infer the print nozzle's location as it builds the object.
With a smartphone 20 cm away, they were able to gather enough data to replicate simple designs with 94% accuracy, and more complicated designs, like an automotive part or medical device, with better than 90% accuracy. That's an improvement on the roughly 90-percent-accurate knockoffs they were making at UC-Irvine.
So what can we do to keep disgruntled employees and industrial spies from stealing our stuff? It could be as simple as keeping devices farther away from the printers, as the method was only 66-percent accurate when the phone was 40 cm from the printer. Signal jammers, or even programming the printer to operate at different speeds, could also help.
This is Engineering By Design.
Cybercrime may seem like a suitable subject for a movie or television show, but for those who work in the media and entertainment industry, it is a real-life threat. Studios hire many different people to write scripts, to direct screenplays, and to manage the production, industry talent, facility operations, and IT networks. There are numerous opportunities for hackers to exploit vulnerabilities. Data security for the media and entertainment industry is, therefore, of utmost importance.
To understand why media data storage security, including encryption, is so important, we will discuss some of the greatest threats and how to prevent them.
Cybersecurity Threats Remain a Persistent Problem
Cybersecurity issues have been prevalent in film, television, print, radio, and sports. There are various ways in which cybercriminals exploit media and entertainment organizations. The top threats include impersonating authorized individuals, counterfeiting tickets, pirating content, issuing threats, and breaching networks to access content and personal data.
Security Risks in the Media and Entertainment Industry
The sheer numbers of vulnerabilities, attack methods, and incidents make it essential for media and entertainment companies to use data encryption systems. Some of the greatest risks they face every day include:
- Scams: Scammers are reaching a larger audience than ever before. In 2020, hackers exploited WhatsApp, a messaging application run by Facebook. Users were promised free membership to a streaming service if they forwarded a message to 10 contacts and provided their account credentials. This personal data was then sold off by hackers to take over accounts. 1
- Insider Sabotage: Cyberattacks often originate from within an organization, whether by accident or intentionally by a disgruntled employee. In 2014, confidential data about employees at Sony Pictures was exploited. Hackers also erased computer infrastructure using malware, prompting the media giant to alert others in the industry.2 The attack involved 100 terabytes of data, including details regarding celebrity earnings, employee social security numbers, emails containing internal gossip, and unreleased movies.3
- Leaked Content: In 2017, HBO faced a cyberattack that leaked an episode of Game of Thrones online. Netflix was hit by a similar type of attack. As a result, 10 episodes of Orange Is The New Black were leaked. An investigation later found a contractor working on the show launched the attack. From internet radio services to online gaming platforms, cybercrime has become a widespread problem. In recent years, Steam revealed that 77,000 accounts are hacked every month.4
- Email Hacking: Hackers attacked user accounts of the Disney+ streaming service as soon as it went live in 2019. They were able to log people out and change email and password settings. The stolen accounts were then sold on the deep web. However, a leaked email doesn’t only affect consumers. Communications involving celebrities can be used to fuel tabloids and public scandals. Often, leaked information is manipulated or taken out of context, sparking media frenzies and scandals.6
- Unauthorized Data Access: From employees who are untrained regarding the risks of information sharing, to third-party vendors who have been granted access, and users exchanging files remotely, your data is often vulnerable. The potential for unauthorized data access can have professional, personal, artistic, and financial consequences that can ruin reputations and shut down productions and businesses.
- Ransomware Threats: Ransomware attacks are expensive. Entertainment companies have paid extremely large amounts of money to access lost data. The cost also comes in the form of lost credibility and paying fines for non-compliance with regulatory authorities. Data should always be backed up in at least one secure location. Encryption also reduces the risk of sensitive information falling into the wrong hands.
Why Data Security Is So Important in the Media and Entertainment Industry
As we’ve seen in past incidents, data breaches and other cybercrimes put many in the industry at risk. Data security is essential for ensuring the privacy of employees, consumers, creators, and celebrities. Unauthorized access to social security numbers and medical information can lead to identity theft, while internal communications can be used to exploit sensitive information. Leakage of unreleased scripts, TV shows, and other extremely valuable assets can occur internally and externally.
Entertainment companies must always be aware of what’s being said about their brand, content, and talent. Reputation, which is dependent on fan loyalty, can be easily damaged by a data breach. This, in turn, can have devastating effects on reputability and revenue earnings in a business where success is often solely driven by public sentiment. Having the right data security policy and infrastructure in place reduces the risk.
Ways to Improve Data Security in the Media and Entertainment Industry
Prioritizing assets, controlling access, and having an incident response plan are some effective practices, but improving data security requires specific practices that involve everyone within your organization. If you’re wondering how to keep your data secure, here are some of the most effective strategies:
- Employee Training: Employees should be trained in the latest threats and how to recognize the signs of an attack. A training program that incorporates your company’s policies and procedures is most effective and should also instruct personnel on what to do if they suspect an attack is occurring. Your staff needs to know what to report, who to report it to, and in what manner.
- Properly Classify Data: Information should be classified continually, accounting for its attributes and how its importance changes over time. You can then classify data accurately, which is important, but classifying every last bit of information isn’t the goal. The goal is to increase accuracy over time and adapt your security posture to constantly changing conditions.
- Create Backups: Secure data backup can prevent the loss of confidential information, especially when encryption is used. This ensures only authorized users can exchange files or access them, saves time, and allows IT personnel to dedicate their time to tracking and auditing files, keeping them secure, and concentrating on production data management. All the while, users can safely share and transfer files using mobile devices.
- Use Secure Encrypted Systems: Encryption, in software or hardware form, makes data unreadable by converting it into code. Only someone with the encryption key can make the data readable again; otherwise, a hacker could read or maliciously use any information available to them. Encryption systems provide peace of mind because only the intended recipients can decrypt and see what you have sent (a minimal example follows this list).
- Share Data Using Quick-Link® Technology: At Ciphertex®, we offer hardware and software data security solutions. Our SecureNAS® servers provide AES-256 bit encryption and military-grade durability. With Ciphertex Quick-Link® cables, you can quickly and easily network your SecureNAS to up to ten Windows-, Mac-, and/or Linux-based computers for secure information sharing via a USB 3 interface. It is available in seven and twenty-foot lengths to accommodate a wide range of applications.
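To make the encryption point concrete, here is a minimal AES-256 sketch using the widely used Python cryptography package. It illustrates authenticated encryption in general, not Ciphertex's products, and the key handling and sample data are placeholders:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # keep this in a key vault
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # must be unique per message
    plaintext = b"contents of an unreleased script draft"
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)

    # Only a holder of the key can decrypt; any tampering with the
    # ciphertext raises an InvalidTag exception instead of returning data.
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext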
Trust Ciphertex® for Secure Media and Entertainment Data and Content Storage
Data security products from Ciphertex® leverage the latest technology to minimize cybersecurity threats, protect company data, and increase productivity. Our portable NAS servers, portable RAID systems, single drives, rackmount servers, and Quick-Link® cables and other accessories are perfectly suited for media and entertainment, as well as other industries.
Additionally, we provide customizable encryption software that optimizes data management, secures data sharing, provides data migration, and includes a state-of-the-art backup solution. To learn more, call Ciphertex Data Security® at 818-773-8989 today!
Pacific Gas and Electric, the beleaguered California utility at the center of the wildfire phenomenon, has turned to the Argonne National Laboratory for help. PG&E is looking for small-area weather and climate models that can help it make plans on a regional scale. Argonne’s Chief Scientist and head of its Department of Atmospheric Science and Climate Research, Dr. Rao Kotamarthi joined Federal Drive with Tom Temin to talk about how the arrangement works.
Tom Temin: Dr. Kotamarthi, good to have you on.
Dr. Rao Kotamarthi: Thank you. Thanks for the invitation to talk to you.
Tom Temin: So I didn’t realize that a private entity like PG&E could come to Argonne and enter into an arrangement for research. Tell us how this works. And is it something that you do regularly?
Dr. Rao Kotamarthi: Yeah, it is fairly common for us to work with private sector partners. These kinds of collaborations range from access to user facilities, such as the Advanced Photon Source at Argonne and high-performance computing facilities, to projects like ours. The laboratory exists for the public good, supporting fundamental research into energy and the energy infrastructure, so we do whatever we can to help. We are encouraged to work with private sector partners to share our knowledge and some of the inventions that we make at the laboratory, so that they are made available to the public, private industry, and startups.
Tom Temin: And PG&E is paying for this work?
Dr. Rao Kotamarthi: Yes, it supports effort that my staff spends on it. Yes, it’s paid for that.
Tom Temin: Sure. And it sounds like that you will come up with a piece of intellectual property here, say, a way of directing climate models to smaller areas than simply whole nations or whole states. And will that belong to PG&E? It sounds like something that could be deployed throughout the world, really, for entities that have a interest in localized climate conditions.
Dr. Rao Kotamarthi: Yes, so this particular project is based entirely on open science research. We have published the tools and datasets used for this research in peer-reviewed journals and articles over the last several years, and hence there is no IP issue for this particular project. These types of projects go through a screening process to identify any such issues and get cleared by the laboratory and DOE. In case there is an IP issue, things like development or testing of some new technology, there will be agreements in place for dealing with IP before a contract is set up. But this particular research is actually using the dataset we developed previously, applied to trying to understand wildfire and to help PG&E develop better strategies for dealing with it as the climate is changing. We don't have any IP issues in this project; it's all based on open science.
Tom Temin: And what is it they’re trying to figure out exactly here? What is the essential problem you’re helping themselves?
Dr. Rao Kotamarthi: So we were asked to check six different indicators of wildfire that PG&E uses regularly, to find out how these things will change into the future. They have a protocol where they track these fire indicators, and whenever one or several of them exceed a threshold, they go into high alert mode and try to figure out how to combat that. So what they wanted to know is: these six indicators that we are using right now, will they be changing? Are there any projections into the future that we can use to better prepare our infrastructure for these changes? These kinds of changes would be things like: how dry will the soil get? Will the wind speeds or directions be changing in the future? Will the conditions in the atmosphere that drive the particular types of weather patterns conducive to wildfire change by mid-century? So that's pretty much what we are asking: how will the typical fire indicators that PG&E uses right now change as we go into the future?
Tom Temin: And the models that you have then, these are developed from data that comes from the weather types of agencies?
Dr. Rao Kotamarthi: So the model itself is very high resolution, in the sense that we can resolve North America at “…” kilometers, rather than the grid cells of a typical climate model at 100 kilometers. So it's about 10 times higher resolution, and it requires a lot of “…” computing. The data for that comes from, for the current climate, the current weather: we take the weather conditions for the last 10 years to drive these models. When we do the future, we actually use climate models; there may be about “…” different climate models around the world, so we pick a few of them to provide the conditions outside North America, and we can then do a very high resolution simulation within North America. These take into account things like how the greenhouse gases will change in the future (there are different scenarios for that), uncertainties in physics that we don't understand, how we account for that, and things like that. It gives you a projection into the future. Essentially, when you're doing a climate simulation, you're looking at scenarios of how things will evolve into the future, so it gives you an idea of how the scenarios will evolve. So essentially we are projecting into the future. And then of course, when you do some kind of prediction, you want to understand what confidence you can put in this prediction system.
Tom Temin: We’re speaking with Dr. Rao Kotamarthi. He’s chief scientist and head of the Department of Atmospheric Science and Climate Research at Argonne National Laboratory. And are you able to develop any specific options for a place like PG&E? That is to say, when they understand what’s going to happen, say it’s going to get drier here or windier there, what do they do about it?
Dr. Rao Kotamarthi: So it's an interesting problem. Right now most of the private industry in the U.S. is trying to figure out, how do they account for climate change in their planning? This is probably the initial phase; people are trying to figure out, is the data itself useful for me to make a decision, right? If the data says that the incidence will change by two or three, it is 10 now, it could be 11 or 12, and the uncertainty on that is 30%, how much credence should I give to that, and how much money should I be spending? So they're trying to understand these projections and the uncertainties in them, and how that affects their business plans. For example, PG&E is really interested in something called the Diablo wind. I think Diablo is a mountain in California, and the winds come from the northeast. That is one of the biggest indicators: whenever they see this Diablo wind over some threshold, they have a good idea that this is going to lead to wildfires, especially in the northern and central parts of California. We have been looking at the incidence of these winds, how the intensity is changing, how the frequency is changing, how the duration is changing into the future. The idea being that if you can develop some statistics of how these are changing into the future, maybe some of that data will be helpful in planning. Let's say, right now, the Diablo winds are mostly around the coastal part of Northern California; maybe they move a little closer to the mountains. What kind of action should we take now so that we can do better planning for the future? These are the kinds of questions they're asking, and at this time, the idea is to just be aware of these things and start planning. I'm not yet sure how the industry will actually implement this into their future activities, or how it will affect investment decisions. This is all part of the idea that we have to adapt to a changing climate. Even if you do mitigation, there is this big need for adapting to changing climate. How do we go about doing that? This is a challenge, and also a need, and we're trying different things. This is maybe getting closer to generating the kind of data the industry can actually start using in their spreadsheets and things like that, so that they can look at it in terms of cost. Essentially, at some point you have to figure out how much it costs and how much action you can take.
Tom Temin: Yeah, my question then is, what do they do over time? Once you are done with the research and you come up with a refreshed model for them to use, this has to be deployed, I would think, and the models run on an ongoing basis. So will they have the ability to take what they get from Argonne, and perhaps use it themselves to keep an ongoing prediction of what they need to know?
Dr. Rao Kotamarthi: Yeah, I think the whole idea is that they mostly look at current weather. So if you can build these trends of how things will change in the future, they can build that into their models. So when they are looking at, let’s say, 20 years from now, and they’re putting some of the wires underground or something like that, where they should prioritize, maybe that will help them decide those things. These are decisions that will be made by industry, and they are trying to figure out those pathways. And I think the kind of data we are developing will help them do that.
Tom Temin: And do you have a separate set of data and algorithms and models and so forth, not so much for wildfire, but for, say, flooding, which might affect utilities elsewhere?
Dr. Rao Kotamarthi: So what we have done at Argonne, and several other people have done around the world too, so I don’t want to take all the credit for this, is develop really high spatial resolution climate data for North America, through a process called downscaling, which we did a few years ago. So we have simulations for the current decade, let’s say the recent past, mid-century and the end of the century, and we do different greenhouse gas emission scenarios, like I said before. In total this is about 300 individual “…” and about five petabytes of data for performing analysis. From that large dataset we can extract almost all kinds of meteorological variables, precipitation, temperatures and other things, from which we can calculate things like a fire index. The fire index is one derived product from the climate model simulations we did; we do additional calculations for that. Similarly, we have done flooding too, at a really high resolution. Because we have precipitation and other variables at every three hours, we run a separate flood model at very high resolution for calculating flooding intensities, for example, both coastal and inland flooding. That will be helpful for infrastructure that is affected by floods. PG&E, for example, is very interested in fires, obviously. The whole idea of a fire index is that if the number is very high, you have the potential for fires, as you see sometimes in the West, “…” fire potential index is high, it’s yellow, red and things like that. Essentially “…” end up indices. So you want to calculate those indices for current and future climate and see how they are varying, and which parts of North America, for example, may become more fire prone in the future. So it is risk analysis in some sense: once you have done all these calculations, you are assessing the risk of flooding, fire and things like that. And based on that, industry or government can take action, considering the risk and the uncertainty in calculating the risk.
Tom Temin: Yeah, so the “…” we have now is that the power stays on at Argonne, so that you can help everybody else.
Dr. Rao Kotamarthi: [Laughing], and the computer keeps running.
Tom Temin: Alright.
Dr. Rao Kotamarthi: They’ll buy a bigger computer. That’s all.
Tom Temin: Dr. Rao Kotamarthi is chief scientist and head of the Department of Atmospheric Science and Climate Research at Argonne National Laboratory. Thanks so much for joining me.
Dr. Rao Kotamarthi: Thank you. You’re welcome.
Tom Temin is host of the Federal Drive and has been providing insight on federal technology and management issues for more than 30 years.
Pollution in China Makes Free Cooling Difficult for Baidu
Air quality drives innovation in data center cooling at Chinese internet giant
China’s notoriously high air pollution levels are a well-documented public-health issue. But pollution also has other less talked about effects. One of them is on the efficiency of data centers in the country.
Pulling outside air into a data center to cool equipment, which reduces the energy used by mechanical cooling systems, has been one of the most effective ways to increase a facility’s energy efficiency. The high concentration of pollutants in the atmosphere, however, makes “free cooling” much harder to obtain in Chinese data centers than in the US, for example, where using outside air to supplement data center cooling capacity is commonplace.
Air pollution in China has resulted in higher IT equipment failure rates for Baidu, the internet giant that’s the country’s answer to Google. The problem of pollution has driven a lot of research and development focus on data center cooling technologies at the company. Those research efforts take place both in China and in Baidu’s Silicon Valley offices, Ali Heydari, the company’s chief architect, said.
Heydari talked about the challenges of operating data centers in China at the DCD Internet conference in San Francisco Friday.
The highest concentrations of air pollutants are found in the eastern portion of China. “Most of our data centers are actually located in this area,” Heydari said. “That’s a major challenge for us.” Baidu has three data centers in Beijing, whose smoggy images have become emblematic of China’s pollution problem, and one in Shanxi, a province just west of the capital.
The pollutants include sulphur dioxide, nitrogen oxide, and PM 2.5, which is shorthand for fine particulate matter. PM 2.5, generated by vehicle exhaust, coal power plants, or fires, is what creates the visible haze in polluted cities and causes many of the human health problems associated with air pollution. Concentration of PM 2.5 is a common air quality metric.
The US Environmental Protection Agency uses an Air Quality Index system, where quality ranges from 0 (cleanest) to 500. The AQI for PM 2.5 in Beijing on early Saturday morning local time was over 150, which the EPA considers “unhealthy.” Around the same time, the AQI for PM 2.5 in Ashburn, Virginia, which has one of the biggest concentrations of data centers in the US, was about 30, considered “good.”
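For readers who want to reproduce these numbers, AQI values are computed from pollutant concentrations with the EPA's piecewise-linear formula. The short Python sketch below converts a PM 2.5 concentration into an AQI value; the breakpoint table reflects the EPA's 2012 PM 2.5 standard in force when these readings were taken (the EPA has since revised the bands), so treat the exact values as illustrative.

```python
# EPA AQI from a PM 2.5 concentration (µg/m³), piecewise-linear interpolation.
# Breakpoints below are from the EPA's 2012 PM 2.5 standard; verify current
# values before relying on them.
PM25_BREAKPOINTS = [
    # (C_low, C_high, I_low, I_high)
    (0.0, 12.0, 0, 50),        # Good
    (12.1, 35.4, 51, 100),     # Moderate
    (35.5, 55.4, 101, 150),    # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),   # Unhealthy
    (150.5, 250.4, 201, 300),  # Very unhealthy
    (250.5, 500.4, 301, 500),  # Hazardous
]

def pm25_to_aqi(concentration: float) -> int:
    """Convert a PM 2.5 concentration to an AQI value."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= concentration <= c_hi:
            # Linear interpolation within the matching band.
            return round((i_hi - i_lo) / (c_hi - c_lo) * (concentration - c_lo) + i_lo)
    raise ValueError("concentration outside AQI table")

print(pm25_to_aqi(55.0))  # ~149: "unhealthy", like the Beijing reading above
print(pm25_to_aqi(7.0))   # ~29: "good", like the Ashburn reading
```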
The Chinese government recently claimed major improvements in air quality in Beijing, reportedly due to environmental policies of the last year, but problems persist, causing issues for data center operators.
Pollution causes salts to accumulate in data center air conditioning systems, and exposure to gas pollutants significantly increases electronics corrosion rates, Heydari said.
Pollution Drives Cooling Innovation
One of the technological solutions his team has been looking into is air scrubbing. They have been testing a variety of scrubbing methods, such as water or chemical spraying or filtration. The idea is to integrate a scrubbing mechanism into the free cooling system. The liquid spraying approach is effective but requires a lot of dehumidification.
They have also created cooling designs that don’t rely on outside air but increase efficiency otherwise. One such design that is showing a lot of promise is a “bottom cooling unit.” Essentially, instead of a centralized chiller plant, the heat-exchanger coils are placed directly underneath the IT racks. The racks themselves have enclosures around them, and cold air is pushed into the enclosures, ensuring that all cold air that’s created makes it to the hardware.
“It’s basically a self-contained data center,” Heydari said about the enclosure. “It has its own cold aisle; it has its own hot aisle.” The bottom cooling units don’t require a raised floor; however, they are most efficient when installed in a raised-floor environment.
The system automatically adjusts cooling capacity based on the hardware density installed in the rack. Because cold-air supply is localized, it is more efficient and eliminates the risk of widespread cooling failure. It is a liquid-cooling solution, but because the actual heat exchanger is under the rack, there’s no danger of damage to IT equipment by a water leak.
Innovating for Web Scale
Like its web-scale peers in the US, Baidu spends a lot of resources on R&D to increase the efficiency of its hardware and data centers. The company has been involved in Facebook’s Open Compute Project and has designed its own servers and data center racks that use some OCP design elements.
Heydari, who in the past has worked as a senior hardware engineer at Facebook and Twitter, says Baidu’s Scorpio server designs are more energy efficient and yield higher power densities than OCP designs. The designs aren’t Baidu’s alone. Project Scorpio was started by a group of China’s largest data center operators, which also included Tencent, Alibaba, and China Telecom.
Baidu is also an early adopter of servers powered by ARM processors, the low-power alternative to Intel’s x86 architecture, FPGAs (field-programmable gate arrays), and GPU accelerators, according to Heydari.
A Long-Term Issue for Growing Data Center Market
Like other web-scale data center operators, Baidu has had to innovate out of necessity since typical hardware vendors have not traditionally designed equipment for infrastructure of such scale. In China, air quality happened to create the necessity to think differently about data center cooling.
This and other challenges in that market – some of the other major challenges there are high energy costs and relatively low rack power densities in colocation facilities – are important to address, since much of the growth in the data center sector in the coming years is going to occur in China, one of the world’s fastest-growing cloud services markets.
Most people know that cyberattacks are a significant threat, one that damages the finances, operations, and reputations of companies of all sizes and industries every day. What many don't know is that the education sector is one of the most targeted and harmed industries.
The education sector experienced a 44% spike in cyberattacks from 2021 to 2022, a trend that has continued to this day. Cyberattacks aimed at educational institutions represent 6.2% of all cyberattacks, equating to approximately one attack every two hours.
There are several reasons for this: educational institutions generally have a wide attack surface, strained security budgets, and limited in-house cybersecurity expertise. Nonetheless, managers in education must adapt to prevent the significant financial and reputational damage associated with such attacks.
To help, we’ve prepared five ways to improve your school’s cyberattack preparedness.
There are many different types of cyber attacks. Three of the most common include:
- Phishing: fraudulent messages that trick users into revealing credentials or clicking malicious links.
- Ransomware: malware that encrypts data and demands payment for its release.
- Data breaches: unauthorized access to, and theft of, sensitive records.
The common objective of all cyberattacks is to exploit vulnerabilities within a system to gain unauthorized access, steal sensitive information, disrupt operations, or extort money. Attacks are also becoming increasingly deceptive. For example, a growing number of phishing attacks are being generated by artificial intelligence (AI). 85% of security professionals attribute the rise in recent cyberattacks to the use of generative AI tools.
Schools, in particular, commonly encounter cyberattacks of varying sophistication and suffer from a lack of cybersecurity budget and expertise. Fortunately, there is software available that works to protect schools' networks, data, and sensitive student information. Moreover, schools can adopt cybersecurity prevention and response best practices to increase their cyberattack preparedness.
As cyber threats continue to evolve, particularly in the education sector, schools must employ tailored, advanced strategies to enhance their cyber defenses. Here’s a look at five strategies that leverage modern technology to enhance cyberattack preparedness:
Schools should integrate systems that use API connections with existing platforms.
This integration allows for real-time scanning of the school’s digital domain, providing a streamlined approach to monitoring without the need for traditional hardware installations. These systems use advanced artificial intelligence algorithms to analyze behavior patterns and identify anomalies indicative of potential security threats. Such detection capabilities enable proactive responses, minimizing potential disruptions before they escalate into costly matters.
Implementing advanced email and web filtering technologies helps in intercepting phishing attempts and blocking malware. These systems scrutinize emails and web traffic for malicious elements, such as suspicious links and attachments. Moreover, next-generation email and web filtering software generally use real-time threat intelligence to improve detection accuracy.
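To make the screening idea concrete, here is a minimal Python sketch of the kinds of checks a filter performs, using only the standard library. The extension list and phrase list are illustrative assumptions; production gateways rely on live threat intelligence and machine-learning models rather than static lists.

```python
import email
from email import policy

RISKY_EXTENSIONS = {".exe", ".js", ".vbs", ".scr", ".bat", ".docm"}
URGENCY_PHRASES = ("verify your account", "password expires", "act now")

def screen_message(raw: str) -> list[str]:
    """Return a list of findings for one raw RFC 822 message."""
    msg = email.message_from_string(raw, policy=policy.default)
    findings = []
    # Flag attachment types commonly used to deliver malware.
    for part in msg.walk():
        name = (part.get_filename() or "").lower()
        if any(name.endswith(ext) for ext in RISKY_EXTENSIONS):
            findings.append(f"risky attachment: {name}")
    # Flag urgency language typical of phishing lures.
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content().lower() if body else ""
    findings += [f"phishing phrase: {p!r}" for p in URGENCY_PHRASES if p in text]
    return findings
```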
Data loss prevention (DLP) software operates across the network to automatically detect and halt potential data breaches or exposures, ensuring that sensitive information remains within the school's control. Moreover, DLP solutions provide detailed audit trails for compliance verification and operational transparency.
Cloud data loss prevention software supports schools that use cloud-based platforms, such as Google Workspace and Microsoft 365. This software helps prevent data breaches and unauthorized data sharing, and assists with compliance with privacy laws. With cloud DLP, schools can automate the protection of sensitive data stored in the cloud, monitor for risky behavior, and enforce security policies across their digital environments. Notably, cloud DLP solutions can also scan images and documents for sensitive information using optical character recognition, adding a layer of security against data leaks.
Both staff and students require training.
This is because threat actors commonly leverage human errors, such as clicking on harmful links or mishandling sensitive data. According to KnowBe4, up to 90% of malicious breaches originate from social engineering or phishing attacks. This indicates that attackers often bypass technical vulnerabilities, instead manipulating users into voluntarily relinquishing their legitimate access credentials.
Training should cover topics such as identifying phishing scams, managing secure passwords, and the importance of software updates. Moreover, schools should consider educating on the safe handling of personal and institutional data, recognizing security alerts, and responding to suspected security breaches.
Effective training programs often include interactive and practical components, such as:
- Simulated phishing campaigns that test whether users can spot suspicious messages.
- Short quizzes and refresher modules that reinforce key concepts.
- Incident-response drills that walk staff through reporting a suspected breach.
Malicious actors capitalize on outdated systems.
Schools should consider adopting software that automatically manages and applies updates and patches to both operating systems and applications. This ensures that all digital resources are protected against the latest vulnerabilities, reducing the window of opportunity for cyber attackers to exploit.
Schools generally have a large attack surface: many devices, from administrative computers to student tablets, are connected to the network. This breadth of technology provides multiple entry points for cyber threats.
Mitigating the financial, reputational, and operational damages of cyberattacks doesn’t need to drain your school’s budget. ManagedMethods’ cybersecurity software offers a comprehensive, budget-friendly solution that leverages cloud technology to protect sensitive data.
Cloud Monitor by ManagedMethods facilitates comprehensive cybersecurity management by providing tools that are specifically engineered for the educational sector. This platform allows schools to monitor, detect, and respond to cyber threats in real time, using advanced cloud-based technology. With Cloud Monitor, schools can gain visibility into all aspects of their cloud environment without the complexity and high costs often associated with cybersecurity setups.
The system’s API-driven approach enables seamless integration with Google Workspace and Microsoft 365. This means that your school can gain continuous oversight without any impact on network performance or user experience. This integration streamlines the monitoring process while enhancing the effectiveness of threat detection and response strategies. Cloud Monitor employs AI to analyze patterns and flag unusual activities, offering schools proactive security measures that can identify potential threats before they cause financial or reputational harm.
Additionally, Cloud Monitor’s user-friendly interface ensures that even those without extensive technical knowledge can effectively manage their school’s cybersecurity posture. It simplifies the complex aspects of cyber defense into actionable insights and automated processes.
As Ed Newman, CSO and Director of Technology Services for ESC12, stated, “When I first learned about Cloud Monitor I was skeptical that such an inexpensive solution would be able to secure our Google Workspace data better than Cloudlock. However, after our first week using the solution, I was more than convinced. Cloud Monitor has been one of the best technology decisions I’ve made this year.”
What is a brute force attack?
A brute force attack is a simple attack method whereby hackers use trial and error to ‘guess’ a username and password, PIN, or encryption key in order to gain unauthorized access to a system.
Brute force attacks fall under the category of a “password attack.” Like a thief trying every combination on a padlock, the hacker will try different combinations of usernames and passwords until they manage to break into the account.
Most hackers prefer to use scripts or applications especially designed to crack passwords or try out many different password combinations in quick succession. There are many types of specialized software available that make brute forcing quicker and easier.
While some brute force attacks are carried out by humans, many brute force attacks these days are done by bots that can attack websites systematically using lists of account credentials procured from the dark web, or from a previous security breach.
Why do hackers try and guess the passwords of their unsuspecting victims? The purpose of a brute force attack is usually so that hackers gain access to a system in an effort to install malicious software, steal sensitive data, or cause some form of damage or disruption.
Hackers may also use brute force attacks for penetration testing, to check how secure an organization’s network is, prior to launching a fully-fledged cyberattack.
There are many different types of brute force attacks. Here are the most common ones.
This is the most low-tech form of brute force attack. The hackers attempt to guess the correct password manually, without any assistance from scripts or applications. This method works best when users have obvious usernames and passwords like “12345.”
This type of brute force attack is associated with the term “exhaustive search”, as the hacker must try every possible password until the correct one is found.
As you can imagine, brute forcing in this way can be very time consuming, with many failed attempts.
In a dictionary attack, words in a dictionary are tested to find a password. These words may be combined with numbers and symbols in order to crack more complex passwords. Dictionary attacks can be done using password cracking tools that test the most logical combinations, which is much faster than testing all possible combinations, and requires less computing power.
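To see why a dictionary attack is so much faster than exhaustive search, consider the toy Python sketch below: each candidate word (plus a few common suffixes) is hashed and compared against a stolen, unsalted hash. The three-word list is purely illustrative; real attacks use lists with millions of entries, and defenders run the same kind of check to audit their own password databases.

```python
import hashlib

stolen_hash = hashlib.sha256(b"sunshine1").hexdigest()  # pretend this leaked

def dictionary_attack(target_hash: str, wordlist: list[str]) -> str | None:
    for word in wordlist:
        # Try the word itself plus common number suffixes.
        for candidate in (word, word + "1", word + "123"):
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

print(dictionary_attack(stolen_hash, ["password", "dragon", "sunshine"]))
# -> 'sunshine1'
```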
Hybrid brute force attacks are a combination of simple brute force attacks and a dictionary attack. The dictionary provides the words, and the hacker then uses trial and error, adding or replacing characters or numbers manually to try out all possible passwords.
Reverse brute force attacks, also known as password spraying, involve taking a popular, simple password and trying it out against as many different usernames as possible, not targeting any user in particular. Like its name, a reverse brute force attack is the reverse of a typical brute force attack, where a hacker is targeting a particular username and trying to guess the correct password.
A rainbow table attack is one where the hacker uses a special kind of table called a rainbow table to crack password hashes in a specific database. When passwords are stored in a database, they aren’t stored in plain text; rather, they are run through a one-way hash function. When a user logs in, their password is converted to a hash and compared to the stored hash for authentication.
Hackers need to get access to hashes that have been leaked in order to carry out this type of brute force attack.
Rainbow table attacks can be avoided by double-hashing passwords, or through ‘salting’, in which an additional random value is added to a password to change the hash.
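A minimal sketch of salting in Python is shown below. Production systems should prefer a dedicated password-hashing scheme such as bcrypt, scrypt or Argon2; hashlib.scrypt is used here only because it ships with Python.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # random per-password value, stored with the hash
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```

Because every password gets its own random salt, identical passwords produce different digests, which is exactly what makes a precomputed rainbow table useless.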
Credential recycling, or credential stuffing, is when a hacker reuses stolen credentials that were gathered during previous brute force attacks – they ‘stuff’ the credential into many different login forms.
With decades of successful brute force attacks, hackers have plenty of credentials to play with. Often, such credentials are sold on the dark web.
This type of attack works because users frequently re-use the same usernames and passwords for each password protected account they own.
Here are some tips for how to prevent brute forcing.
Those who use simple or common passwords become easier targets for brute force attacks. For organizations, there should be a strict password security policy and awareness training to ensure users are not using weak passwords.
Here are some guidelines for creating the best passwords:
- Use at least 12 characters; longer passwords are harder to brute force.
- Combine multiple unrelated words with numbers and symbols.
- Avoid single dictionary words, personal information and predictable patterns.
- Use a unique password for every account.
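Following these guidelines by hand is tedious, which is one argument for generating passwords programmatically. Here is a minimal sketch using Python's secrets module, which is designed for security-sensitive randomness (unlike random):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # e.g. 'r7Kd2pLq9Zm4wX1b' (random each run)
```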
Two factor authentication, or multi factor authentication, involves using more than one type of authentication to gain access to a system.
In addition to asking for a password, the user may need to use biometrics, such as a fingerprint, or they may be asked to enter a code sent to their cellphone, to confirm their identity.
Users who use the same password for many different password protected accounts are most likely to fall victim to brute force attacks. The hacker only has to guess one password, and they are then able to access a user’s other accounts using the same user credentials.
Using longer passwords decreases the likelihood that a hacker will be able to guess it, even using automated tools. This is especially true when the password includes multiple words, characters, and numbers, in line with the guidelines mentioned earlier.
A password manager creates and manages complex and unique passwords for users. This makes it much easier to manage multiple accounts without needing to remember different passwords, and also ensures the passwords are as complex as possible, so that they are far harder to guess using brute force attack tools.
Applications and websites should never allow unlimited login attempts, and should inform users of a suspicious login attempt, such as one from a different location or at an unusual time of day. Many websites already take such action, locking users out of their accounts for a specific amount of time after a number of unsuccessful login attempts.
Website admins should use a plugin that will block IP addresses if they go over the allowed number of login attempts.
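A minimal in-memory sketch of such a lockout policy is shown below. The thresholds are illustrative assumptions; real deployments persist this state, track usernames and IP addresses separately, and pair lockouts with alerts and multi-factor authentication.

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 15 * 60

failures: dict[str, list[float]] = defaultdict(list)

def allow_attempt(key: str) -> bool:
    """key can be a username, an IP address, or both combined."""
    now = time.time()
    # Keep only failures inside the lockout window.
    failures[key] = [t for t in failures[key] if now - t < LOCKOUT_SECONDS]
    return len(failures[key]) < MAX_ATTEMPTS

def record_failure(key: str) -> None:
    failures[key].append(time.time())
```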
Websites and applications can use CAPTCHAs or similar tools that prevent brute force attacks performed by bots.
An intrusion detection system can be set up to detect brute force attempts. This can be combined with multi-factor authentication to ensure these brute force attacks fail.
In general, allowing remote access can present a significant security risk – which is a challenge in a time where remote work is becoming more popular. For example, hackers can use Windows’ Remote Desktop Protocol (RDP) to perform a brute force attack.
You can reduce the risk of a brute force attack by setting up a VPN gateway for encrypted remote connections. With a VPN, all traffic is kept away from the local network. Because the VPN encrypts data in transit and hides your IP address, hackers can't steal personal information which they could later use for a brute force attack.
It’s tempting to think that a VPN increases network security to a point where threats like brute force password attacks are no longer relevant. Can a password be brute forced when using a VPN? Unfortunately, yes.
If you don’t use a strong password for your VPN, or you don’t follow the other general guidelines for avoiding brute force attacks, hackers can easily gain access to a VPN in the same way they can gain access to any other application – whether it’s through trying different password combinations, or using more sophisticated tools to figure out the password or encryption key.
Single Sign-On (SSO) is a user authentication service that allows users to access multiple applications with one set of login credentials. SSO can help organizations manage multiple credentials and prevent users from having to remember multiple passwords.
SSO works by confirming that a user’s login credentials match their identity in a database. The user visits the website or app they want to use, the site sends them to a central SSO login tool, and then the user enters their credentials. If the authentication process is successful, the user is redirected to the service they want to access and is automatically signed in.
The SSO process involves five steps:
1. The user visits the application or website (the service provider) they want to use.
2. The service provider redirects the user to the central SSO login tool (the identity provider).
3. The user enters their login credentials.
4. The identity provider verifies the credentials against its user database.
5. On success, the user is redirected back to the service and automatically signed in.
This process can greatly improve the user experience, as well as enhance security by reducing the number of times that a user must enter their password. Many cloud-based applications, such as Google Workspace, Microsoft Office 365, and Salesforce, offer SSO. SSO is related to SAML, but they are not the same: SAML is the standard through which service providers (SPs) and identity providers (IdPs) communicate with each other to verify credentials.
Here are a few real-world examples of the use of SSO:
- Signing in to a Google account once to use Gmail, YouTube, Drive and Docs.
- Logging in to Microsoft 365 and moving between Outlook, Teams and SharePoint without re-authenticating.
- Using "Log in with Google" or "Log in with Facebook" buttons on third-party websites.
Single Sign-On (SSO) has a wide range of use cases across a variety of industries and applications. Here are a few of the most common SSO use cases:
- Workforce access: employees sign in once to reach internal tools such as email, HR and CRM systems.
- Customer-facing SaaS: users access a suite of related products with one account.
- Education: students and staff reach learning platforms, email and library systems with a single school login.
- Healthcare: clinicians move between records and scheduling systems without repeated logins.
In each of these use cases, SSO helps simplify the user experience by reducing the number of times a user needs to log in, and it helps to improve security by reducing the number of places where a user’s login credentials need to be stored and managed. These benefits make SSO an attractive option for a wide range of organizations and applications.
Here is an example of a real-life SSO process using the SAML (Security Assertion Markup Language) standard. This example is slightly more detailed than the basic process we showed above, and has 8 steps:
1. The user requests a protected resource from the service provider (SP).
2. The SP generates a SAML authentication request (AuthnRequest).
3. The SP redirects the user's browser to the identity provider (IdP), carrying the request.
4. The IdP prompts the user to log in, unless an active session already exists.
5. The user submits their credentials and the IdP verifies them.
6. The IdP generates a signed SAML response containing an assertion about the user's identity.
7. The browser forwards the SAML response to the SP's assertion consumer service (ACS).
8. The SP validates the response's signature and contents, establishes a session, and grants access to the resource.
Note that this example represents a basic SSO scenario, and there may be variations depending on the specific implementation and requirements of the SSO system. However, this process illustrates the basic steps involved in an SSO process using SAML.
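To ground the flow, here is a deliberately simplified Python/Flask sketch of the service-provider side. build_authn_request_url and validate_saml_assertion are hypothetical placeholders: a real SP would delegate request construction and signature validation to a SAML library (for example, python3-saml) rather than hand-rolling them.

```python
from flask import Flask, redirect, request, session

app = Flask(__name__)
app.secret_key = "replace-me"

IDP_SSO_URL = "https://idp.example.com/sso"  # assumed IdP endpoint

def build_authn_request_url(idp_url: str) -> str:
    # Placeholder: a real SP builds a signed SAML AuthnRequest here.
    raise NotImplementedError("delegate to a SAML library")

def validate_saml_assertion(saml_response: str) -> str:
    # Placeholder: must verify the XML signature, audience, and expiry,
    # then return the authenticated user's identifier.
    raise NotImplementedError("delegate to a SAML library")

@app.route("/protected")
def protected():
    if "user" not in session:
        # Steps 1-3: no session yet, send the browser to the IdP.
        return redirect(build_authn_request_url(IDP_SSO_URL))
    return f"Hello, {session['user']}"

@app.route("/saml/acs", methods=["POST"])
def assertion_consumer_service():
    # Steps 6-8: the IdP posts a signed SAML response back to the SP.
    user = validate_saml_assertion(request.form["SAMLResponse"])
    session["user"] = user
    return redirect("/protected")
```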
The following practices can help you improve your SSO implementation:
Forcing new sign-ins: To enhance security and prevent unauthorized access, the organization should enforce periodic sign-ins. This means that after a certain period of time, the user will be required to re-enter their login credentials, even if their session has not timed out. The system should also require a new sign-in session if there are two simultaneous active sessions for the same user. This helps to ensure that the user’s identity is verified on a regular basis, and helps prevent unauthorized access.
Once you integrate Frontegg’s self-served user management solution, your customers can configure their SSO completely on their own with just a few lines of code. The single sign-on can be integrated with IDPs, powered by commonly-used protocols like OIDC and SAML. Yes, you can implement social login SSOs as well to add another layer of security in a user-friendly way.
The front end has been taken care of as well to provide an end-to-end solution for your user management endeavors. You can leverage all of Frontegg’s SSO components and personalize your SaaS offering with a customizable login box, in line with today’s top standards. This embeddable box reduces in-app friction, saves development time, and allows users to authenticate smoothly and gain quick access to the app. Implementing SSO has never been easier.
Waste reduction, lower energy consumption, and a more sustainable landscape are all features that make the fiber technology infrastructure an eco-friendly choice.
The potential for a more sustainable future is created with fiber optic cables that provide higher bandwidth for longer-distance data transfers. Better connectivity directly increases the amount of work individuals can get done in a given period and reduces the need to travel. More reliable, faster broadband connections affect countless areas of society: the internet has revolutionized the way people travel, work, socialize and are entertained. Even with all these positives, however, coaxial cables still carry environmental drawbacks, so moving toward a greener future with fiber optic cables will help address some of these deficiencies.
By some estimates, information technology (IT) activity accounts for nearly 2% of worldwide carbon dioxide emissions, which may not sound like a staggering number but is comparable to the entire aviation industry. One of the main goals for combatting this pollution is to make industrial advances toward a greener technological future.
3 Reasons Why Fiber Optic is Greener Than Coaxial
- Save Energy
Did you know that fiber optic cables are more energy-conserving? Otelco claims that when these cables transmit light over a distance of 100 meters, they consume only 1 watt, compared to the amount consumed by coaxial cables. When less power is needed, less heat is generated, which eliminates the need for a cooling system. This translates to savings in equipment, space, and energy.
- Material Reduction
The Beacon states that fewer materials are needed to produce fiber optic cables than coaxial cables because they need less jacketing and insulation. In addition, coaxial cables rely heavily on copper. Copper mining poses a serious threat to the miners who gather it, raising their risk of cardiovascular disease and even lung cancer. Not only does the process affect the workers involved, but copper mining can also contaminate an ecosystem's ability to sustain life. Fiber technology avoids these dangers: optic fibers are made from silicon dioxide, which is largely composed of oxygen and poses far less of a threat to those involved. These cables require no mining at all, only a much less damaging extraction process.
- Data Centers Become More Environmentally Friendly
More and more individuals are pushing companies to reduce their impact on health and the environment. Choosing fiber technology can minimize companies' use of heavy metals, such as lead and mercury. If companies reduce their reliance on heavy metals, they also reduce their levels of pollution.
What Does This Mean?
The American Consumer Institute Center for Citizen Research states that a recent study found the environmental benefits of broadband subscriptions delivered over fiber can notably reduce greenhouse gas emissions. There is also substantial data suggesting that moving toward fiber optic cables can reduce energy use and create a wealth of environmental and economic benefits.
GeoTel is proud to be a data provider for Telecom GIS Data and a part of this movement towards a greener energy future. GeoTel is the leading provider of telecommunications infrastructure data, including fiber routes and data center locations. For over seventeen years, GeoTel’s products have been providing companies and government entities with the leverage and insight necessary to make intelligent, location-based business decisions. If you are interested in the services offered by GeoTel, please contact us today!
Learn more about all the services GeoTel has to offer and how they can give your company a competitive advantage!
Author: Valerie Stephen
Ransomware is a type of malicious software that infiltrates your device and renders the files unusable until a ransom is paid. Because it encrypts your data and blocks access to it, it is nearly impossible for organisations to decrypt and recover their files on their own. Even when a ransom is paid, there is no guarantee that the encrypted files can be recovered in full. In fact, by paying the ransom, you're more likely to be targeted for future attacks.
Aside from data loss and system damage, organisations attacked by ransomware are also at risk of reputational damage and a disruption to normal operations which can all contribute to a decrease in revenue.
Ransomware statistics show that attacks are on the rise worldwide and aren't showing any signs of slowing down. Want to find out how critical ransomware is in 2022? Let the numbers speak for themselves:
Ransomware payments have grown year on year especially with the rise of attacks on businesses of all sizes. Attackers are constantly coming up with new ways that are more disruptive and damaging with debilitating impacts on business operations.
Despite promises by ransomware attackers that data will be returned once payment has been made, this is often not the case. Attacker provided decrypters often fail and there’s no guarantee the stolen data hasn’t already been deleted or sold on the black market.
The shift to remote work in 2020, accelerated by the pandemic, gave attackers a golden opportunity for more aggressive and powerful attacks. By exploiting the fear and uncertainty of organisations navigating the new norm, attackers found users more likely to click on questionable links that can install ransomware on their devices.
The low risk and high gains model of ransomware means attackers can send out phishing emails to a large number of organisations without many consequences. As long as a number of organisations continue paying the ransoms, attackers will be continually fuelled to develop more sophisticated ransomware to extort even greater funds.
A successful ransomware attack on a business costs the business both time, money, and energy to get back on their feet running. Lost productivity, missed revenue opportunities, and damaged data are just some of the short term ramifications of a ransomware attack.
Government bodies and cybersecurity experts all advise against paying a ransom, as this encourages the activity to continue and puts organisations at risk of future attacks. A prevention-first strategy is the key to minimising the impact of a ransomware attack.
The huge volume of phishing emails that are sent out on a daily basis to target vulnerable businesses means successful attacks are growing in number.
The rise in remote work has prompted attackers to take advantage of the uncertainty across the cyber landscape and exploit the security vulnerabilities that pertain to the home office.
Due to the anonymity of Bitcoin, cybercriminals can easily receive payment whilst keeping their identity hidden. Bitcoin’s accessibility and ease of use also increases the chance of victims paying the ransom.
Ransomware attacks often stop companies that lack security measures dead in their tracks. A halt in operations results in lost revenue and work that many organisations cannot afford. Employees are often laid off following a ransomware attack, or in extreme cases the entire company shuts down.
One of the US’s biggest insurance companies, CNA Financial, experienced a ransomware attack that prevented it from accessing its core systems. The attackers asked for a $60 million ransom, which was later negotiated down to $40 million.
Data such as tax file numbers, bank account details, remuneration, and superannuation were all stolen with staff access to myGov being disabled.
One of its suppliers – Kojima Industries – was hit with a ransomware attack that disrupted its computer service system. The temporary halt across all of Toyota’s domestic productions lines impacted the production of approximately 13,000 vehicles.
Acer was hit by two ransomware attacks in 2021. The latter attack was claimed by the Russian REvil ransomware group which demanded a $50 million ransom. The stolen data was sent to reporters and posted on online forums.
Patient details such as medical data and personal information were all held hostage by the attackers with the department unable to access the data for approximately 3 weeks. A Bitcoin ransom was asked by the attackers to which the department reportedly paid.
Despite asking for much smaller ransoms, ranging from $8,000 to $10,000, Dharma has carried out an enormous volume of attacks globally, which has made it one of the most successful RaaS operations ever created.
However, in an odd twist of fate, TeslaCrypt released its master decryption key to its victims along with an apology note on May 2016.
22 flights were delayed as a result of the attack with the cybercriminals stating that they were willing to sell all 1.6TB of stolen data to a potential buyer.
Ransomware attacks are more common in countries with higher internet connected populations. Tensions between the US and Russia are also thought to have influenced the boom with beliefs that Russia is the main mastermind behind the ransomware attacks.
With more than 50% of victims paying the ransom and an increase of 80% in ransom demands, it’s no surprise that both businesses and home users have contributed to the billion-dollar industry.
As remote work was in full swing in 2021, the cost of a ransomware data breach reached an all-time high. Remote workforces took longer to contain breaches, taking an average of 58 days just to identify an attack.
Ransomware variants are on the rise making it the fastest growing form of cybercrime. There have been exponential increases in year on year ransomware attacks so it’s vital that organisations have countermeasures in place to prevent and limit the impact of them.
Australia ranks 7th globally in terms of most ransomware attacks with the commercial and professional services sector receiving 37% of all attacks.
With RaaS on the rise, it’s become even easier for cybercriminals to deploy ransomware to vulnerable organisations. Australian businesses are advised to invest in both employee security training and defence mechanisms to minimise their chances of falling victim to ransomware.
Australian companies received 10% more ransomware attacks than the global average in 2020 with approximately a third of the victims paying the ransom. This has resulted in an average cost of $1.25 million for each data breach.
Between 2020 and 2021, the United States received 732 ransomware attacks which accounted for 76% of the top 5 countries’ attacks.
Factories often use a variety of specialised equipment and software to get items manufactured which provides attackers with a wide surface area to target. Not all of the vast number of computer systems in place are well protected against the evolving tactics used by ransomware attackers.
The shift to remote learning as a result of Covid-19 has caused universities to embrace new technologies and teaching methods that they’re not traditionally accustomed to. The variety of apps, devices, and portals used has significantly increased universities’ vulnerability to a number of cybersecurity risks such as ransomware.
The sensitive information that financial institutions gather on their customers, partners, and the financial market make them the ideal target for ransomware attackers. Double extortion techniques such as threatening to release the data to the public can result in greater ransom payments as the subsequent negative consequences for the financial institution is enormous.
Emerging cyberattacks on government bodies means they must be better prepared for ransomware disasters by providing training to all staff members and allocating specific budgets for these situations. A stagnant growth in ransomware training can lead to increased attacks with more damaging effects.
As these two industries provide valuable services to society, they also have higher propensity to pay the ransom to protect the encrypted data and restore essential services back to normal operations.
Because these industries provide important services to people and society, they are more likely to pay the ransom to attackers when their services cannot be accessed.
Attacks on the healthcare industry can be quite detrimental as the system are inaccessible until the ransom is paid, which means many patients’ lives are often on the line as they cannot receive the help they need.
As the emergency department of the hospital was closed due to the ransomware incident, the woman was redirected to another hospital for treatment. However, as that hospital was a substantial distance away, she didn't receive the right treatment until an hour later. Her death is regarded as the first ransomware-related fatality.
The exponential growth and evolvement of ransomware in the past 5 years has led to a breed of new malware that is more challenging and damaging than its predecessors. Predictions for the future are that security awareness training is more important than ever as human generated risk is the main factor in infection mechanisms.
Cybersecurity Ventures predicts that attackers will refine ransomware to the point where a new attack will take place globally every couple of seconds. The year on year growth of ransomware attacks means organisations should be prepared for a large jump in the coming years.
The US has already declared the payment of ransomware to be illegal in 2021 as it creates additional motive for perpetrators to continue cyberattacks. Other countries are expected to also crackdown on ransomware payments in a bid to curb the exponential growth in attacks.
The most common way ransomware infects computers is via phishing emails which contain malicious attachments or links. By clicking on the link or attachment, the user will unknowingly download and install the ransomware which then begins encrypting files.
Ransomware can be removed from your device through deletion of the malicious files, however your files will remain encrypted. By disconnecting from the internet and wiping the infected device, you should be able to remove all ransomware. The best way to recover all the encrypted files is still through an offline backup.
Whilst there’s no method to completely protect your organisation against ransomware, the best defence is prevention and being prepared. Security hygiene and basic training can significantly reduce the chances of employees unknowingly clicking on or installing compromised software. Multilayer security controls that use firewalls, antivirus programs, and multi-factor authentication can also give your organisation additional opportunities to identify ransomware and stop it before harm is done.
Once your data has been encrypted with ransomware, it’s unlikely you’ll be able to recover it in full. Even if a ransom is paid, the data returned is often corrupted or damaged. The best approach to recovering data is through an offline backup which does not contain the ransomware that is infecting your current system.
Antivirus programs can only identify and detect ransomware that is within their database. Until the program is updated by their developers, users can still be vulnerable to new ransomware. However, antivirus programs cannot do much once a user has clicked and installed the ransomware.
The most common sign of a ransomware infection is the appearance of a popup message requesting payment to unlock files and system. Other indications include unusual file extensions, inability to access your device, movement of location of files, and the need for a password to access your files.
Eftsure provides continuous control monitoring to protect your EFT payments. Our multi-factor verification approach protects your organisation from financial loss due to cybercrime, fraud and error.
The Domain Name System (DNS) is used to resolve human-readable hostnames into machine-readable IP addresses. It also provides other useful info about domain names, such as mail services.
In other words, DNS is like a phonebook of the Internet. If you know a person’s name but don’t know their telephone number, you can simply look it up in a phone book. DNS provides this same service. That’s exactly why DNS is critical for any organization that relies on the Web to connect to customers, partners, suppliers and employees.
Let’s run through the key features of hostname-to-IP mapping:
- Mappings of addresses to names and vice versa (known as records) are stored in a database.
- The DNS database is distributed.
- A DNS database also stores additional record types, such as MX records for mail services (queried in the example below).
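As a quick illustration of these records, the snippet below queries a domain's A and MX records with the dnspython package (assuming it is installed, for example via pip install dnspython):

```python
import dns.resolver

# The A record maps the hostname to IP addresses.
for rdata in dns.resolver.resolve("example.com", "A"):
    print("A ", rdata.address)

# MX records describe the domain's mail services.
for rdata in dns.resolver.resolve("example.com", "MX"):
    print("MX", rdata.preference, rdata.exchange)
```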
DNS makes the difference between a website being visible or not. If you want your website to be available, it has to run all the time, stay secure, scale, and deliver high performance.
There are several key benefits worth noting:
- Performance: overcome network latency experienced by geographically distributed users.
- Security: decrease vulnerability to spoofing and distributed denial of service (DDoS) attacks.
- Reliability: guarantee internet domain queries are consistently and correctly resolved.
- Availability: ensure users can reach your website at any given time.
- Scalability: manage increased traffic as an organization’s business grows.
Certain DNS servers may have outdated records. Others may have inefficient records that point your data packets along the scenic route, taking them all around the internet before they reach their destination. If you can track the most efficient DNS server paths, you can cut away that excess travel time, a crazy little thing called DNS optimization.
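One rough way to explore this is to time the same query against several well-known public resolvers, as in the sketch below (again using dnspython). A single query per resolver is noisy; a meaningful comparison would average many queries, issued from your users' actual locations.

```python
import time
import dns.resolver

RESOLVERS = {"Google": "8.8.8.8", "Cloudflare": "1.1.1.1"}

for name, ip in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [ip]  # query this resolver only
    start = time.perf_counter()
    resolver.resolve("example.com", "A")
    print(f"{name}: {(time.perf_counter() - start) * 1000:.1f} ms")
```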
The biggest challenge for most companies is that DNS is a complex system that requires an array of software, network infrastructure, hardware and specialized knowledge. Building and managing a DNS system is both costly and technically challenging. Since the emergence of technology standards such as DNSSEC, the whole matter is further complicated. Setting up your own DNS infrastructure isn’t the only choice though.
Cloud DNS is scalable, reliable and secure; it offers high performance under high traffic spikes from anywhere in the world. It's a purpose-built, always-on web solution, made to address the web performance needs of any business that relies on the internet. Cloud DNS providers often offer a suite of services that, when used in conjunction with the cloud DNS, provide great web performance without the large upfront capital costs of technology investments. We'll break down the benefits of using a cloud DNS service into a few points:
1. It’s simple to use and understand
By using your provider’s service plans, you don’t need to worry about the details of keeping a global DNS infrastructure running. You just set your records and kick back, free to focus your time on other pending issues. You pay for a reliable service that’s run by experts. It’s also important to note that this is by far the best choice for companies and startups that run on a tight budget and don’t have the expertise to cover all grounds.
2. It’s secure
Security is always the top priority when running a web business. DDoS attacks are on the rise, and a successful attack greatly damages customer trust, which in turn hurts web sales. A cloud DNS service offers a distributed architecture, zone transfers and secure testing environments. Approaches like the following are used to tighten up security:
- Overprovisioning machine resources
- Implementing basic validity-checking
- Rate-limiting requests
- Removing duplicate queries
- Adding entropy to request messages
- Securing your code against buffer overflows
3. It boosts your performance
Small businesses or home users typically use the default DNS server of their internet service provider, but other DNS providers are often faster. There are a number of things a cloud DNS provider does to keep your performance sharp:
- Provisioning servers adequately to handle the load from client traffic, including malicious traffic.
- Preventing DoS and amplification attacks – granted, this is a security measure, but it also has a benefit for performance by eliminating the extra traffic burden placed on DNS servers.
- Load-balancing for shared caching, to improve the aggregated cache hit rate across the serving cluster.
- Providing global coverage for proximity to all users.
4. You get 24/7 support
There’s no need to juggle all of the aspects of DNS hosting and maintenance yourself. By hiring a cloud DNS provider, you have a specialized team to back you up, anytime. Each time you face a problem, you can contact customer support and get it fixed as soon as possible. When your website is facing difficulties, every second counts; having a support team is the most reliable way to keep your business up and running at all times.
In today's digitally advanced world, designing for accessibility has become the norm. Brands that aren't using an accessibility-first design run the risk of losing leads and prospective customers to competitors that do.
Digital accessibility involves designing websites, building applications and creating content that can be used by virtually everyone, including those with cognitive, speech, auditory, motor or visual disabilities. There are more than 1 billion people in the world with a disability, including 56 million (1-in-5 citizens) in the United States.
One of the perceived challenges of designing for accessibility is the myth that it's difficult and expensive. Brands that make accessibility a priority when creating products and designing their websites don't need to spend more or expend additional effort. However, fixing an inaccessible site will require some time and resources.
Designing for accessibility
In 2018, 2,250 website accessibility lawsuits were filed in U.S. state and federal courts. Aside from the consequences of not designing websites with accessibility in mind, multiple studies show that accessible websites have better usability, reach a much larger audience, are more SEO-friendly and have better search results.
Designing for accessibility delivers a better user experience to everyone regardless of situation, context or ability. To ensure better usability and improve UX scores, here are some tips and best practices for brands looking to incorporate accessibility-first design in their products and websites.
Understanding color contrast
Poor color contrast remains one of the most overlooked web accessibility problems. The World Health Organization (WHO) estimates that there are 217 million persons with moderate to severe vision impairment. Individuals with poor vision often find it difficult to read text from backgrounds with low contrast.
Ensuring that there is sufficient contrast between backgrounds and text helps improve your website's accessibility. The W3C recommends at least a 4.5-to-1 contrast ratio between text and its underlying background. This value decreases when using heavier and larger fonts. The minimum recommended contrast ratio when using at least a 14 point bold font or 18 point font is 3-to-1.
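The ratio itself comes from WCAG's relative-luminance definition and can be computed directly. The sketch below implements the WCAG 2.x formula for 8-bit sRGB colors:

```python
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG 2.x relative luminance of an 8-bit sRGB color."""
    def linearize(channel: int) -> float:
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0, the maximum
print(contrast_ratio((119, 119, 119), (255, 255, 255)))  # ~4.5, borderline pass
```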
Using multiple visual cues
Brands shouldn't use color as the only visual cue when trying to communicate important information, prompt a response or indicate the next action to take. Individuals with color blindness or low visual acuity may find it difficult to follow such visual cues. Studies show that color blindness affects roughly 1 in 12 men and 1 in 200 women worldwide.
Using indicators such as patterns or text labels is a better option when designing for accessibility. Underlining text or increasing font weight can be used to make linked text within a paragraph stand out while adding an icon or title to an error message will make it more readable.
Multiple visual cues become even more important when showcasing elements that contain complex information, such as graphs and charts. Rather than using color, brands can communicate disparate information by using other visual cues, including size, labels and shape.
Proper use of effects and animation
While effects and animation can help bring your page and brand to life, they can be distracting and even dangerous to some users. When photosensitive users are exposed to high-intensity or fast-flashing effects and patterns, they can become dizzy or nauseous, and in people with photosensitive epilepsy such content can trigger seizures.
Constant motion or animation on web pages can be distracting to most users, especially those who have difficulty concentrating. The human eye is drawn towards movement and anything that moves constantly can easily become a source of distraction. Brands should design safer animations and use slow-moving effects as much as possible.
Accessibility-first video content
Embedded videos should come with subtitles and/or transcripts so users can consume the content in a way they desire. Site visitors with visual impairment or hypersensitivity to light may prefer to read while those who are unable or unwilling to listen to the video will require subtitles.
Designers should also note that auto-playing videos can be annoying. Such videos can be a source of distraction and will force users to scan the entire page looking for the offending media.
Supporting keyboard navigation
Keyboard accessibility is a key aspect of web accessibility. Users that depend on screen readers and individuals with motor disabilities rely on keyboards to navigate content. Such users typically use the Tab key to navigate through interactive elements on web pages, such as input fields, buttons and links.
Adding a visual indicator to describe the currently selected component can improve the accessibility of your site. It's also a good idea to arrange the order of interactive elements in a way that's intuitive and logical. This means placing the more important elements at the top and as far to the left as possible. The visual flow should go from left to right and top to bottom.
Audit to see if you meet WCAG standards
Understanding your users and being inclusive of their needs is the key to crafting better and more accessible experiences. Accessibility is best addressed at the design stage, and it is at this point that you should recognize the needs of your target users as well as those of people with disabilities and others outside your target demographic.
It's recommended that you conduct an accessibility audit to find out if your website or product meets Web Content Accessibility Guidelines (WCAG) 2.1 standards and works with assistive technologies, including screen readers, speech recognition tools and screen magnifiers. WCAG 2.1, published in June 2018, is backwards compatible with the more popular WCAG 2.0 standard and contains 17 additional success criteria.
The audit results can help pinpoint accessibility issues and identify areas that require improvements. Ensure that your website and products can be used by everyone regardless of geographic location, education, age, economic situation or ability. | <urn:uuid:c5ab0c2b-896c-459b-95cf-0fa08470e925> | CC-MAIN-2024-38 | https://blog.engineroomtech.com/how-brands-can-design-for-accessibility | 2024-09-20T19:56:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00499.warc.gz | en | 0.929589 | 1,138 | 3.015625 | 3 |
Go (also called Golang) is an open source programming language designed by Google in 2007 and made available to the public in 2012. It gained popularity among developers over the years, but it’s not always used for good purposes. As it often happens, it attracts the attention of malware developers as well.
Using Go is a tempting choice for malware developers because it supports cross-compiling to run binaries on various operating systems. Compiling the same code for all major platforms (Windows, Linux, macOS) makes the attacker's life much easier, as they don't have to develop and maintain a different codebase for each target environment.
The Need to Reverse Engineer Go Binaries
Some features of the Go programming language give reverse engineers a hard time when investigating Go binaries. Reverse engineering tools (e.g. disassemblers) can do a great job analyzing binaries that are written in more popular languages (e.g. C, C++, .NET), but Go creates new challenges that make the analysis more cumbersome.
Go binaries are usually statically linked, which means that all of the necessary libraries are included in the compiled binary. This results in large binaries, which make malware distribution more difficult for the attackers. On the other hand, some security products also have issues handling large files. That means large binaries can help malware avoid detection. The other advantage of statically linked binaries for the attackers is that the malware can run on the target systems without dependency issues.
As we have seen continuous growth in malware written in Go and expect more malware families to emerge, we decided to dive deeper into the Go programming language and enhance our toolset to become more effective in investigating Go malware.
In this article, I will discuss two difficulties that reverse engineers face during Go binary analysis and show how we solve them.
Ghidra is an open source reverse engineering tool developed by the National Security Agency, which we frequently use for static malware analysis. It is possible to create custom scripts and plugins for Ghidra to provide specific functionalities that researchers need. We used this feature of Ghidra and created custom scripts to aid our Go binary analysis.
Lost Function Names in Stripped Binaries
The first issue is not specific to Go binaries, but stripped binaries in general. Compiled executable files can contain debug symbols which make debugging and analysis easier. When analysts reverse engineer a program that was compiled with debugging information, they can see not only memory addresses, but also the names of the routines and variables. However, malware authors usually compile files without this information, creating so-called stripped binaries. They do this to reduce the size of the file and make reverse engineering more difficult. When working with stripped binaries, analysts cannot rely on the function names to help them find their way around the code. With statically linked Go binaries, where all the necessary libraries are included, the analysis can slow down significantly.
To illustrate this issue, we used simple “Hello Hacktivity” examples written in C and Go for comparison and compiled them to stripped binaries. Note the size difference between the two executables.
Ghidra’s Functions window lists all functions defined within the binaries. In the non-stripped versions function names are nicely visible and are of great help for reverse engineers.
The function lists for stripped binaries look like the following:
These examples neatly show that even a simple “hello world” Go binary is huge, having more than a thousand functions. And in the stripped version reverse engineers cannot rely on the function names to aid their analysis.
Note: Due to stripping, not only did the function names disappear, but Ghidra also recognized only 1,139 of the 1,790 defined functions.
We were interested in whether there was a way to recover the function names within stripped binaries. First, we ran a simple string search to check if the function names were still available within the binaries. In the C example we looked for the function “main”, while in the Go example it was “main.main”.
The strings utility could not find the function name in the stripped C binary, but “main.main” was still available in the Go version. This discovery gave us some hope that function name recovery could be possible in stripped Go binaries.
Loading the binary into Ghidra and searching for the "main.main" string will show its exact location. As can be seen in the image below, the function name string is located within the .gopclntab section.
The pclntab structure has been available since Go 1.2 and is nicely documented. The structure starts with a magic value, followed by information about the architecture. Then the function symbol table holds information about the functions within the binary: for each function, the address of its entry point is paired with an offset to its function metadata.
The function metadata table, among other important information, stores an offset to the function name.
It is possible to recover the function names by using this information. Our team created a script (go_func.py) for Ghidra to recover function names in stripped Go ELF files by executing the following steps (a simplified, illustrative parser follows the list):
- Locates the pclntab structure
- Extracts the function addresses
- Finds function name offsets
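To make these steps concrete, here is a minimal, illustrative parser for the Go 1.2-era pclntab layout. It is not the go_func.py source itself; it assumes a 64-bit, little-endian binary and the pre-Go-1.16 header format:

```python
import struct

GO12_MAGIC = 0xFFFFFFFB  # pclntab magic used from Go 1.2 through 1.15

def go_functions(pclntab):
    """Yield (address, name) pairs from a Go 1.2-style pclntab blob.
    Simplified: assumes a 64-bit, little-endian binary."""
    magic, = struct.unpack_from("<I", pclntab, 0)
    if magic != GO12_MAGIC:
        raise ValueError("unsupported pclntab layout")
    ptrsize = pclntab[7]                      # 4 or 8; we assume 8 below
    nfunc, = struct.unpack_from("<Q", pclntab, 8)
    table = 8 + ptrsize                       # start of the function symbol table
    for i in range(nfunc):
        # each table entry: function entry address + offset to its metadata
        addr, meta = struct.unpack_from("<QQ", pclntab, table + 16 * i)
        # the metadata record repeats the entry address, then a name offset
        name_off, = struct.unpack_from("<I", pclntab, meta + ptrsize)
        end = pclntab.index(b"\x00", name_off)
        yield addr, pclntab[name_off:end].decode("utf-8", "replace")
```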
Executing our script not only restores the function names, but it also defines previously unrecognized functions.
To see a real-world example let’s look at an eCh0raix ransomware sample:
This example clearly shows how much help the function name recovery script can be during reverse engineering. Analysts can assume that they are dealing with ransomware just by looking at the function names.
Note: There is no specific section for the pclntab structure in Windows Go binaries, and researchers need to explicitly search for the fields of this structure (e.g. magic value, possible field values). For macOS, the _gopclntab section is available, similar to .gopclntab in Linux binaries.
Challenges: Undefined Function Name Strings
If a function name string is not defined by Ghidra, then the function name recovery script will fail to rename that specific function, since it cannot find the function name string at the given location. To overcome this issue our script always checks if a defined data type is located at the function name address and, if not, tries to define a string data type at the given address before renaming a function.
In the example below, the function name string “log.New” is not defined in an eCh0raix ransomware sample, so the corresponding function cannot be renamed without creating a string first.
The following lines in our script solve this issue:
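The script lines themselves appear as an image in the original post, so the fragment below is only a plausible reconstruction using Ghidra's flat API; name_offset and func_addr stand in for values computed from the pclntab:

```python
# Illustrative reconstruction, not the verbatim go_func.py lines.
from ghidra.program.model.symbol import SourceType

name_addr = toAddr(name_offset)          # address of the function name string
if getDataAt(name_addr) is None:
    createAsciiString(name_addr)         # define the missing string first
name = getDataAt(name_addr).getValue()
getFunctionAt(toAddr(func_addr)).setName(name, SourceType.USER_DEFINED)
```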
Unrecognized Strings in Go Binaries
The second issue that our scripts are solving is related to strings within Go binaries. Let’s turn back to the “Hello Hacktivity” examples and take a look at the defined strings within Ghidra.
70 strings are defined in the C binary, with “Hello, Hacktivity!” among them. Meanwhile, the Go binary includes 6,540 strings, but searching for “hacktivity” gives no result. Such a high number of strings already makes it hard for reverse engineers to find the relevant ones, but, in this case, the string that we expected to find was not even recognized by Ghidra.
To understand this problem, you need to know what a string is in Go. Unlike in C-like languages, where strings are sequences of characters terminated with a null character, strings in Go are sequences of bytes with a fixed length. Strings are Go-specific structures, built up by a pointer to the location of the string and an integer, which is the length of the string.
These strings are stored within Go binaries as a large string blob, which consists of the concatenation of the strings without null characters between them. So, while searching for “Hacktivity” using strings and grep gives the expected result in C, it returns a huge string blob containing “hacktivity” in Go.
Since strings are defined differently in Go, and the results referencing them within the assembly code are also different from the usual C-like solutions, Ghidra has a hard time with strings within Go binaries.
The string structure can be allocated in many different ways: it can be created statically or dynamically at runtime, it varies across architectures, and there may even be multiple variants within the same architecture. To solve this issue, our team created two scripts to help with identifying strings.
Dynamically Allocating String Structures
In the first case, string structures are created during runtime. A sequence of assembly instructions is responsible for setting up the structure before a string operation. Due to the different instruction sets, structure varies between architectures. Let’s go through a couple of use cases and show the instruction sequences that our script (find_dynamic_strings.py) looks for.
Dynamically Allocating String Structures for x86
First, let’s start with the “Hello Hacktivity” example.
After running the script, the code looks like this:
The string is defined:
And “hacktivity” can be found in the Defined Strings view in Ghidra:
The script looks for the following instruction sequences in 32-bit and 64-bit x86 binaries:
ARM Architecture and Dynamic String Allocation
For the 32-bit ARM architecture, I use the eCh0raix ransomware sample to illustrate string recovery.
After executing the script, the code looks like this:
The pointer is renamed, and the string is defined:
The script looks for the following instruction sequence in 32-bit ARM binaries:
For the 64-bit ARM architecture, let's use a Kaiji sample to illustrate string recovery. Here, the code uses two instruction sequences that differ in only one instruction.
After executing the script, the code looks like this:
The strings are defined:
The script looks for the following instruction sequences in 64-bit ARM binaries:
As you can see, a script can recover dynamically allocated string structures. This helps reverse engineers read the assembly code or look for interesting strings within the Defined String view in Ghidra.
Challenges for This Approach
The biggest drawback of this approach is that each architecture (and even different solutions within the same architecture) requires a new branch to be added to the script. Also, it is very easy to evade these predefined instruction sets. In the example below, where the length of the string is moved to an earlier register in a Kaiji 64-bit ARM malware sample, the script does not expect this and will therefore miss this string.
Statically Allocated String Structures
In this next case, our script (find_static_strings.py) looks for string structures that are statically allocated. This means that the string pointer is followed by the string length within the data section of the code.
This is how it looks in the x86 eCh0raix ransomware sample.
In the image above, string pointers are followed by string length values, however, Ghidra couldn’t recognize the addresses or the integer data types, except for the first pointer, which is directly referenced in the code.
Undefined strings can be found by following the string addresses.
After executing the script, string addresses will be defined, along with the string length values and the strings themselves.
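Outside of Ghidra, the same search can be expressed compactly: scan a data section for (pointer, length) pairs whose pointer lands inside the string blob. The sketch below assumes a 64-bit, little-endian binary, and the section bounds are hypothetical inputs:

```python
import struct

def find_static_strings(section, base, blob_lo, blob_hi, max_len=256):
    """Scan raw section bytes for statically allocated Go string structures:
    an 8-byte pointer into the string blob followed by an 8-byte length."""
    hits = []
    for off in range(0, len(section) - 15, 8):
        ptr, length = struct.unpack_from("<QQ", section, off)
        if blob_lo <= ptr and ptr + length <= blob_hi and 0 < length <= max_len:
            hits.append((base + off, ptr, length))  # struct addr, string addr, len
    return hits
```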
Challenges: Eliminating False Positives and Missing Strings
We want to eliminate false positives, which is why we:
- Limit the string length
- Search for printable characters
- Search in data sections of the binaries
Obviously, strings can easily slip through as a result of these limitations. If you use the script, feel free to experiment, change the values, and find the best settings for your analysis. The following lines in the code are responsible for the length and character set limitations:
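As an illustration only (the thresholds here are not the verbatim script values), such a filter might look like:

```python
# Illustrative limits; tune them for the sample being analyzed.
MIN_LENGTH, MAX_LENGTH = 4, 64

def is_plausible_string(raw):
    return (MIN_LENGTH <= len(raw) <= MAX_LENGTH
            and all(0x20 <= b < 0x7F for b in raw))  # printable ASCII only
```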
Further Challenges in String Recovery
Ghidra’s auto analysis might falsely identify certain data types. If this happens, our script will fail to create the correct data at that specific location. To overcome this issue the incorrect data type has to be removed first, and then the new one can be created.
For example, let's take a look at the eCh0raix ransomware with statically allocated string structures.
Here the addresses are correctly identified; however, the string length values (supposed to be integer data types) are falsely defined as undefined4 values.
The following lines in our script are responsible for removing the incorrect data types:
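The original listing is again an image; in Ghidra's flat API, the clean-up amounts to something like the following reconstruction (addr stands in for the address being fixed):

```python
# Illustrative reconstruction using Ghidra's flat API (Jython).
from ghidra.program.model.data import PointerDataType

if getDataAt(addr) is not None:
    removeDataAt(addr)                  # clear the falsely defined undefined4
createData(addr, PointerDataType())     # re-create the intended data type
```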
After executing the script, all data types are correctly identified and the strings are defined.
Another issue comes from the fact that strings are concatenated and stored as a large string blob in Go binaries. In some cases, Ghidra defines a whole blob as a single string. These can be identified by their high number of offcut references, references that point into the middle of a defined string rather than to the address where it starts.
The example below is from an ARM Kaiji sample.
To find falsely defined strings, one can use the Defined Strings window in Ghidra and sort the strings by offcut reference count. Large strings with numerous offcut references can be undefined manually before executing the string recovery scripts. This way the scripts can successfully create the correct string data types.
Lastly, we will show an issue in the Ghidra Decompile view. Once a string is successfully defined either manually or by one of our scripts, it will be nicely visible in the listing view of Ghidra, helping reverse engineers read the assembly code. However, the Decompiler view in Ghidra cannot handle fixed-length strings correctly and, regardless of the length of the string, it will display everything until it finds a null character. Luckily, this issue will be solved in the next release of Ghidra (9.2).
This is how the issue looks with the eCh0raix sample.
Future Work with Reverse Engineering Go
This article focused on the solutions for two issues within Go binaries to help reverse engineers use Ghidra and statically analyze malware written in Go. We discussed how to recover function names in stripped Go binaries and proposed several solutions for defining strings within Ghidra. The scripts that we created and the files we used for the examples in this article are publicly available, and the links can be found below.
This is just the tip of the iceberg when it comes to the possibilities for Go reverse engineering. As a next step, we are planning to dive deeper into Go function call conventions and the type system.
In Go binaries, arguments and return values are passed to functions on the stack, not in registers. Ghidra currently has a hard time detecting these correctly. Helping Ghidra support Go's calling convention will help reverse engineers understand the purpose of the analyzed functions.
Another interesting topic is the types within Go binaries. Just as we’ve shown by extracting function names from the investigated files, Go binaries also store information about the types used. Recovering these types can be a great help for reverse engineering. In the example below, we recovered the main.Info structure in an eCh0raix ransomware sample. This structure tells us what information the malware is expecting from the C2 server.
As you can see, there are still many interesting areas to discover within Go binaries from a reverse engineering point of view. Stay tuned for our next write-up.
Github repository with scripts and additional materials
Files used for the research
| File name | SHA-256 |
| hello.c | ab84ee5bcc6507d870fdbb6597bed13f858bbe322dc566522723fd8669a6d073 |
| hello.go | 2f6f6b83179a239c5ed63cccf5082d0336b9a86ed93dcf0e03634c8e1ba8389b |
| hello_c | efe3a095cea591fe9f36b6dd8f67bd8e043c92678f479582f61aabf5428e4fc4 |
| hello_c_strip | 95bca2d8795243af30c3c00922240d85385ee2c6e161d242ec37fa986b423726 |
| hello_go | 4d18f9824fe6c1ce28f93af6d12bdb290633905a34678009505d216bf744ecb3 |
| hello_go_strip | 45a338dfddf59b3fd229ddd5822bc44e0d4a036f570b7eaa8a32958222af2be2 |
| hello_go.exe | 5ab9ab9ca2abf03199516285b4fc81e2884342211bf0b88b7684f87e61538c4d |
| hello_go_strip.exe | ca487812de31a5b74b3e43f399cb58d6bd6d8c422a4009788f22ed4bd4fd936c |
| eCh0raix – x86 | 154dea7cace3d58c0ceccb5a3b8d7e0347674a0e76daffa9fa53578c036d9357 |
| eCh0raix – ARM | 3d7ebe73319a3435293838296fbb86c2e920fd0ccc9169285cc2c4d7fa3f120d |
| Kaiji – x86_64 | f4a64ab3ffc0b4a94fd07a55565f24915b7a1aaec58454df5e47d8f8a2eec22a |
| Kaiji – ARM | 3e68118ad46b9eb64063b259fca5f6682c5c2cb18fd9a4e7d97969226b2e6fb4 |
References and further reading
Solutions by other researchers for various tools
radare2 / Cutter | <urn:uuid:3780b804-13dc-41f2-a393-76c25f9d12d2> | CC-MAIN-2024-38 | https://cujo.com/blog/reverse-engineering-go-binaries-with-ghidra/ | 2024-09-13T16:10:05Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00363.warc.gz | en | 0.874202 | 3,857 | 3.15625 | 3 |
Authentication is a protocol used in HTTP communication to verify that a client is who they say they are before providing the client access to a certain resource on the web. This is another security procedure in the HTTP protocol to protect users and businesses in the online environment. When authentication is done at the edge, there is a significant increase in the speed and security of the process and, consequently, in the companies’ credibility and revenue.
Authentication can be defined as an access control process, in which the identity of a client is verified in order to release their access to a certain resource on the web. Authentication therefore acts as another security measure in HTTP communications.
A customer’s initial request is usually anonymous, so it doesn’t have any information to confirm that the customer is who they say they are. This can be dangerous if the resource is more sensitive, such as bank details for example, as access would be granted to any user. This is where authentication comes in: to filter access, the server denies the anonymous request and indicates that client authentication is required. The authentication information between server and client is sent through headers.
What we described above are the basic principles of authentication, but as there are different types of resources to be accessed, the complexity of authentication also varies, as we will see later in the description of authentication schemes.
How Does HTTP Authentication Work?
RFC7235 describes that the HTTP protocol provides a general framework for access control and authentication through a diverse set of challenge-response authentication schemes that can be used by a server to challenge a client request and by a client to provide authentication information.
What Is Challenge-Response Authentication?
In challenge-response authentication, the server makes a request – the challenge – and the client must send a valid response to be authenticated. A very common example of challenge-response authentication is password authentication, where the challenge made by the server is to request the password and the valid response from the client is the correct password.
The headers can be used by the server to define the authentication method (response headers) or for the client to give the credentials to be authenticated (request headers). Let’s see how they work:
There are two headers that the server can send: WWW-Authenticate and Proxy-Authenticate.
Both define the authentication method that must be used for the client to gain access to a resource. They must specify which authentication scheme to use so that the client desiring authorization knows how to provide the credentials.
The WWW-Authenticate header indicates the authentication scheme and parameters applicable to the target resource.
Syntax: WWW-Authenticate: type realm=<realm>
- type: is the type of authentication requested.
- realm: is the scope of protection, the area to be protected.
Example of WWW-Authenticate header:
WWW-Authenticate: Basic realm="Access to the internal site"
The Proxy-Authenticate header defines the authentication method that should be used to gain access to a resource that is behind a proxy server. This header consists of at least one challenge that indicates the authentication scheme(s) and parameters applicable to the proxy server.
Syntax: Proxy-Authenticate: <type> realm=<realm>
Example of Proxy-Authenticate header:
Proxy-Authenticate: Basic realm="Access to the internal site"
The client can send two headers: Authorization and Proxy-Authorization.
Both contain the credentials to authenticate the client with an origin server or a proxy server. Credentials can be encoded or encrypted, depending on the authentication scheme used.
The Authorization header allows the client to authenticate with an origin server, usually after receiving a 401 Unauthorized response from the server. This header contains credentials with information for client authentication for the area of the resource being requested.
Syntax: Authorization: <type> <credentials>
- credentials: is the information for the authentication. It consists of the username, followed by a colon (:) and the user’s password. This sequence is then encoded with the base64 encoding method.
Note: Base64 is a group of similar binary-to-text encoding schemes that represent binary data in an ASCII string format, converting it to a radix-64 representation.
If the browser uses Azion as the username and edgecomputing as the password, the field value is the base64 encoding of Azion:edgecomputing, that is, QXppb246ZWRnZWNvbXB1dGluZw==. So the Authorization header will be:
Authorization: Basic QXppb246ZWRnZWNvbXB1dGluZw==
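For illustration, this is how a client could build the header in Python; the scheme itself does not depend on any particular library:

```python
import base64

def basic_auth_header(username, password):
    """Encode username:password with base64 for the Basic scheme."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

print(basic_auth_header("Azion", "edgecomputing"))
# Authorization: Basic QXppb246ZWRnZWNvbXB1dGluZw==
```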
The Proxy-Authorization header allows the client to authenticate to a proxy, usually after the server responds with a status of 407 Proxy Authentication Required. This header contains credentials with information for client authentication to the proxy and/or the area of the resource that is being requested.
Syntax: Proxy-Authorization: <type> <credentials>
Example of Proxy-Authorization header:
Proxy-Authorization: Basic QXppb246ZWRnZWNvbXB1dGluZw==
Basic Authentication
As shown earlier, the server asks for a username and password in a request to authenticate a user in a basic authentication scheme. This information is encoded using base64 encoding.
A disadvantage of basic authentication is that it’s not secure, so it must use HTTPS/TLS security protocols for this communication. When the information is confidential or valuable, it’s necessary to use a more secure authentication scheme.
Bearer Authentication
Bearer authentication uses security tokens called bearer tokens. In this scheme, access is given to the token holder.
The token is an encrypted string, usually generated by the server in response to a login request. The client must send this token in the Authorization header when requesting access to protected resources.
Syntax: Authorization: Bearer <token>
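A minimal sketch of sending a bearer token with Python's standard library; the endpoint and token are placeholders:

```python
import urllib.request

# Hypothetical endpoint; the token comes from an earlier login step.
req = urllib.request.Request(
    "https://api.example.com/resource",
    headers={"Authorization": "Bearer <token>"},
)
with urllib.request.urlopen(req) as resp:
    body = resp.read()
```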
Digest Authentication
Digest authentication is also based on the challenge-response mechanism. The challenge the server sends to the client is a nonce value, a random number that can only be used once. The response from the client must contain a hash of the username, password, nonce value, HTTP method, and requested URL. This hash format makes it much harder for user information to be stolen and reused, and Digest was developed as a more secure alternative to basic authentication.
The client must send the hash in the Authorization header when requesting access to protected resources.
Syntax: Authorization: Digest username=<"username">
Digest authentication example:
The client wants to have access to a secured document via a GET request. The URI of the document is https://www.azion.com/blog/index.html. Client and server know that the username for this document is Azion and the password is edgecomputing.
The first time the client requests the document, the client does not send an Authorization header, and the server responds with:
HTTP/1.1 401 Unauthorized
The client will respond with a new request, including the following headers:
Authorization: Digest username="Azion"
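For reference, the response hash for the original (pre-qop) Digest scheme can be computed as follows; this sketch omits the qop and cnonce extensions that RFC 2617 later added:

```python
import hashlib

def md5(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, nonce, method, uri):
    """RFC 2069-style Digest response: MD5(HA1:nonce:HA2)."""
    ha1 = md5(f"{user}:{realm}:{password}")  # identity hash
    ha2 = md5(f"{method}:{uri}")             # request hash
    return md5(f"{ha1}:{nonce}:{ha2}")
```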
HTTP Origin-Bound Auth (HOBA)
HTTP Origin-Bound (HOBA) authentication is a scheme that is not password-based, but signature-based. In addition, it offers additional features such as credential management and logout system. This makes this scheme more secure as it eliminates the risk of password leakage as there is no server-side password verification database.
Syntax: Authorization: HOBA result="kid"."challenge"."nonce"."sig"
The result is the combination of the kid, challenge, nonce, and sig values. The result value is a dot-separated string that includes the signature and is sent in the HTTP Authorization header using the syntax shown above. The sig value is the base64url-encoded version of the binary output of the signing process. The kid, challenge, and nonce values are also base64url-encoded.
HOBA Authentication Flow:
- To begin with, the client determines whether it already has a public key to authenticate or must generate one.
- The client then makes a connection to the server, anticipating that the server will request HOBA-based authentication, which is done by signing a blob of information.
- The server sends a challenge that can be confirmed in an HTTP header and the client must respond with a signature, having previously given the server the public key. The server determines the CPK (Client Public Key) using the KID (Key Identifier) to decide whether to recognize the CPK. If the CPK is recognized, the authentication process is complete.
Mutual Authentication
Mutual authentication does both client and server authentication. Since authentication occurs in both directions, this scheme is also known as bidirectional authentication. One of the characteristics of mutual authentication is that client and server must provide digital certificates through the TLS (Transport Layer Security) protocol.
The client sends a request to the server with:
Authorization: Mutual user="name"
The server responds with:
Authorization: Mutual sid=123
Here, user is the user identification string, kc1 is the client-to-server verification key, sid is the session identification key, and vkc is the server-to-client verification key.
Mutual authentication flow:
- The client requests a resource without any authentication attempt.
- If the requested resource is protected by the Mutual authentication protocol, the server will respond with a message requesting authentication (401-INIT).
- The client processes the message body and waits for the user to enter the username and password. If username and password are available, the client will send a message with authenticated key exchange (req-KEX-C1) to initiate authentication.
- The server looks for user authentication information within its user database. It then creates a new session identifier (sid) that will be used to identify sets of messages that follow it, and responds with a message containing a server-authenticated key exchange value (401-KEX-S1).
- Client and server calculate a shared "session secret" using the values exchanged in the key exchange messages. The session secret values will match only when both sides use secret credentials generated from the same password. This session secret will be used for access authentication of each individual request/response pair from that moment on.
- The client will send a request with an authentication check value (req-VFY-C) calculated from the session secret generated by the client. The server will check the validity of the check value using its own version of the session secret.
- If the client’s authentication check value is correct, the client has the credential based on the expected password, that is, authentication was successful. The server will respond with a success message (200-VFY-S).
If the client's check value is incorrect (for example, because the password provided by the user was incorrect), the server will respond with a 401-INIT message (the same message as in step 2).
Negotiate Authentication
Negotiate authentication is a Microsoft Windows authentication mechanism that uses Kerberos as its underlying authentication provider. Kerberos works on a ticket-granting system to authenticate users to resources and requires the use of SPNEGO GSSAPI tokens.
Syntax: Authorization: Negotiate <gssapi-data>
Negotiate authentication flow:
The client requests access to a protected document without any authentication attempt.
The server responds with:
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Negotiate
The client will obtain the user’s credentials using the SPNEGO GSSAPI mechanism to identify and generate a GSSAPI message that will be sent to the server in a new request with the authorization header:
GET /dir/index.html HTTP/1.1
Authorization: Negotiate a87421000492aa874209af8bc028
The server will decode the GSSAPI data and pass it to the SPNEGO GSSAPI engine in the gss_accept_security_context function. If the context is not complete, the server will respond with a 401 status and a WWW-Authenticate header containing the GSSAPI data:
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Negotiate 749efa7b23409c20b92356
The client will decode the GSSAPI data, pass it to gss_init_security_context, and return the new GSSAPI data to the server:
GET /dir/index.html HTTP/1.1
Authorization: Negotiate 89a8742aa8729a8b028
This cycle can continue until the security context is complete. When the return value from the gss_accept_security_context function indicates that the security context is complete, it can provide authentication data to be returned to the client. If the server has more GSSAPI data to send to the client to complete the context, it will be sent in a WWW-Authenticate header with the final response containing the HTTP body:
HTTP/1.1 200 Success
WWW-Authenticate: Negotiate ade0234568a4209af8bc0280289eca
The client will decode the GSSAPI data and provide it to gss_init_security_context using the context for this server. If the status is successful in the final gss_init_security_context, the response can be used by the application.
OAuth Authentication
OAuth (short for Open Authorization) authentication is an open-standard authorization protocol that lets unrelated servers and services grant third-party access to resources using access tokens rather than shared credentials. This process is also known as "delegated secure access".
A very simple example of this authentication scheme is when a user logs into a website (the third party) and the website offers the option to log in using accounts from other websites/services, such as Google or Facebook, which means you don't need to enter another password. When you click on the button linked to the other site, the other site authenticates you, and the site you were originally connecting to logs you in using the permission obtained from the second site.
In this scheme, the following agents participate:
- Resource owner: entity capable of granting access to a protected resource. When the resource owner is a person, it is called end-user.
- Resource server: The server that hosts the protected resources. Access to it is done through tokens.
- Client: application that requests protected resources, through the owner’s authorization.
- Authorization server: server that issues access tokens to the client, after it has been authenticated and authorized.
Syntax: Authorization: OAuth realm="Example",
OAuth authentication flow:
- The client requests authorization from the resource owner. The authorization request can be made directly to the resource owner (as shown in the image above), or preferably indirectly through the authorization of the server as an intermediary.
- The client receives an authorization grant, which is a credential that represents the resource owner’s authorization, expressed using one of the four grant types defined in the specification, or using an extension grant type. The type of authorization grant depends on the method used by the client to request authorization and the types supported by the authorization server.
- The client requests an access token when authenticating with the authorization server and presenting the authorization grant.
- The authorization server authenticates the client and validates the authorization grant and, if valid, issues an access token.
- The client requests the protected resource from the resource server and authenticates by presenting the access token.
- The resource server validates the access token and, if valid, accepts the request.
VAPID Authentication
VAPID (Voluntary Application Server Identification) authentication is designed to allow sites to authenticate with push servers independently. Websites can send push notifications without needing to know which browser the user is running. This is a significant improvement over implementing a different push protocol for each platform.
The client can include its identity in a signed token sent with the requests it makes. The push service can use the subscription to restrict the use of a push subscription to a single application server.
Syntax: Authorization: vapid t=<JWT>, k=<key>
Here, t is a JWT (JSON Web Token) generated by the client, and k is the base64url-encoded public key used to validate the token's signature (the token itself is signed with the corresponding private key).
What is JSON Web Token?
There are currently many techniques that can be implemented to control access to online resources. One of the most common is the use of some type of access token, generated by applications to ensure that only authenticated users are allowed to use certain resources, such as APIs or media files. And one of those modern solutions is the JSON Web Token (JWT).
JWTs are cryptographically protected from tampering. Furthermore, with them, instead of storing the token state in the database, it’s possible to encode this state directly in the token ID and send it to the client. For example, you can serialize the token fields in a JSON object, encode it with base64url to create a string that can be used as the token ID. When the token is presented back to the API, all you need to do is decode the token and parse the JSON to retrieve the session attributes.
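A stripped-down sketch of that idea, encoding a JSON payload with base64url and signing it with HMAC-SHA256, looks like this (production systems should use a vetted JWT library):

```python
import base64, hashlib, hmac, json

def b64url(data):
    """Base64url-encode bytes without padding."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload, secret):
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (f"{b64url(json.dumps(header).encode())}."
                     f"{b64url(json.dumps(payload).encode())}")
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

token = make_jwt({"sub": "user-42", "exp": 1700000000}, b"shared-secret")
```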
Do you want more security to protect restricted content from unauthorized access?
This is possible with Azion JWT, Azion’s access token solution.
With Azion JWT, you provide more security for your content and APIs. Since it runs directly at the edge of the network, this is a robust and effective solution for access control and closed or personalized content, such as videos, classes, images or APIs.
In this process, Azion uses JSON Web Tokens (JWTs), a type of token that can be validated without having to consult a database, facilitating the scalability of services. However, often the size of a JWT exceeds that of a session ID, which can affect network performance, as it must be included in every request. But as our solution is in edge computing, in addition to solving this problem, we added additional security features, such as providing and revoking permissions through the combination of Key IDs and secrets, in addition to determining expiration time for these keys.
Furthermore, by running inside Azion’s edge nodes, closer to users, Azion JWT validates the authenticity of requests even before they reach your infrastructure, without the need to consult a specific authentication server to validate the token’s credentials passed in the requests, providing more speed to the process and security for your business.
Do you want the best protection for your content and your customers? Talk to one of our experts here. | <urn:uuid:f0ab16b3-5d11-43b6-958a-aa140f4b8a2f> | CC-MAIN-2024-38 | https://www.azion.com/en/blog/what-is-http-authentication/ | 2024-09-18T12:01:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00863.warc.gz | en | 0.857237 | 3,890 | 4.375 | 4 |
Advanced Micro Devices Inc. and IBM are intentionally introducing strain into their chip-making partnership.
The two companies on Tuesday detailed some of their work in developing new chip manufacturing techniques that will boost transistor performance, yet help limit power consumption, in chips with circuits knitted together at the 65-nanometer level and below.
The revelations, which include using strain, a technique that repositions the internal structures of a chip to boost its performance, were made in papers presented at the International Electron Devices Meeting in Washington.
While delivering ever-greater performance is the lifeblood of a chip maker, power is becoming increasingly more important as well.
Consumers and businesses are beginning to demand more power-efficient processors for notebooks as well as servers, while at the same time chip makers themselves are seeking performance gains by packaging two or more processor cores inside each chip.
AMD, IBM and Intel Corp. all now produce dual-core chips, or chips which have two processor cores built in. However, the chips have been limited to high-end desktops and servers thus far.
Moving multicore processors, which promise performance bumps by throwing more than one processor at a computing job, into notebooks and mainstream desktops will require 65-nanometer production, most experts say.
To that end, AMD and IBM said in their joint statement that they have combined several chip-making techniques into a recipe that offers a 40 percent increase in transistor performance, when compared with like chips made without stress technology, but that at the same time maintains control over power consumption and heat dissipation.
The companies said they have combined Silicon Germanium, Dual Stress Liner and Stress Memorization and placed them on top of Silicon-On-Insulator wafers. The net result of this cocktail allows chips to herd electrons efficiently, which boosts performance and cuts down on wasted electricity.
“Our joint work on developing advanced process technologies continues to ensure that we can create and provide the highest-performance, lowest-power processors on the market,” Nick Kepler, vice president of logic technology development at AMD, said in a statement.
AMD has begun pilot production at 65 nanometers at its recently opened fabrication plant.
“Our progress on 65-nanometer technology is going very well,” Kepler said in an earlier interview. “We’ve gotten very good results on the technology at this stage.”
IBM of Armonk, N.Y., has also said it plans to convert its chip plant in East Fishkill, N.Y., from 90-nanometer production to 65-nanometer production over time.
IBM has already begun prototyping its 65-nanometer process while it moves equipment into a special annex at the Fishkill plant that’s designed to produce both 65-nanometer and 45-nanometer chips. A company executive told Ziff Davis Internet earlier this year that IBM is aiming to make the 65-nanometer transition in 2006. However, the company has yet to say exactly when it will get started with its 65-nanometer chip production.
Intel, for its part, has already been shipping 65-nanometer chips, including Presler, a dual-core desktop processor, for revenue. The Santa Clara, Calif., company, which is expected to introduce dual-core, 65-nanometer processors in desktops and notebooks in January, is already producing 65-nanometer chips at two plants and plans to add two more before the end of 2006, it has said.
Chip manufacturers generally move to new and successively smaller manufacturing processes every two years. These shifts, which can cost billions and take years of development, allow them to produce chips with greater numbers of transistors, but still make the chips smaller by packing those features more tightly together.
Check out eWEEK.com for the latest news in desktop and notebook computing. | <urn:uuid:8dcb8b49-fac9-4189-ae40-f00e1b12ebdb> | CC-MAIN-2024-38 | https://www.channelinsider.com/news-and-trends/amd-ibm-tout-chip-performance-gains/ | 2024-09-18T11:56:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00863.warc.gz | en | 0.956103 | 792 | 2.96875 | 3 |
Cryptography, Steganography and Malware
One of the few courses on the market dedicated to Cryptography, Steganography and Malware Analysis.
Cryptography is the process of encrypting data, while steganography hides the existence of the data itself. These two techniques are invaluable in the age of data dominance. This course is one of the few of its kind focussed entirely on instruction in these key concepts. It is detailed and covers everything from the fundamentals to the more advanced techniques.
Benefits of choosing the Cryptography, Steganography & Malware course
Learn how to use Cryptography and algorithms to maintain confidentiality and integrity of data.
Master the art of storing data into images using steganography.
Understand malware, malware distribution techniques and malware analysis.
Highlights of the Cryptography, Steganography & Malware course
Content curated specifically to offer a detailed understanding of Cryptography and Steganography.
One complete module dedicated to malware and malware analysis.
Downloadable study material and self-assessment quiz for bolstering learning.
Key Learning Objectives
After completing the Cryptography, Steganography and Malware course, you will be able to:
- Articulate the basics of cryptography, types of cryptography and types of ciphers.
- Offer a detailed explanation of what Public Key Infrastructure is and what its components are.
- Properly understand what a signed certificate is, what email encryption is and what digital signatures are.
- Explain the concept of Pretty Good Privacy.
- Work with Disk Encryption, File Encryption, Encryption Keys, Encryption Attacks.
- Articulate what Secure Sockets Layer (SSL) is and why it was deprecated and replaced by its successor, Transport Layer Security (TLS).
- Comprehend Cryptanalysis and how to break ciphertext even if the key is unknown.
- Enhance your knowledge of Steganalysis and how to detect steganography.
- Understand and explain how to store data within images (a minimal example follows this list).
- Develop sound knowledge of the fundamentals of Malware and the common techniques attackers use to distribute malware.
- Explain in your own words Trojan Concepts and Virus & Worm concepts.
- Better grasp the fundamentals of Ransomware and Malware Analysis.
- Work on evading anti-virus techniques.
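To make the steganography objective concrete, here is a minimal sketch of the classic least-significant-bit technique; it assumes the Pillow imaging library and a lossless output format:

```python
from PIL import Image

def hide_message(message, src_path, dst_path):
    """Embed an ASCII message in the red channel's least significant bits."""
    bits = "".join(f"{byte:08b}" for byte in message.encode() + b"\x00")
    img = Image.open(src_path).convert("RGB")
    pixels = list(img.getdata())
    assert len(bits) <= len(pixels), "message too long for this image"
    out = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | int(bits[i])   # overwrite the lowest red bit
        out.append((r, g, b))
    img.putdata(out)
    img.save(dst_path)                    # use a lossless format such as PNG
```

To the eye the image is unchanged, yet the data is recoverable by reading the same bits back, which is exactly why steganalysis (covered in the course) is needed to detect it.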
Directly download the full Learning Objectives of the course here
Templates, Worksheets & Mind-maps
When you enrol in this course you will have access to several worksheets & templates that you can use immediately. Take a look at the course curriculum, below, to see what's included in this course.
The image immediately below is a gallery view of some of the templates and collateral available to students.
Continuing Professional Development
CPD points can be claimed for this course at the rate of 1 point per hour of training for this NCSC-certified and CIISec-approved course (8 points for the one-day public course and 15 points for the two-day internal workshop, for when organisations host this course internally).
CIPR Student-Only Incident Response Plan Template
As a student you get access to unique content including our highly acclaimed Cyber Incident Response Plan Template. If you want, you can download the FREE version of the Incident Response Plan template here. | <urn:uuid:27d6e1d7-ba17-42fd-bc46-357e4e11fa63> | CC-MAIN-2024-38 | https://cybersecuritytraining.cm-alliance.com/p/cryptography-steganography-malware-course | 2024-09-20T23:47:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00663.warc.gz | en | 0.886281 | 686 | 2.890625 | 3 |
If you have anything to do with cyber security, you know it employs its own unique and ever-evolving language. Jargon and acronyms are the enemies of clear writing—and are beloved by cyber security experts. So Morphisec has created a comprehensive cyber security glossary that explains commonly used cybersecurity terms, phrases, and technologies. We designed this list to demystify the terms that security professionals use when describing security tools, threats, processes, and techniques. We will periodically update it, and hope you find it useful.
The A-D of Cyber Security Terms
Access control
A security process that regulates who can and can't view sensitive data or resources. It comprises two parts: authentication and authorization (see below).
Account takeover
An attack where a cybercriminal gains access to a legitimate account through stolen credentials. The cybercriminal can then use this access for financial fraud, data exfiltration, internal phishing attacks, and more.
Advanced persistent threat (APT)
A targeted and sustained cyber-attack. Executed only by highly sophisticated attackers and nation-states who aim to remain undetected in a network for as long as possible. APTs can have different goals, including cyber espionage, financial gain, and hacktivism.
Alert fatigue
When security professionals receive so many security alerts they become desensitized to them. Alert fatigue can result in security teams missing or ignoring important alerts.
Allow-list
A list of IP addresses, domains, applications, and email addresses with privileged access. Everything and everyone that is not on an allow-list (sometimes also known as a whitelist) is denied by default.
Antivirus
A type of software program that scans for and removes malware from devices.
Application control
A cybersecurity technology that prevents the installation and execution of unauthorized applications.
Assume breach
A cybersecurity strategy based on the assumption that an organization is already breached or will be breached.
Attack path
A visualization of the chain of vulnerabilities an attacker exploits to infiltrate an organization.
Attack surface
The sum of an organization's IT assets exposed to threat actors, whether knowingly or unknowingly, and that could offer entry into an organization.
Attack vector
A method an attacker can use to gain unauthorized access to an IT infrastructure. Attack vectors are also known as threat vectors. Common attack vectors include compromised credentials, insider threats, and phishing.
Authentication
A way of guaranteeing users are who they claim they are. Authentication usually happens in conjunction with authorization (see below) and is part of access control (see above).
Authorization
A method of determining whether a user should receive access to sensitive data or resources. Authorization is usually paired with authentication (see above) and is part of access control (see above).
Automated Moving Target Defense
Technology that randomizes application memory runtime, obfuscating targets so threat actors can’t find them.
Learn more: Why You Should Care About Moving Target Defense
Backdoor
An unauthorized way to access a computer system that bypasses the system's security measures.
Backup
A copy of a system's data. Having a backup means you can restore your data if it's lost or stolen.
Banking Trojan
A type of Trojan malware that steals sensitive information from a banking institution's clients.
Baselining
Figuring out what normal behavior is in your network. Baselining makes it easier for organizations to recognize abnormal activities.
Behavioral analytics
A security methodology that spots anomalies by using big data, artificial intelligence, machine learning, and analytics to understand the behaviors of users and entities in an IT environment.
Black box testing
Testing a system without any prior knowledge of how the system works internally.
Black hat hackers
Criminals who hack into systems for malicious purposes.
Blue team
Security professionals whose job is to defend an organization from cyber-attacks.
Botnet
A network of internet-connected devices that are infected by malware and controlled remotely by threat actors. Cybercriminals use botnets to perform distributed denial of service attacks (DDoS, see below), send spam, and mine cryptocurrencies. Many, if not most, victims have no idea their IT assets are part of a botnet.
Bring your own device (BYOD)
A policy that allows employees to use personal devices instead of company devices to connect to an organization's network and access business applications and data.
Brute force attack
A trial-and-error hacking method to guess login information and encryption keys. Cybercriminals try all possible character combinations until they can authenticate one.
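A toy sketch in Python shows why short passwords fall quickly to brute force; the check function here stands in for a login attempt:

```python
from itertools import product
import string

def brute_force(check, max_len=4, alphabet=string.ascii_lowercase):
    """Try every combination until `check` accepts one."""
    for n in range(1, max_len + 1):
        for combo in product(alphabet, repeat=n):
            guess = "".join(combo)
            if check(guess):
                return guess
    return None

print(brute_force(lambda pw: pw == "abc"))  # 'abc'
```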
Bug
A software or hardware vulnerability threat actors can exploit to gain unauthorized access to a system.
Bug bounty program
Initiatives set up by organizations that encourage individuals to look for and disclose software vulnerabilities and flaws for a reward.
Business email compromise (BEC)
An email scam where cybercriminals pretend to be senior executives to trick victims into sharing sensitive information or sending money. Also known as CEO fraud.
Clickjacking
An attack where a malicious actor tricks a user into clicking on a malicious link by making it look like something other than what it is.
Cloud computing
The delivery of computing resources (virtual storage, servers, software, etc.) over the internet as an on-demand service.
Cobalt Strike
A penetration testing ("pentest") tool for Windows systems that simulates how adversaries can attack. Cybercriminals also use Cobalt Strike to carry out attacks.
Common Vulnerabilities and Exposures (CVEs)
Security vulnerabilities and exposures that have been publicly disclosed.
Common vulnerability scoring system (CVSS)
An open framework for evaluating the severity and risk of software vulnerabilities.
Credential stuffing
Automatically injecting lists of compromised login details into other online accounts that may use the same credentials to gain unauthorized access.
Credential theft
A type of cybercrime where threat actors steal login credentials to access secure accounts, systems, and networks, and gather sensitive data and/or escalate access privileges.
Critical infrastructure
Systems, networks, assets, facilities, services, and processes that are vital to the well-being of a country. Damage or destruction to them could have a catastrophic impact on a country's economy, security, or public health and safety.
Crypter
Software used by attackers to encrypt, obfuscate, and manipulate malicious code to make it look like a harmless program and evade security controls.
Cryptojacking
Surreptitiously hijacking servers or endpoints to mine cryptocurrency.
Cyber incident
An event that threatens the integrity, confidentiality, and/or availability of information systems.
Cyber kill chain
A model that describes the stages of a targeted cyber-attack. Lockheed Martin developed the model by adapting it from the military concept of the “kill chain.” There are seven phases in Lockheed Martin’s kill chain: reconnaissance, weaponization, delivery, exploitation, installation, command & control, and actions on objectives.
Cyber security
The practice of protecting networks, internet-connected devices, and data from attacks or unauthorized access. Cyber security is also sometimes referred to as information technology security.
Cyber warfare
A cyber-attack or series of attacks carried out by one nation-state against another.
Dark web
Encrypted web content which isn't indexed by search engines or accessible through standard web browsers. Users need specialized software to access the dark web, like the Invisible Internet Project (I2P) or Tor browser. These browsers route user web page requests through third-party servers, hiding their IP address.
Data at rest
Data that is in storage. It is not being accessed or used.
Data breach
A cyber incident where sensitive, confidential, or protected data is accessed by an unauthorized party.
Data exfiltration
The unauthorized transfer of data outside a company's systems by cybercriminals or insiders.
Data in transit
Data that is currently traveling from one system or device to another. Also known as data in motion.
Data in use
Data that is being processed, read, accessed, erased, or updated by a system.
Data leak
The accidental exposure of sensitive data to unauthorized individuals caused by internal errors.
Data loss prevention (DLP)
Technologies and processes that can detect and prevent unauthorized access to critical or sensitive data. Also known as information loss prevention, data leak prevention, and extrusion prevention technologies.
Data mining
The process of analyzing large data sets to find patterns, anomalies, and other valuable information. In cybersecurity, data mining can help an organization identify security threats faster and more accurately.
Decryption
A process that converts encrypted data to its original state.
Defense in depth
A cybersecurity strategy that uses multiple security controls in a layered manner to protect systems from threat actors.
Denial-of-Service attack (DoS)
A cyber-attack where threat actors make machines and other network resources unavailable to their intended users. DoS attacks are usually carried out by flooding targeted hosts/networks with illegitimate traffic requests.
Deny list
A list of IP addresses, URLs, domain names, and other elements that are blocked or denied access. Also known as a blacklist or blocklist.
Dictionary attack
A type of brute force attack where cybercriminals attempt to break into a password-protected system by entering familiar words and phrases into the password field using automated tools.
Digital footprint
A unique trail of personal data every internet user leaves behind when engaging in digital activities.
Distributed denial of service (DDoS)
A type of DoS attack that uses many sources of attack traffic. A DoS attack uses a single source of attack traffic. Attackers often use botnets to carry out DDoS attacks.
DNS hijacking
The act of altering or stealing a domain name system (DNS) registration without the original registrant's permission.
Drive-by download
The automated and involuntary download of malicious code to a user's device. Most drive-by-download attacks happen when threat actors inject malicious elements into legitimate websites. A user often doesn't even need to click or download anything. Their device becomes infected as soon as they visit the site.
Dwell time
The amount of time a threat actor spends in a victim's environment before being detected.
The E-I of Cyber Security Terms
Eavesdropping
Unauthorized interception of data in transit between two devices. Also known as network sniffing or network snooping. Eavesdropping attacks happen when a network connection is weak or insecure.
Email account compromise
An email attack where a cybercriminal gains control over a target's email account.
Encryption
A process that transforms human readable data into an encoded form to stop it from being used/known.
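As a quick illustration, symmetric encryption and decryption with Python's cryptography package (an assumed third-party dependency) takes only a few lines:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # must be kept secret
f = Fernet(key)
ciphertext = f.encrypt(b"sensitive data")   # unreadable without the key
plaintext = f.decrypt(ciphertext)           # restores the original bytes
```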
Endpoint
Physical devices that connect to a network, like desktops, laptops, cell phones, and servers.
Endpoint Detection and Response (EDR)
A category of cybersecurity tools that continuously monitor and record endpoint data to detect, investigate, and mitigate malicious activity. When a threat is found, EDR can contain or remove it automatically, or alert security teams.
Endpoint protection platform (EPP)
A security solution that uses a mixture of antivirus, data protection, and intrusion prevention technologies to protect endpoints. EPPs are often used in conjunction with EDRs, with EPPs supplying the first line of detection.
Endpoint security
The process of securing endpoints from threats. Also known as endpoint protection.
Evasive malware
Malware that hides its identity to bypass scanning-based security defenses like antivirus software and endpoint detection and response platforms.
Exploit
A piece of code designed to capitalize on vulnerabilities in computer systems or applications for malicious purposes.
Exploit kit
Prepackaged tool kits that automate the exploitation of IT system vulnerabilities. Inexperienced hackers often use exploit kits to distribute malware or perform other malicious actions. Most exploit kits include vulnerabilities that target specific applications, a management console that gives insights into how campaigns are doing, and other add-on functions.
Extended Detection and Response (XDR)
Like an EDR but extends protection beyond just endpoints. Automatically collects and correlates data from different security solutions across endpoints, networks, servers, cloud workloads, and applications. In doing so, XDR breaks down silos, improves visibility, and speeds up threat detection.
A security alert that mistakenly identifies benign activity as anomalous or malicious.
Fast identity online (FIDO)
An open industry association whose goal is to promote authentication standards that reduce people’s and organizations’ reliance on passwords. These include USB security tokens, smart cards, facial recognition, and biometrics.
Malicious code that hides in process memory rather than installing itself on a hard drive. Since fileless malware leaves no malicious artifacts on the hard drive, it can evade detection by most security solutions. Fileless attacks are also known as non-malware or in-memory attacks.
Learn more: Why Advanced Threats Are Winning
An information-gathering process cybercriminals often use to identify targets’ operating systems, software, protocols, and hardware devices. They can then use fingerprinting data as part of their exploit strategy.
A network security device that filters all network traffic (incoming and outgoing) to prevent unauthorized access based on predetermined security rules.
An attack where threat actors send so much traffic to a system it can’t process genuine connection requests.
General Data Protection Regulation (GDPR)
A European Union (EU) law that harmonizes data privacy laws across all member countries. The law gives EU citizens greater control over their personal data and requires businesses to follow strict rules for protecting this data. The GDPR came into effect in 2018. It applies to all companies that operate in the EU or do business with individuals in the EU.
Learn more: Is GDPR Making Ransomware Worse?
Gray box testing
A security testing technique where testers have partial knowledge of the system being tested.
A gray hat hacker is someone who might violate the law to find vulnerabilities in a system. However, unlike black hat hackers, gray hat hackers usually don’t have malicious intent.
A person who uses their knowledge of information technology to gain entry in a network or manipulate digital technology in a manner not intended by its original owners or designers.
A decoy system or folder designed to look like a legitimate digital asset. Honey pots mislead cybercriminals away from actual targets. When hackers enter honey pots, security teams can watch their behavior and collect information about their methods.
Identity and access management (IAM)
A collection of processes, policies, and technologies that allow IT security professionals to give trusted entities access to the right resources at the right time for the right reasons.
The structured approach an organization takes to manage a cyber incident. The goal of incident response is to decrease recovery time, reduce damage and the cost of an incident, as well as prevent similar attacks from happening in the future.
Indicators of compromise (IoC)
Forensic clues that indicate to security professionals that a system or network has been compromised.
A type of malicious software designed to steal information from a system, like login credentials.
Learn more: Your Guide to Top Infostealers in 2022
Infrastructure as a service (IaaS)
A cloud computing service organizations can rent to get access to virtualized computing resources.
An attack on a system’s memory.
Learn more: Why Should You Care About In-Memory Attacks?
A security risk that comes from individuals within an organization. These include employees, contractors, former employees, business partners, and any other persons with legitimate access to an organization’s assets.
Internet of Things (IoT)
A network of physical objects with embedded sensors that connect to and exchange data over the internet in real time.
Internet of Things security
Technologies and processes for protecting Internet of Things devices and their networks.
Learn more: How Can We Secure the IoT?
Intrusion detection system (IDS)
A security technology that continuously scans inbound and outbound network traffic for potential threats. When an intrusion detection system detects suspicious activities, it alerts IT security teams.
Intrusion prevention system (IPS)
A security technology that watches network traffic for unusual activities and takes preventative action based on established rules. An intrusion detection system can only issue alerts when it spots potential threats. An intrusion prevention system can also block suspicious activities.
The J-P of Cyber Security Terms
Activity monitoring software that lets hackers record a user’s keystrokes.
A technique used by attackers to move deeper into a victim’s network after gaining initial access.
A security concept in which users and applications receive the minimum levels of access they need to do their jobs.
A computer system, hardware, or related business process that is outdated and not supported by a vendor but is still in use.
An open-source operating system based on the Linux kernel originally created in 1991.
Living off the land (LotL) attacks
A type of attack where threat actors use legitimate functions or software in a target’s IT environment for malicious purposes.
A critical vulnerability in the popular Apache Log4j 2 Java Library that allows threat actors to remotely take control of a device if it runs specific versions of Log4j. The Log4j vulnerability is also known as Log4Shell or CVE-2021-44228.
A type of artificial intelligence (AI) that makes it possible for machines to imitate human behavior. With machine learning, systems can learn from data and past experiences to predict future outcomes. As the number of data samples increases, so does machine learning’s performance. In cybersecurity, machine learning is used to detect and respond to potential attacks faster.
A set of commands that automate repetitive tasks in office productivity applications like Microsoft Office. Cybercriminals can abuse macros for malicious purposes. They commonly embed macro malware/macro viruses into word-processing programs or documents, which they distribute via phishing emails. If macros are enabled, malware will execute as soon as a user opens a malicious document.
Software designed for compromising or damaging information or systems. There are many types of malware, including but not limited to viruses, worms, Trojans, and spyware.
Malware as a service (MaaS)
The illegal lease of malicious software and hardware to customers on a subscription basis. With malware as a service, even individuals without technical skills can launch cyber-attacks.
A cyber-attack technique where threat actors spread malware through online ads.
Man in the middle attack (MITM)
A type of attack where a cybercriminal intercepts and potentially alters communications between two endpoints.
Mean time to detect (MTTD)
The average length of time it takes a security team to discover a security problem within their network environment.
Mean time to respond (MTTR)
The average length of time it takes a security team to contain a security incident after it's identified.
A technique used by cybercriminals to bypass multi-factor authentication. MFA fatigue attacks are preceded by brute force attacks. After a threat actor gains a target’s login credentials, they flood the target’s authentication app with push notifications for sign-in approval. Whether the target is inattentive or worn out by the endless barrage of notifications, they often approve the notification.
Minimizing the risk or impact of a potential cyber threat.
A knowledge base that classifies and describes cyber-attacks. The term stands for MITRE Adversarial Tactics, Techniques, and Common Knowledge. ATT&CK is community-driven but owned by MITRE corporation, a US non-profit.
Learn more: Don’t Take MITRE ATT&CK Results as Gospel
Moving Target Defense
Technology that randomizes application memory runtime, obfuscating targets so threat actors can’t find them.
Learn more: Why You Should Care About Moving Target Defense
Multi-factor authentication (MFA)
An authentication method where users must prove their identity using at least two different credential types before receiving access.
National Institute of Standards and Technology (NIST)
A non-regulatory government agency that promotes and maintains standards and metrics for technology, science, and other industries in the US.
Learn more: How to Nail Your NIST Cybersecurity Audit
A system of two or more connected devices that can send or share information, applications, and other resources.
Network detection and response (NDR)
A security technology that uses behavioral analytics and machine learning to detect malicious activities on a network. NDRs can respond to potentially malicious activities either via native capabilities or by integrating with other security tools.
Next-Generation Antivirus (NGAV)
A new breed of antivirus software that goes beyond signature-based detection. Most next-generation antivirus solutions include advanced technologies like artificial intelligence, machine learning algorithms, and behavioral detection.
A network security device that combines traditional firewall features such as packet filtering with added capabilities like application control and sandboxing.
NIST Cybersecurity Framework
A set of cybersecurity best practices that organizations can use to manage their security risks. The framework is voluntary guidance.
A technique that makes it more difficult to understand code. It can be used to protect intellectual property. However, attackers can also use it to bypass security controls.
Open-source intelligence (OSINT)
The practice of collecting and analyzing freely available data from public sources for intelligence purposes.
Threats that come from outside the organization.
A type of software that monitors and intercepts data pieces or data packets traveling across a network.
Operating system, firmware, application, or driver update that fixes technical issues or known security vulnerabilities.
The process of identifying, getting, installing, and managing patches.
Learn more: How Do You Prioritize What to Patch?
Payment Card Industry Data Security Standard (PCI DSS)
A security standard that sets the minimum data security requirements for all merchants handling, processing, or storing cardholder data.
Known colloquially as pentesting. A simulated cyber-attack against a web application, computer system, or network. The goal of penetration testing is to find any vulnerabilities that could be exploited by threat actors and test defenders’ security posture.
The practice of protecting an organization’s network boundaries from threat actors. A company’s perimeter acts like a wall between its private intranet and the public internet.
Personally identifiable information (PII)
Information unique to a specific individual that can be used to identify them.
A type of attack where threat actors redirect users to a spoofed version of the website they intend to visit.
A type of attack where cybercriminals send fraudulent emails or text messages to convince targets to share sensitive information, download malware, or perform some other action.
A type of malware that continuously morphs its identifiable features to evade detection.
Protected health information (PHI)
Information about a patient’s health condition, whether physical or mental, at any time—present, past, or future.
Learn more: Healthcare Data Needs to Become Safer
The exploitation of configuration errors, design flaws, or bugs to escalate permissions and privileges beyond what is usually accessible to a user or application.
A cybersecurity strategy that focuses on preventing cyber-attacks from happening in the first place rather than responding to them.
A group of security professionals that perform the role of both the red team and the blue team.
The Q-T of Cyber Security Terms
A type of malicious software that encrypts a victim’s systems/files or exfiltrates data (or both). To regain access to their systems/files or prevent them from being leaked or sold, companies must pay a ransom to attackers.
Learn more: How to Resolve the Ransomware Security Gap
Ransomware as a service (RaaS)
A subscription-based business model where ransomware developers sell or lease ransomware tools to other cybercriminals (“affiliates”).
A cybersecurity strategy where attacks are detected and responded to after they happen.
The practice of collecting information about a target. Reconnaissance can be passive or active. Passive reconnaissance is gathering data about targets without actively engaging with them. Active reconnaissance is the opposite and involves active engagement with the target, like sending unusual packets to a server.
A group of security professionals that emulate adversaries to test an organization’s defensive posture.
Remote desktop protocol (RDP)
A network communications protocol that lets users connect to a remote Windows machine.
Remote access trojan (RAT)
A malicious software program that gives threat actors full administrative privileges and remote control over an infected system.
A type of malicious software that gives attackers unauthorized access to and control over a target system. A rootkit is designed to stay hidden in a target system.
A technique used to isolate a program or process from the rest of an organization's system. This can involve running it in an isolated environment, such as an emulator, virtual machine, or container.
A tactic used by criminals and malicious vendors to trick users into downloading unnecessary software (such as a fake antivirus), which may itself contain malware.
Using a bot or software to extract data from a website. Often done by cybercriminals to find exposed credentials or other data that might enable them to gain network access or conduct phishing scams.
Relatively inexperienced or unskilled hackers who use off-the-shelf exploit kits and well-known attack techniques to compromise victims.
Security as a service (SaaS)
A cloud-delivered security offering where vendors provide customers with a range of security solutions on a subscription basis.
Security awareness training
Training delivered to non-experts on spotting potential threat actor techniques such as phishing emails and avoiding compromise.
Security information and event management (SIEM)
A type of security solution that collects, analyzes, and reports on data from various sources to detect security incidents. SIEMs collect security logs and alert security teams when certain rules are triggered. E.g., excessive login attempts in a specific timeframe.
Security operations center (SOC)
The group of individuals and systems within an organization that deals with all cybersecurity issues. The SOC is the central point of analysis and action for all security-related data and tools.
Secure web gateway
An application or device which sits between the internet and users within an organization’s network. Used to filter traffic and block malicious content.
A physical or virtual machine that supplies services such as file storage, and/or powers applications.
Learn more: Servers Aren’t As Secure As You Think
An attack where cybercriminals take control of a user's computer session by obtaining their session ID and pretending to be the legitimate user on a network's services.
The unauthorized deployment and use of IT systems within an organization without the IT department’s approval or knowledge.
A pattern of behavior associated with a particular malware type or threat actor technique.
A malicious method of bypassing two-factor authentication (2FA) by duplicating a victim's SIM card or swapping it for one controlled by the criminal.
Using text messages to phish victims or as part of a social engineering attack.
A form of hacking that relies on manipulating human interaction. It is the act of tricking people into performing actions or divulging confidential information.
A technique used to deceive the recipient of an email or other electronic communication into believing a message was sent by someone else.
Malware used to watch a victim’s device remotely and steal personal information, credentials, or network information.
A type of web attack that exploits security vulnerabilities in SQL databases linked to online forms. One of the most common and successful web application hacking techniques.
Supply chain attack
A type of cyber-attack where threat actors access an organization's system through a trusted external partner. Also known as a third-party or value-chain attack.
Learn more: How Do You Stop Supply Chain Attacks?
Securing a computer system by installing patches, changing default passwords, removing administrator permissions, and other methods of making endpoints and applications as inaccessible to threats as possible.
Tactics, techniques, and procedures (TTPs)
Methods used by threat actors to access a system or network.
The act of purposely modifying data, systems, system components, or system behavior.
Learn more: Is Your Cyber Security Tamper Proof?
The process of identifying and mitigating cyber threats. A proactive approach to cybersecurity that involves security analysts using threat intelligence, analytics, and human expertise to identify potential risks before they materialize.
Knowledge collected by security teams, vendors, and government agencies about cyber threats and threat actors. Threat intelligence can be collected from various sources, such as open-source data, malware analysis, or human intelligence.
How a threat enters an organization. It can be anything from a malicious email attachment to an infected USB stick.
How security teams prioritize how to remediate threats and compromised assets within their network.
Malware disguised as legitimate software or code. It can also be disguised as updates for legitimate software.
Two-factor authentication (2FA)
A security measure that requires users to submit two different ways of proving their identity.
The U-Z of Cyber Security Terms
The act of accessing endpoints, networks, data, or applications without permission.
A software emulation of a computer that runs on a partitioned share of a physical machine's resources. Virtual machines allow users to run programs as if they were being executed on a dedicated physical computer.
A quick way to mitigate security vulnerabilities and stop them from being exploited, buying time to fix the underlying code later.
Learn more: Your Guide to Virtual Patching
Virtual private network (VPN)
A way of encrypting network traffic from a user to a network. Companies often use VPNs to allow employees access to their internal networks from remote locations such as home or while traveling.
A type of malware that can infect a computer and cause it to do things without the user's knowledge. It can also be used to steal information from a computer.
A weakness in a system an attacker can exploit. Vulnerabilities can also be found in the design of systems and networks. Vulnerabilities are usually classified as either low or high risk depending on their likelihood of being exploited, and their potential impact if they are exploited.
The process of identifying, assessing, and remediating software vulnerabilities within an organization's IT systems.
Voice phishing. The practice of manipulating victims over the phone to get them to share sensitive data or perform specific actions.
Watering hole attack
A type of attack that implants malware in a website likely to be visited by a victim. This code is then downloaded onto the victim’s computer when they visit the site.
Web application firewall (WAF)
A type of firewall that filters and monitors HTTP traffic to an organization's web application. It can be implemented as a software or hardware appliance, or as a cloud-based service.
Exploiting vulnerabilities in websites to access databases with sensitive information.
A type of server that hosts websites and other web services. Web servers store the components of web pages and other digital assets online and handle HTTP requests from users.
A type of phishing attack that targets high-level executives. Whaling attacks typically involve complex and hard-to-spot social engineering efforts that use knowledge about an executive’s professional and personal network against them.
White box testing
A type of penetration testing where testers start with knowledge of their target’s internal structure, logic, and application implementation.
A hacker who uses their skills to find and fix vulnerabilities in computer systems. Also called ethical hackers.
A type of malicious software that duplicates itself and spreads across devices in a network.
eXclusive OR (XOR)
A binary operation that takes two inputs and returns one output. Commonly used in cryptography; XORing data with a truly random, single-use key (a one-time pad) resists even brute-force attacks.
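To illustrate the operation itself (a toy sketch, not a production cipher): XORing data with the same keystream twice restores the original data, which is why XOR sits at the heart of many stream ciphers.

```python
# Toy XOR cipher: applying the same keystream twice restores the input.
# A short repeating key like this is NOT secure; it is for illustration only.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_bytes(b"attack at dawn", b"k3y")
plaintext = xor_bytes(ciphertext, b"k3y")
assert plaintext == b"attack at dawn"
```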
A type of cyber-attack that exploits vulnerabilities in software that the software’s developers have not yet discovered. These types of attacks can be extremely dangerous because there are few effective defenses against the unknown.
A security model which assumes any networked device, including the user's own computer, is untrusted. It is a form of the principle of least privilege.
Moving Target Defense Stops Advanced Cyber-Threats
As the cyber threat landscape gets more dangerous, reducing breach risk becomes accordingly more difficult.
Advanced threats like ransomware, in-memory exploits, and fileless malware attacks are specifically built to bypass current security solutions like AV, NGAV, EPP, EDR, XDR, and SIEM. To stop them, organizations need another security layer, with a different kind of defense technology.
Cited by Gartner as an important emerging technology, Moving Target Defense (MTD) works in-memory at runtime to protect against threats that don't attack the disk or operating system.
The premise of MTD is simple. Unlike current security solutions, it doesn't reactively mitigate attacks that target the disk or operating system after they happen. Instead, MTD technology works proactively. It morphs the in-memory runtime environment, making it impossible for threats to find their targets in the first place.
To learn more about how this revolutionary technology works, read the white paper: Zero Trust + Moving Target Defense: The Ultimate Ransomware Strategy.
In today's dynamic business world, where speed, precision, and adaptability are paramount, the role of automation has never been more crucial. Behind the scenes, a robust automation team works tirelessly to ensure processes are efficient, seamless, and ready for the challenges of tomorrow. Enter virtual machines (VMs) – the unsung heroes that can transform the way your automation team operates. In this blog post, we'll review what VMs are, why automation programs use them, and share how you can leverage VMs to realize the full potential of your automation initiatives.
What are virtual machines?
In the context of an automation program, virtual machines (VMs) are like virtual versions of actual computers — software magic that lets you run various operating systems and apps all at once on a single physical machine. VMs are isolated from each other, each operating as an independent system with its own virtual hardware – think CPU, memory, storage, and even network connections. They're like little digital islands, all hanging out on real servers but doing their own independent stuff.
What are virtual machines used for in Automation Anywhere?
The most obvious answer is bot runner devices. Virtual machines are most commonly used for production runners - especially as automation programs begin to grow, scale, and mature. Many organizations will also use VMs for their test/QA environments and even for development. Why? Well, the easiest path to a successful, reliable automation in production is something that has been created and tested with the production runner machine/specs in mind. If you develop on a VM that matches the specs of your production runners, then test it on test/QA runners that also match those same specs, you can have pretty high confidence in the expected performance of that automation when it gets to production.
Why do automation programs use virtual machines (VMs) over physical hardware?
VMs prevent automation programs from drowning in a sea of physical hardware, using resources in a much more efficient manner. They’re a key component of automation infrastructure for several reasons:
- Isolation: VMs provide a secure and isolated environment for running automation processes. Each automation task can be executed within its own VM, preventing interference or conflicts with other processes or users trying to log on.
- Resource Efficiency: VMs enable a more efficient utilization of hardware resources. Instead of dedicating a separate physical server for each task, multiple VMs can run on a single server (or multiple servers) optimizing resource usage.
- Scalability: VMs can be easily cloned or created from templates, enabling rapid scaling of automation processes to handle varying workloads. This becomes increasingly important as your program scales and your need for additional runner devices increases.
- Versioning and Snapshotting: VMs support versioning and snapshot capabilities, enabling automation teams to roll back to a previous state if issues arise from patches or updates to applications that your automations depend on.
- Redundancy and Resource Pooling: Many workstation virtualization software companies provide software solutions to automatically migrate VMs from one host server to another in times of patching or failure. This provides improved uptime compared to using physical machines for each runner.
- Testing and Development: VMs are ideal for creating test environments and facilitating the development of automations in controlled settings.
Overall, virtual machines play a significant role in automation programs, enabling efficient, secure, and scalable execution of automation processes.
How to leverage virtual machines for automation growth and optimization
Set the stage for success with virtual machine capacity planning
First, it’s important to understand your automation team's runner capacity. This assessment becomes the foundation for efficient VM planning. By evaluating the size of your team, the complexity of automation tasks, and future project demands, you can calculate the number of development, test, and production machines needed. This proactive approach ensures that your automation team has the resources they need, when they need them.
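As a rough, illustrative starting point (every number below is an assumption, not a benchmark; substitute your program's real figures), the core runner-capacity math can be as simple as:

```python
import math

# Hypothetical inputs -- replace with your own program's figures.
bot_hours_per_day = 62    # total scheduled automation runtime per day
hours_per_runner = 20     # usable hours per runner VM, leaving headroom
growth_buffer = 1.25      # 25% cushion for upcoming automations

production_runners = math.ceil(bot_hours_per_day / hours_per_runner * growth_buffer)
print(f"Production runner VMs needed: {production_runners}")  # -> 4
```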
Proactively scale for growth
Scaling – it's the dream of every thriving automation team. But as your projects grow and new members join the fold, will your current VM infrastructure hold up? It's essential to evaluate whether your existing virtual machine setup can gracefully accommodate the projected needs. As you define your team's growth targets, consider adjusting VM configurations and resource allocation accordingly. Scaling with foresight allows your team to operate with the agility required to adapt to evolving demands.
Explore multi-user devices that balance collaboration and control
Collaboration is the lifeblood of innovation. Multi-user virtual machines are a game-changer for promoting teamwork and efficiency within your automation team. These devices enable resource sharing and faster development cycles, reducing costs and accelerating project delivery. But it's not just about sharing; security and access controls take center stage when implementing multi-user devices. Striking the right balance between collaboration and control is key.
Implement monitoring and alerts: the guardians of seamless automation
Efficiency hinges on reliability. To ensure your automation processes run smoothly, monitoring and alerts must become second nature. Keep a watchful eye on vital metrics like CPU usage, memory consumption, and disk space. Setting up alerts for potential failure points can proactively address issues and minimize downtime, allowing your team to maintain their laser focus on driving automation excellence.
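A minimal sketch of that kind of watchdog is shown below (thresholds are illustrative, and most teams would delegate this to a dedicated monitoring platform rather than a script):

```python
import psutil  # third-party package for reading system metrics

# Illustrative thresholds -- tune these to your own runner VMs.
CHECKS = {
    "cpu %":  (lambda: psutil.cpu_percent(interval=1), 90),
    "mem %":  (lambda: psutil.virtual_memory().percent, 85),
    "disk %": (lambda: psutil.disk_usage("/").percent, 80),
}

for name, (read, limit) in CHECKS.items():
    value = read()
    if value > limit:
        # In practice this would page the team or raise a ticket.
        print(f"ALERT: {name} at {value:.0f} (limit {limit})")
```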
Know the impact on support needs
Virtual machines not only transform the way your automation team works, but also reshape the support landscape. The shift from hardware maintenance to VM infrastructure management presents a new world of efficiency and centralization. With the right tools and automation in place, you're empowered to provide exceptional support, allowing your team to stay in the innovation fast lane.
In the world of automation, harnessing the power of virtual machines becomes not just a choice, but a strategic imperative. The benefits are clear: scalability, flexibility, and efficient resource utilization. As you embrace virtualization technology, remember that the journey doesn't end with setup. Continuous monitoring, capacity planning, and the ability to adapt are the cornerstones of maximizing the potential of your automation team. So, take the leap – leverage virtual machines to propel your automation endeavors to new heights of growth, efficiency, and innovation.
As we discussed in our first Fundamentals of VoIP tutorial, Voice over Internet Protocol (VoIP) networks combine both voice and data communications networking technologies. The combination is somewhat like a marriage, in which two unique systems endeavor to create some type of synergistic (and hopefully, peaceful) coexistence. But as many of us have discovered, figuring out the strengths and weaknesses of each member is key to making that partnership work; the same is true for the voice and data “marriage” as well. Let's look at the defining characteristics of each element in this VoIP partnership.
The connection-oriented/connectionless dichotomy
Traditional voice networks are classified as connection-oriented networks, in which a path from the source to destination is established, prior to any information transfer. When the end user takes the telephone off-hook, they notify the network that service is requested. The network then returns dial tone, and the end user dials the destination number. When the destination party answers, the end-to-end connection is confirmed through the various switching offices along the path. When the conversation is complete, the two parties hang up, and their network resources can be re-allocated for someone else’s conversation.
One of the disadvantages of this process is the consumption of resources spent setting up the call (a process called signaling, which we will consider in a future tutorial). One of the advantages, however, is that once that call has been established, and a path through the network defined, the characteristics of that path, such as propagation delay, information sequencing, etc. should remain constant for the duration of the call. Since these constants add to the reliability of the system, the term reliable network is often used to describe a connection-oriented environment. The Transmission Control Protocol (TCP) is an example of a connection-oriented protocol.
In contrast, traditional data networks are classified as connectionless networks, in which the full source and destination address is attached to a packet of information, and then that packet is dropped into the network for delivery to the ultimate destination. An analogy to connectionless networks is the postal system, in which we drop a letter into the mailbox, and if all works according to plan, the letter is transported to the destination. We do not know the path that the packet (or letter) will take, and depending upon the route, the delay could vary greatly. It is also possible that our packet may get lost or be mis-delivered within the network, and therefore not reach the destination at all. For these reasons, the terms best efforts and unreliable are often used to describe a connectionless environment. The Internet Protocol (IP) and the User Datagram Protocol (UDP) are examples of connectionless protocols.
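The contrast is visible even at the socket level. In this hedged sketch (the address is a documentation placeholder, and the TCP connect assumes something is listening there), TCP must establish a connection before any data flows, while UDP simply fires a datagram at an address:

```python
import socket

ADDR = ("192.0.2.10", 5004)  # placeholder address for illustration

# Connection-oriented: a TCP socket completes a handshake first.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(ADDR)            # path/state established before data moves
tcp.sendall(b"hello")
tcp.close()

# Connectionless: a UDP socket just addresses each datagram and sends it.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ADDR)   # best effort; no setup, no delivery guarantee
udp.close()
```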
Recall from your Internet History 101 class that the Internet protocols, including TCP, IP, and UDP were developed in the 1970s and 1980s to support three key applications: file transfers (using the File Transfer Protocol, or FTP), electronic mail (using the Simple Mail Transfer Protocol, or SMTP), and remote host computer access (using the TELNET protocol). All of these applications were data- (not voice-) oriented, and were therefore based upon IP's connectionless network design. Layering TCP on top of IP gave the entire system enhanced reliability (albeit with additional protocol overhead), but the rigors of a true connection-oriented, switched infrastructure (like the telephone network) was not necessary to support these applications.
Teaching an old dog new tricks
Fast forward a few decades to the new millennium, where visions of voice, fax, and video over IP dominate. These applications are sensitive to sequencing and delay issues, and the idea of a “best efforts” service—especially if the voice conversation must go through, such as a call to the police or fire department—will not gather many supporters.
Which brings us to the challenging question: How do we support connection-oriented applications (such as voice and video) over a connectionless environment(such as IP), without completely redesigning the network infrastructure? The solution is to enhance IP with additional protocols that fill in some of its data-centric gaps. These include:
- Multicast Internet Protocol (Multicast IP), defined in RFCs 1112 and 2236. Multicast allows information from a single source to be sent to multiple destinations (as may be required for conferencing).
- Real-time Transport Protocol (RTP), defined in RFC 3550. RTP provides functions such as payload identification, sequence numbering, and timestamps on the information (see the header sketch after this list).
- RTP Control Protocol (RTCP), also defined in RFC 3550. RTCP monitors the quality of the RTP connection.
- Resource Reservation Protocol (RSVP), defined in RFC 2205. RSVP requests the allocation of network resources, to assure adequate bandwidth between sender and receiver.
- Real-Time Streaming Protocol (RTSP), defined in RFC 2326. RTSP supports the delivery of real-time data, including retrieval of information from a media server or support for conferencing.
- Session Description Protocol (SDP), defined in RFC 2327. SDP conveys information about the media streams for a particular session, including session name, time the session will be active, what media (voice, video, etc.) is to be used, the bandwidth required, and so on.
- Session Announcement Protocol (SAP), defined in RFC 2974. SAP packets are periodically transmitted to identify open sessions that may be of interest to the end user community.
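To make the RTP bullet concrete: the fixed RTP header is only 12 bytes, and the fields that matter most for voice (payload type, sequence number, and timestamp) can be sketched as follows. The field values here are illustrative, not taken from a real stream:

```python
import struct

# Minimal RTP fixed header (RFC 3550), packed in network byte order.
version, payload_type = 2, 0          # PT 0 = PCMU (G.711 mu-law) audio
sequence, timestamp, ssrc = 1234, 160000, 0xDEADBEEF  # illustrative values

byte0 = version << 6                  # V=2, P=0, X=0, CC=0
byte1 = payload_type                  # M=0, PT in low 7 bits
header = struct.pack("!BBHII", byte0, byte1, sequence, timestamp, ssrc)
assert len(header) == 12              # receivers use seq/timestamp to reorder
```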
So is TCP/IP adequate for VoIP? Strictly speaking, no; but with the addition of new protocols to support time-sensitive applications such as voice and video, the existing IP infrastructure can be all things to all people—supporting both connection-oriented and connectionless applications. In the next several tutorials we will examine some of these new protocols in more detail.
In the evolving landscape of cancer treatment, the advent of immunotherapies Keytruda and Opdivo has marked a groundbreaking shift. Over the last decade, these drugs, known as PD1 inhibitors, have revolutionized oncology, offering new hope and significantly altering patient outcomes. Their story is one of scientific ingenuity, rigorous clinical trials, and transformative impacts on patient lives. The journey from conceptual research to life-saving treatments showcases the power of targeted therapies in modern medicine, underscoring the pivotal role these drugs now play in oncological care.
The Birth of PD1 Inhibitors
The journey of Keytruda and Opdivo began with the discovery of immune checkpoints, which are proteins that help keep the immune system in check. Among these, PD1 (programmed death-1) was identified as a critical player in enabling cancer cells to evade immune detection. Researchers like James Allison, who won a Nobel Prize for his work on CTLA4—a similar checkpoint—laid the groundwork for developing therapies that could block these pathways. By understanding how these immune checkpoints functioned, scientists were able to devise strategies to disable them and thus allow the immune system to target cancer cells more effectively.
Merck and Bristol Myers Squibb spearheaded the development of Keytruda and Opdivo, respectively. Keytruda first gained FDA approval in 2014 for treating inoperable melanoma, followed by Opdivo shortly after. Initial clinical trials showed remarkable results, demonstrating the potential to significantly improve survival rates for patients who had exhausted other treatment options. These trials marked a significant milestone in cancer therapy, turning what was once considered an unmodifiable weakness of the immune system into a formidable weapon against cancer. The impact was immediate and transformative, providing a new line of defense for patients with limited options.
How They Work: Mechanism of Action
Keytruda and Opdivo operate by inhibiting the PD1 pathway, which cancer cells exploit to hide from the immune system. Normally, PD1 acts as a brake on T cells, preventing them from attacking healthy cells. However, many tumors upregulate PD1 ligands, tricking T cells into leaving the cancer cells alone. By blocking PD1, Keytruda and Opdivo release this brake, allowing T cells to recognize and destroy cancer cells. This innovative approach differs fundamentally from traditional treatments such as chemotherapy, which indiscriminately aim at all rapidly dividing cells and often result in severe side effects.
This mode of action represented a paradigm shift from traditional treatments like chemotherapy, which indiscriminately kill rapidly dividing cells, to a more targeted approach that harnesses the body’s own immune system. The success of these drugs has spurred extensive research into other immune checkpoints as potential therapeutic targets. The ripple effect has initiated a series of studies aimed at enhancing the efficacy and safety of immunotherapies, taking advantage of our growing understanding of immune-tumor interactions.
Expanding Approvals and Uses
Since their initial approvals, both Keytruda and Opdivo have expanded their indications to cover a wide range of cancers. Keytruda, for example, is now approved for over 20 different types of cancers, including lung, head and neck, and bladder cancers. Opdivo has similarly broadened its applications, cementing its role as a versatile tool in the oncologist’s arsenal. The regulatory milestones achieved by these drugs highlight their versatility and broad applicability, making them indispensable in modern oncology practice.
These expansions are the result of numerous clinical trials that have consistently shown improved outcomes over standard treatments. The versatility of these drugs has fundamentally changed the landscape of cancer treatment, providing options for types of cancer that were previously difficult to treat. Oncologists now have a broader array of tools to tailor treatments to individual patient needs, improving both survival rates and quality of life. Additionally, the ongoing success of these drugs has set a new standard of effectiveness and safety for future cancer therapies.
Real-World Impact: Patient Stories
The real-world impact of Keytruda and Opdivo is perhaps best illustrated by the stories of patients whose lives have been transformed. Take Lisa Haines, for instance. After traditional chemotherapy failed to stop her lung cancer, Opdivo offered her another chance. Her successful treatment is one of many testimonials that highlight the life-saving capabilities of these drugs. These personal success stories make the statistical data meaningful, showing the tangible, life-altering effects of these treatments.
Similarly, Heidi Nafman-Onda and Pamela Berryhill have shared their journeys, underscoring how these immunotherapies have provided not just extended life but a quality of life that was unimaginable with previous treatment options. These stories add a profound, personal dimension to the clinical data, demonstrating the human impact of scientific advances. For patients and their families, these treatments offer not just hope but a better quality of life, breaking the traditional association of cancer treatment with debilitating side effects and limited efficacy.
Challenges and Limitations
Despite their success, Keytruda and Opdivo are not without challenges. One significant issue is variability in patient response; not all patients benefit equally from these treatments. For some, the therapy can trigger severe side effects, including inflammation and autoimmune reactions, which require careful management. These adverse reactions can sometimes be as debilitating as the cancer itself, necessitating cautious administration and thorough patient monitoring. The need for individualized treatment plans becomes evident, emphasizing personalized medicine’s growing importance.
Additionally, these treatments are extremely expensive, raising questions about accessibility and cost-effectiveness. The complexity and high cost of combination therapies, which seek to enhance the effectiveness of PD1 inhibitors by pairing them with other treatments, also present a significant barrier to broader adoption. Equity in healthcare becomes a critical issue as access to these groundbreaking treatments can be limited by financial constraints. These challenges underscore the need for continued research to make these treatments more accessible and affordable, ensuring that their benefits can be realized by a broader patient population.
The Historical Context of Immunotherapy
The concept of using the immune system to fight cancer is not new. In the 1890s, William Coley experimented with bacterial toxins to induce immune responses in cancer patients. Although his results were inconsistent, they provided early evidence that the immune system could be harnessed to combat cancer. Coley’s work, although initially met with skepticism, laid important groundwork for the field of immunotherapy, demonstrating the potential of immune manipulation to fight cancer.
Research in the 20th century, particularly in the 1960s and 1970s involving immunocompromised mice, further established the relationship between the immune system and cancer. These foundational studies paved the way for modern immunotherapies like Keytruda and Opdivo, demonstrating that scientific breakthroughs often build on decades of prior research. The incremental advancements in understanding the immune system and cancer biology have culminated in the sophisticated therapies we see today, proving the importance of sustained research investment and scientific curiosity.
Future Directions in Cancer Immunotherapy
In the ever-changing field of cancer treatment, the introduction of immunotherapies like Keytruda and Opdivo has heralded a significant shift. Over the past ten years, these PD1 inhibitors have revolutionized oncology, providing renewed hope and dramatically improving patient outcomes. The narrative of these drugs is one of scientific brilliance, extensive clinical trials, and profound impacts on patients’ lives.
Starting from conceptual research and moving to life-saving treatments, Keytruda and Opdivo exemplify the potential of targeted therapies in contemporary medicine. Their development underscores the critical role these medications now occupy in cancer care. Targeted therapies like these have changed how doctors approach various types of cancer, allowing for treatments that are more personalized and often more effective. By specifically targeting cancer cells while sparing healthy ones, these therapies minimize side effects and increase the quality of life for patients under treatment.
This advancement has not only extended lives but also has improved the quality of those lives significantly. The success of Keytruda and Opdivo highlights the transformative impact of immunotherapy in the fight against cancer, making these drugs indispensable tools in modern oncological practice. The story of their development and application is a testament to the power of scientific innovation in creating treatments that genuinely make a difference.
- AI has tremendous potential to transform the entire healthcare ecosystem
- AI is being used for earlier, more accurate diagnoses and personalized medical care
- The ability to instantly access vast repositories of data and make recommendations can improve patient outcomes
- AI tools can even make navigating the healthcare system easier for consumers
The healthcare industry is witnessing a technological revolution with the integration of generative artificial intelligence (AI). This cutting-edge technology has the potential to revolutionize patient care, research, and operational efficiency. Generative AI tools, capable of creating and innovating, offer a wide array of benefits for both online and offline use cases in healthcare, leading to improved diagnoses, personalized treatments, and streamlined operations.
- Medical Imaging Analysis: Generative AI algorithms excel in medical image analysis, aiding in accurate and timely diagnoses. These algorithms can analyze complex medical images such as X-rays, MRIs, and CT scans, detecting anomalies and assisting healthcare professionals in detecting diseases at an early stage. For instance, Aidoc uses generative AI to analyze medical images and flag abnormalities, enabling radiologists to prioritize urgent cases and improve patient outcomes.
- Virtual Consultations: Generative AI-powered virtual consultation platforms offer patients the convenience of receiving medical advice remotely. These platforms use natural language processing and machine learning to understand patient symptoms and provide appropriate recommendations. Babylon Health, for example, utilizes generative AI to offer virtual consultations, providing immediate access to healthcare professionals and reducing the burden on physical healthcare facilities.
- Personalized Medicine: Generative AI tools analyze large-scale patient data, including genetic information, medical records, and lifestyle factors, to develop personalized treatment plans. By considering individual variations, generative AI algorithms can help healthcare providers deliver precise and targeted interventions. Companies like Deep Genomics employ generative AI to analyze genetic data and identify potential treatments for genetic diseases, bringing personalized medicine to the forefront.
- Drug Discovery and Development: Generative AI accelerates the drug discovery and development process by analyzing vast amounts of scientific literature, clinical trial data, and molecular structures. By identifying potential drug candidates and predicting their effectiveness, generative AI expedites the research process and reduces the time and cost required to bring new drugs to market. Atomwise, a company that uses generative AI, has identified promising drug candidates for diseases such as Ebola and multiple sclerosis.
- Predictive Analytics: Generative AI algorithms can analyze patient data to predict disease progression, identify high-risk individuals, and recommend preventive measures. By leveraging machine learning techniques, healthcare providers can proactively intervene and improve patient outcomes. For instance, Excel Medical’s generative AI platform analyzes patient data in real-time, alerting healthcare professionals to potential deteriorations, allowing for early interventions.
- Workflow Optimization: Generative AI tools streamline healthcare operations by optimizing resource allocation, scheduling, and workflow management. By analyzing historical data and current demand, these tools can improve efficiency, reduce waiting times, and enhance patient experiences. For example, the University of California, San Francisco (UCSF), uses generative AI to optimize surgical schedules, reducing delays and ensuring optimal utilization of operating rooms.
Zebra Medical Vision: Zebra Medical Vision employs generative AI algorithms to analyze medical images and detect a wide range of diseases and conditions, including breast cancer, liver diseases, and cardiovascular issues. By utilizing this technology, Zebra Medical Vision aims to provide early detection and improve patient outcomes through accurate and timely diagnoses.
PathAI: PathAI utilizes generative AI to assist pathologists in analyzing tissue samples for cancer diagnosis. By analyzing digital pathology images, PathAI’s algorithms can identify and classify cancerous cells with high accuracy, aiding pathologists in making more informed decisions and improving diagnostic precision.
Generative AI is revolutionizing the healthcare industry by empowering healthcare professionals with advanced tools for accurate diagnoses, personalized treatments, and streamlined operations. From medical image analysis and virtual consultations to drug discovery and workflow optimization, the applications of generative AI in healthcare are vast and promising.
As this technology continues to evolve, its potential to transform patient care and drive medical advancements is immense, paving the way for a healthier future.
This blog post is part of a series called “CommScope Definitions” in which we will explain common terms in communications network infrastructure.
When a mobile user is in an urban setting, the RF signal traveling between the user’s handheld and the base station antenna often bounces off the many buildings in the way. Such bouncing around of the RF signal is called multipathing, a phenomenon that was previously considered detrimental to good RF communications. However, the RF transmission technology called MIMO (Multiple Input/Multiple Output) takes advantage of this reality, turning multipathing into a useful method for increasing the data capacity and data rates of mobile device users.
MIMO is a radio system with multiple inputs and multiple outputs, which means more than one antenna on each end of the link. There are a number of communication system architectures and algorithms that could fall under the broad category of MIMO—for example, massive MIMO, beamforming and others. The type being deployed in LTE systems today is “spatial multiplexing” which is typically what the wireless industry means by the term MIMO.
LTE, LTE-Advanced, and 5G enable data-intensive applications such as mobile video and gaming. By using multiple channels, MIMO connections get more use from the available bandwidth than a single connection to enable these types of communications.
MIMO is based on establishing multiple connections between the user and the network using either two or four channels, all in the same frequency band, instead of just one. Instead of avoiding multipathing, MIMO intentionally seeks multiple paths for the wireless signals.
For MIMO to work, multiple antennas, transmitters and receivers must be put into each mobile device. For example, a mobile device in a MIMO-enabled network sends and receives two separate wireless signals instead of one.
The base station antennas in a MIMO network can receive two or more signals. Through innovative engineering and advanced signal processing algorithms, MIMO connections avoid the negative effects of multipath interference while significantly increasing the speed that users can access the internet, play games and watch video.
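A back-of-the-envelope way to see the benefit (an idealized approximation, not a vendor specification): with $N$ antennas on each end of the link and rich multipath, spatial multiplexing scales the familiar single-channel Shannon capacity roughly linearly with the number of parallel streams,

$$C \approx N \cdot B \log_2\left(1 + \mathrm{SNR}\right)$$

where $B$ is the channel bandwidth. Under these idealized conditions, a 2x2 MIMO link can approach twice, and a 4x4 link four times, the throughput of a single-antenna link in the same spectrum.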
Key Takeaway: MIMO is a wireless transmission technique that intentionally sends signals down two or four channels simultaneously in order to increase network capacity and speed for users.
Modern enterprise data science teams are technically diverse. The members of these teams differ in their education levels (bachelors, masters, PhD), majors (CS, statistics, natural sciences), prior work experiences (advertising, actuaries, finance, experimental physics) and used tools (Excel, SAS, R, Python, Java). One is more likely to find biology or computational neuroscience majors in data science teams today than in actuarial or financial firms of the past.
But there is a more subtle difference than the ones listed above: team members with quantitative backgrounds may differ in exposure to the same set of concepts. Take the example of regression. Both engineering and statistics departments devote a portion of their curriculum to teaching line fitting. The presentations in these disciplines, however, have historically differed.
Their terminology is also different: statisticians call it regression, engineers call it curve-fitting. With an increasing number of non-CS engineers and scientists joining data science teams, it is instructive to examine the differences between statistical and engineering approaches to common data science concepts.
Understanding these differences will have several benefits. First, it would enable effective cross-communication between team members with different backgrounds. Second, it'll lead to more efficient training. Say, for example, that statistical features of curve-fitting were important for a particular business. This business can call those aspects out explicitly and use them to craft focused training sessions for the team members who are familiar with the curve-fitting procedure, but not necessarily its statistical fundamentals. Lastly, once the business-critical technical skills are identified, they can be used to make job descriptions more accurate (than the generic ones we often encounter) or emphasized during hiring decisions.
In the following, the example of regression is used for concreteness. There are other data science concepts that overlap multiple disciplines, but are referred to differently. Readers who've experienced this will be able to extrapolate the regression example to these other areas.
For engineers and physical scientists, line fitting is a tool to understand the physical law driving the observations. Kepler analyzed tables of planet position data to discover laws of planetary motion. He was interested less in predicting future positions of planets than in the laws that governed them.
To engineers, the values of the fitting parameters (e.g. slope and intercept) have to make sense. These values represent measurable physical quantities. For example, weight (mass) and volume have a linear relationship for a given material. If such a relationship were plotted, the slope would represent the “density.” For an automobile, the relationship of miles travelled at a constant speed vs gas consumed can be expected to be linear. The slope here represents the “mileage” of the automobile at that speed.
As such, engineers and natural scientists often have a tendency to closely inspect the values of the obtained fitting parameters. In Sciences, if a fit predicts physically unreasonable values of parameters, the model is discarded (and underlying experiments repeated) regardless of the fit quality.
The predictive ability of the fit is also a secondary concern. The objective of modeling in sciences is usually to propose new experiments not yet carried out and predict their results as opposed to results of future measurements from the same apparatus.
The assumptions underlying the prescribed fitting procedure are rarely mentioned explicitly. Choice of “sum of squares” as a cost function is justified since it possesses “nice” mathematical properties like differentiability and convexity that are required to locate the minimum of the cost function. It is not uncommon for non-statistics data scientists to be unable to list the assumptions behind the sum of squares cost function.
The focus here is on getting the best quality fit and using it to predict the expected values of future observations. The predictive quality of the model is explicitly captured by dependence of the “model quality” on out-of-sample error.
There is comparatively less emphasis in statistics on understanding the physical phenomena that underlie the observations. The efforts are mostly devoted to constructing accurate, predictive mathematical models. This is possibly because datasets under consideration often do not permit regeneration under controlled circumstances.
Statisticians also expend considerable energy on reducing the out-of-sample errors. This includes techniques such as adding complexity to the fitting function (feature interactions, kernels, nonlinearities), fine-tuning the cost function (regularization), reducing dimensionality, and, whenever possible, gathering more data.
In other words, interpretable models are nice but not strictly necessary for the overall success of the effort.
Between the two viewpoints considered above, there is no one that is more “correct”, “valid” or “scientific.” Both have proven successful in their respective domains. Scientific laws are re-examined despite their excellent “fit” and predictive abilities. Conversely, it is also a fact that autonomous cars and the state-of-the-art image recognition have been enabled by models whose mathematical and physical properties are less than completely understood.
Let us now contrast the linear regression math as presented in engineering and statistics. The math below is non-rigorous by design. Imagine we have a set of 100 points as shown in the scatter plot below.
Engineering disciplines typically (but not always) adopt a linear-algebra based approach to regression. Taking the simple example of single-variable (univariate) regression, we can express the observed values of dependent variables (y) as a linear function of the independent variables (x) as follows: (1)
Note that this is an over-determined set of equations since there are more equations than unknowns. To solve it, we compute the sum of squared residuals, termed as the cost function: (2)
and obtain the fit coefficients via the standard minimization procedure of setting. The solution is easier to express and generalize in matrix terms. If we set:
then the least squares problem, re-expressed in a matrix-vector form is (3)
with the solution (4)
The coefficients and obtained from equation (4) are identical to one obtained by minimization of cost function in equation (2). The matrix-based solution also generalizes to multivariate regression, i.e. to situations where we have more than one independent variable. Explicit matrix inversion is seldom carried out in practice for numerical stability reasons. Instead, the system (5)
is solved by Gaussian elimination (or, equivalently LU decomposition) or iterative methods. Engineering treatments of curve-fitting typically halt after a description of the above procedure. The justification and assumptions underlying these prescriptions are either not emphasized or deferred to statistics texts.
The Statistical approach to regression aims to capture the probability distribution of the points about their expected value.
The fitting function specifies the expected position of the dependent variable for a given input. Linear regression is the hypothesis that the expected position depends linearly on the input: We then compute the error between the predicted/expected (Yp) and the observed (Yi) values of the dependent variable:
We then make a few critical assumptions about our observed data:
After this setup, we seek the values of the fitting parameters and that maximize the probability of obtaining the dataset under consideration. This is known as the maximum likelihood estimate (MLE) and is a widely used parameter estimation technique.
Since we've assumed observations to be independent of each other, the overall probability of obtaining the observed set of errors is just the product of obtaining the individual errors:
which, on substituting the normal distribution of error probabilities, becomes (6)
Maximizing the likelihood, or requires minimizing the exponent and leads to the least squares cost function. The generalization to multivariate case yields to the equation system (5).
The statistical approach is not free from assumptions and many of these may seem ad-hoc: identical normal distribution of errors, independence of observations and usage of the MLE for parameter estimation. However, the assumptions are explicit, testable and seem to provide a deeper justification for arriving at the least squares cost function.
Taking the example of regression, we touched on the issue of technical diversity in data science teams.
Specifically, we considered how quantitative team members could be exposed to common concepts in different ways. The above discussion is, of course, not a call for generalization (statisticians can't code or engineers don't know probability theory).
Rather, it is meant to highlight an issue that is expected to become prominent as people from engineering, natural sciences and social sciences join computer scientists and statisticians in pursuit of data science.
Non-technical or business facing people could likely waive away these differences as academic or easily handled “on the job.” In many cases, however, “statistical” or “physical” way of thinking is ingrained and impacts how job assignments are carried out (e.g. does the person use hill-climbing or write a multi-threaded program to perform exhaustive search of the parameter space).
Being mindful of the approach and vocabulary of the others' fields is helpful in guiding teams toward more fruitful collaborations as well as correctly estimating quantitative skill levels during hiring decisions. | <urn:uuid:c7d06d80-a4b0-440a-9cb7-fb275230d434> | CC-MAIN-2024-38 | https://icrunchdata.com/blog/475/regression-vs-curve-fitting-technical-diversity-in-data-science-teams/ | 2024-09-19T20:14:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00863.warc.gz | en | 0.936991 | 1,879 | 2.71875 | 3 |
You may have heard some talk about Microsoft’s new version of Microsoft Edge. Originally, Microsoft Edge would only work on Windows 10. The new version, now based on Chromium, can work on Windows 7, 8, 8.1, 10 plus on the Mac OS, iOS and Android.
Is Chromium the same as Google Chrome? Not exactly. Chromium is a free and open-source software project from Google which first appeared in 2008. What does the term “open-source software” mean? When the code for a software program is available to anyone to use and modify, that is called “open-source.” In the case of the browser Google Chrome, Google took their own open-sourced Chromium and modified it with added features, etc. Thus, Google Chrome and Chromium are not the same thing.
Google Chrome is not the only browser based on Chromium. The Opera browser is based on it, too. Again, Opera adds features of its own which are not found in Google Chrome.
Now, Microsoft comes along, scraps its original Microsoft Edge and changes it to one based on Chromium. Why did they do this? Because the code for the original Edge wasn’t being accepted by website builders and many websites just didn’t work correctly in the old Edge.
Are Google Chrome and Microsoft Edge the same now? No. They are both based on the original Chromium but there are differences. Should you use Chrome instead of Edge or vice versa? I use them both. There are features I like about both of them. The one thing I would say is that the word on the street is Microsoft doesn’t track and store as much information as Google. For that reason, I often lean towards the new Edge.
I’m developing some videos on the new Microsoft Edge and you’ll see them in the near future. To get you started, here’s one on changing the home page. (Note: if you haven’t yet subscribed to our YouTube channel, please do so after watching this instructional video.) | <urn:uuid:a6e8be01-2f93-4535-bbbe-da40d7cfb4c9> | CC-MAIN-2024-38 | https://www.4kcc.com/blog/2020/05/14/chromium/ | 2024-09-21T03:52:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00763.warc.gz | en | 0.950783 | 432 | 2.53125 | 3 |
React.js, also known as ReactJS and React, is an important tool in developing applications. Learn more about React, what it is used for and why React is so popular.
What Is React Framework?
React is a free, open-source library used for building user interfaces. The entire library is maintained and managed by Meta (formerly Facebook), paired with a community of developers and companies.
Developing applications with React is a bit like a carpenter having all their building materials and tools available when they start working, rather than having to go out and process trees for their wood and smelt steel for their tools.
In the same way, the framework and library of React help streamline the production and development cycle. React gives developers the tools they need to make applications, so developers don’t have to go back and build applications from scratch.
When Should You Use React?
React is designed to be used when developing apps. This includes mobile apps, web apps, single-page apps (SPA), and virtual reality applications. It can also be used to create a new website or modify an existing page, but it can be difficult to do this effectively compared to other available frameworks.
How Does React Measure Up to Other Frameworks?
React specializes in app development. When compared to other app frameworks like Angular, Vue, and Ember, React is one of the best frameworks available.
Angular is React’s biggest competitor. Developed by Google, Angular supports TypeScript and MVC architecture. The biggest downside is its age. Angular was developed in 2010, and the app development world has drastically changed in size, shape, and scope.
While there have been updates to keep up with the changing market, not all changes are bug-free on launch and can be clunky and difficult to work with until they’re optimized later. With each version and update, there is also a chance that previously functioning applications will no longer be compatible with the new version.
How Popular Is React?
React is one of the go-to frameworks in app development. Its open-source library is frequently chosen because it provides a fast and efficient environment that is easy to use with minimal coding. The biggest strength of React is that it breaks down individual components, allowing developers to break down and master each part of a complex UI into simple, easy-to-manage components.
Get Help from Excel SoftSources
Our team at Excel SoftSources is experienced using React, as well as all major frameworks and developing languages. Contact us today to learn how we can help you fill your nearshore development needs. | <urn:uuid:fb45e313-e2f4-4ab4-8c23-cee87341313e> | CC-MAIN-2024-38 | https://excelsoftsources.com/blog/why-is-react-js-so-popular/ | 2024-09-09T00:52:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00063.warc.gz | en | 0.949405 | 539 | 2.6875 | 3 |
Data preprocessing is a fundamental and essential step in the field of sentiment analysis, a prominent branch of natural language processing (NLP). Sentiment analysis focuses on discerning the emotions and attitudes expressed in textual data, such as social media posts, product reviews, customer feedback, and online comments. By analyzing the sentiment of users towards certain products, services, or topics, sentiment analysis provides valuable insights that empower businesses and organizations to make informed decisions, gauge public opinion, and improve customer experiences.
In the digital age, the abundance of textual information available on the internet, particularly on platforms like Twitter, blogs, and e-commerce websites, has led to an exponential growth in unstructured data. This unstructured nature poses challenges for direct analysis, as sentiments cannot be easily interpreted by traditional machine learning algorithms without proper preprocessing.
The goal of data preprocessing in sentiment analysis is to convert raw, unstructured text data into a structured and clean format that can be readily fed into sentiment classification models. Various techniques are employed during this preprocessing phase to extract meaningful features from the text while eliminating noise and irrelevant information. The ultimate objective is to enhance the performance and accuracy of the sentiment analysis model.
Role of data preprocessing in sentiment analysis
Data preprocessing in the context of sentiment analysis refers to the set of techniques and steps applied to raw text data to transform it into a suitable format for sentiment classification tasks. Text data is often unstructured, making it challenging to directly apply machine learning algorithms for sentiment analysis. Preprocessing helps extract relevant features and eliminate noise, improving the accuracy and effectiveness of sentiment analysis models.
The process of data preprocessing in sentiment analysis typically involves the following steps:
- Lowercasing: Converting all text to lowercase ensures uniformity and prevents duplication of words with different cases. For example, “Good” and “good” will be treated as the same word
- Tokenization: Breaking down the text into individual words or tokens is crucial for feature extraction. Tokenization divides the text into smaller units, making it easier for further analysis
- Removing punctuation: Punctuation marks like commas, periods, and exclamation marks do not contribute significantly to sentiment analysis and can be removed to reduce noise
- Stopword removal: Commonly occurring words like “the,” “and,” “is,” etc., known as stopwords, are removed as they add little value in determining the sentiment and can negatively affect accuracy
- Lemmatization or Stemming: Lemmatization reduces words to their base or root form, while stemming trims words to their base form by removing prefixes and suffixes. These techniques help to reduce the dimensionality of the feature space and improve classification efficiency
- Handling negations: Negations in text, like “not good” or “didn’t like,” can change the sentiment of the sentence. Properly handling negations is essential to ensure accurate sentiment analysis
- Handling intensifiers: Intensifiers, like “very,” “extremely,” or “highly,” modify the sentiment of a word. Handling these intensifiers appropriately can help in capturing the right sentiment
- Handling emojis and special characters: Emojis and special characters are common in text data, especially in social media. Processing these elements correctly is crucial for accurate sentiment analysis
- Handling rare or low-frequency words: Rare or low-frequency words may not contribute significantly to sentiment analysis and can be removed to simplify the model
- Vectorization: Converting processed text data into numerical vectors is necessary for machine learning algorithms to work. Techniques like Bag-of-Words (BoW) or TF-IDF are commonly used for this purpose
Data preprocessing is a critical step in sentiment analysis as it lays the foundation for building effective sentiment classification models. By transforming raw text data into a clean, structured format, preprocessing helps in extracting meaningful features that reflect the sentiment expressed in the text.
For instance, sentiment analysis on movie reviews, product feedback, or social media comments can benefit greatly from data preprocessing techniques. The cleaning of text data, removal of stopwords, and handling of negations and intensifiers can significantly enhance the accuracy and reliability of sentiment classification models. Applying preprocessing techniques ensures that the sentiment analysis model can focus on the relevant information in the text and make better predictions about the sentiment expressed by users.
Influence of data preprocessing on text classification
Text classification is a significant research area that involves assigning natural language text documents to predefined categories. This task finds applications in various domains, such as topic detection, spam e-mail filtering, SMS spam filtering, author identification, web page classification, and sentiment analysis.
The process of text classification typically consists of several stages, including preprocessing, feature extraction, feature selection, and classification.
Different languages, different results
Numerous studies have delved into the impact of data preprocessing methods on text classification accuracy. One aspect explored in these studies is whether the effectiveness of preprocessing methods varies between languages.
For instance, a study compared the performance of preprocessing methods for English and Turkish reviews. The findings revealed that English reviews generally achieved higher accuracy due to differences in vocabulary, writing styles, and the agglutinative nature of the Turkish language.
This suggests that language-specific characteristics play a crucial role in determining the effectiveness of different data preprocessing techniques for sentiment analysis.
A systematic approach is the key
To enhance text classification accuracy, researchers recommend performing a diverse range of preprocessing techniques systematically. The combination of different preprocessing methods has proven beneficial in improving sentiment analysis results.
For example, stopword removal was found to significantly enhance classification accuracy in some datasets. At the same time, in other datasets, improvements were observed with the conversion of uppercase letters into lowercase letters or spelling correction. This emphasizes the need to experiment with various preprocessing methods to identify the most effective combinations for a given dataset.
The bag-of-words (BOW) representation is a widely used technique in sentiment analysis, where each document is represented as a set of words. Data preprocessing significantly influences the effectiveness of the BOW representation for text classification.
Researchers have performed extensive and systematic experiments to explore the impact of different combinations of preprocessing methods on benchmark text corpora. The results suggest that a thoughtful selection of preprocessing techniques can lead to improved accuracy in sentiment analysis tasks.
Requirements for data preprocessing
To ensure the accuracy, efficiency, and effectiveness of these processes, several requirements must be met during data preprocessing. These requirements are essential for transforming unstructured or raw data into a clean, usable format that can be used for various data-driven tasks.
One of the primary requirements for data preprocessing is ensuring that the dataset is complete, with minimal missing values. Missing data can lead to inaccurate results and biased analyses. Data scientists must decide on appropriate strategies to handle missing values, such as imputation with mean or median values or removing instances with missing data. The choice of approach depends on the impact of missing data on the overall dataset and the specific analysis or model being used.
Data cleaning is the process of identifying and correcting errors, inconsistencies, and inaccuracies in the dataset. It involves removing duplicate records, correcting spelling errors, and handling noisy data. Noise in data can arise due to data collection errors, system glitches, or human errors.
By addressing these issues, data cleaning ensures the dataset is free from irrelevant or misleading information, leading to improved model performance and reliable insights.
Data transformation involves converting data into a suitable format for analysis and modeling. This step includes scaling numerical features, encoding categorical variables, and transforming skewed distributions to achieve better model convergence and performance.
Data transformation also plays a crucial role in dealing with varying scales of features, enabling algorithms to treat each feature equally during analysis
As part of data preprocessing, reducing noise is vital for enhancing data quality. Noise refers to random errors or irrelevant data points that can adversely affect the modeling process.
Techniques like binning, regression, and clustering are employed to smooth and filter the data, reducing noise and improving the overall quality of the dataset.
Feature engineering involves creating new features or selecting relevant features from the dataset to improve the model’s predictive power. Selecting the right set of features is crucial for model accuracy and efficiency.
Feature engineering helps eliminate irrelevant or redundant features, ensuring that the model focuses on the most significant aspects of the data.
Handling imbalanced data
In some datasets, there may be an imbalance in the distribution of classes, leading to biased model predictions. Data preprocessing should include techniques like oversampling and undersampling to balance the classes and prevent model bias.
This is particularly important in classification algorithms to ensure fair and accurate results.
Data integration involves combining data from various sources and formats into a unified and consistent dataset. It ensures that the data used in analysis or modeling is comprehensive and comprehensive.
Integration also helps avoid duplication and redundancy of data, providing a comprehensive view of the information.
Exploratory data analysis (EDA)
Before preprocessing data, conducting exploratory data analysis is crucial to understand the dataset’s characteristics, identify patterns, detect outliers, and validate missing values.
EDA provides insights into the data distribution and informs the selection of appropriate preprocessing techniques.
By meeting these requirements during data preprocessing, organizations can ensure the accuracy and reliability of their data-driven analyses, machine learning models, and data mining efforts. Proper data preprocessing lays the foundation for successful data-driven decision-making and empowers businesses to extract valuable insights from their data.
What are the best data preprocessing tools of 2023?
In 2023, several data preprocessing tools have emerged as top choices for data scientists and analysts. These tools offer a wide range of functionalities to handle complex data preparation tasks efficiently.
Here are some of the best data preprocessing tools of 2023:
Microsoft Power BI
Microsoft Power BI is a comprehensive data preparation tool that allows users to create reports with multiple complex data sources. It offers integration with various sources securely and features a user-friendly drag-and-drop interface for creating reports.
The tool also employs AI capabilities for automatically providing attribute names and short descriptions for reports, making it easy to use and efficient for data preparation.
In recent weeks, Microsoft has included Power BI in Microsoft Fabric, which it markets as the absolute solution for your data problems.
Tableau is a powerful data preparation tool that serves as a solid foundation for data analytics. It is known for its ability to connect to almost any database and offers features like reusable data flows, automating repetitive work.
With its user-friendly interface and drag-and-drop functionalities, Tableau enables the creation of interactive data visualizations and dashboards, making it accessible to both technical and non-technical users.
Trifacta is a data profiling and wrangling tool that stands out with its rich features and ease of use. It offers data engineers and analysts various functionalities for data cleansing and preparation.
The platform provides machine learning models, enabling users to interact with predefined codes and select options as per business requirements.
Talend Data Preparation tool is known for its exhaustive set of tools for data cleansing and transformation. It facilitates data engineers in performing tasks like handling missing values, outliers, redundant data, scaling, imbalanced data, and more.
Additionally, it provides machine learning models for data preparation purposes.
Toad Data Point
Toad Data Point is a user-friendly tool that makes querying and updating data with SQL simple and efficient. Its click-of-a-button functionality empowers users to write and update queries easily, making it a valuable asset in the data toolbox for data preparation and transformation.
Power Query (part of Microsoft Power BI and Excel)
Power Query is a component of Microsoft Power BI, Excel, and other data analytics applications, designed for data extraction, conversion, and loading (ETL) from diverse sources into a structured format suitable for analysis and reporting.
It facilitates preparing and transforming data through its easy-to-use interface and offers a wide range of data transformation capabilities. | <urn:uuid:551e4c56-d029-4368-8fd2-7e90e4d105dd> | CC-MAIN-2024-38 | https://dataconomy.com/2023/07/28/data-preprocessing-steps-requirements/ | 2024-09-12T14:57:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00663.warc.gz | en | 0.89097 | 2,517 | 3.0625 | 3 |
Smooth Communication on all Channels
Chat, telephone and video: accessible in the network without interference
Who doesn't know it: A conference call has started, but a conversation partner connection is fraught with video delays and indistinct words; these errors occur repeatedly, the network administrator is contacted. In order to find the error in the multitude of network activities, it helps to "look into the cable" and analyse the connections over network Layers 2 to 7 and their associated packets. Therefore protocols like TCP, SIP, RTP or SSL are examined.
Heterogeneous applications and services take up the bandwidth of a network connection. Multiple participants and services share a data transmission rate that can be too narrow in sum. When users access uplinks of up to 1 GBit/s, this is a significant increase compared to previous years. However, the amount of data to be transferred has also increased with the complexity of applications. Backup, chat, email, telephone and video all run over the same connection and compete with each other. Staff can also connect their own devices and use data-hungry applications. Modern measurement technology for network analysis makes the activities, bottlenecks and disturbances in the network visible to enable detailed investigation and evaluation.
Accessibility of services
In many organisations, text-based communication such as individual or group chat is often used in addition to email. Most users accept that the there are finite delays when using email. With chat on the other hand, a user expects immediate delivery of a message. A user may not understand the long wait for a reply, since the conversation is often short and brief. A network administrator can determine whether it is due to a technical problem if the conversation stalls more than once.
With a dedicated network analysis tool, the network administrator can find the IP address of the chat user and display the corresponding TCP connection protocol data. The TCP connection data are responsible for establishing, checking and terminating the connection. The network analyser detects how long the handshake took in each direction. This makes it clear whether there are latency problems. If services are virtualised, there can be quality problems with the virtual machine which often operates according to the best-effort principle and processes applications in an equal manner. If you launch a backup, delay is less important. However, a chat session is enhanced when the performance is more timely. If the service runs via a virtual machine, a network administrator can determine the fact from the handshake times. If they are too long, the virtual host is not allocating sufficient computing resource to the virtual machine. This proves that the virtual machine is overloaded and cannot handle requests in line with the application and the demands of the user.
If data was transmitted via TCP, the other user's computer should confirm receipt after a short time. If the response time is erratic or delayed, this could be due to network congestion between the server and the client. If the TCP response time increases and fluctuates significantly, then the chat server is either under heavy load or the connection is slow. It is the same for the client-direction. If the response time is very long and the confirmation is erratic, then the chat client or the route is overloaded. A smart network analysis tool helps to keep track of the times of the connections. TCP retransmissions should be considered if the chat user cannot log in at all, the connection is always interrupted or delays are noticeable. Multiple duplicate packets indicate overloaded network components.
If there is poor client or server connection, examining TCP handshakes, TCP response times and reviewing the TCP retransmissions can reveal the root cause of the problem.
Detecting load peaks
If TCP response times are not constant, this may be due to fluctuating network load. Did the network have too much load to process due to unfavourable conditions or are there recurring overloads? With the help of burst analysis, a network administrator can recognise if it is repetitive and caused by systematic load peaks. Burst analysis is a good indicator of network quality and shows the percentage of load on a connection over a given time interval. A burst describes the effect of a large number of transmitted data packets in a short period without a pause, comparable to a traffic jam on the road. But what causes the congestion? For example, extensive requests can saturate the connection and trigger delays to other applications. Data transfers, e.g. from fast SSD hard drives, can hog the entire network capacity. Or, a large number of emails are in competition with updating an individual's smartphone. Additional services such as chat and VoIP may also be running. If there was a data jam for even a millisecond, a switch, router or firewall may buffer or discard existing or incoming packets. This is normal in networks, but it can become problematic and cause services such as chat or VoIP to be disrupted. With the help of a smart network analyser, these load peaks can be displayed. Burst analysis detects which service has sent overly large data streams or triggered traffic at the time of the load peaks.
A network administrator can solve many problems by assigning Quality of Service (QoS) rules. To do this, the network administrator must know which services are used in their own network. QoS describes methods to improve network quality. This is handled via additional bandwidth, bandwidth reservation or packet prioritisation. The specified measures should be monitored and thereby confirm the desired result. Increasing transmission capacity is not always the right solution. In many cases, it is advisable to partition individual services into classes and then allocate them a corresponding bandwidth. Here, the chat programme could be placed in a class with video. In addition to the logical separation, physical separation of the services is an additional option. The telephone network and VoIP should be assigned as a high priority or a dedicated service.
Examine SIP and RTP carefully
Voice and video services as telephony (VoIP) and video conferencing are widely used applications in everyday business life. SIP and RTP are the most frequently used protocols for voice transmission. Session Initiation Protocol (SIP) is responsible for setting up, controlling and terminating sessions. The SIP standard is relatively mature, text-based and can be expanded as required. With a powerful analysis tool, calls and their metadata can be displayed in the SIP statistics section. If a VoIP call is rated as poor quality, this can be traced by a network administrator in the analysis tool. Indicators are the bit rate, sample rate, codec information and other audio parameters. The audio data is processed via RTP (Real-Time Transport Protocol). The RTP packet rate shows whether a connection was dropped or whether the rate was constant. Packets transmitted twice or discarded are also measured and displayed. RTP packets have sequence numbers. If sequence numbers are missing, it can be assumed that there has been packet loss. The reason may be due to a burst or a connection problem. It is also often reported that the conversation partner could not be consistently understood. Audio data is transported in RTP blocks via UDP which is a connectionless transport protocol. In contrast to TCP, it does not receive any confirmation for guaranteed packet delivery. These packets are sent at intervals of 20 or 30ms. Sometimes, UDP packets are not received consistently. Even a difference of 20ms can lead to acoustic problems. An analyser can be used to find out whether this is the case or whether another problem has led to the impaired speech quality. A powerful network analysis tool can display the time stamp, packet loss and the differences in runtime, otherwise known as jitter. To measure jitter, analysis accuracy in the millisecond range is required.
Take a look at video telephony
In addition to VoIP, video telephony, web meetings and webinars are popular forms of communication that soak up network bandwidth; these applications are time-critical. An error usually occurs when the service is needed most. A user complains of choppy sound or jerky images. In the event of problems, the network administrator is called in order to find a solution within a short time. The the network administrator may check the network service connections. Microsoft Teams and Skype are often used in today's business environment. These applications establish SSL-secured connections to their servers. A Skype analysis is difficult, but not impossible. Via TCP connections it is possible to carry out a control traffic diagnosis. With a smart analysis appliance, TCP response times, retransmissions, TCP Zero Window and other indicators can be accurately displayed. Encrypted Skype traffic functions over RTP. The RTP header is unencrypted and provides information about packet loss, latency and jitter. The audio and video content is fully encrypted. Skype utilises dynamic codecs and negotiates independent packet rates. A state-of-the-art network analyser can display many indicators to ensure good SSL connections: the response time for the SSL handshake, the first response time for encrypted SSL data, SSL server name and country code. From this it can be deduced for example, whether the route to a Skype server is the cause of high latency or whether specific SSL connections are rejected.
Many communication services share the bandwidth over a network connection. Problems such as load peaks and broken connections can be investigated using various parameters. Connections, protocols and packets are made visible with professional measurement technology for network analysis. They can also be used to examine traces of past sessions (pcap). Precise measurement and specific analysis can help a network administrator to quickly identify errors and restore services to the benefit of all users. | <urn:uuid:10775815-f59e-40cc-a038-b8cc0073e56f> | CC-MAIN-2024-38 | https://allegro-packets.com/en/papers/Skype-Zoom-MicrosoftTeams-monitoring-analyzing-debugging | 2024-09-15T03:28:14Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00463.warc.gz | en | 0.930734 | 1,931 | 2.734375 | 3 |
The MITRE ATT&CK framework is a publicly available knowledge base of observed adversary behaviors categorized into specific tactics and techniques across an adversary’s attack lifecycle. MITRE ATT&CK provides a taxonomy or vocabulary when discussing cybersecurity incidents or threats. Most importantly, it is an evolving knowledge base that gathers the latest intelligence from the community and updates its models over time.
MITRE ATT&CK (MITRE Adversarial Tactics, Techniques and Common Knowledge)
The MITRE ATT&CK framework was released by the MITRE Corporation in 2015, born of insights from an internal research project (notably Blake Strom’s red team) known as the Fort Meade eXperiment (FMX).
The MITRE Corporation was founded in 1958, an off-shoot of the MIT Lincoln Laboratory. MITRE is a non-profit and oversees federally funded research and development centers (FFRDCs, such as Fermilab) on account of various US Government agencies, including DoD and Homeland Security.
The MITRE ATT&CK framework consists of Tactics and Techniques across the lifecycle of an attack.
Tactics: The different tactics used by an adversary during an attack can be thought of as a sequence of events, almost like a movie. Each tactic represents a goal that the adversary is trying to achieve, and leads to the next goal in the sequence.
Techniques: Techniques refer to the specific tools, processes and steps that the adversary takes to achieve a specific tactic.
The “Persistence” tactic pertains to the adversary’s objective to maintain system access during restarts, changed credentials and other interruptions. MITRE ATT&CK identifies 19 different techniques used to accomplish this purpose — from Account Manipulation (such as modifying account credentials or permission groups, performing iterative password updates to bypass password duration policies, etc.) to Shortcut Modification (create or edit shortcuts during system boot or user login to reference other programs that will be opened or executed). These are techniques that maintain connectivity in the system.
Mitre Att&ck tactics
There are 14 Tactics in the Enterprise framework:
- Reconnaissance: Attempt to gather information for an attack.
- Resource Development: Attempt to create, steal, purchase or otherwise access resources such as infrastructure, accounts, capabilities, etc. that can be used during the attack.
- Initial Access: Gain a foothold in the network through various means such as spearfishing, exploiting public-facing applications, etc.
- Execution: Running adversary-controlled code or modifications to operations
- Persistence: The ability to remain a foothold in the environment through various changes, reboots, etc.
- Privilege Escalation: Gaining higher levels of permission through vulnerabilities, misconfigurations, etc.
- Evasion: Avoid defenses through disabling security software, masquerading malware as approved operations, etc.
- Credential Access: Stealing account names and passwords through credential dumping or keylogging, etc.
- Discovery: Gaining knowledge of the system that the adversary intends to compromise
- Lateral Movement: Move through a remote network once having gained access through legitimate credentials or remote access tools (RATs), etc.
- Collection: Gather information from the systems that is either sensitive in itself or provides further information about the defender’s environment.
- Command & Control: The ability to communicate with devices on the network to control their operation.
- Exfiltration: Stealing the data that has been collected by packaging it and transferring it to adversary-controlled networks or devices.
- Impact: Disrupt availability or compromise integrity of the network and systems themselves such as tampering or destroying data.
Mitre Att&ck Techniques
In each of these 14 tactics, MITRE describes the various techniques that adversaries can use to achieve the tactical objective. In total, in ATT&CK for Enterprise. there are 188 different techniques, which are not all unique to one tactic. Within each of these techniques, MITRE also provides a robust set of detailed information. For instance, for the technique External Remote Services found within the Initial Access tactic, MITRE provides a drill-down option to learn more about:
- Variety of threats
- Groups known to exhibit specific behaviors
- Detection hints
- References to other information sources
- Related mitigations
As you can see from the list of tactics, there is a logical sequence to these tactics. However, these do not necessarily happen in order, nor does each attack have to use each of these techniques.
How to use the MITRE ATT&CK® Framework in Industrial Organizations
The MITRE ATT&CK® framework has rightfully gained widespread awareness and attention within cybersecurity teams around the world. Its structuring of adversary behaviors into different steps based on real-world observations is a significant step forward for defenders of the world’s systems.
However, organizations are also familiar with or leveraging other frameworks such as those from NIST (CSF, 800, etc.), the Cyber Kill Chain introduced by Lockheed Martin or in industrial organizations IEC/ISA 62443, etc.
Questions often arise about what exactly MITRE ATT&CK is, how to best use it, what is the difference between the “MITRE ATT&CK Enterprise” framework and the “MITRE ATT&CK ICS” framework and how it relates to other frameworks an organization is using.
Cyber kill chain vs MITRE ATT&CK
One question that often arises when an organization looks at MITRE ATT&CK is how it compares to the “Cyber Kill Chain” introduced by Lockheed Martin. The “Kill Chain” is taken from the military environment that described the structure of an attack including the identification of the target, moving assets to the target, beginning the attack and completion or destruction of the target. Lockheed adapted this concept to the cyber world, introducing the “cyber” kill chain.
As you can see, the two frameworks have similarities in that they both have steps in an attack and even use some of the same terms such as reconnaissance.
But there are two differences between MITRE ATT&CK and the Cyber Kill Chain. First, the latter is designed to help defenders “break” the chain. If the chain is broken, the attack is defended at that point. So it is a true sequence, whereas MITRE is a series of tactics which may or may not occur in order and may stop at any time but yet achieve that objective. Second, MITRE ATT&CK is at a much more detailed level of granularity provided by the techniques. As MITRE says in its FAQ:
ATT&CK and the Cyber Kill Chain are complementary. ATT&CK sits at a lower level of definition to describe adversary behavior than the Cyber Kill Chain. ATT&CK Tactics are unordered and may not all occur in a single intrusion because adversary tactical goals change throughout an operation, whereas the Cyber Kill Chain uses ordered phases to describe high-level adversary objectives.”
What are the use cases for the MITRE ATT&CK framework?
The MITRE ATT&CK framework is quite exhaustive and will be most useful to those knowledgeable and well-versed in cybersecurity.
Although many look at ATT&CK as a detection tool, in fact, it has a much broader set of use cases, and most are not about real-time monitoring and detection. There are eight broad use cases:
1. Adversary emulation scenario development
The framework, since it is based on real-world observations, allows an organization to develop potential scenarios of how attackers might attempt to compromise and impact their systems.
2. Gap assessment of current controls
By studying the scenarios, an organization can model how its current defenses would hold up against the techniques described in the adversary scenarios developed. Importantly, this is about much more than simply detection. It includes backup and restore, vulnerability and patch management, updated Anti-malware tools, etc.
3. Red-team or table-top planning
The scenarios can assist red-teams and teams creating table-top exercises to build real world attack patterns for the defenders to evaluate against. This can also include the evaluation of the maturity of the organization’s SOC as to whether they can identify the techniques as they are used.
4. Threat detection and monitoring
Threat hunting and monitoring teams can use the framework to ensure that their telemetry and analysis can identify the various techniques and how they link together.
5. Incident response
By providing the real-world examples and data about tactics and techniques, MITRE enables incident response teams to logically work through potential techniques once an incident is reported. While adversaries may use new and unseen techniques, the baseline of those described in the framework can accelerate response and remediation.
6. Current security tool integrations
Defending against the range of techniques in the ATT&CK framework requires a range of tooling and telemetry. The key to effective defense, however, is integrating this defense information into a common database so that organization can determine its protections across the range of tactics the adversary may desire.
7. Threat intelligence enrichment
The depth of information provided by MITRE as part of the framework content can significantly aid threat intelligence teams by providing depth and context of how that intel may display in the real-world environment.
8. Improve communication
The framework provides a common taxonomy to defenders across an organization as well as a way to describe threats to other stakeholders. This common taxonomy is enabled by the widespread awareness of the framework.
MITRE ATT&CK for Industrial Control Systems (ICS)
MITRE ATT&CK now has three different iterations:
Discusses the elements that are present in traditional onformation technology (IT) attacks and scenarios. It is also broken down by operating system (e.g., Windows) and a subsection devoted to cloud.
2. Industrial Control Systems (ICSs)
Discusses the elements that are present in Operational Technology (OT) attacks and scenarios. Unfortunately, it is separate from Enterprise’s ATT&CK framework, but because of the convergent nature of IT & OT, elements can and will overlap.
Discusses the unique adversarial behavior found when attacking iOS, Android, etc.
What is MITRE ATT&CK ICS framework? It is a knowledge base that describes the actions an adversary may use while operating in an industrial control system (ICS) environment. It focuses on post-compromise behaviors in specifically focused on environments where systems have an impact on the physical world and can risk health, safety, environmental impact, etc. It provides an overview of the tactics and techniques that are more likely to be present in OT/ICS environments and attempts to tailor cybersecurity to communities with very different priorities than the audience intended for the Enterprise ATT&CK matrix.
Why do we need another ICS framework
Although ICS systems leverage many technologies common to the Enterprise such as Windows and Linux servers and workstations, they also include many unique devices not found at the Enterprise level. In addition, these systems control physical processes and therefore the impact an adversary may aspire to can have very different consequences than those envisioned in the ATT&CK for Enterprise framework.
Therefore, MITRE undertook to develop a specific framework for these environments. It is heavily focused on what is referred to as “Level 0-2 of the Purdue Model”. For those readers unfamiliar with the Purdue model, it basically describes system levels within ICS or “Operating Technology” environments. Levels 0-2 are those closest to the physical operating sensors, valves, etc. These devices often operate with proprietary, embedded firmware and conduct physical operations to open and close connections or increase temperature or pressure. As a result, the traditional Enterprise techniques did not encompass the type of adversary behavior for these environments.
MITRE ATT&CK ICS is intended to focus on the following types of systems:
- Basic Process Control Systems
- Process Control
- Operator Interface & Monitoring
- Real-Time & Historical Data
- Safety Instrumented System(s) and Protection Systems
- Engineering and Maintenance Systems
What a close reader will notice is that the tactics are very similar to those found in Enterprise, which is a good thing as industrial organizations will need to use both frameworks to cover their entire environment. In ICS, MITRE excludes the two “Pre-ATT&CK” elements of Reconnaissance and Resource Collection as they are covered in Pre-ATT&CK. However, the framework for ICS excludes two tactics from Enterprise and adds two additional ones:
- Removes the Credential Access and the Exfiltration tactics
- Adds Inhibit Response Function and Impair Process Control tactics
The result is 11 Tactics in MITRE ATT&CK for ICS.
Although MITRE ATT&CK for ICS appears relatively similar at the tactic level, the difference, in the techniques is significant. The techniques, even for those tactics that also appear in the Enterprise framework, focus specifically on how an adversary would seek to impact an operating environment. Certainly adding the impairment and inhibiting of process control tactics is an important addition, but the shift in the techniques is where the “action” is. For instance, in the Execution tactic, the ICS framework includes items such as:
- Change operating mode which refers to controllers where the adversary can change it from “run” mode to “program” mode, for instance, which can allow the adversary to make unauthorized changes to the settings, programs, etc.
- Modify controller tasking which refers to changing the settings and commands on a controller intended to adjust the physical process in some way.
And while adding items like the above, the framework for ICS excludes Execution tactics found in Enterprise, such as:
- Windows Management Instrumentation in which an adversary can use the WMI to execute malicious commands.
- Software deployment tools in which an adversary may use tools already in the environment to take advantage of them to make changes in the environment.
But many techniques remain with slight adjustments such as the Scripting technique found in the Execution tactic where the potential scripts in focus in ICS may be different from the Apple scripts, etc. found in the Enterprise.
There are several other differences in the ICS framework from the Enterprise:
- The database used to develop the techniques is beyond publicly available incidents because there is just not the same richness of data for ICS attacks, so MITRE also uses academic research and potential attack vectors from the community.
- In the detailed information it provides organizational units of levels within the Purdue model as well as types of assets to aid users to understand which techniques are applicable to which asset types.
The use cases intended by MITRE include all the ones listed above for the Enterprise framework. In addition, however, there are two additional use cases as described in MITRE’s Philosophy Paper on the ICS framework:
- Development of Failure Scenarios. It can be used to help organizations supplement limited incident data with scenarios based on non-adversary-induced incidents. In other words, use operational disruption scenarios that are not caused by a cyber attacker as a way to model what an adversary might do to replicate those scenarios.
- Educational resources to help bridge the knowledge gap between cybersecurity teams and OT/ICS engineers providing a common language and framework for discussing potential scenarios.
MITRE ATT&CK as a way to understand defensive posture
One of the most compelling use cases for MITRE ATT&CK is using it to evaluate an organization’s current defenses against real-world adversaries. We have worked with industrial organizations to develop robust scenarios of attacks on their systems to evaluate how their current defenses would react in such a scenario. MITRE recognizes the advantages of a suite of defenses to stop an attack. The tactics and techniques highlight the adversary perspective that allows the defender to determine which layer of defenses will be most effective. It also recognizes there is no “one way” that a tactic can be achieved. So effective security requires a defense in depth or similar mindset.
By evaluating current tools, procedures and policies against scenarios organizations quickly see how critical it is to have comprehensive visibility of those defenses in one place. Perhaps the biggest challenge to today’s cybersecurity is the range of tools and organization silos or “towers” in achieving comprehensive security. The ATT&CK framework and the scenarios it enables highlight the gaps where defenses can fall between the cracks of these tools and groups if there isn’t a common view in a single database.
Original content can be found at Verve Industrial. | <urn:uuid:991664e1-dab4-4062-81cd-2d3557ff62c0> | CC-MAIN-2024-38 | https://www.industrialcybersecuritypulse.com/threats-vulnerabilities/what-is-a-mitre-attck/ | 2024-09-08T00:15:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00263.warc.gz | en | 0.933285 | 3,448 | 2.671875 | 3 |
When it comes to summarizing, presenting and describing data in the simplest possible way, descriptive statistics help. They are often called the first and most important step in statistical analysis. Most of the time, an actual analysis starts when an analyst digs into the data and presents some descriptive statistics for the user. Descriptive statistics let us understand the data from just an overview, and when done properly they provide a solid starting point for deeper, more advanced statistical analysis.

In this chapter, we will see how descriptive statistics can be computed in R, with hands-on examples.
What are they?
When we start an analysis, the basic summary statistics that describe the data with single representative values are key. They allow us to understand the data more precisely, with a single value standing in for the whole. Such summary statistics are nothing but descriptive statistics. They include the minimum value, maximum value, range, mean, median, quartiles, interquartile range, standard deviation, variance, and more. Through this article, we will discuss a few of them.
The Data to be Used
In this article, we are using a built-in dataset named "Orange", and you can see what it looks like by simply typing its name at the R console, as shown below:
How to load the dataset into R
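Since the original screenshot isn't reproduced here, a minimal sketch of that step follows; Orange ships with base R's datasets package, so it runs as-is in any standard R session:

data("Orange")   # optional: the built-in Orange data is available automatically
Orange           # prints all 35 rows of Tree, age and circumference
head(Orange)     # or just peek at the first six rows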
The data consists of 35 observations and three variables, namely Tree, age, and circumference. "Tree" is an ordered factor with five levels on a scale of 1 to 5 and identifies the tree on which the measurements were made. "age" stores the age of each tree as the number of days since 1968/12/31. Finally, "circumference" holds the circumference of each tree at breast height, in millimeters.

You can always check the structure of your data using the str() function, as shown below:
Example code for str() function with output
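As a rough sketch of that call, here it is together with abridged output; the exact classes and attributes printed can differ between R versions, since Orange also carries groupedData attributes:

str(Orange)
# Classes 'nfnGroupedData', ..., 'groupedData' and 'data.frame': 35 obs. of 3 variables:
#  $ Tree         : Ord.factor w/ 5 levels "3"<"1"<"5"<"2"<..: 2 2 2 2 2 2 2 4 4 4 ...
#  $ age          : num  118 484 664 1004 1231 ...
#  $ circumference: num  30 58 87 115 120 142 145 33 69 111 ...
# (output abridged)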
Finding out Minimum and Maximum
To find the minimum and maximum values of any variable in a given dataset, R provides the functions min() and max(). The minimum and maximum are crucial, as they give you a rough idea of the spread of the data.
Let us find the minimum and maximum of the “circumference” variable.
Example code for min() and max() with output
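In code, those two calls look like this:

min(Orange$circumference)   # [1] 30
max(Orange$circumference)   # [1] 214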
We can see that the minimum and maximum circumference values among the thirty-five measurements are 30 and 214 mm, respectively.
You can also get the minimum and maximum values together using the range() function in R.
Example code with output for the range() function
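A sketch of the call; the object name rg is just an illustrative choice:

rg <- range(Orange$circumference)
rg
# [1]  30 214
rg[1]  # the minimum
# [1] 30
rg[2]  # the maximum
# [1] 214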
If you notice, the range() function doesn’t actually return the range; instead, it returns the minimum and maximum values as the elements of a vector. You can access these values by indexing, as shown above, since “rg” is an object that holds both of them.
Not yet familiar with functions in R programming? Read our article on Functions in R for a better understanding.
Finding out the Range
The range in statistics is nothing but the difference between the maximum and the minimum value. It gives you a clearer picture of the spread of the data.
Example code for finding out the range with output
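A minimal sketch, also reusing the rg object created above:

max(Orange$circumference) - min(Orange$circumference)
# [1] 184
rg[2] - rg[1]
# [1] 184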
Unfortunately, we don’t have a dedicated function in R that computes the range for us. However, we are free to write one of our own. See an example below:
Example code for creating a function that computes the range
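A sketch of such a helper; the name stat_range() is our own illustrative choice, not a built-in R function:

stat_range <- function(x) {
  max(x) - min(x)  # difference between the largest and smallest value
}
stat_range(Orange$circumference)
# [1] 184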
The Mean or Average
In statistical terms, the mean or average is the sum of all elements divided by the total number of elements. In R, we have a function named mean() that computes the mean of a given set of values.
Let us find out the mean value for circumference under the Orange dataset.
Example code with output for the mean() function
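A minimal sketch of the call, with the output quoted below:

mean(Orange$circumference)
# [1] 115.8571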
We can say that the average circumference across the measurements sampled in our data is 115.8571 mm.
Remember that if any of the values in the dataset are missing, this function will return NA as output. You can pass na.rm = TRUE, as in mean(x, na.rm = TRUE), to compute the mean while ignoring missing values.
The Median
The median is the value at the center of your data, dividing it in half: half of the observations fall below this value and half of them fall above it. In R, we have a function named median() that does the work for us.
Example code with output for the median() function
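A minimal sketch; the value shown is as computed from the built-in dataset:

median(Orange$circumference)
# [1] 115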
The Quartiles
Quartiles are the data points that divide your data into four equal parts, each part representing a quarter of your data. The first quartile cuts off the lowest 25% of the data, the second quartile marks 50% (which is also the median), and so on.
We have a function named quantile() that allows us to compute the first, second, and third quartiles. We just specify the second argument as 0.25, 0.5, or 0.75 to get them, respectively.
Example code with output for the quantile() function
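A sketch computing all three quartiles at once; the values are as computed from the built-in dataset:

quantile(Orange$circumference, c(0.25, 0.5, 0.75))
#   25%   50%   75%
#  65.5 115.0 161.5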
The Interquartile Range
The difference between the third and first quartiles is known as the interquartile range in statistics. We can use the same quantile() function to compute it, as shown below, or use the dedicated IQR() function, which returns the interquartile range of a given variable directly.
Example code that computes Inter Quartile Range
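A minimal sketch of both approaches:

IQR(Orange$circumference)
# [1] 96
quantile(Orange$circumference, 0.75) - quantile(Orange$circumference, 0.25)
# 75%
#  96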
The Standard Deviation and the Variance
The standard deviation is a measure of how far the points in a given group deviate from the group’s mean value.
We can compute the same using sd() function under R.
The variance is nothing but the square of the standard deviation; put the other way around, the standard deviation is the square root of the variance.
We have var() function that computes the variance for the given group of objects.
One thing to note: these two functions always compute the variance and standard deviation treating the given data as a sample (dividing by n - 1). There is no built-in function in R that computes the population variance or standard deviation, although you can rescale the sample versions to obtain them.
Finding standard deviation and variance in R
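A sketch of both calls; the numeric values are approximate, as computed from the built-in dataset, and the last line shows how a population variance could be derived by rescaling:

sd(Orange$circumference)   # roughly 57.49 (sample standard deviation)
var(Orange$circumference)  # roughly 3304.89 (sample variance)

n <- length(Orange$circumference)
var(Orange$circumference) * (n - 1) / n  # population variance, if needed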
The summary() Function
Now, what if I told you that most of the descriptive statistics we have computed above can be generated using a single function in R? That is the beauty here. We have a function, summary(), that gives us the minimum, maximum, mean, median, and first and third quartiles in one call.
Example code with output for the summary() function
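A minimal sketch of the call and its output for the circumference variable:

summary(Orange$circumference)
#    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
#    30.0    65.5   115.0   115.9   161.5   214.0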
Finding out the descriptive statistics for given data is a first step towards statistical analysis.
The min() and max() functions help us to get the minimum and the maximum values for a group of values.
The range() function generates the minimum and maximum values together and those can be extracted by slicing the object.
The mean() and the median() functions compute the mean and the median for us in R.
The quantile() function can be used to compute the quartiles as well as percentiles in R.
The sd() and the var() functions allow us to get the standard deviation and the variance for given data.
The summary() function generates the minimum, maximum, mean, median, and first as well as third quartiles in R.
This is it for this article. In the next one, we will come up with another interesting topic in the field of R programming. Also, have a look at our previous article about dates in R at Dates in R. Until we meet again, stay safe! Keep enhancing! :)
Security is among the most important considerations in network design. Malicious users and attackers are always trying to find ways of compromising the privacy and confidentiality of communication networks, so network designers must always keep security top of mind.
IPv4 was developed at a time when the internet was small: Security concerns were much lower due to the technology’s extremely limited initial deployment. In addition, TCP/IP was originally used only within academia, the government, and large corporations, where there was an attitude of trust and transparency. Any security was simply taken care of by upper-layer protocols.
As the mitigation of network attacks and malicious activities became more and more important, security features were added to IPv4 by employing “add-on” frameworks of encryption and authentication in the form of the Internet Protocol Security (IPsec) suite. IPsec is more than sufficient to secure IPv4 communications, but the fact that it is not inherently a part of the IP protocol specification makes it somewhat more complex and cumbersome to implement.
IPv6 was designed with inherent security features based on the lessons learned from IPv4. This includes designing security as an inseparable part of the IPv6 structure and streamlining and simplifying its implementation. In this article, we look at how IPsec has been integrated into IPv6 and how it ensures secure end-to-end network communications.
What is IPsec?
IPsec is not a protocol itself but rather a framework of multiple protocols, encryption methods, authentication processes, and cryptographic algorithms used to implement authentication, confidentiality, and encryption and to ensure data integrity in IP communications. The framework is broken down into several components and functions, as shown in the following diagram.
Each operation can use a different protocol, encryption method, or algorithm to apply the required security.
Applying IPsec to IPv4
In an IPv4 environment, IPsec components are implemented using additional headers within which the original IP packet is encapsulated. For example, the following illustrates how some of the above-mentioned mechanisms are applied within the IPsec header and trailer.
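As a text sketch, a typical ESP tunnel-mode encapsulation of an IPv4 packet looks like this (everything from the original IP header through the ESP trailer is encrypted, and everything from the ESP header onward is covered by the integrity check):

New IP Header | ESP Header | Original IP Header | TCP/UDP Header | Data | ESP Trailer | ESP Authentication Data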
What those additional headers contain depends upon the protocol, encryption method, authentication, and key exchange algorithm used.
IPv6 extension headers
One of the challenges faced by the designers of the IPv6 protocol was to integrate the security delivered by IPsec into the header structure of the IPv6 protocol without making IPv6 overly complex, cumbersome, and computationally costly to process. The solution was the use of extension headers.
This is a standard IPv6 header:
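In text form, the fixed 40-byte header defined in RFC 8200 contains the following fields:

Version (4 bits) | Traffic Class (8 bits) | Flow Label (20 bits)
Payload Length (16 bits) | Next Header (8 bits) | Hop Limit (8 bits)
Source Address (128 bits)
Destination Address (128 bits)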
Notice the 8-bit field called “Next Header.” The value found in this field indicates whether this IPv6 packet has an extension header and, if so, what kind it is. If there are no more IPv6 extension headers, this field contains a number that indicates the Transport Layer protocol being used. For example, a value of 6 indicates that TCP is used, while a value of 17 indicates the use of UDP.
An extension header is an additional header that is inserted after the main IPv6 header but before the IPv6 packet payload. Multiple extension headers can be added to the main IPv6 header depending on the features to be enabled, which is known as chaining extension headers. The following diagram shows an example of an IPv6 packet with only the main IPv6 header and another IPv6 packet with a chain of IPv6 extension headers.
Like the main IPv6 header, each extension header has a “Next Header” field that indicates if another extension header or the payload follows, which is what allows chaining to be done. Some of the values that can be found in the “Next Header” field include:
- 51: Authentication Header (AH)
- 50: Encapsulation Security Payload Header (ESP)
- 59: No next header
There are currently about a dozen defined extension headers with various values, and more are being added as the need arises. Those that pertain to security are the AH and ESP extension headers listed above.
The Authentication Header (AH) is defined in RFC4302 and performs three primary functions:
- Message Integrity: It provides verification that the IPv6 packet payload remained unmodified during the entirety of its journey from source to destination.
- Source Authentication: The AH allows certification that the source of the IPv6 packet is indeed the source from which the data is expected.
- Replay Protection: Protection against replay attacks is provided using a sequence number field.
These functions are fulfilled using the fields within the AH structure as shown below:
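In text form, RFC 4302 lays the AH fields out in this order:

Next Header (8 bits) | Payload Length (8 bits) | Reserved (16 bits)
Security Parameter Index, SPI (32 bits)
Sequence Number (32 bits)
Authentication Data / Integrity Check Value (variable length)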
Here’s how the information within this header delivers message integrity and source authentication:
- Security Parameter Index (SPI): This field identifies all the packets that belong to a particular connection between source and destination and authenticates the packet’s source. Before beginning communication, the source and destination must negotiate an algorithm and key, which are used to authenticate every IPv6 packet based on the value in the SPI.
- Sequence Number: This counter increments by one for each packet sent, which helps to mitigate replay attacks.
- Authentication Data: This is a variable length field that contains an integrity check value (ICV). Using a specific algorithm, a digest is created upon receipt, and if the digest is the same as the ICV, the packet is considered unmodified and is accepted.
The implementation of AH provides numerous practical benefits to a data stream:
- Integrity: It ensures that the payload has not been changed in transit.
- Origin Authentication: The AH verifies that the data has indeed been sent by the expected sender.
- Replay Protection: It protects the data stream from malicious or fraudulent retransmissions or delays.
It’s important to note here that AH does not provide any data confidentiality: Data sent using AH is not encrypted.
Encapsulation Security Payload Header
The Encapsulation Security Payload (ESP) header is defined in RFC 4303. It can perform the same security functions as AH, but it also adds data confidentiality to the mix. The following diagram illustrates the field structure of the IPv6 extension header used by ESP:
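In text form, the ESP fields defined in RFC 4303 appear in this order:

Security Parameter Index, SPI (32 bits)
Sequence Number (32 bits)
Payload Data (variable length)
Padding (0 to 255 bytes) | Pad Length (8 bits) | Next Header (8 bits)
Authentication Data / Integrity Check Value (variable length)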
The fields shown above perform the following functions:
- Security Parameter Index (SPI): This value identifies the security association (SA) to which this packet belongs. An SA essentially specifies the security properties that are recognized by the communicating hosts. The SPI is an identifier that allows a receiver to map inbound traffic to an SA.
- Sequence Number: As with the Authentication Header, this is a counter that increments by one for each packet sent and is useful to mitigate replay attacks.
- Payload Data: This is the data that is being transmitted securely.
- Padding and Padding Length: These fields are used to properly align the payload data to the size of the IPv6 packet; they are also used for certain encryption algorithms.
- Next Header: As described before, the contents of this field identify either the next header type or the Transport Layer protocol being employed.
- Authentication Data: Has the same function as in the AH, delivering payload integrity.
Again, from a practical standpoint, the use of ESP addresses numerous practical concerns for data transmissions. Like AH, ESP provides integrity, origin authentication, and replay protection. However, importantly, it also provides strong data confidentiality (encryption), ensuring that even if the data is intercepted in transit, it will be unintelligible and ultimately useless for any malicious purpose.
Note that ESP, unlike AH, adds a trailer that contains the Padding, Padding Length, Next Header, and Authentication Data fields. Any further extension headers that are added after the ESP header are encapsulated within the payload data, and the ESP trailer is appended at the end.
Implementing AH and ESP extension headers
Either of the extension headers can be used alone when securing IPv6 communications, or both headers can be applied together to the same IPv6 packet in an extension header chain. AH authenticates IP headers and their payloads, while ESP authenticates only the IP datagram portion of the IP packet and not the header itself. Conversely, ESP delivers confidentiality via data encryption, while AH does not. As a result, the combination of which protocols to use depends upon the security requirements of the particular communication taking place.
Typically, if both AH and ESP are employed, the extension header format will look something like this:
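In text form, using the Next Header values listed earlier, the chain reads:

IPv6 Header (Next Header = 51) | AH (Next Header = 50) | ESP | Payload (encrypted) | ESP Trailer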
Comparing IPsec for IPv4 with IPv6
You may wonder if the IPsec implementation of IPv6 is more secure than that of IPv4. The answer is no: IPsec applied to IPv4 and the integrated IPsec as designed for IPv6 both provide the same level of security in all aspects. The actual security mechanisms haven’t changed; they have simply been incorporated into the IPv6 protocol itself.
The primary benefits here are not enhanced security but rather elegance and ease of implementation. As part of the IPv6 header structure, both ESP and AH extension headers can be added without increasing the complexity of the IPv6 header and without having to change the fundamental structure of the IPsec framework.
IPv6 has integrated the security features provided by the IPsec security framework into its extension header structure, making the implementation of security more streamlined and elegant. The AH extension header can be used to authenticate packets and ensure data integrity while mitigating replay attacks. Similarly, the ESP extension header delivers all of these advantages, plus the encryption of the actual packet payload.
These options can be used individually or together to provide a more granular set of configurable parameters to achieve the level of security required for your particular network and applications.
Testing modern applications with embedded AI features is becoming increasingly critical as these systems integrate sophisticated algorithms into everyday devices. The focus is on ensuring that AI-driven functionalities perform accurately and reliably under various conditions, given the unique challenges posed by constrained environments and real-time demands.
In Part 1 of our series, we explored the world of embedded AI, focusing on how AI integrates into devices for real-time decision-making, the challenges of deploying AI on resource-constrained hardware, and the methods to optimise performance, including model compression and hardware acceleration. We also examined real-world applications from smartphones to autonomous vehicles. In Part 2, we shift our focus to the complexities of testing embedded AI systems. We will cover essential testing frameworks and tools, methods for generating and validating test data, and best practices for ensuring the reliability and robustness of AI systems in real-world scenarios.
Challenges in Testing Embedded AI
Testing AI features in embedded systems involves navigating a range of unique challenges due to the specific constraints and conditions of these systems.
Resource Constraints pose significant hurdles. Embedded systems often have limited computing power, which makes it challenging to test AI models effectively within these constraints. Additionally, energy efficiency is crucial; testing must balance AI functionality with the power limitations of these devices. Storage space is another concern, as holding large AI models or datasets can strain the limited storage capacity of embedded systems.
Real-Time Constraints add another layer of complexity. Many embedded applications, such as those in autonomous vehicles, demand real-time responses. Ensuring that AI algorithms meet low-latency performance requirements is essential. Moreover, maintaining deterministic behavior—consistent execution time and predictable responses—is critical in real-time systems.
Heterogeneous Environments further complicate testing. Embedded systems run on a variety of hardware architectures, including ARM processors, FPGAs, and GPUs. Testing across these different platforms requires specialised approaches. Additionally, these devices often operate in diverse conditions, such as extreme temperatures or high humidity, which must be considered during testing to ensure reliable performance.
Data Challenges also play a key role. Collecting enough data for training and testing AI models can be difficult in resource-constrained environments. Moreover, addressing data bias is crucial; biased data can lead to poor AI performance and unreliable results.
Security and Safety are paramount, especially in safety-critical applications. For instance, AI-based collision avoidance systems in vehicles need thorough testing to ensure real-time performance and robustness against varying road conditions. In healthcare, embedded AI in medical devices like pacemakers or insulin pumps requires rigorous testing to guarantee patient safety. In industrial IoT settings, AI-enabled sensors used in factories must be tested for reliability, latency, and robustness in harsh environments.
As embedded AI continues to evolve, these challenges highlight the need for innovative testing methodologies and tools to ensure that these systems are both effective and reliable.
Exploring Various Testing Frameworks and Tools
Testing frameworks and tools are essential for ensuring the reliability and performance of embedded AI applications. Here’s a look at some key testing frameworks, their importance, and real-world use cases.
Testing Frameworks provide structured approaches to validate embedded systems. CppUTest is a lightweight framework tailored for C/C++ environments, offering features like test fixtures, mocking, and assertions, making it ideal for resource-constrained systems. Unity is another C unit testing framework that emphasises simplicity and minimal overhead, well-suited for constrained devices. Google Test (gtest), although initially designed for general-purpose systems, can be adapted for embedded testing, providing powerful assertions and test discovery capabilities.
Unit Testing plays a crucial role in ensuring that individual components, such as functions or classes, work correctly. It helps identify bugs early and maintains code quality. For example, unit testing can be used to verify an embedded AI model’s inference function to ensure it produces the expected outputs.
Integration Testing is vital for validating how different components interact with each other. It ensures that the various parts of the system work together seamlessly. An example of integration testing is checking the communication between an embedded AI module and sensors, such as a camera or lidar, in an autonomous drone.
Performance Testing assesses how responsive and efficient a system is, including its resource usage and scalability. For instance, performance testing can measure the latency of an AI-based gesture recognition system in a wearable device to ensure it meets real-time requirements.
Hardware-in-the-Loop (HIL) Testing simulates real-world hardware interactions, validating the entire system, including embedded AI components. HIL testing can be applied to an AI-powered medical device, like an insulin pump, using simulated physiological inputs to ensure it operates correctly under realistic conditions.
In the automotive industry, testing AI-based collision avoidance systems requires rigorous real-time performance validation and robustness against varying road conditions. In healthcare, embedded AI in devices such as pacemakers or insulin pumps must undergo thorough testing to guarantee patient safety. For industrial IoT applications, AI-enabled sensors used in factories need testing for reliability, latency, and robustness in harsh environments.
Data Techniques for Embedded AI Testing
When testing embedded AI systems, data strategies play a crucial role in ensuring robust and accurate performance. Here’s how various data techniques are applied:
Data Augmentation involves creating variations of existing data by applying transformations such as rotation, scaling, or adding noise. This technique enhances the model’s robustness by exposing it to a wider range of conditions. For example, in an embedded face recognition system, augmenting images with different lighting conditions helps improve accuracy, ensuring the system can recognise faces in various environments.
Synthetic Data is generated through algorithms or simulations to compensate for the lack of real-world data. This method is especially useful when collecting real data is challenging. For instance, simulating sensor data like lidar scans can be used to test an autonomous drone’s obstacle avoidance AI, providing the diverse scenarios needed for comprehensive evaluation.
Transfer Learning involves fine-tuning pre-trained models on domain-specific data. This technique leverages knowledge from related tasks to adapt models for new applications. For example, a pre-trained image classification model can be adapted for detecting plant diseases in an embedded system, making it effective in a new but related context.
Edge Cases and Anomalies test the AI system’s performance under rare or extreme conditions. This approach is critical for stress-testing. For instance, validating an embedded speech recognition model with non-native accents or in noisy environments ensures the system can handle challenging real-world scenarios.
Quantitative Metrics define evaluation standards such as accuracy, precision, and recall for assessing AI predictions. For example, evaluating an embedded fraud detection system’s false positive rate helps measure its effectiveness in real-world use.
Recent applications highlight the importance of these data techniques. Smart home devices, like voice assistants, are tested with diverse user queries and accents to ensure they understand varied speech patterns. Similarly, wearable health monitors validate heart rate prediction accuracy across different skin tones and activities, ensuring their reliability in real-world conditions.
Best Practices for Testing Embedded AI
Ensuring robustness, reliability, and security in embedded AI systems involves several key best practices.
Edge Case Testing is crucial for uncovering vulnerabilities and unexpected behavior in extreme scenarios. For example, validating an autonomous drone’s obstacle avoidance AI in dense fog or with sudden obstacles can reveal weaknesses that regular conditions might not expose.
Stress Testing evaluates how well a system performs under heavy loads or adverse conditions. Testing an embedded AI-based traffic management system during peak traffic hours and unexpected congestion ensures it can handle real-world demands effectively.
Continuous Monitoring is essential for detecting anomalies, drift, or performance degradation in deployed AI models. For instance, monitoring a predictive maintenance system for industrial machinery helps prevent breakdowns by identifying issues before they escalate.
Security Considerations are vital to protect against attacks such as adversarial inputs or model inversion. Ensuring that an embedded facial recognition system can withstand spoofing attempts, like photos or masks, is crucial for maintaining its integrity.
Recent data points highlight the importance of these practices. For example, testing voice assistants like Alexa or Google Home with diverse user queries and accents ensures their robustness. Similarly, validating the accuracy of wearable health monitors across different skin tones and activities ensures reliable health predictions for all users.
In summary, effective testing of embedded AI systems is critical for ensuring their performance, reliability, and security in real-world scenarios. Addressing the unique challenges of resource constraints, real-time demands, and diverse operating conditions requires a multifaceted approach. By leveraging targeted testing frameworks, advanced data techniques, and best practices such as edge case and stress testing, we can enhance the robustness of AI applications. Continuous innovation in testing methodologies will be essential as embedded AI continues to advance, ensuring these systems meet the highest standards of accuracy and safety in their practical applications.
Key Takeaways
Importance of Testing Embedded AI: As AI integrates into everyday devices, ensuring accurate and reliable performance in constrained and real-time environments becomes crucial.
Part 1 Recap: Explored AI integration, performance optimization techniques (like model compression and hardware acceleration), and real-world applications from smartphones to autonomous vehicles.
Focus of Part 2: Shifts to testing complexities for embedded AI, covering frameworks, data techniques, and best practices.
Challenges in Testing Embedded AI:
- Resource Constraints: Limited computing power, energy efficiency, and storage capacity.
- Real-Time Constraints: Need for low-latency performance and deterministic behavior.
- Heterogeneous Environments: Testing across various hardware architectures and operating conditions.
- Data Challenges: Limited data availability and data bias.
- Security and Safety: Ensuring robustness against attacks and meeting safety standards.
Testing Frameworks and Tools:
- CppUTest: Lightweight C/C++ framework for constrained systems.
- Unity: Simple C unit testing framework.
- Google Test (gtest): Adaptable for embedded testing with powerful features.
- Types of Testing: Unit testing, integration testing, performance testing, and Hardware-in-the-Loop (HIL) testing.
Data Techniques for Testing:
- Data Augmentation: Enhances model robustness by creating data variations.
- Synthetic Data: Used to compensate for limited real-world data.
- Transfer Learning: Adapts pre-trained models for new tasks.
- Edge Cases and Anomalies: Stress-tests AI systems under rare conditions.
- Quantitative Metrics: Measures accuracy and effectiveness.
- Edge Case Testing: Uncovers vulnerabilities in extreme scenarios.
- Stress Testing: Assesses performance under heavy load.
- Continuous Monitoring: Detects anomalies and performance drift.
- Security Considerations: Protects against attacks and ensures system integrity.
Recent Data Points:
- Smart Home Devices: Tested with diverse queries and accents.
- Wearable Health Monitors: Validated for accuracy across different skin tones and activities.
Conclusion: Effective testing of embedded AI requires a comprehensive approach using targeted frameworks, advanced data techniques, and best practices to ensure robust, reliable, and secure systems.
With the Internet of Things, it is no longer enough to consider only form when it comes to product design. This is because you are no longer dealing with purely physical products. Instead, you need to take into consideration the fact that these things now use data and are connected to people and other devices. It is no longer just an ordinary refrigerator, but one that informs the owners what they need to buy at the store. It is no longer just a car, but one that can drive itself or help you avoid collisions.
You are designing for both the physical and the data that are received and sent via these products. Form and function have never been so important in designing for the Internet of Things.
First off, you need to remember the traditional rules of product design. You should still design for simplicity, innovation, usability, manufacturability and quality. As we have mentioned before, you would still be designing for the tangible aspects of the product.
You would, however, need to do more for each element. For instance, usability now includes designing for upgradability. This means that you would need to think about how users would be able to upgrade their products when they upgrade the software that runs those products. Your design should make the product easier to learn and use as time goes by. For example, the Tesla Model S has a dashboard that owners can upgrade like a smartphone.
Also, when you say simplicity, your design should also help users save time. Simple products are easier for users to use and understand, and with the Internet of Things, simple products also need to be easy to learn, saving the users hours of trying to learn how their devices work.
Then you would also need to design for communication when you design for quality. IoT devices need to be able to connect to the internet securely and without fail. They also need to generate consistent, accurate, real-time data and be compatible with devices created by other manufacturers. As such, when designing for quality, you need to understand how the device generates data and how that data will be used by other devices and platforms.
Innovative designs for the Internet of Things would include designing for discovery. When you design for the Internet of Things, you would need to make sure that you take advantage of opportunities that would help you get to know your users better. Ford Motor, for instance, gives its users access to OpenXC, which allows them to create experimental accessories and applications by themselves. This way, you would see what your users are interested in.
Lastly, you would need to consider how to create insights when you design for manufacturability. The beauty of the Internet of Things and the sensors and analytics that accompany it is that it allows you to get data, analyze it and turn it into something actionable. All of that data and insights can help designers come up with better products that can learn, measure and decide on its own.
The Internet of Things is forcing industrial design professionals to work harder to make sure that their designs remain relevant over time. If you are delving into the Internet of Things, be sure to contact Four Cornerstone and find out about the latest technologies that are available to you.
Ransomware seems to be everywhere. We see incidents of ransomware against powerful companies in the news and we even see dramatic portrayals of ransomware attacks on TV. But what exactly is ransomware and how can it be prevented?
Ransomware is a type of malware-based cyberattack in which the attacker blocks the use of your systems or data until a ransom is paid. More recently, modern attackers encrypt your files and demand payment in online currencies like Bitcoin to decrypt them. This kind of ransomware is categorized as crypto ransomware.
Like any cyberattack, the effects of ransomware can be costly for your organization. Even if you pay the ransom, in many cases you will still lose a portion of your data. Many small and medium-sized businesses are not prepared for such an attack, since they do not have the right security services in place. No one is immune to becoming a victim of a ransomware attack, but there are ways to prevent it.
Aurora’s team of engineers provide routine assessments that can ensure that your cyber environment is secure. Incorporating cybersecurity assessments into your organization can help prevent your employees and customers from falling victim to attacks like ransomware. Contact us at email@example.com to learn more about the consulting services we provide.
Governments and tech giants around the world are spending billions to fund quantum computing research. It’s easy to see why: the scientific and mathematical breakthroughs it promises are mind-blowing. Not without reason has the significance of quantum computing been compared to the harnessing of electricity.
There’s very clear motivation for driving quantum innovation. It could unlock growth, job opportunities and competitive advantage on a whole new scale, by offering a new way to solve – in microseconds – the problems that even the most powerful supercomputers today cannot. Yet in doing so, these computers could also open a Pandora’s Box that breaks the asymmetric, or public key, cryptography on which many enterprises, societies and entire digital economies rely.
The question is, how long is it before this vision becomes a reality, and how can organizations and governments transition to quantum safety?
From Theory to Practice
Quantum computing is based on the theory of quantum mechanics, pioneered in work by Albert Einstein that won him the Nobel Prize. To the untrained eye, it seems to defy logic.
"The future of cybersecurity will be quantum-based"
Quantum particles don’t behave according to the traditional rules of classical physics. They do strange things like existing in two places at once.
When applied to computing, these features get even more interesting. While today’s computers process and store information as zeros and ones, quantum computers use qubits (quantum bits), which can be a zero and a one at the same time. By encoding one and zero simultaneously rather than sequentially, the time it takes to process data, make calculations and solve problems is greatly reduced. It is this capability that excites and dismays in equal measure: it could enable governments to break any asymmetric encryption algorithm in the blink of an eye, but it would also allow hostile states and possibly a few well-funded cybercrime groups to do the same to citizens and businesses.
A Step-Change in Security
According to William Dixon, head of future networks and technology at the World Economic Forum (WEF), quantum computing in this context could undermine the key exchanges, encryption and digital signatures that protect financial transactions, secure communications, e-commerce, identity, electronic voting and much more.
“If quantum computers were to become, for example, capable of breaking asymmetric cryptography before the digital ecosystem has achieved the necessary transition to quantum safety, it would create significant cybersecurity risks,” he tells Infosecurity. “Businesses and governments could be left unable to ensure the confidentiality, integrity and availability of the transactions and data on which they rely upon.”
This will mark nothing short of a significant step-change in cybersecurity, according to Nelson Balido of consultancy Balido & Associates.
“The future of cybersecurity will be quantum-based. The impact will be that the entirety of our cybersecurity networks in the US should and will be reliant on quantum technology,” he tells Infosecurity. “You will have to fight fire with fire. The only way we can ensure that our data is safe is by matching our systems to any potential threat and incorporate quantum computing into our encryption processes. Today’s computing power versus quantum is like walking versus a jet plane.”
Fortunately, researchers are working on ‘quantum-safe’ cryptography methods to counter the crypto-busting threat from quantum computing. As long ago as 2018, UK public-private partnership the Quantum Communications Hub announced an “unhackable” quantum-secured network using quantum key distribution. In this set-up, photons are used to transmit data encryption keys across fiber links and through the air. The sender is alerted if the stream is interrupted and it can then be scrambled.
"Today's computing power versus quantum is like walking versus a jet plane"
It’s also true that symmetric (rather than asymmetric) encryption should be able to cope with quantum computing advances. However, the relevant algorithms will require longer keys and hash functions to ensure quantum safety, which will make operational cryptography more complex.
The Road to Quantum Safety
Although NIST is working on a post-quantum standardization project, there will still be numerous implementation challenges to overcome, according to Dixon, even if quantum threats can be neutered.
“The transition to a quantum-secure architecture will not be trivial for the global economy, and both individual and collective governance of the transition will certainly be an issue. Shared infrastructures, interconnected systems and interdependent business models, like the industrial internet of things (IIoT) or distributed-ledger technologies, that are being rolled out across a range of industrial applications, have highly distributed models. It is not necessarily clear who is responsible for ensuring that they are made quantum-safe,” he argues.
“There is already some emerging disparities in approaches, where entities such as the National Cyber Security Centre in the UK are cautioning against moving to quantum-safe methods until the standardization process by NIST is completed. Yet some major enterprises are already moving forward to mitigate any potential long-term risk and implementing solutions despite this guidance.”
When is it Time to Worry?
However, there remain hurdles to continued progress. Current quantum computers are pretty small and ‘noisy,’ meaning qubits don’t remain stable and entangled long enough to run useful calculations. To address the problem, they need to be kept extremely cold (about 250 times colder than deep space), which comes with its own challenges. That said, companies including IBM, Google, D-Wave, Microsoft, Honeywell, IonQ and Rigetti are developing the hardware, while engineers around the world are building algorithms for use on these quantum machines, according to Forrester senior analyst, Chris Sherman.
The big question is: how far away are the kinds of scenarios painted above?
“CISOs must pay attention to performance metrics like ‘quantum volume’ and qubit numbers discussed by hardware vendors, since device performance is correlated with the likelihood of a quantum computing security threat,” Sherman tells Infosecurity.
“Organizations can determine their exposure to quantum risk using a simple formula. If the length of time needed to move to a new quantum-safe infrastructure, added to the amount of time existing vulnerable assets will be exposed, exceeds the predicted amount of time until a quantum computer will compromise cryptosystems, your risk is captured by the remaining exposure time.”
He claims it will take around 15 years before quantum computers can run Shor’s algorithm at a scale that breaks asymmetric encryption. However, it could be even sooner: Google is planning to achieve a one-million-qubit computer within 10 years. According to WEF’s Dixon, for Shor’s algorithm to work on RSA 2048 encryption, it would require sufficiently fault-tolerant performance from just 6200 qubits.
“For many enterprises and industries with complex systems and sensitive data that have long life spans, this is actually a very short timeframe,” he argues.
What Should CISOs Do Now?
Given these relatively short periods, it’s essential that CISOs start planning now. Forrester’s Sherman argues that they should be conducting tabletop exercises to map exposure and estimate the value of corporate data to adversaries – re-evaluating risk every six to twelve months according to the formula outlined above. Five years hence, they should be utilizing quantum technology to secure sensitive networks and replace public key encryption systems, with a goal of making all encryption post-quantum safe in a decade, he adds.
Lux Research senior research associate, Lewie Roberts, claims an awareness campaign is needed to get CISOs to take the threat seriously.
“It will take some time for quantum computing developers to reach a point where some of the most interesting quantum algorithms will be possible to run, including those that we know will pose security threats. However, the near-term is still full of a lot of unknowns,” he warns Infosecurity.
“Everyone is figuring out what is possible with the currently available hardware, which is resource-limited. CISOs would do well to get up-to-speed on the known threats and consult with security vendors to make sure that there is a plan to cover these new attack vectors at the appropriate time. Vendors should partner with quantum information experts to make decisions about how soon specific protections need to be developed.”
"It will take some time for quantum computing developers to reach a point where some of the most interesting quantum algorithms will be possible to run"
In the end, the challenge for CISOs could be an age-old one: persuading senior business leaders to invest in mitigations now, when there’s still debate over exactly what the material impacts of quantum research will be and when they’ll occur. Industry and government partnerships will be vital to get the message across, argues WEF’s Dixon.
“A vital step will be building ‘quantum literacy’ at a leadership level, educating leaders on the development of quantum technology and the potential benefits and risks it could create for their organization and sectors,” he concludes. “It is important that the security community has a rational and balanced discussion about the potential risks and mitigations, especially when engaging senior leadership.”
The fight to bring science, technology, engineering and mathematics to the forefront of American education was front and center Tuesday at the Newseum, as FedScoop brought leaders from education, government and industry together for its first-ever Tech Town Hall.
A host of panelists tackled STEM from a number of angles, including ways STEM professionals are rapidly advancing technology, how STEM careers can be promoted throughout underprivileged schools and how students can be sold on STEM’s “cool factor.”
“I think we all have a responsibility to help push our sons and daughters to really think outside the box,” Ellen McCarthy, chief operating officer at the National Geospatial-Intelligence Agency, said during a panel focused on engaging women in STEM. “It is imperative that we keep in front of this incredible technology revolution we are in right now.”
While the number of students interested in computer science is growing, Pat Yongpradit, director of education for Code.org, said he has talked to a number of college professors who say their students are just not prepared for computer science courses. He said this is partly due to an “opportunity problem” at the high school level, where only one in 10 high schools even offer computer science classes.
“In half of our states, you can take computer science and it counts the same as cooking,” Yongpradit said.
In order to better prepare students, Yongpradit unveiled Code.org’s new K-5 platform, which consists of three new tracks (with 20 lessons each) aimed at teaching computer science to students ages 4-6. The program also reinforces math, science and english education standards at each level.
“The coolness is built into our curriculum,” Yongpradit said. “We’re trying to make things cooler for kids.”
A number of panelists want people to rethink of STEM outside of its cool factor and how it can better combined with the nation’s education policy.
“One of the things that is lost in the debate is that so much of the policy in place treats STEM education like its sole purpose is to churn out rocket scientists,” said James Brown, executive director of the STEM Education Coalition. “Fifty percent of STEM jobs don’t require a four-year education.”
“Policymaking in D.C. requires a good deal of creativity,” said Kumar Garg, the assistant director for learning and innovation at the White House. “How do we get American kids from the middle of the pack to the top of the pack in science and math achievements?”
There was a bevy of suggestions to that question, ranging from overhauling No Child Left Behind to instituting more programs outside of the classroom. Camsie McAdams, deputy director of the Education Department’s Office of STEM, said her team is working on getting funding to implement a number of STEM-focused programs and initiatives.
“We aren’t just engaging women, people of color and students with disabilities,” McAdams said. “We would really like to sustain youth engagement in STEM.”
In this month’s installment of Colocation America’s Frequently Asked Questions in Technology, we cover some of the more obscure technology concepts that may not be as commonly examined or discussed. These new technology concepts have the potential to be important in the near future. This article covers Cognitive Technology, the Internet of Behaviors, Hyperautomation, Metaverse, and Zero Trust security.
What Is Cognitive Technology?
The term artificial intelligence has been around since the mid-1950s and has become more popular in the past couple of decades. Many different industries are incorporating AI into their operations in a variety of different ways, and depending on who you ask, the definition of what AI is can also vary.
The buzzword “cognitive technology” can be thought of as covering the byproducts of artificial intelligence. Cognitive technology can accomplish tasks that, at one point in time, only humans could do. It includes machine learning, speech recognition, natural language processing, and some robotics. Cognitive technology imitates human thinking, decision-making skills, and even personalities. Some well-known examples of cognitive technology are Siri, Alexa, Cortana, and Google Assistant.
What Is the Internet of Behaviors?
By now, you’ve most likely heard of the Internet of Things, which describes the network of connected devices. These physical objects are fixed with sensors, software, and various technologies that allow for the collecting and exchanging of data with other devices and systems via the internet.
The Internet of Behaviors expands the reach of IoT and all interconnected devices. It refers to the gathering of data that will be used to “link a person digitally to their actions.” It combines data analytics with behavioral science to map out customer behavior, using a variety of inputs including facial recognition, location, and other data. By combining technology, data analytics, and behavioral science, companies can get a personalized view of which marketing and sales tools work best. The Internet of Behaviors will also incorporate artificial intelligence to further extend this reach.
As businesses get smarter in the way they market goods and services to the public, consumers will need to take cybersecurity and all that comes with it more seriously. While this may be a good tool for businesses, it could potentially be problematic for customers. Regardless, the Internet of Behaviors looks to be an important technology to watch in the coming years.
What Is Hyperautomation?
Hyperautomation is the idea of automating everything in a company that can be automated. It is a more deliberate and calculated approach to automation. Traditional automation refers to technology applications that can aid in limiting human input. This is done to improve efficiency in several different aspects of business and everyday life. There are several different examples of automation in everyday life including robot gas pumps, smart lights, electronically automated dog doors, self-parking systems, and application-controlled homes.
Hyperautomation takes automation to another level by being more calculated and deliberate about how it is applied. This is also done by choosing appropriate automation tools and applying artificial intelligence and machine learning technologies in a more studied approach. Building a more focused process can further the reach of automation and how it affects operations. Hyperautomation can be applied in many ways especially in healthcare, supply chains, banking and finance, retail, and more. We haven’t fully seen what automation can do, but the concept of hyperautomation is another step towards its full potential.
What Is Metaverse?
The recent news of the Facebook corporation changing its name to Meta has many people, including news outlets, abuzz. Meta is a direct reference to the concept of the “metaverse”. This idea traces back to the 1992 science fiction novel Snow Crash by Neal Stephenson. While the idea is three decades old, the theory and its application are still in their early stages.
Metaverse can potentially be a combination of several different things including a computer-generated virtual reality space that also incorporates augmented reality creating an Extended Reality (XR) platform. It will also incorporate social media and gaming within this virtual reality space. Combining all of these various technologies could potentially bring the newest iteration and the future of the internet.
Mark Zuckerberg states that “the metaverse will be the successor to the mobile internet”. Users will be able to build virtual spaces for home and work allowing people to meet and teams to collaborate from anywhere in the world. The Metaverse will change the way we communicate and the way we interact with the world.
What Is Zero Trust Security or Zero Trust Architecture?
We are living in the digital age, where most companies have an online presence in one form or another; in the United States, 30.8% of business was conducted online. The importance of cybersecurity is more evident than ever before. Zero Trust is a security architecture that requires all users, both inside and outside of a company’s network, to be authenticated, authorized, and validated before being given access to any applications or information. This additional layer of security never assumes a person should have access: everyone must be verified before being given access to any company information. By eliminating implicit trust, the system adds another layer of security.
Zero Trust architecture will be especially beneficial for the remote workforce. By incorporating this type of architecture, remote workers will need to be authenticated every time they access any mission-critical data or applications. Zero Trust is an enterprise-wide strategy to eliminate risk to a company. For a Zero Trust architecture to work, it should be applied both to remote workers and to those working onsite. Zero Trust security could be beneficial to many companies, especially as cybersecurity threats increased more than 300% from 2019 to 2020.
Whether it’s Mark Zuckerberg’s “Meta”, cognitive technology, the Internet of Behaviors, hyperautomation, or Zero Trust security, these technology concepts could be an important part of the future of technology. Understanding these “buzzwords” can be beneficial for understanding future related technologies. If you have any questions about any new technology or technology concepts, connect with us today.
The United States is experiencing a dynamic shift in the realm of consumer privacy, with state-level actions leading to a complicated tapestry of legislation. As each state introduces its own form of consumer privacy law, businesses and consumers alike are left to grapple with a labyrinth of regulations that affect how personal data is collected, processed, and protected.
The Inspiration from Abroad and Domestic Pioneers
Impact of GDPR and Initial US Reactions
The ripple effects of the EU’s General Data Protection Regulation (GDPR) have been felt across the Atlantic, influencing a new wave of privacy legislation on U.S. soil. The GDPR’s robust approach to data protection and individual privacy rights has served as a benchmark for U.S. states looking to augment their own laws. Meanwhile, California emerged as a front-runner in the domestic sphere with the introduction of the California Consumer Privacy Act (CCPA). This precedent inspired other states to embark on drafting their bespoke privacy statutes, contributing to the burgeoning patchwork of state laws.
Evolution and Divergence of State Privacy Laws
With fifteen states enacting unique consumer privacy legislations, the pursuit of protecting citizen data has led to a rich diversity of approaches. These state laws encompass various components, from consent mechanisms to the rights afforded to individuals, which in turn presents intricate compliance puzzles for data controllers and processors. The task of aligning business practices with each state’s requirements is daunting, as variations can be both subtle and significant.
The Patchwork of Privacy Regulations Across States
Consent in Data Processing and Its Nuances
The concept of ‘consent’ in data processing has become a focal point in the privacy regulations of many states. Often, the laws require mandatory opt-in consent before sensitive personal data can be processed—a departure from the CCPA, which generally allows for an opt-out model. States such as Texas and Florida have taken a particularly stringent stance, requiring explicit consent for the sale of sensitive data, which includes information like biometrics, geolocation, and health records.
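The operational consequence of this opt-in versus opt-out split can be sketched in a few lines of Python. The state rules below are simplified placeholders for illustration, not legal guidance; actual compliance logic must follow the statutes themselves.

```python
# Toy consent gate showing the difference in defaults: under opt-in,
# silence means "no"; under opt-out, silence means "yes" until the
# consumer objects. Rules here are deliberately oversimplified.

CONSENT_MODEL = {
    "CA": "opt-out",  # CCPA-style default for many data sales
    "TX": "opt-in",   # explicit consent before selling sensitive data
    "FL": "opt-in",
}

def may_sell_sensitive_data(state, opted_in=False, opted_out=False):
    model = CONSENT_MODEL.get(state, "opt-in")  # unknown state: assume stricter
    if model == "opt-in":
        return opted_in
    return not opted_out

print(may_sell_sensitive_data("TX"))                  # False: no consent given
print(may_sell_sensitive_data("CA"))                  # True: no objection yet
print(may_sell_sensitive_data("CA", opted_out=True))  # False
```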
Varied Approaches to Children’s Data and Offline Protection
When it comes to children’s data, states have demonstrated varying degrees of protection, each setting their own age thresholds and consent provisions. These state-specific laws mirror and sometimes expand upon federal standards established by the Children’s Online Privacy Protection Act (COPPA), emphasizing a commitment to safeguarding this particularly vulnerable segment of the population. States have recognized the importance of these protections extending beyond the internet, addressing the need to secure offline data as well.
Consumer Rights and Privacy by Design
Emphasizing the Need for Proactive Measures
The movement toward privacy-by-design principles underscores the proactive measures now expected of organizations. This approach demands that privacy considerations be embedded into the development of business practices and technological platforms from the outset. Several state laws emphasize the necessity to conduct privacy impact assessments and keep accurate compliance documentation, underscoring the shifting ethos from reactive privacy compliance to an anticipatory stance on data protection.
The Significance of Sensitive Data Classifications
Sensitive data categories, ranging from genetic and biometric to precise geolocation and health information, are benefiting from an extra layer of protection under various state legislations. These categories often require explicit opt-in consent before usage, highlighting their elevated privacy risks. Such classifications emanate from a recognition that not all data is created equal and that certain types warrant more stringent oversight.
The Compliance Challenge in the Emerging Privacy Landscape
Discrepancies in State Definitions and Regulations
The variance in how states define and regulate personal data creates a tangled web of compliance obligations. Some states have uniquely tailored their legislations, casting different nets over what constitutes sensitive health data or the scope of consent required from minors. This has left organizations with the arduous task of parsing through each regulation’s specificity, aiming to ensure that their operations do not run afoul of the disparate laws.
Toward a Possible Federal Standard?
With various states spearheading this movement through their own unique regulations, there is no unified national standard, and businesses and consumers are struggling to navigate the resulting web of legal frameworks. Companies that operate across state lines face the challenge of managing disparate privacy requirements, which makes it difficult to align practices with multiple sets of rules for the acquisition, handling, and safeguarding of personal information.
Consumers, on the other hand, must familiarize themselves with their rights, which vary significantly depending on their location. This shifting landscape underscores the intricate balance between consumer rights to privacy and the operational needs of businesses in the digital age. The impact of these state-level initiatives is substantial, often prompting calls for a comprehensive federal privacy standard to simplify the regulatory environment and provide clear, consistent protections for American consumers’ personal data. Until such a standard emerges, the patchwork of state laws will continue to shape America’s privacy boundaries, making compliance a moving target for companies and creating a climate of uncertainty around data privacy practices.
To combat a growing range of cyber threats, enterprise leaders and cybersecurity professionals often employ tabletop exercises as a valuable tool to enhance preparedness and response capabilities. Tabletop exercises simulate real-world cyber incidents in a controlled environment, allowing organizations to test their incident response plans, evaluate team coordination, and identify vulnerabilities.
As the threat landscape shifts, though, it is essential to continuously improve tabletop exercises so that they remain effective. Without the right strategy in place, organizations may not find value in their tabletop exercises. Adapting to the changing cybersecurity landscape requires security teams to incorporate the most current emerging threats, technologies, and attack vectors into these exercises.
This blog post discusses how modern enterprises can build their tabletop strategy to meet a changing threat climate and ways to overcome common challenges associated with the exercises. It also covers how tabletop exercises will transform in the future and how businesses can continue to derive value from such tools.
From Military Roots to Cyber Defenses | Defining Tabletop Exercises
Tabletop exercises (TTX) have a rich history in the realm of cybersecurity, dating back to the early days of military and emergency response planning. Originally used to simulate military campaigns and disaster response scenarios, TTXs gradually found their way into the cybersecurity domain. These exercises were initially developed to assess an organization’s ability to respond to physical security incidents, but as cyber threats became more prevalent, their focus expanded to include cyber incidents.
TTXs in cybersecurity typically involve a simulated scenario where participants gather in a controlled environment to collaboratively respond to a fictional cyber incident. The scenario is crafted to mimic real-world situations and may include elements like phishing attacks, data breaches, ransomware infections, or network intrusions. Participants, representing various roles within the organization, such as IT personnel, executives, legal advisors, and public relations representatives, engage in discussions and decision-making processes to address the unfolding incident.
The exercises can take different forms, ranging from informal discussions to more structured and time-constrained simulations. Facilitators guide the exercise, presenting new challenges and information as the scenario progresses, and participants must work together to assess the situation, make decisions, and develop an effective response plan. These exercises allow organizations to evaluate their incident response procedures, identify gaps and weaknesses, and refine their strategies to improve preparedness.
By simulating cyber incidents in a controlled environment, TTXs provide a safe space for learning, fostering collaboration among team members, and enabling the exploration of alternative approaches. They help organizations identify strengths and weaknesses in their incident response capabilities, assess communication channels, and uncover areas for improvement. Additionally, tabletop exercises offer the opportunity to test and validate incident response plans, refine coordination between different teams, and enhance overall cyber resilience.
Understanding the Relevance of Tabletop Exercises In Today’s World
Cyber threats have become more sophisticated and frequent, making tabletop exercises a highly useful tool for organizations. While new solutions provide advanced security measures, cybercriminals continue to exploit vulnerabilities and develop new attack vectors. This makes it essential for organizations to regularly assess and enhance their preparedness to combat cyber threats.
TTXs provide a controlled environment to simulate real-world cyber incidents and test an organization’s response capabilities. The relevance of TTXs to modern security practices can be broken down into these main areas:
- Risk Management – TTXs allow security teams to understand pain points, challenges, and any weaknesses in processes and communication channels that may not have been apparent in day-to-day operations. The results of the exercise can help teams bolster the weak points in their response strategy and bring in additional oversight where needed.
- Continuous Improvement & Lessons Learned – TTXs force security teams to validate documented flows that are in place for the current security program. After the exercise, all relevant participants can provide feedback on gaps and work towards revisions.
- Cybersecurity Training – After a TTX, valuable findings and any updates for processes are documented into training guides and playbooks for future use. New stakeholders can follow vetted documentation to prepare for future exercises.
- Stakeholder Collaboration – TTXs bring together key stakeholders, including IT personnel, executives, legal advisors, and public relations representatives. Holding regular exercises fosters collaboration and provides an opportunity to practice decision-making under pressure.
Mitigating The Challenges of Building A Tabletop Strategy
TTXs are a key element in developing the human side of incident response and cyber defense. By conducting regular tabletop exercises, organizations can test and enhance the knowledge and skills of incident responders. In the long run, having an established tabletop strategy bolsters the overall security posture of the business.
Many organizations, however, face challenges not only in implementing the strategy but also in generating ongoing value from TTXs. For some, the exercises are carried out with the best of intentions but still ‘fail’. From resource limitations to lack of engagement and availability, there are several common challenges associated with implementing value-driven TTXs. Here are some ways to overcome these pitfalls and ensure that the strategy works with the business and benefits security teams as cyber threats continue to develop.
Define Clear & Actionable Objectives
When objectives are not laid out in advance of a TTX, the sessions can feel like a perfunctory technical drill or a check-the-box activity with little to no value. Without clear goals in mind, the discussion can quickly unravel.
Defining the objectives comes from having a clear understanding of ‘the why’ behind the TTX. Based on the organization’s risk profile, senior leadership and security leaders need to pinpoint what takeaways the sessions should garner and what incremental improvements they want to make in their security strategy.
Having clear and actionable objectives for a cybersecurity tabletop exercise is key to ensuring its effectiveness. Here are some steps that enterprises can follow:
- Identify Key Focus Areas – Start by identifying the specific areas of cybersecurity that the exercise should address. This could include incident response procedures, communication protocols, decision-making processes, or testing the effectiveness of security controls. Consider the organization’s priorities, recent trends in cyber threats, and any known vulnerabilities or weaknesses.
- Align Objectives With Organizational Goals – The exercise objectives should align with the broader goals and priorities of the business. For example, if working towards compliance within a specific security framework or regulatory requirement, the exercise objectives can focus on testing and improving compliance-related processes.
- Be Specific & Measurable – Objectives should be specific and measurable to enable effective evaluation. Rather than stating a vague goal like “improve incident response,” set measurable targets such as “reduce incident response time by 20%,” or “enhance coordination between IT and legal teams during a data breach scenario.”
- Document & Communicate Objectives – Clearly document the defined objectives and share them with all participants. This ensures everyone is aligned and working towards common goals during the exercise.
Invite The Right Experts To The Discussion
A successful TTX requires the participation of key individuals who represent the roles and functions applicable to the scenario being discussed. Given the specific objectives set for a particular TTX, participants should include only those who can answer for their function; too many observers can dilute the conversation if not managed.
Commonly, most TTX sessions will feature representatives from:
- Executive Leadership – C-suite executives should be involved to provide a high-level decision-making perspective, assess the impact of potential cyber incidents on the organization, and give the final word on necessary resources for incident response. Cyber incidents are not only a test of technical defenses; they also examine executive-level responses when it comes to communicating the impact to both customers and the general public.
- Security & IT – Security professionals, including cybersecurity analysts, incident response managers, and network administrators, are essential participants. Their expertise in identifying and mitigating cyber threats supplies the technical acumen needed for the exercise.
- Legal & Compliance – Inclusion of legal advisors and compliance officers ensures that the exercise considers legal and regulatory implications. They can offer guidance on breach notification requirements, legal obligations, and potential liabilities.
- Communications & PR – Both internal and external communication is vital during a cyber incident. This team can speak to the management of public perception, media inquiries, and stakeholder communications during the scenario.
- Human Resources – Human resources representatives can contribute by addressing employee-related aspects, such as incident reporting procedures, training, and handling internal communication during an incident.
- Departmental Heads – It is beneficial to include representatives from different departments to ensure a holistic understanding of the organization’s operations and their interdependencies. Should a scenario deal with one specific department’s data, for example, that department head would be expected to provide input.
- Operations – Participants from operations and business continuity teams can provide insights into the potential impact of cyber incidents on critical operations and contribute to the development of effective recovery strategies.
Build Business-Tailored Scenarios & Evaluation Criteria
Designing realistic scenarios that accurately reflect the current threat landscape can be challenging. It requires staying updated on the latest attack techniques, emerging technologies, and industry trends. Creating scenarios that strike the right balance between realism and feasibility is crucial for a meaningful exercise.
To foster better TTX discussions, the scenarios should be aligned with the industry-specific risks and active and known threats to similar organizations or competitors in the same space. Scenarios can also be based on the organization’s own history of security incidents.
- Tie Scenarios to Operations – Design scenarios that reflect the organization’s unique business operations, systems, and processes. Consider the industry, internal procedures, technology infrastructure, and specific threats relevant to the organization. This ensures that participants can relate to the scenarios and their potential impact.
- Leverage Past Risk Assessments – Using past risk assessments, identify the critical assets, vulnerabilities, and potential impacts within the organization. This helps determine the areas to focus on and ensures that the exercises address the most applicable risks.
- Incorporate Real-World Scenarios – Draw inspiration from real-world cyber incidents and recent data breach reports. Simulate scenarios that resemble actual incidents faced by similar organizations or that align with prevalent industry-specific threats. This helps participants gain practical experience and understand the implications of such incidents.
Create Follow Ups For the Next Exercise
Assessing the outcomes of TTXs and translating them into actionable improvements is a necessary but often overlooked part of the discussion. Proper evaluation and analysis of exercise results, followed by effective follow-up actions, are essential to maximize the value of these exercises. This iterative approach ensures that teams learn from each exercise, act on any needed changes, and continuously enhance their response capabilities.
- Evaluate The Outcome – Conduct a thorough evaluation of the tabletop exercise right after its completion. Gather feedback from participants to identify strengths, weaknesses, and areas for improvement. Document any key insights or ideas for future exercises.
- Analyze The Gaps – Analyze the gaps and weaknesses identified during the exercise. Categorize them based on severity and prioritize them for action. Determine the root causes behind the gaps, whether they involve processes, technology, communication, or personnel.
- Assign Action Items – Based on the identified gaps, assign action items addressing each one to relevant individuals or teams. Set realistic timelines and milestones for completion. Continuously track progress and use key performance indicators (KPIs) to gauge the success of the follow-up initiatives; a minimal tracking sketch follows this list. This provides a basis for further refinement and adjustment.
- Update Incident Response Plans – Revise and update the organization’s incident response plans to reflect the gaps identified during the exercise. Ensure that all employees have access to the updated plans.
- Conduct Training and Awareness Programs – Provide training sessions to enhance skills, educate employees on specific cyber threats, and reinforce incident response procedures. This helps fill knowledge gaps and improves preparedness.
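As a hedged illustration of the KPI tracking mentioned above, the short Python sketch below records follow-up items and derives two simple metrics. The field names and metrics are assumptions chosen for the example; real programs typically live in ticketing or GRC tooling.

```python
# Toy tracker for TTX follow-up items with two simple KPIs:
# closure rate and a list of overdue items.

from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    title: str
    owner: str
    due: date
    closed: bool = False

items = [
    ActionItem("Update breach-notification contacts", "Legal", date(2024, 7, 1), True),
    ActionItem("Add MFA to VPN entry point", "IT", date(2024, 8, 15), False),
    ActionItem("Revise ransomware playbook", "SecOps", date(2024, 7, 20), True),
]

today = date(2024, 8, 20)
closure_rate = sum(i.closed for i in items) / len(items)
overdue = [i.title for i in items if not i.closed and i.due < today]

print(f"Closure rate: {closure_rate:.0%}")  # Closure rate: 67%
print("Overdue:", overdue)                  # ['Add MFA to VPN entry point']
```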
Seeing Tabletop Exercises As One Part Of A Whole
When carried out correctly, a strong tabletop exercise strategy can expose weaknesses in incident response strategies, uncover areas for improvement, and foster a better strategy for emergency preparedness. While TTXs are a helpful tool, allowing security teams to simulate various scenarios, the exercises themselves are not enough to build an end-to-end cybersecurity defense posture against advanced cyber threats. In the greater scheme, TTXs are just one part of a whole and only place emphasis on fixing known vulnerabilities and any gaps identified during the sessions.
For ongoing, holistic protection against increasingly sophisticated threat tactics, techniques, and procedures, enterprises can augment their TTX processes with artificial intelligence (AI), machine learning (ML), red teaming, and a combination of autonomous endpoint, cloud, and identity security. The future of TTXs is now including such emerging technologies as they can simulate advanced attack vectors and enable organizations to test the effectiveness of automated response mechanisms. This ensures preparedness against new and evolving threats that haven’t already been documented and tracked.
Further, AI and ML can be used to model and simulate the behavior of adversaries, both known and unknown. By analyzing historical attack data, threat intelligence, and patterns, these technologies can generate realistic adversary profiles. TTXs can then include a wide range of adversary behaviors, making the exercises more challenging and reflective of real-world threats. Algorithms can be written to analyze historical data from previous cyber incidents and help identify patterns and trends. With this data on hand, organizations can predict and anticipate potential future threats, vulnerabilities, or attack vectors. Incorporating predictive analytics in TTXs helps security teams proactively enhance their defenses.
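As a toy illustration of the historical-pattern idea, the sketch below weights scenario selection by how often each incident category appears in an organization’s own (made-up) incident log. Real implementations would draw on threat intelligence feeds and far richer features; the category names here are assumptions for the example.

```python
# Weight the next tabletop scenario by observed incident frequency,
# so exercises stay grounded in the organization's own history.

from collections import Counter
import random

history = [
    "phishing", "ransomware", "phishing", "credential-theft",
    "phishing", "ransomware", "supply-chain",
]

counts = Counter(history)
categories = list(counts)
weights = [counts[c] for c in categories]

random.seed(7)  # fixed seed so the illustration is reproducible
next_scenario = random.choices(categories, weights=weights, k=1)[0]
print("Next tabletop scenario:", next_scenario)
```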
The new wave of TTX strategy is also seeing more involvement from red teams. Red teaming, which involves simulating adversarial attacks, can be augmented by AI and ML. These technologies can automate certain aspects of red teaming exercises, such as generating realistic attack scenarios, identifying vulnerabilities, and assessing the impact of potential attacks. This helps in uncovering weaknesses and testing the resilience of an organization’s defenses.
Tabletop exercises, when implemented alongside AI-powered tools, allow security operations centers (SOCs) to understand their responsibilities and spend less time collecting and analyzing data during an incident. These risk-informed exercises reduce the overall mean-time-to-containment, enhance collaboration, and allow for the refinement of incident response plans. When combined with red teaming, where simulated adversarial attacks are conducted, organizations gain a deeper understanding of their vulnerabilities and can proactively address them.
As cyberattacks grow in frequency and complexity, autonomous security, AI, and ML technologies are bringing valuable capabilities to tabletop exercises. They enable the automation of many security tasks and enhance predictive analytics. By leveraging these technologies, organizations can improve threat detection, response speed, and decision-making, allowing them to stay ahead of threat actors in the ever-changing cyber ecosystem.
SentinelOne focuses on acting faster and smarter through AI-powered prevention and autonomous detection and response. With the Singularity XDR Platform, organizations gain access to back-end data across the organization through a single solution, providing a cohesive view of their network and assets by adding a real time autonomous security layer across all enterprise assets. It is the only platform powered by AI that provides advanced threat hunting and complete visibility across every device, virtual or physical, on-prem or in the cloud.
Learn more about how Singularity helps organizations autonomously prevent, detect, and recover from threats in real time by contacting us or requesting a demo.
The short answer is: Not quite yet. So why even think about it now? Because it’s coming up quickly. If you consider how fast technology has ‘snowballed’ since the 1960s, Quantum Computing will soon be at your door.
A scant few ‘old-timers’ in the IT Support Los Angeles community have witnessed the evolution and ‘trickle down’ effect of computing technology since the early days, when the IBM punch-card system first made its way into the public consciousness in the 1950s – decades after IBM introduced the system in 1928 for use in its tabulating machines.
At the time, only the biggest corporations were using electronic computing, with an in-house IT services team to keep everything going. Now fast-forward to the invention of the microprocessor in 1972, which allowed computers to run faster and take up less space. At this point, many mid-sized businesses jumped into the fray and the first generation of the Time & Materials, or ‘Break & Fix’ (B&F) model for the IT Support industry was truly born, but with only a handful of outsourced IT Consulting Services available in any given major city. That was also when forward-looking high school and college kids started taking Computer Science classes, and it paid off – in spades.
It would be a decade later, when Personal Computers (PCs) began appearing on the scene, that literally any-sized business could afford to get them, and the B&F IT services world flourished. Now jump ahead to the advent of the internet: the Managed IT Services model for IT Support and the IT HelpDesk were born and slowly took over as the dominant IT model over the next 30+ years. No longer would a ‘Break & Fix’ customer have to wait for their IT services ‘guy’ to show up, figure out what broke and then fix it – IT HelpDesk could now take care of it in a matter of minutes over the internet.
What is quantum computing and how does it work?
This is a small question with a huge answer. As might be guessed, it incorporates Quantum Physics with information theory and computer science. According to Investopedia:
“Quantum computing is an area of computing focused on developing computer technology based on the principles of quantum theory (which explains the behavior of energy and material on the atomic and subatomic levels). Computers used today can only encode information in bits that take the value of 1 or 0—restricting their ability.
Quantum computing, on the other hand, uses quantum bits or qubits. It harnesses the unique ability of subatomic particles that allows them to exist in more than one state (i.e., a 1 and a 0 at the same time).”
In ultra-simplified terms: while classic computing can manipulate, evaluate, and make determinations on one ‘reality’ or set of data at a time, Quantum computing can do the same with several ‘realities’ simultaneously. In examining one set, the basic component of Quantum computing, the qubit (rather than the classic bit), can immediately determine the attributes of any and all associated or ‘partner’ sets.
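To see what a superposed qubit looks like on paper, here is a small classical simulation in Python using numpy. It only illustrates the bookkeeping (amplitudes and measurement probabilities); a real quantum computer does not store these amplitudes explicitly, which is exactly where its advantage comes from.

```python
# A single qubit simulated classically: the state is a 2-vector of
# complex amplitudes, and measurement probabilities are the squared
# magnitudes of those amplitudes.

import numpy as np

zero = np.array([1, 0], dtype=complex)        # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

qubit = H @ zero            # equal superposition of |0> and |1>
probs = np.abs(qubit) ** 2  # Born rule: probabilities of measuring 0 or 1

print(qubit)  # [0.70710678+0.j 0.70710678+0.j]
print(probs)  # [0.5 0.5] -- a fair coin until measured
```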
From LitsLink: “Let’s imagine a situation of having two bombs with identical fuses. According to rules of classical physics, they would explode at the same time. However, according to the laws of quantum physics, two identical radioactive atoms will explode at different times although they are indistinguishable. Quantum elements share a set of features that seem to be the verge of common sense like teleportation, time travel or an ability to be at two places simultaneously.”
If there were a nutshell definition, it would still be too long for this blog. It’s not unlike the difference between arithmetic and calculus: an entirely different way to compute. On certain specialized problems, it has been reported to be 100 million times faster than even the fastest classical supercomputer.
Earlier this year, Quantum computing firm D-Wave Systems demonstrated that their Quantum computer solved a complex quantum magnetics problem 3 million times faster than a classical computer. This Canadian company is one of several who are spearheading the drive to introduce Quantum Computing into the general business environment.
Frequently Asked Questions
Q: What kinds of problems is quantum computing suitable for?
A: Literally, computing problems of any kind, but the two areas most immediately adapted to Quantum Computing are encryption and its use in cybersecurity. The mathematical problem at the heart of classic RSA encryption relies on the difficulty of factoring a number that is the product of two large primes. Identifying the correct pair using classical computing takes practically forever. Quantum algorithms can do this factorization quickly.
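A toy Python sketch makes the asymmetry visible. Trial division needs on the order of sqrt(n) steps, which is instant for small numbers and hopeless for the 2048-bit moduli used in practice; Shor’s algorithm on a sufficiently large quantum computer would factor such numbers in polynomial time.

```python
# Naive factoring by trial division. Assumes n is a product of two
# odd primes, as in the RSA setting; this is a toy, not an attack.

def factor(n):
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    raise ValueError("no nontrivial odd factor found")

print(factor(101 * 113))  # (101, 113) -- instant for tiny numbers

# A 2048-bit RSA modulus has ~617 decimal digits, so this loop would
# need roughly 10**308 iterations. That scaling is the security.
```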
Q: What is the biggest problem with quantum computing?
A: The controlling and removal of quantum ‘decoherence’, which can be viewed as the loss of information from a system into the environment (often modeled as a ‘heat or thermal bath’), since every system is loosely coupled with the energetic state of its surroundings.
As a result of decoherence, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions.
Q: Why quantum computing can be important for information technology?
A: Managed IT Services providers like IT Support LA who utilize Quantum Computing (whether their clients use it or not) will be able to predict and diagnose challenges within a network much faster. Attending to the proactive maintenance and repair needs of end-users who do utilize Quantum Computing will remain much as it is now, although an onsite setup will require safeguards; for example, Quantum computers need to be kept much colder than classic computers.
Where current computers can evaluate only one ‘reality’ at a time when solving problems, quantum computing creates the ability to work across several realities at once, making the predictive abilities of the IT support team much more accurate and timely. This produces a far more efficient problem-solving technique. All computing systems rely on the fundamental ability of binary digits to store and manipulate information, and the method by which either classic or Quantum computing is performed does not intrinsically alter the methodologies employed by the IT services provider.
Q: How expensive is a quantum computer?
A: D-Wave’s first commercially available Quantum Computer came with a hefty $10 million price tag. In February, SpinQ, a start-up company in China, unveiled a ‘home’ quantum computer costing $5,000. We suggest a very cautious ‘wait-and-see’ attitude with this product, for several reasons, not the least of which is the source.
On Martin Luther King Jr. Day in the US, volunteers across the country will assemble care packages for those in need, clean up parks and help those who are food insecure. Senior business leaders, though, are in a position to take on an equally important challenge on this national day of service: grow future business and technology leaders by investing in diverse and underrepresented communities of talent.
Amid the increasing need for science, technology, engineering and math (STEM) talent, too many young people lack access to not only four-year college STEM programs that lead to high-paying jobs but also the mentoring and guidance to move up the career ladder. This seemingly intractable challenge affects both the talent pools that business leaders rely on and the communities in which they live.
Business execs can use their role to invest in this source of talent and lift people up to achieve possibilities that have yet to be realized. By doing so, they will create lasting change for both underserved communities and the business itself by creating new and diverse pipelines of future tech leaders.
The STEM gap
It’s no secret that STEM occupations are on the rise. According to the US Bureau of Labor Statistics, STEM jobs are projected to grow 8% by 2029, compared with 3.7% for all other occupations.
The good news is the percent of underrepresented minorities in STEM jobs is also growing. According to the National Science Foundation (NSF), underrepresented minorities—including Hispanic, Black and American Indian or Alaska Native (AIAN) populations—collectively represented 24% of the STEM workforce in 2021, up from 18% in 2011.
However, there is still a gap in the STEM opportunities available to these demographics. The NSF report also notes that underrepresented minorities make up one-third of the workforce in STEM jobs that typically do not require a college degree for entry. Those jobs tend to have the lowest salaries and highest unemployment in STEM.
Further, Black, Hispanic and AIAN STEM workers earn less than their white and Asian counterparts. And a Harvard University study found C-suites in Corporate America are still disproportionately white and male, with severe under-representation of women, Black and Hispanic/Latino executives in most C-suite positions. The report noted that the lack of equity at the top isn’t due to a pipeline problem. The US workforce is diverse, with 37% being Asian, Black and Latino. Yet a lack of equity in assessing, developing and promoting talent is undermining representation at the C-suite level.
With the current dynamics, a key source of talent is being missed, as are the creativity, innovation, skills and ideas that come with diverse talent. At an ethical and social level, it’s inequitable to deprive a large portion of the workforce of upwardly mobile career opportunities.
Investing in new sources of talent
Much of the challenge starts with attaining a college degree, itself. Even when underrepresented minority students can get into and pay for a four-year college program, many struggle to connect with the typical college environment. Because many colleges and universities don’t recognize or serve the unique needs of first-generation or underrepresented students, some students end up with low confidence, little sense of community and a lack of support.
For a variety of reasons, college completion rates remain much lower for Black, Hispanic and AIAN students than for white students, according to National Student Clearinghouse’s DEI Data Lab. Even if they do finish, many of the opportunities available to these students offer average salaries not far above minimum wage.
It’s clear that innovative approaches and investments are needed to make a STEM career accessible to people in underrepresented communities. One organization that is succeeding in this endeavor is The Marcy Lab School in New York, which offers recent high school graduates a no-cost, year-long, full-time fellowship in software engineering. The program combines a liberal arts curriculum with rigorous hands-on training that serves as a pathway to a high-paying career in technology.
In addition to the coursework, students are taught the critical thinking skills, professional fluency, resilience and leadership behaviors needed to thrive in the evolving tech sector. Students are also supported through coaching, mentoring and developing the network they need to launch a successful career. According to Marcy Labs, last year’s class—with just one year of intensive learning—landed software engineering jobs with an average wage of $106,000.
The role of senior leaders
Business execs have the opportunity to support—and even start—innovative efforts to grow the talent pipeline by increasing access to lucrative STEM careers. Here are three actions business leaders can take:
- Provide exposure early and often to corporate spaces and leaders. Interacting with the business community is vital for young professionals’ career trajectory. At Cognizant, our Black, Latinx and Indigenous Group (BLING) sponsored a meeting with Marcy Labs and senior leaders, including our CEO and Chief Corporate Affairs Officer, at our New York City office. At the event, students had the opportunity to network, hear career advice, practice their interview skills and get exposure to a corporate environment.
One of the young women who joined us had been studying bio-medical engineering in college before enrolling at Marcy Labs. She shared her story of becoming discouraged as the only person who looked like her in the college program. We wanted her and the other Marcy fellows to know they have a place at all levels of an organization, including the boardroom, right next to the CEO and executive leadership.
The MVP process then and now
The term “minimum viable product” was coined by Frank Robinson in 2001 to describe a process that reverses the usual order of “design, build, sell.” By putting the earliest usable version of a product into the hands of a relatively small group of early adopters, the company can see what features are truly desired by its customers before it has invested heavily in features that its customers don’t actually want.
It makes sense. So why is the MVP process only now becoming “hot”? Why not, say, a hundred years ago?
In his book Any Colour - So Long as It’s Black, John Duncan describes the process Henry Ford went through in 1906 when designing the Model T. The Ford Model N was already out and doing pretty well. But Ford saw a clear field for a new design because the industry was young enough that “there was no consensus on a standard automobile” (Kindle page 38). Ford began with a few design criteria based on the realities of the market. Driving was so novel that most drivers were bad at it, so the new car should be easy to drive, especially when it came to shifting. Because paved roads were infrequent, it had to ride high enough for all sorts of conditions. It should be lightweight but strong. And cheap enough to sell to the masses.
So Ford gathered a handful of his best engineers and for months met with them in a room 12 feet by 25 feet equipped with blackboards and a few tools. Because Ford preferred to see objects rather than designs, the engineers brought in models for the pieces they were proposing. Assemblage by assemblage, they came up with innovative solutions—a firmer way to connect the axle and frame, the use of new steel alloys, separating the cylinder head from the cylinder block, casting the cylinders as part of the crankcase, a two-speed gearbox, and a transmission housing and oil pan stamped out of a single piece of metal, about which Duncan says: “No man will ever design a structure as wonderful as our skeleton, but among manmade artefacts, this stamping has to rank highly.”
In fact, sheet metal stamping became such an important part of the Model T that Ford bought the company that did the initial pressings even though it was far more expensive to set up the castings than the process used by car manufacturers up to that point. “Mr. Ford was one of the first to see that even if a die cost $10,000, it was cheap if it made a million parts.”
In fact, manufacturing costs guided the design process all the way through. The gas tank was positioned high under the front seat so gravity would do the work of a fuel pump. The frame was designed to reduce the number of bolts that would have to be screwed in place.
So, why didn’t Henry Ford, a genius of innovation, come up with the concept of the minimum viable product?
In one sense, he did. The Model T was designed as a minimum viable vehicle. It wasn’t particularly fast or comfortable, and it sure wasn’t as pretty as some of its competitors. But it achieved Ford’s minimal design goals.
But the aim of a modern MVP is not just to be minimal. It’s to enable rapid iteration. But rapid iteration is hard when you’re dealing with atoms, not bits. And these atoms required that metal be cast and entire factories tooled. There was a high price for not getting things right on the first try.
And if you did, you could become very reluctant to change. In his book, Duncan writes:
“It is reported that in the early 1920s when many felt that the Model T should be drastically redesigned, a group of men built a prototype that had all the improvements they wanted; his response when they proudly showed it to him was to pick up a sledgehammer, smash the car to pieces and walk away without saying a thing.”
The MVP process strikes us as attractive not only because bits make it feasible, but also because we’ve come to believe that a technology that isn’t changing every six months is failing. Yet, in the almost 20 years it took Ford to introduce a new model, 15 million Model T’s had been sold. And during that entire stretch, never once did Henry Ford put on a black turtleneck and tease an audience with what would be new next month.
Aerojet Rocketdyne has agreed to design and produce a rocket engine assembly for NASA through additive manufacturing approaches as part of a Space Act Agreement with the agency’s Marshall Space Flight Center.
The company said Tuesday it plans to use 3D solid state and laser deposition systems in efforts to fabricate complex parts for a lightweight engine thrust chamber.
NASA’s Space Technology Mission Directorate will facilitate the effort under the Announcement of Collaborative Opportunity initiative, aimed at integrating emerging commercial technology into space missions.
Eileen Drake, CEO and president of Aerojet Rocketdyne, said the company looks to apply advanced materials manufacturing methods to provide the agency a transportation system in space.
The partnership aims to integrate robotic techniques to produce an engine design for use in propulsion systems intended to power a lunar lander or a launch vehicle booster.
Artificial Intelligence (AI) and Machine Learning (ML) are two very hot buzzwords right now, and often seem to be used interchangeably.
They are not quite the same thing, but the perception that they are can sometimes lead to some confusion. So I thought it would be worth writing a piece to explain the difference.
Both terms crop up very frequently when the topic is Big Data, analytics, and the broader waves of technological change which are sweeping through our world.
In short, the best answer is that:
Artificial Intelligence is the broader concept of machines being able to carry out tasks in a way that we would consider “smart”.
And, Machine Learning is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.
Artificial Intelligence has been around for a long time – the Greek myths contain stories of mechanical men designed to mimic our own behaviour. Very early European computers were conceived as “logical machines” and by reproducing capabilities such as basic arithmetic and memory, engineers saw their job, fundamentally, as attempting to create mechanical brains.
As technology, and, importantly, our understanding of how our minds work, has progressed, our concept of what constitutes AI has changed. Rather than increasingly complex calculations, work in the field of AI concentrated on mimicking human decision making processes and carrying out tasks in ever more human ways.
Artificial Intelligences – devices designed to act intelligently – are often classified into one of two fundamental groups – applied or general. Applied AI is far more common – systems designed to intelligently trade stocks and shares, or manoeuvre an autonomous vehicle would fall into this category.
Generalised AIs – systems or devices which can in theory handle any task – are less common, but this is where some of the most exciting advancements are happening today. It is also the area that has led to the development of Machine Learning. Often referred to as a subset of AI, it’s really more accurate to think of it as the current state-of-the-art.
The rise of machine learning
Two important breakthroughs led to the emergence of Machine Learning as the vehicle which is driving AI development forward with the speed it currently has.
One of these was the realisation – credited to Arthur Samuel in 1959 – that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves.
The second, more recently, was the emergence of the internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.
Once these innovations were in place, engineers realised that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world.
The development of neural networks has been key to teaching computers to think and understand the world in the way we do, while retaining the innate advantages they hold over us such as speed, accuracy and lack of bias.
A Neural Network is a computer system designed to work by classifying information in the same way a human brain does. It can be taught to recognise, for example, images, and classify them according to elements they contain.
Essentially it works on a system of probability – based on data fed to it, it is able to make statements, decisions or predictions with a degree of certainty. The addition of a feedback loop enables “learning” – by sensing or being told whether its decisions are right or wrong, it modifies the approach it takes in the future.
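The “system of probability plus feedback loop” idea can be shown at its smallest possible scale: a single artificial neuron learning the logical AND function from labeled examples. This Python sketch is a classic perceptron, the simplest ancestor of modern neural networks, not a description of any production system.

```python
# One neuron learning AND. Each wrong guess nudges the weights --
# that nudge is the feedback loop described above.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate

for _ in range(20):  # repeated passes over the training data
    for (x1, x2), target in examples:
        guess = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        error = target - guess   # sense whether the decision was right...
        w[0] += lr * error * x1  # ...and modify the approach if not
        w[1] += lr * error * x2
        bias += lr * error

print([1 if w[0]*a + w[1]*b + bias > 0 else 0
       for (a, b), _ in examples])  # [0, 0, 0, 1] -- AND learned
```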
Machine Learning applications can read text and work out whether the person who wrote it is making a complaint or offering congratulations. They can also listen to a piece of music, decide whether it is likely to make someone happy or sad, and find other pieces of music to match the mood. In some cases, they can even compose their own music expressing the same themes, or which they know is likely to be appreciated by the admirers of the original piece.
These are all possibilities offered by systems based around ML and neural networks. Thanks in no small part to science fiction, the idea has also emerged that we should be able to communicate and interact with electronic devices and digital information, as naturally as we would with another human being. To this end, another field of AI – Natural Language Processing (NLP) – has become a source of hugely exciting innovation in recent years, and one which is heavily reliant on ML.
NLP applications attempt to understand natural human communication, either written or spoken, and communicate in return with us using similar, natural language. ML is used here to help machines understand the vast nuances in human language, and to learn to respond in a way that a particular audience is likely to comprehend.
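In the same spirit, a toy version of the complaint-versus-praise example needs only word counts. Real NLP systems use vastly richer models, and the training sentences below are invented for the illustration, but the shape of the idea (learn from labeled data, then score new text) is the same.

```python
# Tiny text classifier: learn word counts per label from a few
# labeled messages, then classify new text by which label's words
# it overlaps with most.

from collections import Counter

training = [
    ("thank you for the wonderful service", "praise"),
    ("great job very happy with the result", "praise"),
    ("this is broken and I want a refund", "complaint"),
    ("terrible experience still not fixed", "complaint"),
]

word_counts = {"praise": Counter(), "complaint": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("very happy thank you"))          # praise
print(classify("still broken I want a refund"))  # complaint
```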
A case of branding?
Artificial Intelligence – and in particular today ML certainly has a lot to offer. With its promise of automating mundane tasks as well as offering creative insight, industries in every sector from banking to healthcare and manufacturing are reaping the benefits. So, it’s important to bear in mind that AI and ML are something else … they are products which are being sold – consistently, and lucratively.
Machine Learning has certainly been seized as an opportunity by marketers. After AI has been around for so long, it’s possible that it started to be seen as something that’s in some way “old hat” even before its potential has ever truly been achieved. There have been a few false starts along the road to the “AI revolution”, and the term Machine Learning certainly gives marketers something new, shiny and, importantly, firmly grounded in the here-and-now, to offer.
The fact that we will eventually develop human-like AI has often been treated as something of an inevitability by technologists. Certainly, today we are closer than ever and we are moving towards that goal with increasing speed. Much of the exciting progress that we have seen in recent years is thanks to the fundamental changes in how we envisage AI working, which have been brought about by ML. I hope this piece has helped a few people understand the distinction between AI and ML. In my next piece on this subject I intend to go deeper – literally – as I explain the theories behind another trending buzzword – Deep Learning.
The world is changing, rapidly, into a Global Networked Knowledge Society (GNKS), and we are in a step change from what we used to know. Understanding this is crucial for people, and in particular policy makers and strategists, as their aim is to take decisions that will impact the future in which people and organisations will function.
The phrase Global Networked Knowledge Society is a carefully thought-out capture of the essence of our times:
Global means that the challenge is everywhere. Enabled by technology, we have the ability to almost instantaneously spread and access information across the globe. The same infrastructures that unlock information at any geographical location that can be connected to the Internet enable communication, and therefore social and economic transactions, among people who otherwise would never have been in touch. Combined with the ability of an increasing number of people to travel across the globe, and the ability to order things anywhere and have them delivered anywhere, globalization has grown into a very important factor in society, and its impact continues to increase.
Networked means that the links around the globe are not restricted to hubs and spokes; rather, anybody can potentially connect with anybody. Networks are social, informational and commercial, and convey not only words, but resources, goods and services. This can be both beneficial and harmful. The Internet was not built to support the critical processes of society. Therefore, there are many weaknesses that can be detrimental to processes relying on this infrastructure, whether from deliberate or accidental causes. Both the way the Internet is governed (traditionally US dominated) and its supporting technologies (the IP protocol) are subject to challenges that need to be addressed in a balanced way during the coming decades, in order to ensure that we can trust and rely on information infrastructures that have become so critical to us.
Knowledge means that information has become a major commodity in the global economy, standing side-by-side with goods and services. The combination of access to information and “people who know” anywhere on this globe, and the increasing ability to generate insights from that access, means that knowledge plays an increasing role in society. More than ever before, knowledge is power. People who understand the tremendous opportunities that emerge because of access to knowledge thrive. Information technologies enable us to process information in a very fast way. Yet it is clear that beneficial results of such processing depend crucially on three variables:
the quality of information (it must be correct, relevant, and placed in the right context);
the logic of combining different streams of information (it must lead to valid and appropriate cause-effect inferences); and
information security (protecting information against distortion, either deliberate or by accident, and preserving societally-mandated protections against unwanted intrusion).
Society means that none of the individual, business, or governmental stakeholders is sufficient, but all are necessary. In order to ensure activities are sustainable, global networked knowledge needs to be put in a societal perspective. Understanding society is something that cannot be expressed by access to information alone. People matter.
By Kathryn M. Farrish, CISSP
One of the more recent information security innovations is the Control Correlation Identifier, or CCI. Each CCI provides a standard identifier and description for “singular, actionable statements” that comprise a security control or security best practice.
The purpose of CCIs is to allow a high level statement made in a policy document (i.e., a security control) to be “decomposed” and explicitly associated with the low-level security settings that must be assessed to determine compliance with the objectives of that specific statement.
Under the leadership of the Defense Information Systems Agency (DISA), a working group has been cataloging CCIs for the past several years. The collection has now been developed to the point that every assessment objective in the NIST SP 800-53A has been mapped to an individual CCI.
The current list of CCIs can be downloaded in XML format (viewable in a web browser such as Internet Explorer). The URL for downloading is: http://iase.disa.mil/stigs/cci/Pages/index.aspx.
DISA encourages feedback from the information security community; a comment form is provided for that purpose.
Here is an example of a CCI:
CCI: CCI-001239
Status: Draft
Contributor: DISA FSO
Date: 2009-09-22
Type: Technical
Definition: The organization employs malicious code protection mechanisms at information system entry and exit points to detect and eradicate malicious code transported by electronic mail, electronic mail attachments, web accesses, removable media or other common means or inserted through exploitation of information system vulnerabilities.
References: NIST SP 800-53 SI-3.a; NIST SP 800-53A SI-3.1(ii)
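Because the list ships as XML, pulling it into scripts is straightforward. The Python sketch below reflects the layout of the published U_CCI_List.xml (a cci_item element per CCI, in the iase.disa.mil/cci namespace); treat the file name, element names, and namespace as assumptions to verify against the copy you actually download.

```python
# Hedged sketch: list every CCI id and the start of its definition
# from a downloaded copy of the CCI list.

import xml.etree.ElementTree as ET

NS = {"cci": "http://iase.disa.mil/cci"}  # namespace per the published schema

tree = ET.parse("U_CCI_List.xml")  # file name from the DISA download
for item in tree.getroot().iterfind(".//cci:cci_item", NS):
    cci_id = item.get("id")
    definition = item.findtext("cci:definition", default="", namespaces=NS)
    print(cci_id, "-", definition[:60], "...")
```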
DISA is also in the process of revising numerous Security Technical Implementation Guides (STIGs) to include references to CCIs that correspond to each of the recommended configuration settings.
With the success of the CCI effort comes some hope that at least a portion of the effort associated with RMF assessment can be automated!
A ton of other information can be found on the NIST web site.
JUNE 2023 – Road transportation accounts for approximately 15% of global carbon dioxide emissions, primarily due to passenger vehicles. The move to decarbonize passenger vehicles by transitioning to battery electric vehicles (EVs) is transforming the global auto industry. From 2017 to 2022 alone, annual worldwide electric vehicle sales volume soared from approximately 1.5 million to over 10 million. The number of EV charging points in the U.S. and Canada increased threefold in the same 5-year period to 150,000. As the industry grapples with challenges from the growing popularity of EVs, automakers are turning to joint ventures and partnerships to scale up EV production and infrastructure.
Electrifying the Industry
The adoption of EVs is rising sharply as governments, investors, and consumers push for more sustainable mobility. But despite the benefits of EVs and the favorable growth outlook, transforming the sector involves significant challenges, including sourcing critical raw materials like lithium and nickel, developing and manufacturing new battery technology, and building new charging infrastructure.
Tesla, a first mover in the industry, addressed these challenges by creating a vertically integrated supply chain, spanning from battery production to electric motor development. Tesla also built a proprietary charging network to support its customers. This strategy contrasted with the decades-long automotive industry trend of focusing on design and outsourcing manufacturing, but was critical to Tesla’s disruption of the industry.
While Tesla is no longer alone in the EV race, the rest of the automotive industry is pushing to catch up. In particular, traditional automakers are increasingly turning to joint ventures and partnerships to spread considerable capital requirements, leverage distinctive partner capabilities, and reduce downside risk. (For more on the reasons to enter into joint ventures, see Tracy Branding Pyle, “Why Joint Ventures?” The Joint Venture Alchemist, February 2022.) (See Exhibit 1 for an example of how one automotive company, BMW, is using partnerships across its value chain.)
Rising to the Challenge
The 2020s will be a decade of transition to EVs, driven by a combination of government regulation, consumer demand, and industry ambition. Of the three, government regulation is creating the most urgency. For example, the European Union has adopted a mandate to eliminate carbon dioxide emissions from vehicles by 2035, while government regulations in the U.S. will necessitate that 67% of new light-duty vehicles sold by 2032 are electric.
To meet these ambitious timelines, the automotive industry is seeing a surge in partnering deal volume (see Exhibit 2). Deal volume more than doubled from 2019 to 2021 and has remained at a high level since. The increased deal volume has occurred in nearly every part of the value chain (see Exhibit 3), and activity has been concentrated in three “hotspots”:

1. Mining, recycling, and raw materials: securing supply of critical materials, including lithium, copper, and nickel
2. Battery production and research: researching and developing new battery technologies, and manufacturing batteries and battery components
3. Charging infrastructure: building and operating networks for charging EVs
1. Mining, Recycling, and Raw Materials
Out of the 210 new, large automotive partnerships established between 2021 and the first quarter of 2023, 15% were in the mining, recycling, and raw materials space. Ensuring a ready supply of critical minerals for batteries is a major priority for automotive companies. For that reason, a growing number of automakers are investing in the mining industry to ensure exclusive or priority access to the metal offtake. One example is General Motors’ (GM) $650 million equity investment in Lithium Americas, which will fund joint development of the Thacker Pass lithium mine in Nevada. GM secured exclusive access to the first phase of production from the mine, and the right of first offer on the second phase of production. Similarly, Mercedes-Benz established a supply partnership with Canadian-German Rock-Tech Lithium to source battery-grade lithium hydroxide.
An additional driver of automotive-mining partnerships is the U.S. Inflation Reduction Act (IRA). The IRA grants vehicle tax credits for EVs that have at least 40% of their battery minerals mined and processed in the U.S. (or in free trade partner countries), or recycled in North America. This requirement will gradually rise to 80% by 2026. Even when the Thacker Pass mine and other mines in development become fully operational, it is expected that the supply of newly mined critical minerals that qualify for the IRA tax credit will still be insufficient to meet demand. This is spurring automakers to partner with and, in some cases, invest in companies like Redwood Materials and Ascend Elements that can take end-of-life batteries and recycle them down to their base metals. These base metals can then be incorporated into future batteries.
As competition for scarce critical minerals increases, we foresee additional automotive investment in mining and recycling companies as well as operational mines — in fact, Volkswagen’s PowerCo battery subsidiary recently announced that it plans to invest in mines directly. Meanwhile, other automotive companies are reportedly interested in buying a minority stake in Vale’s base metals spinoff.
2. Battery Production and Research
Batteries are simultaneously the most expensive, most complex, and most important components of any EV. Unsurprisingly, the most sizable growth in automotive partnership activity has been in battery supply chains, as automakers push to secure control of both the development and production of battery cells. Tesla, BMW, and Volkswagen were first movers in setting up battery partnerships: Tesla and Panasonic partnered to build a battery manufacturing plant in 2014, BMW teamed up with Solid Power in 2017 to develop solid-state battery technology, and Volkswagen formed a 50/50 joint venture with Northvolt in 2019. Since then, deal activity in the battery space has taken off, with more than a dozen battery technology joint ventures formed between automakers and Asian battery companies to build plants in the U.S. (see Exhibit 4). Geopolitical concerns are also driving non-traditional deal structures — Ford’s partnership with Contemporary Amperex Technology Co. Ltd. (CATL) has Ford owning and operating the manufacturing plant, while CATL will license its battery technology to Ford and provide supporting staff, but CATL will not have an equity stake.
Considering the high capital cost and complex knowhow required to produce batteries, we expect high levels of battery joint venture activity to continue in the upcoming years, with much of that happening in North America as a result of the significant incentives offered by the IRA. In the first quarter of 2023 alone, we have seen new U.S.-based joint ventures announced by Hyundai and SK On, Honda and LG Energy, and GM and Samsung SDI, in addition to European joint ventures announced by Volkswagen and Umicore, as well as Ford, LG Energy and Koç Holdings.
3. Charging Infrastructure
As EVs become ever-more prevalent, the development of public charging infrastructure will be critical. This infrastructure will be key to addressing consumer “range-anxiety” as well as facilitating EV adoption among apartment dwellers and street-side parkers. This too is an area where joint ventures and partnerships have played, and will continue to play, an important role in the cost-effective transition to net-zero (see Exhibit 5). One example is IONITY, a European high-power charging station network joint venture founded in 2017 and currently owned by BMW, Hyundai, Mercedes, Ford, Volkswagen/Porsche, and Blackrock. More recently, Daimler Truck, NextEra and Blackrock jointly founded Greenlane to develop and operate a U.S. nationwide charging and hydrogen fueling network for commercial vehicles, while Europe’s largest truck manufacturers founded Milence to develop a similar commercial charging network in Europe.
Even traditional energy and utility companies, including BP, TotalEnergies, Iberdrola and Enel, have been active in developing charging networks, with each of these companies founding their own charging network joint ventures in the last two years alone.
The Way Forward
If the world wants any hope of limiting global warming to a 1.5°C increase — as agreed to in the Paris Agreement — governments and the auto industry must work together to promote the widespread adoption of zero-emission vehicles. Governments are doing their part — more than 20 countries aim to phase out internal combustion engine car sales by 2050, while over 120 countries have announced economy-wide net-zero goals. For the auto industry to do its part and lower the EV cost curve, automakers must learn to use joint ventures and strategic partnerships effectively.
However, joint ventures and partnerships present their own set of challenges and risks. Potential partners need to negotiate key deal terms like scope, decision rights, and preemptive dispute resolution. (For more on joint venture deal making, see James Bamford and David Ernst, “What’s the Best Way to Structure a Joint Venture?” The Joint Venture Alchemist, February 2023.) Considering the evolving technological landscape, partners also need to consider how to safeguard their intellectual property. (For more on intellectual property issues unique to joint ventures, see Tracy Branding Pyle and James Bamford, “Protecting IP in Today’s Joint Ventures and Partnerships,” The Joint Venture Alchemist.) And once the partnership is operational, there can be other challenges, like managing conflicts of interest among partners who are also competitors, managing competitively sensitive information, and aligning on strategy and financials. Many of these challenges can be handled with expert guidance on detailed business planning, well-drafted deal terms, and robust governance. While the EV transition is a daunting challenge for governments, companies, and consumers, joint ventures and partnerships — when structured and governed effectively — can help pave the way ahead.
Skeptics have accused Bluetooth of being a solution in search of a problem, but this year’s IEEE Computer Society International Design Competition shows that Bluetooth’s strengths can address real needs—beyond eliminating wires in our personal area networks. My favorite is the third-place winner, The Poket Doctor from Brigham Young University. Imagine a paramedic team arriving at an accident scene: As they pull up, a screen on the dashboard displays several faces, along with each person’s medical history. Before they’ve even gotten out of their vehicle, the medical team identifies the people on the scene who need special attention.
It’s all done with Bluetooth-enabled smart cards, estimated to cost about $30 each, discoverable by a portable medical console and able to provide emergency data—or, with a password supplied by a conscious patient, download full medical history information.
Poket Doctor prototype development was limited more by software than hardware; in particular, the team found Microsoft’s Visual C++ unwieldy in displaying patients’ photographs. Encryption/decryption times were also an issue.
But Bluetooths limitations, such as data packet size, were readily addressed. Using Ericsson development kits and Towitoko Electronics smart cards, the team achieved communication distances on the order of 10 meters with transmission times of under 20 seconds (including Bluetooth device discovery and selection).
Other winning presentations appear at www.computer.org/csidc, including the first-place report from Poland’s Poznan University of Technology—whose BlueEyes Conscious Brain Involvement Monitor detects inattention by operators of industrial installations. Consider the implications: Someday, I may be able to tell if you’re thinking about these columns.
Using ChronoZoom to build a comprehensive timeline of climate change in the cloud
A professor at the University of California, Santa Barbara, explores the history of climate change in depth in his graduate-level Earth System Science class. To help students visualize events through the ages, he is developing a comprehensive history of climate change by using ChronoZoom, an open-source community project dedicated to visualizing the history of everything.
Building a historical view of climate change
Each year, Jeff Dozier, professor of Environmental Science and Management at the University of California, Santa Barbara, teaches a course in Earth System Science to between 80 and 100 incoming graduate students. Among the issues he teaches: climate record and how the Earth’s climate has changed through the ages—and what drivers are behind those changes.
Covering millions of years’ worth of warming trends within a class term is a challenge; managing the massive volumes of data, charts, videos, illustrations, and other support materials is even more daunting. Dozier needed a way to pull his materials together in an accessible—and manageable—format.
He found the solution in the award-winning ChronoZoom tool.
ChronoZoom allows users to navigate through "time," beginning with the Big Bang and continuing up to present-day events. Users can zoom in rapidly from one time period to another, moving through history as quickly or slowly as they desire. In 2013, a third-party authoring tool was built into ChronoZoom, enabling the academic community to share information via data, tours, and insights, so it can be easily visualized and navigated through Deep Zoom functionality.
Visual aids can have a particularly powerful impact when discussing climate change. Dozier is developing a history of the Earth that illustrates changes in climate from the beginning of the planet through modern day. The source materials include images, diagrams, graphs, and time-lapse movies that illustrate changes in the environment. Dozier plans to use the timeline as a teaching aid in his Earth System Science class.
“ChronoZoom has been easy to master and use,” Dozier notes. “You don’t need any sort of client-side application except a browser. All the data is stored on someone else’s machine. The processing is done in the cloud [through Windows Azure], not on your own computer. And the only thing that really shows up on your own computer is the results.” Moreover, thanks to the power of Windows Azure, the tool has the flexibility to scale up and down, enabling users to zoom in on a particular segment in time or zoom out to review climate change from the beginning of recorded history through today. Plus, content developers can share their presentations or timelines with others by simply sharing a link or posting it to a social media site.
Make your mark on history
ChronoZoom has already been used to illustrate the history of the Earth and explore the impact of climate change on the planet through the ages. There are many unexplored possibilities, however. The tool scales up and down, meaning any project can benefit—whether it’s the history of the world or just a review of the last few weeks. Dozier is hopeful others will use ChronoZoom to tell their stories by uploading their own data, images, and text to the cloud and using those materials in the classroom.
Making mobile phones the authentication hubs for smart homes
Each year, the National Institute of Standards and Technology funds pilot projects to advance the National Strategy for Trusted Identities in Cyberspace. The pilots address barriers to the identity ecosystem and seed the marketplace with “NSTIC-aligned” solutions to enhance privacy, security and convenience in online transactions.
This year, Galois, a computer science research and development company, received a $1.86 million grant to build a user-centric personal data storage system that enables next-generation IoT capabilities without sacrificing privacy. As part of the pilot, Galois will work with partners to integrate its secure system into an Internet of Things-enabled smart home and develop just-in-time transit ticketing on smart phones.
Galois’ authentication and mobile security subsidiary, Tozny, serves as the technical lead for the pilot programs and will build the data storage and sharing platform by tackling one of the weakest links in cybersecurity today: the password. Tozny’s solution replaces the username and password with something people use for almost everything: the smartphone or a wearable device.
Tozny is working with IOTAS, a developer of a home automation platform that integrates preinstalled hardware (light switches, outlets and sensors) with software to create a unique experience in which users learn from and interact with their homes.
Together, the companies are working to help users log in to the IoT management console installed in their apartments without a password. Tozny is providing cryptographic authentication based on mobile phones.
“This is actually a really good idea because people who have tried to deploy authentication devices for smart homes have had a lot of trouble getting them to work, and they’re kind of expensive,” said Isaac Potoczny-Jones, computer security research lead at Galois. “Since a mobile phone can do cryptography, and because we can build beautiful and easy-to-use interfaces on mobile phones, we decided that that would be a much better way to log into a lot of systems — and it’s easier to use than passwords,” Potoczny-Jones said.
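To make the idea concrete, here is a minimal challenge-response sketch of the kind of phone-based cryptographic login Potoczny-Jones describes. It uses the Python `cryptography` package; the key handling, enrollment, and transport are simplified assumptions for illustration, not Tozny's actual protocol.

```python
# Minimal sketch: passwordless login via a phone-held signing key.
# Enrollment, storage, and transport are all simplified assumptions.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the phone generates a key pair and registers the public half.
phone_key = Ed25519PrivateKey.generate()
registered_public_key = phone_key.public_key()

# Login: the service issues a random challenge; the phone signs it.
challenge = os.urandom(32)
signature = phone_key.sign(challenge)

# The service verifies the signature instead of checking a password.
try:
    registered_public_key.verify(signature, challenge)
    print("login accepted")
except InvalidSignature:
    print("login rejected")
```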
IOTAS is already operating a smart-home pilot in apartment units in Portland, Ore., and San Francisco. IOTAS and Tozny will work to add transparent but privacy-preserving authentication and encryption to this pilot.
Secure mobile transit ticketing
GlobeSherpa, an Oregon-based company that provides a secure mobile ticketing platform for transit systems, is working with Tozny to develop a password-free authentication system that allows users to buy and display tickets on their mobile phones.
“With this you can use your phone to both buy and display tickets, and you don’t have to interface with these often-broken vending machines,” Potoczny-Jones said.
SRI International is also contributing to this project with a biometric authentication solution that will use a person’s walking gait as the biometric. This technology will work with the bus platform to ensure that the person holding the phone and showing the ticket is who he says he is.
“You’re walking up to the bus platform, get your phone, buy your ticket, and the phone already has a pretty high confidence that you are who you claim to be because it was just observing your walking pattern,” Potoczny-Jones said. “It’s passive, it’s behind the scenes and it’s extremely fast and accurate as well.”
“Anything that you collect that’s behind the scenes or passive needs to have really strong privacy controls built into it,” Potoczny-Jones said. “So we’re very happy with the way these technologies are coming together to provide secure login, privacy controls and really advanced biometric technology.”
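As a rough illustration of how passive gait checking can work, the sketch below estimates a walker's step cadence from phone accelerometer samples and compares it against an enrolled profile. Real gait biometrics, such as SRI's, use far richer features and models; the sampling rate, frequency band, and tolerance here are illustrative assumptions.

```python
# Toy gait check: compare step cadence from accelerometer data
# against an enrolled profile. All parameters are assumptions.
import numpy as np

FS = 50  # assumed accelerometer sampling rate, Hz

def cadence_hz(accel_magnitude: np.ndarray) -> float:
    """Dominant step frequency from the accelerometer magnitude signal."""
    x = accel_magnitude - accel_magnitude.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    band = (freqs > 0.5) & (freqs < 4.0)  # plausible walking cadences
    return float(freqs[band][np.argmax(spectrum[band])])

def matches_profile(sample: np.ndarray, enrolled: float, tol: float = 0.15) -> bool:
    return abs(cadence_hz(sample) - enrolled) <= tol

# Synthetic 2 Hz "walk" should match a 2 Hz enrolled profile.
t = np.arange(0, 10, 1.0 / FS)
walk = 1.0 + 0.3 * np.sin(2 * np.pi * 2.0 * t)
print(matches_profile(walk, enrolled=2.0))  # True
```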
In the rapidly evolving landscape of modern warfare, the use of unmanned aerial vehicles (UAVs) has become a critical factor in the strategic calculations of military powers worldwide. The ongoing conflict in Ukraine serves as a stark reminder of the transformative power of these technologies, particularly in the hands of an adversary willing to push the boundaries of conventional warfare. Recently, the Ukrainian Air Defense Forces shot down an unidentified Russian drone, marking yet another significant moment in the escalating use of drones in this conflict. What makes this incident particularly noteworthy is the nature of the drone—a jet-powered UAV devoid of conventional warheads, optics, or reconnaissance equipment.
The Incident: A New Type of Threat
The Ukrainian military report on the downed drone highlighted several unusual characteristics. Most notably, the drone lacked a warhead, which is typically a standard feature in UAVs designed for direct attacks. Additionally, it was reported to be devoid of any optics or other reconnaissance tools, raising questions about its intended purpose. The drone was brought down using an anti-aircraft guided missile, a costly method that inadvertently revealed a critical vulnerability in Ukraine’s air defense strategy—the potential overloading of their systems by such jet-powered UAVs.
The absence of conventional weaponry or surveillance equipment on the drone suggests that its primary function was not to inflict immediate damage or gather intelligence. Instead, it appears to have been designed to overwhelm Ukraine’s air defense systems, forcing them to expend valuable resources on neutralizing what could be considered a decoy. This tactic aligns with the broader strategy of saturation attacks, where numerous low-value targets are deployed to exhaust the enemy’s defensive capabilities, paving the way for more lethal strikes.
Implications of Overloading Air Defense Systems
The concept of overwhelming an adversary’s air defense system is not new, but its application in the current conflict underscores the increasing sophistication of Russian military tactics. By deploying jet-powered drones that lack traditional offensive or reconnaissance capabilities, the Russian Armed Forces can create confusion and force the Ukrainian military to make difficult decisions. Should they engage every drone, potentially wasting expensive missiles on decoys, or risk allowing a more dangerous UAV to penetrate their defenses?
This dilemma is compounded by the fact that identifying these drones among a swarm of similar UAVs is a near-impossible task. The Ukrainian Air Defense Forces are thus faced with the prospect of either maintaining a high expenditure of missiles to ensure the destruction of potential threats or adopting a more selective approach that could leave them vulnerable to more advanced, payload-carrying drones. This strategy of overloading air defenses is particularly effective in a conflict where the enemy’s resources and response capabilities are already stretched thin.
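The economics behind this dilemma are easy to sketch. The arithmetic below uses purely illustrative prices, not reported figures for any specific missile or UAV, to show why trading interceptors for decoys favors the attacker.

```python
# Illustrative cost-exchange arithmetic for a saturation attack.
# Both prices are assumptions; real costs vary widely by system.
INTERCEPTOR_COST = 500_000  # assumed cost per anti-aircraft missile, USD
DECOY_COST = 50_000         # assumed cost per jet-powered decoy UAV, USD

def exchange_ratio(decoys: int, engage_rate: float = 1.0) -> float:
    """Defender spend divided by attacker spend when each engaged
    decoy consumes one interceptor."""
    return (decoys * engage_rate * INTERCEPTOR_COST) / (decoys * DECOY_COST)

print(exchange_ratio(20))  # 10.0 -> the defender spends 10x the attacker
```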
The Strategic Context: Russia’s Evolving Drone Warfare
The use of drones in warfare has evolved significantly over the past few decades, with Russia emerging as one of the leading developers and deployers of UAV technology. In the Ukrainian conflict, drones have been employed for a variety of purposes, including reconnaissance, artillery spotting, and direct attacks. However, the introduction of jet-powered drones without traditional offensive capabilities represents a new chapter in Russia’s drone warfare strategy.
These drones may serve multiple purposes beyond simply overloading air defenses. They could be used to probe Ukrainian defenses, gathering data on response times and the effectiveness of different missile systems. Additionally, by forcing Ukraine to reveal the locations of its air defense installations, these drones could indirectly aid in planning more targeted attacks. The psychological impact of such tactics should not be underestimated either, as they contribute to a sense of constant threat and uncertainty, eroding the morale of both military personnel and civilians.
Image source: https://vk.com/photo-31371206_457375255?rev=1
Recent Attacks and Broader Military Objectives
The downing of the jet-powered drone coincides with a broader campaign by the Russian Armed Forces to intensify attacks on frontline and rear Ukrainian positions. Recent reports indicate that Russian forces have targeted areas in the Sumy and Kharkiv regions, striking military vehicles transporting Ukrainian personnel closer to the front lines. These strikes are part of a concerted effort to disrupt Ukrainian logistics and troop movements, thereby weakening their ability to mount effective counterattacks.
The use of drones, including the jet-powered variant, plays a crucial role in these operations. By continuously harassing Ukrainian forces and their supply lines, Russia seeks to create a situation where Ukrainian defenses are perpetually reactive, rather than proactive. The psychological toll of frequent air raid sirens, which have been declared in several regions, including Poltava, further exacerbates the stress on both military and civilian populations.
The Evolution of Air Defense: Challenges and Future Directions
The challenges posed by jet-powered drones necessitate a rethinking of air defense strategies, not just in Ukraine, but globally. Traditional air defense systems are often optimized to deal with specific types of threats, such as manned aircraft or ballistic missiles. However, the advent of advanced UAVs, particularly those designed to exploit vulnerabilities in these systems, requires a more flexible and adaptive approach.
One potential solution is the development of more cost-effective countermeasures. For instance, the use of electronic warfare (EW) to disrupt the control signals of enemy drones could provide a less expensive alternative to missile-based defenses. Additionally, the integration of advanced radar and sensor technologies could improve the detection and identification of UAVs, allowing air defense systems to prioritize targets more effectively.
Another avenue for improvement is the deployment of dedicated anti-drone systems, such as laser-based weapons or high-powered microwave devices. These systems, which are still in various stages of development, offer the promise of neutralizing drones without the high costs associated with traditional missile interceptors. However, their effectiveness against jet-powered UAVs, which may possess greater speed and maneuverability than conventional drones, remains an area of ongoing research.
International Implications: The Broader Impact of UAV Proliferation
The conflict in Ukraine is not occurring in isolation; it is part of a broader trend towards the increasing use of UAVs in conflicts around the world. The lessons learned from this war will likely inform the strategies of other nations, both in terms of drone deployment and the development of countermeasures. As such, the implications of Ukraine’s experiences extend far beyond its borders.
For NATO and other Western military alliances, the Ukrainian conflict offers a critical case study in the challenges and opportunities presented by drone warfare. The ability to counter UAV threats effectively will be a key determinant of military success in future conflicts, particularly against adversaries like Russia that have demonstrated a willingness to innovate and adapt their tactics.
Moreover, the proliferation of drone technology raises important questions about arms control and international security. As more nations acquire the capability to produce and deploy advanced UAVs, the potential for conflicts to escalate rapidly increases. This is particularly concerning in regions where tensions are already high, such as the South China Sea or the Middle East. The international community will need to grapple with these issues, potentially through new treaties or agreements aimed at regulating the use of drones in warfare.
The Future of Warfare in the Drone Age
The incident involving the downing of a jet-powered Russian drone by Ukrainian forces is a microcosm of the broader changes taking place in modern warfare. As UAV technology continues to advance, the nature of conflict is being transformed in ways that were scarcely imaginable just a few decades ago. The Ukrainian conflict, with its high-tech battles and strategic innovations, offers a glimpse into the future of warfare—a future where drones play an increasingly central role.
For Ukraine, the challenge lies not just in countering the immediate threat posed by these drones, but in adapting to the broader strategic shifts they represent. The ability to do so will be crucial not only for the outcome of the current conflict but for the future security of the nation. As other countries observe and learn from Ukraine’s experiences, the global landscape of military power and strategy will continue to evolve, shaped by the relentless march of technological progress.
The story of the jet-powered drone, and the broader narrative of drone warfare, is far from over. As the conflict in Ukraine unfolds, new developments will continue to emerge, offering fresh insights into the capabilities and limitations of these formidable machines. For now, the downing of this drone serves as a reminder of the challenges that lie ahead in the defense of national sovereignty and the preservation of global peace in the drone age.
Comprehensive Analysis of the SW400Pro Jet Engine and Its Role in Modern UAV Systems
Technical Specifications of the SW400Pro
The SW400Pro is a small, high-performance turbojet engine developed by Swiwin, primarily designed for use in UAVs, model aircraft, and other small aerial platforms. The engine is known for its compact size, efficient fuel consumption, and ability to operate at high altitudes.
- Thrust: The SW400Pro delivers a thrust of approximately 400 Newtons (N), making it suitable for various UAVs, particularly in roles that require significant power relative to the UAV’s size.
- Weight: The engine weighs around 3 kilograms (kg). This low weight is crucial for its application in UAVs, where every kilogram counts toward maximizing payload capacity and endurance.
- Fuel Consumption: The SW400Pro has an optimized specific fuel consumption, with an approximate rate of 1.2 kilograms per hour (kg/h) under maximum thrust conditions. This efficiency allows UAVs to operate for extended periods, making it suitable for long-endurance missions.
- Dimensions: The SW400Pro is compact, with a length of approximately 60 centimeters (cm) and a diameter of about 14.6 centimeters. Its small size allows it to be integrated into UAVs with limited internal space while still providing substantial power.
- Materials and Construction: The engine is constructed from high-temperature alloys and advanced composite materials, ensuring durability and reliability. These materials allow the engine to operate in various environments and withstand the high stresses associated with jet propulsion.
- Operational Altitude: The SW400Pro can operate effectively at altitudes up to 10,000 meters. This capability is essential for UAVs used in surveillance and reconnaissance missions that require high-altitude operations.
- Maintenance and Lifecycle: The engine is designed with ease of maintenance in mind, featuring a lifecycle of approximately 300 operational hours before requiring major overhauls. This long lifecycle makes it a cost-effective choice for UAVs operating in demanding environments.
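Taken together, the figures above support some quick back-of-the-envelope arithmetic. The sketch below uses only the stated thrust, weight, and consumption numbers; the 10 kg fuel load is a hypothetical airframe design choice, not a published specification.

```python
# Rough endurance and thrust-to-weight estimates from the quoted specs.
THRUST_N = 400.0          # stated maximum thrust
ENGINE_MASS_KG = 3.0      # stated engine weight
FUEL_BURN_KG_PER_H = 1.2  # stated consumption at maximum thrust
FUEL_LOAD_KG = 10.0       # assumed onboard fuel load (not a published spec)
G = 9.81                  # m/s^2

endurance_h = FUEL_LOAD_KG / FUEL_BURN_KG_PER_H
engine_thrust_to_weight = THRUST_N / (ENGINE_MASS_KG * G)

print(f"Endurance at full thrust: {endurance_h:.1f} h")          # ~8.3 h
print(f"Engine thrust-to-weight: {engine_thrust_to_weight:.1f}") # ~13.6
```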
The Role of the SW400Pro in Modern UAV Development
The SW400Pro is increasingly becoming the engine of choice for various UAV applications due to its balance of power, efficiency, and compactness. Its ability to deliver high thrust relative to its weight makes it particularly valuable for UAVs that require both speed and endurance.
- Enhanced Endurance and Range: The engine’s fuel efficiency extends the operational range of UAVs, enabling them to cover greater distances and remain airborne for longer periods. This feature is particularly important for missions that require persistent surveillance or long-range reconnaissance.
- Increased Payload Capacity: The SW400Pro’s high thrust-to-weight ratio allows UAVs to carry heavier payloads, such as advanced sensors, electronic warfare equipment, or precision-guided munitions. This capability enables UAVs to perform multiple roles in a single mission.
- High-Altitude Operations: The engine’s ability to function effectively at high altitudes makes it ideal for UAVs that operate in airspaces difficult to reach with conventional aircraft, such as mountainous regions or areas requiring high-altitude surveillance.
- Versatility Across Platforms: The SW400Pro is adaptable to a wide range of UAV platforms, from small tactical drones to larger strategic systems. Its versatility makes it a valuable asset in both military and civilian applications.
- Stealth Capabilities: The SW400Pro’s design minimizes its infrared signature, making it harder for enemy forces to detect and target the UAV. This feature is critical for stealth operations, where avoiding detection is paramount to mission success.
Global Impact and Export Potential
The SW400Pro has attracted attention not only in China but also in international markets. Its performance and reliability make it an attractive option for countries looking to enhance their UAV capabilities without investing in the development of new engines from scratch.
- Export Success: The SW400Pro’s export to various countries reflects China’s growing influence in the global aerospace industry. The engine’s reliability and efficiency have made it a popular choice in markets where cost-effectiveness and performance are critical.
- Strategic Partnerships: The export of the SW400Pro has facilitated strategic partnerships between China and other countries, particularly in aerospace technology and defense. These partnerships often include technology transfer agreements, further enhancing the capabilities of China’s allies.
- Influence on Global UAV Development: The widespread use of the SW400Pro has set a benchmark for performance and reliability, prompting other engine manufacturers to innovate and compete. Its integration into various UAVs has influenced the design and development of new platforms worldwide.
- Military and Civilian Applications: While primarily used in military UAVs, the SW400Pro also has potential applications in civilian sectors such as disaster response, environmental monitoring, and agricultural surveying. Its reliability and efficiency make it suitable for any application requiring small, high-performance UAVs.
Challenges and Future Developments
Despite its success, the SW400Pro faces challenges that could impact its future development and adoption. These challenges include the rapidly evolving nature of UAV technology, competition from other engine manufacturers, and potential geopolitical constraints.
- Technological Advancements: As UAV technology continues to evolve, there is a constant demand for more powerful, efficient, and adaptable engines. The SW400Pro will need to undergo continuous improvement to remain competitive, particularly in terms of thrust-to-weight ratio, fuel efficiency, and integration with new UAV technologies.
- Competition from Other Manufacturers: The global market for UAV engines is highly competitive, with manufacturers from the United States, Europe, and other regions developing advanced propulsion systems. To maintain its market position, the SW400Pro must continue to offer unique advantages, such as lower costs, better fuel efficiency, or superior integration capabilities.
- Geopolitical Considerations: The export of the SW400Pro and its integration into foreign UAVs could be influenced by geopolitical factors, including international sanctions, export restrictions, or shifting alliances. China’s ability to navigate these challenges will be crucial in maintaining the engine’s global presence.
- Integration with Autonomous Systems: The future of UAVs is likely to be heavily influenced by advancements in artificial intelligence and autonomous systems. The SW400Pro’s design must consider these developments, ensuring that it can be seamlessly integrated into next-generation UAVs requiring more sophisticated propulsion solutions.
- Russian UAVs in Ukraine: Recent reports suggest that some Russian drones, identified as part of their decoy or loitering munitions, have been found with the SW400Pro engine during the conflict in Ukraine. These drones are likely repurposed civilian or hobbyist UAVs equipped with this engine due to its availability and performance relative to cost.
Possible but Unconfirmed Usage:
- Custom-Built UAVs: The SW400Pro is popular among hobbyists and smaller-scale UAV developers for custom-built UAVs due to its compact size and high thrust. While these UAVs are not part of mainstream military arsenals, they might be used in experimental or testing roles by various entities.
Lack of Information on Military UAVs:
- No Major Military Platforms: There is no credible evidence that mainstream or widely recognized military UAVs, such as those used by major air forces (e.g., Wing Loong, MQ-9 Reaper), use the SW400Pro engine. Most military UAVs typically use engines that are specifically designed for military applications, which are distinct from those available commercially. | <urn:uuid:fd26afa6-b267-4a0c-8675-9667d7460223> | CC-MAIN-2024-38 | https://debuglies.com/2024/09/01/the-escalation-of-drone-warfare-ukraines-air-defense-and-the-emerging-threat-of-russian-jet-powered-uavs/ | 2024-09-13T07:49:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651510.65/warc/CC-MAIN-20240913070112-20240913100112-00163.warc.gz | en | 0.937943 | 3,317 | 3.0625 | 3 |
The National Institute of Standards and Technology (NIST) is launching a program that will assess quickly proliferating generative AI technologies being developed by research institutions – such as tools that generate text, images, and videos.
The NIST GenAI program is also aimed at creating systems that will be able to identify whether text, images, or videos were created by an AI model, a key step in pushing back against disinformation such as voice-cloning campaigns or deepfakes that can fool users into believing they are real.
The rollout of the NIST GenAI program was among several announcements this week by the U.S. Commerce Department, including draft publications from NIST addressing AI security and trustworthiness. They're also among a flurry of moves by federal agencies, such as the Department of Homeland Security and CISA, that mark the six-month milestone since President Biden issued his executive order addressing the need for the safe and secure development of AI technologies.
Testing and Evaluating
According to the release from the Commerce Department – of which NIST is a part – the program is similar to others run by NIST, which tests and develops technologies for both the government and the private sector. It will be an umbrella program that includes a platform for testing and evaluating generative AI technologies, with the evaluations playing a role in the work of the U.S. AI Safety Institute, which is also housed at NIST.
The evaluations will include creating an evolving benchmark dataset and running comparative analyses using metrics. In addition, according to the NIST GenAI program website, the initiative will also include “facilitating the development of content authenticity detection technologies for different modalities (text, audio, image, video, code)” and “promoting the development of technologies for identifying the source of fake or misleading information.”
In announcing the new program and draft publications, Commerce officials echoed what others in the federal government – as well as the private sector – have said: AI holds the promise of delivering significant benefits to both the business world and society at large, but it carries tremendous risk as well. President Biden's executive order from October 2023 takes a whole-of-government approach to ensuring that the benefits are realized while the risks are reduced.
“For all its potentially transformative benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” Laurie Locascio, NIST director and under secretary of commerce for standards and technology, said in a statement. “These guidance documents will not only inform software creators about these unique risks, but also help them develop ways to mitigate the risks while supporting innovation.”
NIST is kicking off a pilot study that will measure and understand how systems behave to better discriminate between content that is synthetic and content that is created by humans in both text-to-text and text-to-image situations.
“This pilot addresses the research question of how human content differs from synthetic content, and how the evaluation findings can guide users in differentiating between the two,” the agency notes. “The generator task creates high-quality outputs while the discriminator task detects if a target output was generated by AI models or humans.”
The test will include two teams: one of generators, who will be tested on their system’s ability to generate synthetic content created by large language models (LLMs) and generative AI tools, the other of discriminators, whose system will be tested for its ability to detect such synthetic content.
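For intuition about what the discriminator side of the pilot is up against, the sketch below trains a toy text classifier to separate human-written from model-generated summaries. This is a baseline for illustration only, with made-up placeholder samples; it is not part of NIST's methodology, and real discriminators need far larger corpora and stronger models.

```python
# Toy discriminator: TF-IDF features + logistic regression.
# The sample texts are placeholders; supply real labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

human_texts = [
    "The committee reviewed the budget and approved three amendments.",
    "Rainfall this spring broke records set over the previous decade.",
]
ai_texts = [
    "The committee convened to deliberate and subsequently approved measures.",
    "Precipitation levels this season exceeded historical benchmarks notably.",
]

texts = human_texts + ai_texts
labels = [0] * len(human_texts) + [1] * len(ai_texts)  # 1 = synthetic

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Probability that a new summary is machine-generated.
print(clf.predict_proba(["A summary of the source documents."])[:, 1])
```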
The More the Merrier
NIST is inviting teams from academia, other research labs and the tech industry to contribute to the pilot. The generator teams will create a summary of no more than 250 words of a topic and a set of documents. Discriminator teams will be given text summaries that may have been written by a human or AI and will have to detect which is true. The registration period for the pilot opens May 1 with the NIST-provided source data being released June 3.
“For disinformation campaigns, AI algorithms can now be trained to analyze vast amounts of data, identify trends and mimic human behavior on social media platforms,” the survey authors wrote. “By deploying AI-driven bots or deepfake technologies, malicious actors can flood online spaces with misleading narratives, fabricated stories and manipulated media.” | <urn:uuid:8a3902a9-e983-476c-9c7b-c2592191aa80> | CC-MAIN-2024-38 | https://techstrong.ai/government/nist-platform-to-focus-on-sorting-ai-from-human-generated-content/ | 2024-09-15T15:33:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00863.warc.gz | en | 0.955186 | 895 | 2.5625 | 3 |
The Sysdig 2023 Cloud-Native Security and Usage Report found that 87% of container images have high-risk vulnerabilities. In simple terms, containers are packages of software files that contain everything needed to run an application.
They are an essential step forward in modernizing applications; however, with the benefits of containers also comes the responsibility of securing them, especially container images, which often contain vulnerabilities due to the use of outdated packages. This is where container vulnerability scanning steps in.
This blog will help IT professionals understand why they should be scanning container technology, how to do it, and the most critical question, how they know they are running a secure environment.
- Container vulnerability scanning is the act of checking containerized applications for security problems or weaknesses to keep them safe from potential threats.
- Common container vulnerabilities include escape vulnerabilities, attribute disclosure, and known vulnerabilities in components.
- There are 5 different types of container vulnerability scans.
- Follow practices like unique UIDs, limiting user IDs, using mandatory access control, and more to maximize security.
- Tools like Astra, Clair, Grype, Docker Bench for Security, and Trivy can help detect and remediate these vulnerabilities.
What is Container Vulnerability Scanning?
Container vulnerability scanning is a process that uses automated tools to compare the contents of each container to a database of known vulnerabilities. If a library or other dependency within a container image is subject to a known vulnerability, the tool will flag the image as insecure.
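Conceptually, that comparison is a join between the image's package inventory and a vulnerability database. The sketch below shows the core matching step with toy data; both dictionaries, including the CVE identifiers, are placeholders standing in for a real SBOM and a real CVE feed.

```python
# Core of image scanning: match installed packages against a known-
# vulnerability database. All entries below are placeholder data.
installed = {"openssl": "1.1.1k", "zlib": "1.2.11", "curl": "7.79.0"}

vuln_db = {  # (package, version) -> advisory IDs (placeholders)
    ("openssl", "1.1.1k"): ["CVE-2021-0001"],
    ("curl", "7.79.0"): ["CVE-2021-0002"],
}

findings = {
    pkg: vuln_db[(pkg, ver)]
    for pkg, ver in installed.items()
    if (pkg, ver) in vuln_db
}
print(findings)  # {'openssl': ['CVE-2021-0001'], 'curl': ['CVE-2021-0002']}
```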
What are Containers?
Containers are lightweight, stand-alone, executable packages that include everything needed to run an application: code, runtime, system tools, system libraries, and settings. They provide an isolated environment for applications to run in: the application is installed within the container along with a specific operating system layer, libraries, and executable files.
Why is Everyone Using Them?
When the container is started, the application runs on the operating system in that isolated environment. Containers are easy to deploy and start. They are independent and can run on their own. This makes them portable to many different environments and easy to integrate into existing applications.
Introduction to Vulnerability Scanning
Vulnerability scanning is the process of finding security vulnerabilities in applications (web, mobile, network, blockchain) using manual or automated scanners. It is a crucial part of any security program, allowing security personnel to keep track of known vulnerabilities, prioritize them, and plan the best way to fix them.

The vulnerability scanning process includes data collection, analysis, categorization, prioritization, and reporting of vulnerabilities. Manual vulnerability scanning is increasingly being replaced with automated scanning, performed on a website or piece of software by an automated vulnerability scanner.

Automated vulnerability scanning is more cost-effective and scalable than manual scanning, which is why organizations increasingly rely on it to test for security vulnerabilities.
Why is Container Security complex?
Container security scanning has become more and more popular in recent years. That is because Docker has grown in popularity, and the process has become more and more complex.
- The container ecosystem has grown significantly, which means many more components need to be checked for vulnerabilities.
- The more components you add to your application, the more complex the process of checking for vulnerabilities becomes.
- The complexity of containers makes them harder for developers and security researchers to analyze and secure.
- The challenge is that containers are a new technology and container security is relatively new.
What Kind of Container Vulnerabilities Can Be Detected?
Container technologies such as Docker are often considered safe and reliable. The reality, however, is that they are vulnerable to a number of security issues. The only way to eliminate the risk entirely is to avoid containers altogether. When it comes to container-level vulnerabilities, the most common are:
1. Escape vulnerabilities
This vulnerability is caused by code that allows execution from user input due to improper isolation or insufficient restrictions, such as missing input validation. It can be exploited to escape the container by supplying a crafted command.
2. Attribute disclosure
This vulnerability is caused by a lack of restriction on the host attributes, such as environment variables, that the container can access and manipulate. It can be used to obtain information that should not be available to the container.
For example, the container should not manipulate the host’s network interfaces, but the vulnerability allows the container to do so.
3. Known Vulnerabilities in Components
One of the most common ways to exploit vulnerabilities in the Docker daemon is to get a root shell, which allows the attacker to read any file on the server and execute any command as root. One way to compromise the Docker daemon is to exploit a vulnerability in a library that is used by one of the many Docker tools (e.g., docker-cli-js).
Container Vulnerabilities And How To Avoid Them
Several types of container vulnerabilities can be detected and mitigated. Here are some common examples:
Image vulnerabilities
The container image itself can contain vulnerabilities, such as outdated or unpatched software components.
Avoid this by keeping your images up-to-date with the latest patches and security updates.
Configuration vulnerabilities
Misconfigurations can also lead to security vulnerabilities. For example, running a container with unnecessary privileges or leaving open ports can leave your system vulnerable to attacks.
To avoid this, it is important to follow best practices for container configuration and use tools like security scanners to detect potential issues.
Runtime vulnerabilities
Once the container is running, it may still be vulnerable to attacks. For example, a container running as the root user may allow attackers to gain access to the host system.
To avoid this, it is important to use appropriate user permissions and to monitor the container for any suspicious activity.
Supply chain vulnerabilities
Containers may contain dependencies from external sources, which can introduce vulnerabilities.
Avoid this by carefully reviewing and auditing any external dependencies before adding them to your container.
Types of Container Security Scanning
Here are some common types of container security scanning:
Image scanning analyzes container images for vulnerabilities and misconfigurations before they are deployed. Image scanning tools use various methods to detect potential issues, such as analyzing software dependencies, comparing against known-vulnerability databases, and examining system configurations.
Runtime scanning analyzes running containers for any changes or suspicious activities that could indicate a security breach. Various methods are used to monitor the behavior of running containers, such as examining system logs, network traffic, and file systems.
A compliance scan checks containers and images against security policies, regulations, and standards such as HIPAA, PCI DSS, or GDPR. Deviations from these standards are detected, and alerts and reports on potential compliance issues are provided.
Configuration scans check container configuration settings for security issues, such as open network ports, elevated privileges, or insecure authentication mechanisms. Configuration scanning tools can detect misconfigurations and guide how to remediate them.
Dependency scanning analyzes container dependencies, such as libraries and frameworks, for known vulnerabilities and exploits. Outdated or vulnerable dependencies are identified, and recommendations are given on how to update them.
How to discover container vulnerabilities during the SDLC pipeline?
Container images are the deliverable artifacts of a software project, so security vulnerabilities must be detected both in the source code and in the container images. The modern software development life cycle (SDLC) offers an opportunity to check container images for security vulnerabilities throughout the pipeline, a practice known as container vulnerability scanning.
In the past, image vulnerability scanning was only conducted at the time of image build (or build time). However, this process is not comprehensive enough as it doesn’t cover the time when the image is being used in production.
Build-time scanning alone also misses vulnerabilities that are disclosed only after the image is built. It is essential to scan images at the same time as code is scanned (during code review) for potential security issues. This scanning can be manual or automated, based on the organization's decision.
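A common way to enforce this in practice is to make the pipeline fail when a scanner reports serious findings. The sketch below wraps Trivy's --exit-code and --severity options in a small Python gate; the image reference is a placeholder, and the script assumes the trivy CLI is installed on the build agent.

```python
# Minimal CI gate: fail the stage if Trivy finds HIGH/CRITICAL issues.
# Assumes trivy is on PATH; the image name is a placeholder.
import subprocess
import sys

result = subprocess.run([
    "trivy", "image",
    "--exit-code", "1",             # non-zero exit when findings match
    "--severity", "HIGH,CRITICAL",
    "registry.example.com/app:latest",
])
sys.exit(result.returncode)  # propagate failure so the pipeline stops
```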
10 Best Practices to Avoid Container Vulnerabilities
The use of containers is now considered a best practice, especially in terms of speed and flexibility. As with any new technology, it is essential to follow best practices to avoid common security concerns, such as:
- Assign unique UIDs and GIDs to each container.
- Limit and control user IDs, groups, and capabilities.
- Enable mandatory access control.
- Do not share host directories with containers.
- Disable container login by default.
- Use seccomp for filtering system calls.
- Avoid using the root user.
- Perform regular container vulnerability scanning
- Disable container capabilities.
- Use security-enhanced Linux for fine-grained controls.
Although the list could go on, the practices above are must-haves. Several of them can be applied directly when a container is launched, as the sketch below shows.
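A minimal sketch of what several of these practices look like at launch time, using the Docker SDK for Python (pip install docker). The image reference and UID/GID are placeholders; tailor the profile to your workload.

```python
# Launching a hardened container: non-root user, dropped capabilities,
# read-only filesystem, and no privilege escalation.
import docker

client = docker.from_env()
container = client.containers.run(
    "registry.example.com/app:latest",      # placeholder image
    detach=True,
    user="10001:10001",                     # unique non-root UID:GID
    cap_drop=["ALL"],                       # drop all Linux capabilities
    read_only=True,                         # immutable root filesystem
    security_opt=["no-new-privileges:true"],
)
print(container.id)
```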
Top 5 Open Source Container Vulnerability Scanning Tools
Scanning containers for vulnerabilities is still a relatively new concept. Thus, only a few excellent open-source container vulnerability scanning tools are available.
1. Clair: Clair is one of the most widely used open-source container scanners, offering static analysis of vulnerabilities in application containers. Vendors use it for vulnerability detection and users for vulnerability analysis.
2. Grype: Grype is an easy-to-use and straightforward container vulnerability scanner. It is designed to provide quick scans of container images and filesystems for known vulnerabilities drawn from the most popular CVE databases. Grype is powered by Syft, the open-source software bill of materials (SBOM) tool for container images and filesystems.
3. Docker Bench for Security: Docker Bench for Security, commonly abbreviated as DBFS, is a script that audits Docker hosts and containers against security benchmarks. It is best described as a security benchmarking tool: it checks for standard best practices around deploying Docker containers in production environments, drawing on the CIS Docker Benchmark.
4. Trivy: Trivy is a great security scanner among container security scanning tools for container images. It allows you to scan images for vulnerabilities and configuration issues before using them. The goal is to help you verify your containers’ security and detect configuration issues or vulnerabilities before you deploy your application. Trivy is an entirely open-source project with its source code hosted on Github.
Why Choose Astra Security for Container Vulnerability Scanning?
Astra Security is a leading continuous PTaaS (penetration testing as a service) platform and DAST scanner. At Astra, we provide a wide range of cybersecurity services, including container vulnerability scanning performed by security engineers using open-source tools with an offensive approach, web application security, and network penetration testing. All of this happens on Astra's one-of-a-kind Pentest Platform.
Hopefully, this guide has provided you with all the information you need to better understand container vulnerability scanning and the different tools you can use to help keep your containers safe. If you want to learn more about how we can help you with container vulnerability scanning, feel free to contact us anytime. Thank you for reading! We are always excited when one of our posts can provide helpful information on this topic.
How does container scanning work?
Container scanning works by comparing the contents of a container, including its software dependencies and configurations, against a database of known vulnerabilities. Automated tools identify and flag any matches, indicating potential security issues. This process helps ensure that containers are free from known vulnerabilities, enhancing their security in a rapidly evolving threat landscape. | <urn:uuid:4164655c-b318-4e6c-91d6-4ac862f84418> | CC-MAIN-2024-38 | https://www.getastra.com/blog/security-audit/container-vulnerability-scanning/ | 2024-09-15T16:53:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00863.warc.gz | en | 0.924493 | 2,363 | 2.75 | 3 |
In the fast-paced world of IT, ensuring the stability and reliability of your services is crucial. One of the most effective ways to achieve this is through Problem Management best practices. These practices focus on identifying and addressing the root causes of recurring issues, thereby reducing the number of incidents and enhancing the overall quality of your IT services.
By incorporating these best practices, you can create a more resilient IT environment that not only responds to issues more efficiently but also prevents them from occurring in the first place.
This article will guide you through the essential concepts and best practices of Problem Management, helping you to implement them effectively in your organization.
What is Problem Management?
In the realm of IT Service Management (ITSM), Problem Management plays a pivotal role in maintaining the stability and efficiency of IT services. But what exactly is Problem Management? At its core, it is the process of identifying and managing the lifecycle of problems within an IT environment.
Problems are typically underlying issues that cause incidents. By focusing on these root causes, Problem Management aims to prevent incidents from recurring, thereby improving overall service quality and customer satisfaction.
The distinction between incidents and problems is crucial. While incidents are disruptions that need immediate resolution, problems are the underlying causes of these incidents. Effective Problem Management ensures that these root causes are analyzed and addressed, reducing the likelihood of future incidents.
This proactive approach to support not only enhances the reliability of IT services but also minimizes downtime, saving both time and resources.
Reactive Problem Management vs. Proactive Problem Management
Problem Management can be approached in two primary ways: Reactive and Proactive Problem Management. Understanding the differences between these types is essential for implementing an effective Problem Management strategy.
- Reactive Problem Management: This type of Problem Management is triggered after an incident occurs. The focus is on identifying the root cause of the incident and resolving it to prevent recurrence. While reactive Problem Management is necessary, it often means that the damage has already been done, leading to potential service disruptions.
- Proactive Problem Management: In contrast, proactive Problem Management aims to identify and resolve potential problems before they result in incidents. This approach involves trend analysis, risk assessments, and continuous monitoring of the IT environment to detect vulnerabilities. Proactive Problem Management is more strategic and can significantly reduce the number of incidents, leading to a more stable IT environment.
Both reactive and proactive Problem Management have their place in a comprehensive ITSM strategy. However, organizations that prioritize proactive Problem Management are likely to experience fewer incidents and a more robust IT infrastructure.
5 key benefits of Problem Management
Implementing effective Problem Management best practices offers several benefits that can enhance IT operations and overall business performance. Here are five key advantages:
- Reduced incident volume: By addressing the root causes of incidents, Problem Management helps in minimizing the recurrence of these issues. This leads to a significant reduction in the overall incident volume, freeing up IT resources for more strategic tasks.
- Improved service quality: Problem Management ensures that recurring issues are identified and resolved, leading to improved service reliability and user satisfaction. As a result, the quality of IT services delivered to end-users is enhanced.
- Cost efficiency: Resolving problems at their root causes can prevent the escalation of issues, which in turn reduces the costs associated with incident resolution, downtime, and lost productivity.
- Enhanced knowledge sharing: Problem Management often involves documenting the problem resolution process. This documentation serves as a valuable knowledge base that can be used for future reference, speeding up the resolution of similar issues and promoting knowledge sharing within the organization.
- Increased customer satisfaction: With fewer incidents and improved service quality, end-users experience fewer disruptions, leading to higher levels of customer satisfaction. A stable IT environment builds trust and confidence among users, which is crucial for business success.
5 challenges of Problem Management
While Problem Management offers numerous benefits, it also comes with its own set of challenges. Understanding these challenges is essential for effective implementation:
- Identifying root causes: One of the biggest challenges in Problem Management is accurately identifying the root cause of an issue. This process requires a deep understanding of the IT environment and can be time-consuming.
- Resource allocation: Problem Management requires dedicated resources, including skilled personnel and tools. Balancing these resources with other IT priorities can be challenging, especially in smaller organizations.
- Data availability and analysis: Effective Problem Management relies on accurate and comprehensive data. However, collecting, analyzing, and making sense of large volumes of data can be overwhelming and requires advanced tools and expertise.
- Resistance to change: Implementing Problem Management often requires changes in processes, roles, and responsibilities. This can lead to resistance from staff who are accustomed to existing workflows.
- Continuous monitoring and improvement: Problem Management is not a one-time activity but an ongoing process. Maintaining a continuous focus on Problem Management requires commitment and discipline, which can be challenging in a dynamic IT environment.
4 stages of Problem Management
Problem Management is a structured process that involves several stages. Each stage plays a crucial role in ensuring the effective resolution of problems. Let's break down the four stages:
1. Problem identification
The first stage in Problem Management is identifying the problem. This involves detecting patterns in incidents, analyzing them, and recognizing when an issue is not just an isolated incident but a recurring problem.
Effective problem identification often requires collaboration between service desk teams and technical specialists who can analyze incident data to spot trends.
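As a toy illustration of this kind of trend-spotting, the sketch below groups incident tickets by category and flags any category that crosses a recurrence threshold as a candidate problem. The tickets and the threshold are invented; in practice this data would come from your ITSM tool.

```python
from collections import Counter

# Invented incident tickets; real data would come from the ITSM tool.
incidents = [
    {"id": 101, "category": "email outage"},
    {"id": 102, "category": "vpn drop"},
    {"id": 103, "category": "email outage"},
    {"id": 104, "category": "email outage"},
    {"id": 105, "category": "printer"},
]

THRESHOLD = 3  # recurrence count that should trigger a problem record

counts = Counter(ticket["category"] for ticket in incidents)
for category, n in counts.items():
    if n >= THRESHOLD:
        print(f"Candidate problem: '{category}' seen {n} times - raise a problem record")
```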
2. Problem categorization and prioritization
Once a problem is identified, it needs to be categorized and prioritized. Categorization helps in organizing problems based on their nature, which can make subsequent analysis and resolution more efficient. Prioritization, on the other hand, involves assessing the impact and urgency of the problem to determine the order in which it should be addressed. High-impact problems that affect critical services should be prioritized to minimize disruptions.
3. Problem diagnosis and resolution
This stage is where the root cause of the problem is identified, and a resolution is developed. Diagnosis often involves detailed analysis, including root cause analysis (RCA) techniques, to pinpoint the exact cause of the problem. Once the root cause is identified, the resolution process begins. This could involve fixing the issue, implementing a workaround, or making changes to prevent the problem from recurring.
4. Problem closure and evaluation
The final stage of Problem Management is closing the problem and evaluating the effectiveness of the resolution. Before closure, it is essential to ensure that the problem has been fully resolved and that there are no lingering issues. An evaluation is conducted to review the Problem Management process and identify any lessons learned. This stage also involves updating the knowledge base with information that can be used for future problem management efforts.
Problem Management best practices
To optimize your Problem Management processes, consider incorporating these eight best practices:
1. Implement a Problem Management policy
Establishing a clear Problem Management policy is the foundation of an effective problem management process. This policy should define the objectives, scope, roles, and responsibilities involved in managing problems within your organization.
By setting clear guidelines, you ensure that everyone involved understands their role in the problem management process, which helps in streamlining operations and avoiding confusion.
A well-defined policy also sets the expectations for how problems should be identified, analyzed, and resolved, ensuring consistency across the organization.
Furthermore, a comprehensive policy provides a framework for continuous improvement. As your IT environment evolves, your problem management policy should be regularly reviewed and updated to reflect changes in technology, processes, and business needs.
By doing so, you ensure that your problem management practices remain relevant and effective, helping your organization stay ahead of potential issues and maintain a stable IT environment.
2. Utilize automation tools
Incorporating automation tools into your Problem Management process can greatly enhance efficiency and accuracy. Automation tools can help in collecting incident data, conducting trend analysis, and generating reports, which reduces the manual effort required from your IT team.
These tools can quickly identify patterns and correlations that might be missed by human analysts, enabling faster identification of root causes and more timely resolutions.
Moreover, automation tools can streamline repetitive tasks, allowing your IT staff to focus on more strategic activities. By reducing the time spent on manual data processing, you can improve the speed and effectiveness of your problem management efforts.
Additionally, automation ensures that critical data is consistently captured and analyzed, providing a solid foundation for making informed decisions and improving overall service quality.
3. Foster collaboration across teams
Fostering collaboration across teams is crucial for effective problem management. Problems often span multiple areas of the IT environment, requiring input from various teams to identify and resolve the underlying issues.
Encouraging open communication and collaboration between service desk teams, technical experts, and other stakeholders can lead to faster and more accurate problem resolution. By breaking down silos, you create a more cohesive approach to problem management, ensuring that all relevant perspectives are considered.
Collaboration also promotes knowledge sharing, which is vital for continuous improvement. When teams work together to solve problems, they can share insights and lessons learned, helping to build a collective understanding of the IT environment. This shared knowledge can be invaluable in preventing future incidents and improving the overall effectiveness of your problem management process.
4. Invest in training and development
Investing in training and development for your IT staff is essential to maintain an effective problem management process. The complexities of modern IT environments require a deep understanding of problem management techniques and tools.
By providing ongoing employee training, you ensure that your team is equipped with the latest skills and knowledge needed to identify and resolve problems efficiently. Regular training sessions also keep your staff updated on new tools and methodologies, enabling them to adapt to changes in the IT landscape.
In addition to technical skills (or hard skills), training should also focus on soft skills such as communication and teamwork, which are critical for effective problem management.
Developing these skills can enhance collaboration and ensure that problems are addressed from multiple angles. A well-trained team is more confident and capable of handling complex problems, leading to quicker resolutions and a more stable IT environment.
5. Focus on Proactive Problem Management
Focusing on proactive Problem Management can significantly reduce the number of incidents in your IT environment. Proactive problem management involves regularly conducting risk assessments, trend analyses, and continuous monitoring to identify potential issues before they escalate into incidents.
By anticipating problems and addressing them early, you can prevent disruptions and maintain a more stable IT environment.
Proactive problem management also fosters a culture of continuous improvement. By regularly reviewing and refining your Problem Management strategies, you can identify areas for enhancement and implement changes that improve the overall effectiveness of your IT operations.
This forward-thinking approach not only minimizes the impact of problems but also helps in building a more resilient IT infrastructure that can adapt to future challenges.
6. Document and share knowledge
Documenting and sharing knowledge is a critical aspect of effective problem management. By thoroughly documenting each problem's identification, diagnosis, resolution, and lessons learned, you create a valuable resource that can be referred to when similar issues arise in the future.
This documentation should be detailed and accessible, allowing team members to quickly find the information they need to address recurring problems. A well-maintained knowledge base not only speeds up problem resolution but also reduces the risk of the same problem happening again.
Sharing this knowledge across teams is equally important. When team members have access to a centralized repository of problem management documentation, it fosters a culture of continuous learning and improvement.
Knowledge sharing ensures that everyone in the organization is on the same page, reducing the chances of miscommunication and duplicated efforts. Ultimately, this practice enhances the overall efficiency and effectiveness of your problem management process, leading to a more resilient IT environment.
7. Measure and report on Problem Management performance
Measuring and reporting on Problem Management performance is essential for understanding how well your processes are working and where improvements are needed.
Establishing key performance indicators (KPIs) such as the number of problems identified, the time taken to resolve them, and the frequency of recurring issues allows you to track the effectiveness of your Problem Management efforts. Regularly reviewing these metrics provides insights into areas where your team excels and highlights opportunities for improvement.
Reporting these metrics to stakeholders helps in aligning Problem Management efforts with broader business goals. Transparent reporting ensures that everyone understands the impact of problem management on overall service quality and customer satisfaction.
By consistently measuring and reporting performance, you can make informed decisions about resource allocation, process adjustments, and training needs, ensuring that your problem management practices continue to evolve and improve over time.
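As a small illustration, the sketch below computes two such KPIs, mean time to resolve and recurrence rate, from a handful of invented problem records; real figures would be pulled from your ITSM platform.

```python
from datetime import datetime

# Invented problem records; real data would come from your ITSM platform.
problems = [
    {"opened": datetime(2024, 1, 2), "closed": datetime(2024, 1, 9), "recurred": False},
    {"opened": datetime(2024, 2, 1), "closed": datetime(2024, 2, 4), "recurred": True},
    {"opened": datetime(2024, 3, 5), "closed": datetime(2024, 3, 6), "recurred": False},
]

resolution_days = [(p["closed"] - p["opened"]).days for p in problems]
mean_time_to_resolve = sum(resolution_days) / len(resolution_days)
recurrence_rate = sum(p["recurred"] for p in problems) / len(problems)

print(f"Problems resolved: {len(problems)}")
print(f"Mean time to resolve: {mean_time_to_resolve:.1f} days")
print(f"Recurrence rate: {recurrence_rate:.0%}")
```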
8. Continuously improve Problem Management processes
Continuously improving Problem Management processes is crucial for maintaining an effective and resilient IT environment. Problem management is not a one-time activity but an ongoing process that requires regular review and refinement. By adopting a mindset of continuous improvement, you can ensure that your problem management practices evolve in response to changes in your IT environment, emerging technologies, and evolving business needs.
To achieve continuous improvement, it's important to regularly evaluate the effectiveness of your Problem Management processes and identify areas for enhancement. This might involve implementing new tools, revising procedures, or providing additional training to your team.
Engaging in regular feedback loops with stakeholders and team members can also provide valuable insights into how your problem management practices can be improved. By committing to continuous improvement, you ensure that your problem management processes remain agile, effective, and capable of addressing the challenges of a dynamic IT landscape.
Implementing Problem Management best practices is essential for maintaining the stability and reliability of your IT services. By focusing on root causes, you can significantly reduce the number of incidents, improve service quality, and enhance customer satisfaction.
Whether you’re just starting with Problem Management or looking to refine your existing processes, the best practices outlined above provide a solid foundation for success.
Frequently Asked Questions (FAQs)
1. What is the difference between Incident Management and Problem Management?
Incident management focuses on resolving immediate disruptions, while problem management aims to identify and address the root causes of these disruptions to prevent recurrence.
2. Why is Proactive Problem Management important?
Proactive problem management helps in identifying potential issues before they result in incidents, reducing the number of disruptions and improving overall service reliability.
3. How can automation tools help in Problem Management?
Automation tools can streamline the problem management process by automating data collection, trend analysis, and reporting, making it easier to identify and resolve issues efficiently.
4. What are the key challenges in Problem Management?
Key challenges include identifying root causes, resource allocation, data analysis, resistance to change, and maintaining continuous improvement.
Machine learning, a big data discipline of artificial intelligence, gives systems the ability to automatically gain information and improve from experience without manual programming. Machine learning (ML) is literally just that – "letting the machine learn".
The definition of machine learning is “the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model of sample data, known as ‘training data’, in order to make predictions or decisions without being explicitly programmed to perform the task”.
IBM employee Arthur Samuel (1901 – 1990) pioneered artificial intelligence and machine learning research. His inspiration came from the game of checkers: he created a learning program for the first commercial IBM computer, the IBM 701, so he could play against the machine as if it were a human opponent. According to Stanford, "games are convenient for artificial intelligence because it is easy to compare computer performance with that of people."
Arthur Samuel continued winning against the computer, so he wrote a program to let the computer play against itself. The program collected data on its games and created a predictive analytics engine to improve its decision making. Once the computer started to gather data and experience, Samuel finally started losing (or winning – however you choose to look at it) and the program was a success!
We see machine learning in a variety of industries such as manufacturing, retail, healthcare, hospitality, financial services and energy. The Gurucul Cyber Security Analytics Platform applies ML algorithms to its behavior analytics solution to detect anomalous activity based upon a change in behavioral patterns.
Machine learning differs from artificial intelligence (AI) in the sense that machines aren’t just expected to be taught how to act intelligently when performing a task; these machines must be able to learn on their own and make decisions without human supervision. The machines can look at data, figure out if a decision was wrong or right, and use that information to make better choices next time.
Machine learning algorithms are commonly grouped into categories such as supervised, unsupervised, semi-supervised and reinforcement learning.
Automated and iterative machine learning algorithms reveal patterns in big data, detect anomalies, and identify structures that may be new and previously unknown. Therefore, when paired with statistical analysis, ML identifies relationships that may otherwise have gone undetected. All in all, it can surpass human capability and conventional software engineering in making use of large volumes of big data.
One of the reasons Gurucul Risk Analytics uses machine learning algorithms for deep learning to detect and prevent anomalous behavior is that they are not rule-based. The excessive alerts that come from rules create too much data to sift through and lots of false positives. Not to mention, rules can only detect known threats, whereas algorithms not based on rules can detect unknown threats and new threat variants. A proper implementation of self-learning ML/AI can more effectively adapt to new attack patterns and multi-stage methods across long periods of time.
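As a generic illustration of non-rule-based anomaly detection (and explicitly not Gurucul's own proprietary models), the sketch below uses scikit-learn's IsolationForest to learn a baseline of per-user activity and flag departures from it. The feature values are invented.

```python
from sklearn.ensemble import IsolationForest

# Invented per-user features: [logins_per_day, MB_downloaded, failed_logins].
normal_activity = [[5, 120, 0], [6, 100, 1], [4, 90, 0], [5, 110, 1], [7, 130, 0]]
new_activity = [[5, 115, 0], [40, 9000, 12]]  # second row departs from the baseline

model = IsolationForest(contamination=0.1, random_state=0).fit(normal_activity)
print(model.predict(new_activity))  # 1 = consistent with baseline, -1 = anomalous
```

Note that no rule was written here: the model infers what "normal" looks like from the data itself, which is the property the paragraph above describes.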
Fourteen of Gurucul’s most popular ML models serve to detect and predict malicious activity such as compromised accounts, fraudulent activity, insider threats, money laundering, and more.
Gurucul’s most popular machine learning models include:
With machine learning, we’re moving beyond tedious rules and patterns to rule out bad actors. Gone are the days of having to sift through heaps of data – a massive waste of productivity when your precious human employees can be focusing on other tasks.
Let the machine learn and do the heavy lifting for you with a reliable security analytics platform. Request a Gurucul Risk Analytics demo today!
Europe, the US, Japan and China are racing to develop the next generation of supercomputer – exascale machines, capable of a million trillion (10^18) calculations a second – by 2020. But why do we need computers as fast and powerful as this? And what are the technical challenges that need to be overcome?
Thomas Sterling (pictured), chief scientist for the Centre for Research on Extreme Scale Technology at Indiana University, talked to Computer Weekly during the International Supercomputing Conference in Hamburg.
Q. Why do we need exascale computers?
The only bad news is that we need more than exascale computing. Some of the key computational challenges, that face not just individual companies, but civilisation as a whole, will be enabled by exascale computing.
Everyone is concerned about climate change and climate modelling. The computational challenges of modelling oceanic clouds, ice and topography are all tremendously important. And today we need at least two orders of magnitude improvement on that problem alone.
Controlled fusion – a big activity shared with Europe and Japan – can only be done with exascale computing and beyond. There is also medical modelling, whether it is life sciences itself or the design of future drugs for ever more rapidly changing and evolving viruses – again, it's a true exascale problem.
Exascale computing is really the medium and the only viable means of managing our future. It is probably crucial to the progress and the advancement of the modern age.
Q. What are the barriers to building exascale machines?
The barriers are daunting. The challenge we have right now is changing the paradigm after 20 successful years of high performance computing.
The need to move almost uniquely to multiple processor cores is now requiring reconsideration of how we architect these machines, how we programme these machines, and how we manage the systems during the execution of the problems themselves.
We will be facing absolutely hard barriers as we get into exascale. Atomic granularity is just one of several barriers that will impose limitations [in chip design]. And it will require true and dramatic paradigm shifts.
I am sure, though I don’t know what the solution will be, we will find completely dramatic innovations.
Q. Have we reached the end of Moore's law, which says the number of transistors will double on a chip every two years, when it comes to supercomputers?
Moore's law itself will continue through to the end of this decade and into the next. However, the power requirements and other costs of using those transistors are really problematic.
We can anticipate that, if we are willing to innovate, we can address this problem into the exascale era. But fundamental physics means the heat capacity of the chips themselves and the need for cooling will become more of a barrier.
Q. How much of a problem is energy consumption for exascale computing?
The cost of powering the machine over its lifetime can exceed the cost of the machine itself. It is a dramatic change from previous practices. But more importantly, reliability goes down as the heat within a system goes up, which makes it harder to use such devices effectively. This is a major problem.
Fewer and fewer centres can host such critical systems, because there are few facilities in Europe, the US and Asia where enough power can be brought in to support them. This is truly a barrier.
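A back-of-the-envelope calculation shows why. The figures below (a 20 MW sustained draw, $0.10 per kWh, a five-year service life) are illustrative assumptions, not numbers from the interview:

```python
power_mw = 20          # assumed sustained draw of an exascale-class system
price_per_kwh = 0.10   # assumed electricity price in dollars
years = 5              # assumed service life

hours = years * 365 * 24
energy_kwh = power_mw * 1_000 * hours
lifetime_cost = energy_kwh * price_per_kwh
print(f"Lifetime energy cost: ${lifetime_cost:,.0f}")  # about $87.6 million
```

Under these assumptions the electricity bill alone approaches $90m, which is the same order of magnitude as the purchase price of a large system.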
Q. Are people going to have to change their approach to developing software for exascale machines?
This is a controversial statement. I believe the answer is yes, but many very good people disagree. Many people feel there will be incremental methods that extend prior techniques. But I do think there is going to be a need for new programming interfaces.
Q. Do we need radically new technology to reach exascale?
What I think is going to happen, and this is unfortunate, is that industry and a large part of the community that has invested in legacy techniques are going to push those as hard as they can, because incrementally that can seem politically and financially easier to do. We will keep pushing it until it really breaks.
At some point the community will simply say enough is enough and then they will begin to address radical techniques. This is already happening in the system community and user community.
You will see a ramp-down of conventional practices and a slow ramp-up of innovative practices. As one colleague put it at another meeting: "It's difficult but suck it up."
Q. How confident are you that we will get to exascale?
I certainly have the confidence that we will get there. Doing it correctly, though, not as a stunt machine, will take until some time in the early to mid-2020s.
Unfortunately there is still too much focus on the credit or pride factor in meeting the speed performance benchmark, and that means that stunt machines are the early focus.
Every time we have done a paradigm shift it has been an overlap between past practices pushed to extremes and future practices which need time to grow and mature.
How to avoid cyber security attacks in 2024
Technology continues to evolve at a rapid pace, presenting both opportunities and challenges. Among these challenges, the threat of cyber security attacks looms large. This poses significant risks to individuals, businesses, and governments alike. The importance of adopting robust security measures cannot be overstated. The repercussions of security breaches can be devastating, including financial loss, reputational damage, and compromised personal information.
In this article, we will explore 5 key trends to prepare for throughout 2024.
1. AI helping cyber criminals
The adoption of generative AI has been a double-edged sword in cyber security. It boosts defence and detection capabilities but also powers increasingly sophisticated cyber security attacks. Here’s how cyber criminals are exploiting AI:
- Convincing social engineering campaigns: AI is used to craft highly personalised phishing emails and social engineering campaigns, significantly increasing the chances of a successful breach
- Generating fake news: AI-generated articles can spread misinformation or manipulate public opinion, damaging reputations or creating chaos for further attacks
- Deepfake photos and videos: Convincing deepfakes can be used for blackmail, creating false evidence, or impersonating figures to spread false information
- Automated hacking tools: AI automates finding and exploiting vulnerabilities, allowing attacks at a scale and speed impossible for humans
- Bypassing security measures: AI learns the patterns of security software to devise strategies to evade detection, making it harder for traditional tools to identify and block attacks
- Enhanced cracking capabilities: AI algorithms crack passwords and encryption more quickly, posing a significant threat to data security
As technology evolves, the arms race between cyber attackers and defenders intensifies, requiring continual updates to cyber defence strategies. Understanding AI’s capabilities and potential uses in cyber attacks is crucial in developing effective defence mechanisms to protect against these advanced threats.
2. Supply chain/third party breaches
The complexity of modern supply chains has opened new avenues for cyber attacks. Threat actors are exploiting third-party vulnerabilities to infiltrate networks and access sensitive data. The interconnected nature of these chains means that a breach in one area can have cascading effects, highlighting the need for comprehensive due diligence and risk assessment strategies.
Through something as simple as a compromised software update, a single supplier can unwittingly spread malware to its customers. Operating with suppliers is inevitable, so consulting experts on how to introduce cyber security solutions to your supply chain is the best place to start. We always recommend that every part of the chain has awareness and understands the importance of preventing cyber security attacks. Further to this, a clear way of reporting any incidents is also needed.
3. IoT cyber security attacks
The expansion of the Internet of Things (IoT) has led to a heightened risk of cyber attacks. With an increasing number of devices connecting to the internet, potential vulnerabilities also rise. Here’s how these attacks manifest:
- Exploiting weak security: Many IoT devices have inadequate security, making them easy targets for attackers
- Infiltrating networks: Once an IoT device is compromised, attackers can use it as a gateway to infiltrate broader networks, accessing sensitive data and systems
- Rapidly multiplying devices: The sheer number of IoT devices exponentially increases the potential points of cyber security attacks
- Lack of standardisation: The diversity and lack of standardisation in IoT devices make it difficult to implement uniform security measures
- Botnets and DDoS attacks: Compromised IoT devices can be corralled into botnets to launch Distributed Denial of Service (DDoS) attacks, disrupting services and infrastructure
- Data theft and privacy breaches: Cyber criminals can steal personal data from IoT devices, leading to privacy breaches and identity theft
As IoT devices integrate into daily life, strong security becomes essential. Recognising vulnerabilities and attack methods helps in developing protection strategies. Solid security, regular updates, and awareness are key to managing IoT risks.
4. Human error: One of the most common causes of cyber security attacks
Despite advancements in technology, human error continues to be a significant vulnerability in cyber security. Even simple missteps, like falling for a phishing scam or misconfiguring settings, can result in substantial breaches. This underscores the necessity for continuous education and stringent policies to mitigate such risks.
For instance, an employee unknowingly clicking a malicious link can grant attackers access to an entire network. This type of incident, often preventable through regular training and awareness, exemplifies how human oversight can lead to serious security compromises. Therefore, fostering a culture of vigilance and knowledge can be one of your best cyber security solutions.
Zero trust security (or Zero Trust Network Access) is potentially the most effective way to reduce human error. As well as an 'always verify' methodology, it operates on the idea of granting specific users only the specific access they need. This reduces the attack surface and the risk of a data breach should that individual's computer or device be compromised.
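A minimal sketch of the "specific users get specific access" idea is shown below: every request is checked against an explicit allow-list and anything not listed is denied by default. The users, resources and actions are invented, and real ZTNA products also evaluate identity, device posture and context before granting access.

```python
# Deny-by-default access policy: a request succeeds only if explicitly allowed.
POLICY = {
    ("alice", "finance-db"): {"read"},
    ("bob", "git-server"): {"read", "write"},
}

def is_allowed(user, resource, action):
    return action in POLICY.get((user, resource), set())

print(is_allowed("alice", "finance-db", "read"))    # True
print(is_allowed("alice", "finance-db", "write"))   # False: not granted
print(is_allowed("mallory", "finance-db", "read"))  # False: unknown user
```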
5. Quantum computing attacks
Quantum computing attacks are expected in the near future, bringing the ability to break traditional encryption methods that are not quantum-safe. As these powerful computers become more accessible, the risk they pose to data security escalates. Preparing for this eventuality is vital, with research into quantum-resistant cryptography becoming increasingly important.
Concepts such as the ‘store now, decrypt later’ issue – where cyber criminals are storing encrypted data in preparation to decrypt it with quantum computing – stress the importance of preparing for quantum computing now. Deploying post-quantum cryptography (or quantum-safe cryptography) is one of the best ways to safeguard your business’ cyber security.
Prepare for cyber security attacks with CyberHive Connect
With these threats in mind, it's crucial to have a robust cyber security strategy in place. CyberHive Connect offers a comprehensive suite of cyber security solutions. Stay ahead of cyber security attacks with the likes of zero trust network access and post-quantum cryptography.
Get in touch
If you have a question or would like some more information, contact us today.
Miniature drones, often also called micro- or nano-drones, are compact unmanned aerial vehicles capable of performing a wide range of tasks due to their size and mobility. In a military setting, they can have a significant impact, providing a tactical advantage on the battlefield.
Intelligence and surveillance. Thanks to their small size, these drones can conduct reconnaissance covertly, transmitting real-time images and videos of strategically important objects, enemy locations and their movements. This allows intelligence to be gathered without directly placing a person in dangerous areas.
Targeting and correction of fire. By using drones to determine the coordinates of targets, the military can direct artillery fire or airstrikes more precisely, minimizing the risks to civilians and reducing the number of strike attempts required.
Counterintelligence operations. Miniature drones can detect and track enemy electronic assets such as radars and electronic warfare stations, providing valuable information about the enemy’s air defense system and its weaknesses.
Cargo delivery. Despite their size, some models can be adapted to deliver small loads, such as medicines or special equipment, for units operating behind enemy lines or in hard-to-reach places.
Electronic warfare. Specialized mini-drones can suppress or spoof enemy communications and radars, creating an “electronic fog” that makes it difficult for the enemy to navigate and pinpoint targets.
Psychological action. The simple presence of drones in the air can have a demoralizing influence on the enemy; drones can also be used to distribute propaganda materials or warn the civilian population about upcoming military actions.
Miniature drone technologies continue to evolve, increasing their capabilities and applicability for military purposes. This includes improvements in autonomy, range, resistance to electronic jamming and group interaction capabilities, making them an even more valuable tool on today’s battlefield.
Developed by Prox Dynamics, the Black Hornet is a miniature reconnaissance drone used to capture video and still images in hard-to-reach and dangerous areas. It is equipped with cameras and can fly for up to 25 minutes.
Developed by AeroVironment, it is a small, hand-held drone for surveillance and reconnaissance. With a flight range of about 10 km, the Raven is widely used by the US military and its allies.
Developed by Anduril Industries, it is an intelligent and modular drone designed for automated reconnaissance and surveillance missions using artificial intelligence.
This is an unmanned aerial vehicle produced by AeroVironment, designed for pinpoint strikes on important targets with minimal collateral damage.
This is the latest version of the Black Hornet, offering improved navigation and advanced reconnaissance capabilities for the military and special forces.
Also developed by AeroVironment, the Wasp III is another hand-held reconnaissance and surveillance UAV that is capable of hovering and has a range of up to 5 km.
Although mostly used for civilian purposes, its portability and imaging capabilities make it useful for tactical reconnaissance and surveillance in military applications.
An advanced surveillance and reconnaissance drone with high mobility and long-range flight capability, designed for military and security operations.
Focused on security and military use, ANAFI USA offers powerful surveillance capabilities with its advanced cameras and portability.
Developed by Honeywell, it is a vertical take-off and landing drone designed for reconnaissance and surveillance operations in urban environments.
Developed by Harvard University, RoboBee is one of the smallest drones in the world, capable of flight and potentially suitable for pollination or micro-reconnaissance tasks.
Outstanding for long and quiet reconnaissance missions, the Trinity F90+ combines advanced technology for accurate data collection with high mobility and range.
The cyber threat is a constant concern for Danish public authorities and private companies. In short, it is a matter of when, not if, an organization falls victim to a cyber attack. ”Effective cyber defence” is a guide that contains six steps that organizations can take to establish basic cyber defences. By following these steps, organizations can prevent many of the cyber attacks they encounter on a daily basis, and effectively mitigate successful attacks.
6 steps for an effective cyber defence
1) The management’s toolbox
2) Helpful technical measures
3) Conduct is key
4) Detect your enemy
5) Be prepared!
6) Spot the gaps in your cyber defence
Cyber security vigilance at the executive level is the cornerstone of an effective cyber defence. The top management has to govern cyber and information security by continuously supporting, prioritizing and following up on security objectives and strategies with the same vigilance as applied to other business matters, for example finance and HR. Ensuring an effective cyber defence is not a one-time project but rather a continuous process that requires constant evaluation and optimization. This applies to all six steps in this guide. Consequently, the top management has to ensure continuous follow-ups and improvement.
We recommend that Danish organizations use international standards and best practices as a starting point. In Denmark, the following cyber security frameworks are often used: ISO 27001, NIST Cybersecurity Framework, SANS and CIS 18. Compliance with standards and best practices creates a foundation for establishing set and repeatable processes that improve cyber and information security within organizations.
”Effective cyber defence” is intended for all public authorities and private companies with complex IT systems and may also be useful to anyone interested in good cyber and information security practices. This guide is directed primarily at top management, and cyber and information security staff.
How to Prevent Malware Attacks? 7 Security Tips to Follow in 2022
The FBI's Internet Crime Report states that cybercrime cost businesses in the U.S. more than $6.9 billion in 2021. Part of these losses comes from malware attacks, as malware is the most common cause of cybercrimes. As we enter the second half of 2022, only 43% of businesses feel prepared for cyberattacks.
Malware attacks can steal crucial data from your systems, exploit your business operations, and make you lose clients while costing your business millions. Therefore, it is essential for businesses operating online and generating digital data to prevent malware attacks.
iLink has listed 7 security tips to keep your business safe. But before diving into that, let’s understand malware, its types, and how they enter your systems.
What is Malware, What are types of Malware, and How do they get distributed?
Malware is malicious software designed to harm a computer or network. Cybercriminals typically use it:
- To steal, encrypt or delete sensitive data to leverage it for financial gains.
- Hijack or execute unauthorized actions on victims’ systems.
- Introduce spam to slow down or stop the system from functioning.
Though all malware is designed to do harm, types are often distinguished by how they are built and spread. Some common types are viruses, ransomware, trojans, spyware, worms, adware, scareware, and fileless malware.
Cybercriminals may use one or more combinations of malware to hack into your systems. They try to trick users into downloading malicious files such as email attachments, or clicking fake internet ads or popups with links that appear legitimate. Once users click these links or buttons, they are directed to a website that automatically downloads infected applications onto their device.
Why is it important to stop malware attacks?
According to data presented by the Atlas VPN team, over 34 million new malware samples have already been discovered year-to-date. It means that, on average, hackers are creating more than 316 thousand malware threats daily in 2022. In 2021, 37% of all businesses and organizations were affected by ransomware. Of those, 32% paid the ransom but recovered only 65% of their data.
Let's also take a look at the losses. While ransomware cost the world $20 billion in 2021, recovering from an attack cost businesses $1.85 million on average. It means one piece of malicious software can bring your company to its knees. Fortunately, there are ways to safeguard your organization from these attacks. But let's first understand how to recognize if your systems are infected.
How to recognize if your systems have Malware?
Here are some signs to look out for:
- Your device suddenly slows down, freezes or displays repeated error messages.
- Your device has difficulty shutting down or restarting.
- Spam and inappropriate ads pop up everywhere on the screen.
- Redirection to unknown websites.
- Your system tools are disabled, or unexpected tools appear.
- New files and folders are created without your permission.
- You can't delete or install software.
Your system is under malware attack if it shows any of the above symptoms. Don’t worry; you can block them!
7 Security Tips to Prevent Malware Attacks
1. Install Antivirus or Anti-malware software
Antivirus software works by scanning incoming files and code passing through your network traffic. It compares your files against a database of known malware signatures. Although antivirus programs automatically scan your computer for malicious files, you can also set them up for manual scans. That way, you know in real-time which files are infected and neutralized.
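A highly simplified sketch of the signature-matching idea is shown below: hash every file and compare it against a set of known-bad hashes. The placeholder hash matches nothing real, and actual antivirus engines add heuristic and behavioural analysis on top of signature databases with millions of entries.

```python
import hashlib
from pathlib import Path

# Placeholder "signature" database; the dummy hash will not match any real file.
KNOWN_BAD_SHA256 = {"0" * 64}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            digest.update(chunk)
    return digest.hexdigest()

def scan_directory(root: str) -> None:
    for path in Path(root).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_SHA256:
            print(f"Known-bad signature matched: {path}")

scan_directory(".")  # scan the current directory tree
```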
2. Implement a Firewall
A firewall provides another layer of protection that gives your devices and network more robust security. There are two different types of firewalls – personal firewalls to protect your computer and external firewalls to protect your servers and networks. These firewalls act as a barrier between your internet and IT infrastructure to block any malware. You can also specify which traffic should be allowed and which should be restricted using a firewall system.
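To illustrate how a firewall applies the traffic rules you specify, here is a toy packet filter: rules are evaluated top-down, the first match wins, and anything unmatched is dropped (default deny). The addresses and ports are invented.

```python
import ipaddress

# Toy rule list: evaluated top-down, first match wins, default deny.
RULES = [
    {"action": "allow", "src": "10.0.0.0/8", "port": 443},  # internal HTTPS
    {"action": "deny",  "src": "0.0.0.0/0",  "port": 23},   # block telnet everywhere
    {"action": "allow", "src": "0.0.0.0/0",  "port": 80},   # public HTTP
]

def decide(src_ip, port):
    for rule in RULES:
        in_net = ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
        if in_net and rule["port"] == port:
            return rule["action"]
    return "deny"  # unmatched traffic is dropped

print(decide("10.1.2.3", 443))    # allow
print(decide("203.0.113.9", 23))  # deny
print(decide("203.0.113.9", 22))  # deny (no rule matched)
```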
3. Employ secure authentication methods.
Authentication verifies the identity of a person or device before granting access to systems. It checks whether a user's credentials match those in the database of authorized users and then allows them to proceed. However, instead of relying on passwords alone, your organization should implement more robust authentication methods such as:
- Implement Multifactor Authentication such as PINs or security questions.
- Use passphrases with at least eight characters, including an uppercase letter, a lowercase letter, a number, and a symbol, instead of simple passwords.
- Use biometric tools like fingerprints or facial recognition.
Another best practice is to never save plain passwords on a computer or network. Instead, use a secure password manager. We know passwords are difficult to type and hard to remember, but it's not the same for computers.
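To illustrate the "never store plain passwords" principle, here is a small sketch using PBKDF2 from Python's standard library: each password is hashed with a per-user salt, and a login attempt is verified against the stored hash in constant time. The iteration count is illustrative; follow current guidance in production.

```python
import hashlib, hmac, os

ITERATIONS = 600_000  # illustrative work factor

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("password123", salt, stored))                   # False
```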
4. Grant Limited Access Controls
We know the people you work with are trustworthy, but restricting access to sensitive data ensures it isn't needlessly exposed to hackers. Granting limited privileges reduces the organization's attack surface and the risk of attack. It's considered good practice to apply security measures consistently across the organization to protect it from malware.
5. Avoid unnecessary administrative and application privileges.
Administrative privileges allow you to access the most sensitive parts of a computer or network system, increasing the chances of malware attacks. Therefore it’s better to use a separate account to browse the net, check emails or perform non-administrative duties. Use it only when you need to perform administrative tasks such as making configuration changes. It safeguards your computer and network better.
Additionally, avoid using administrative credentials to install software or using it on an open network. Make sure you validate that the software is legitimate and secure. Finally, log out of the admin account once done with administrative tasks.
6. Regularly update your software and systems.
Repeated alerts about software updates can be annoying, but clicking the ‘Remind Me Later’ button makes your systems more vulnerable to attacks. Cybercriminals take advantage of such procrastination and use unpatched vulnerabilities as gateways to exploit your software.
As the best practice, it is crucial to validate and install all new software patches as soon as possible. Try automating software updates or implementing routine maintenance to ensure all software is current and free from vulnerability issues.
7. Educate your employees
Employees act as the first layer of protection against malware attacks. Educating them about real-world threats and how to respond to them helps reduce the chances of introducing malware into your network.
Here’s what your workforce should be aware of:
- Avoid engaging with suspicious emails from dodgy sources.
- Beware of scam phone calls or messages claiming their device has malware.
- Don’t click malicious links or popups on the screen.
Encourage employees to connect with the IT team in case of unusual behavior. Help them recognize credible sites and advise them to join only secure networks when working outside the office.
Protect your business with iLink’s intelligent and automated security solutions
We understand that regularly updating software and ensuring your systems are free from malware is challenging and time-consuming. You need dedicated professionals who can provide a highly tailored, automated, and effective solution so that you can focus more on offense rather than defense.
iLink offers an array of solutions that meet the glut of digital data demands for the modern era. Our solutions protect your business networks and data thoroughly from any harm and strengthen your organization’s IT infrastructure.
Wireless Access Point vs Router: What Are the Differences?
At 9:00 AM you're having a video conference on your laptop at the office. At 9:00 PM you're watching a live show on your phone at home. Wait half a jiff, have you ever thought about what wireless equipment is working behind your unimpeded network? Surely, you've heard people around you talking about "routers" from time to time. Then what about the wireless AP (access point)? Is it the SAME thing as the router? Absolutely not! Despite being used interchangeably at times, wireless access points and routers serve distinct purposes within a network. It is critical to understand their functions, differences, and applications, especially when deciding whether to buy a wireless access point or a router for your specific needs.
What Is a Wireless Router?
A router is a network device that can transfer data in a wired or wireless way. As an intelligent device, a router can direct incoming and outgoing traffic on the network efficiently. Traditionally, a router was connected to other LAN (local area network) devices through Ethernet cables for wired networking. Over time, wireless routers, which provide user-friendly installation without cabling, have increasingly become the "darling" of many homes and small offices.
A wireless router, also called a Wi-Fi router, is a vital network device that acts as a central hub, connecting a variety of wired and wireless devices to the internet and managing a LAN. It typically includes a built-in modem, firewall, and a Dynamic Host Configuration Protocol (DHCP) server that assigns IP addresses to devices, allowing them to communicate with each other and access the internet. Wireless routers enable Wi-Fi connectivity for devices such as laptops, smartphones, and tablets. In enterprise settings, they also support IPTV/digital TV services and Voice over IP (VoIP) communication. Moreover, they come equipped with firewalls and password protection to guard against external threats to the LAN.
Figure 1: Wireless Router Network Connection Scenario
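To make the DHCP role described above concrete, here is a toy lease allocator in the spirit of what a router's DHCP server does: each new device, identified by its MAC address, receives the next free address from a pool, and existing leases are remembered. A real DHCP server adds lease lifetimes, renewals and the full discover/offer/request/acknowledge exchange; the network range here is just an example.

```python
import ipaddress

class TinyDhcpPool:
    """Toy DHCP-style allocator: a single /24 pool, no lease expiry or renewals."""

    def __init__(self, network="192.168.1.0/24"):
        hosts = list(ipaddress.ip_network(network).hosts())
        self.free = hosts[1:]  # reserve the first host address for the router itself
        self.leases = {}       # MAC address -> assigned IP

    def request(self, mac):
        if mac not in self.leases:           # a returning device keeps its address
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]

pool = TinyDhcpPool()
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.2
print(pool.request("aa:bb:cc:dd:ee:02"))  # 192.168.1.3
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.2 again (existing lease)
```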
What Is a Wireless Access Point?
A wireless access point, also known as a wireless AP or WAP, is a networking hardware appliance that adds Wi-Fi capability to an existing wired network by bridging traffic from wireless stations into the wired LAN. This Wi-Fi access point can act as a stand-alone device or can be a component of a router.
Generally speaking, a wireless AP enables devices that don't have an inbuilt Wi-Fi connection to access a wireless network with the aid of an Ethernet cable. That is to say, the signals that run from a router to an access point are transformed from wired to wireless. Additionally, a WAP can also be used to extend the wireless coverage of an existing network to meet increasing access requirements in the future.
Figure 2: Wireless Access Point Network Connection Scenario
Wireless Access Point vs Router: What Are the Differences?
Wireless access points and routers both support Wi-Fi network connectivity and perform similar roles, which is where the confusion arises. Actually, these two network devices are more like cousins than twins. The differences between the two are illustrated in the following.
Figure 3: AP vs Router
In general, most Wi-Fi routers combine the functionality of a wireless AP, an Ethernet router, a basic firewall, and a small Ethernet switch. A wireless access point, by contrast, usually comes as a built-in component of devices like routers or Wi-Fi network extenders. In a word, wireless routers can function as access points, but not all access points can work as routers.
Strictly speaking, a wireless router, playing the role of an "Ethernet hub," helps establish a local area network by linking and managing all the devices connected to it. An access point, however, is a sub-device within the local area network that only provides access to the router's established network. Therefore, a wireless AP simply extends the existing wired network wirelessly. It does not have routing capabilities or the ability to manage traffic between different networks.
Connection & Coverage
Routers and wireless APs have divergent connection methods. Usually, the wireless router can offer Wi-Fi signals for devices directly, or connect to a PoE switch which can add wireless APs to extend the Wi-Fi coverage. They often have built-in antennas to disperse a Wi-Fi signal throughout the space. Wireless access points, however, are generally added to an existing network to extend coverage to an area where the main router's signal cannot reach effectively. Wi-Fi access points can be daisy-chained to cover larger spaces, with each AP being connected back to the main router using Ethernet cables.
Sometimes Wi-Fi signals will be weak, with dead spots, if the wireless router can't reach the expected coverage area. Instead, a wireless AP can be added in locations that have bad network conditions, eliminating dead spots and extending the wireless network. For SMB networks, enterprise wireless APs need to be connected to a PoE switch and then to the gateway to expand the wireless signal coverage.
Typically, network routers serve residential homes, SOHO working environments, and small offices or organizations, where they can effortlessly meet fixed and moderate access demands. However, this type of router can't scale to match the climbing growth in network needs in the foreseeable future.
As for wireless APs, they are mostly used in medium to large enterprises and organizations. More than one wireless AP is involved in supporting multiple users. Unlike the previous situation, network managers can add additional APs as the demand grows, to cover a more extensive physical area.
Wireless Router vs Access Point: How to Make a Wise Choice?
Wireless access point or router? It all depends on your needs. If you just want a wireless network at home to cover your family members' needs, a Wi-Fi router is sufficient. But if you want to build a more reliable wireless network that benefits a large number of users, a wireless access point is more appropriate.
Before purchasing a wireless access point or router, there are some key factors to consider: the physical size of the venue, the coverage of the network, the current number of Wi-Fi users, and the anticipated access demands. As a go-to choice for many users, wireless routers are almost indispensable for every household and small business. Since wireless APs came onto the scene, large enterprises have tended to adopt them to cover a bigger area or to support more users in larger LANs.
Figure 4: How to Choose AP And Router
What is Phishing?
Phishing is one of the most popular social engineering techniques. Attackers send misleading emails or communications while disguising themselves as reputable organizations, such as banks or well-known websites. These communications frequently include pressing requests, alluring links, or fraudulent login pages that look real. Unaware victims can unwittingly provide their information, which could result in identity theft, financial loss, or unauthorized access to accounts. To stay safe, use secure, unique passwords for online accounts and be wary of unexpected emails and requests for information.
Even if an organization finds one phishing email, it only takes one person to make a mistake for the attack to succeed. For this reason, security awareness training for staff members is crucial.
What is Spear Phishing?
Attacks known as spear phishing are targeted specifically at one person or a small group of people, such as the employees of a particular company. These phishing attacks tend to be selective and sophisticated, and the attacker frequently does in-depth research on the victim to make the assault as convincing as possible.
Threat actors often use spear phishing as an entry point into an organization, evading spam filters and even sophisticated security measures. The goal of this phishing attack might be an account takeover or business email compromise that helps the attacker establish a backdoor or escalate privileges to maintain persistence within the compromised system or network. This allows them to pivot and carry out further malicious activities such as data exfiltration, lateral movement or launching additional attacks that can have negative financial and reputational effects on the organization.
Whaling Attacks (CEO Fraud)
Whaling, also known as CEO fraud, is identical to spear phishing except for the size of the fish. It is a highly targeted form of spear phishing carried out via malicious emails or phone calls in which the attacker poses as a legitimate sender to deliver malware (e.g., fake invoices), gather confidential information, hijack email accounts, or steal credentials.
Attackers might have different goals after taking over executives’ account, such as:
- Financial Gain: One of the primary motivations for whaling attacks is financial gain. Targeting executives with financial control allows attackers to funnel money into their own accounts or alter financial transactions for personal benefit;
- Data Theft or Espionage: Whaling attacks could also be carried out for reasons other than pure profit. Threat actors could try to obtain private data, business secrets, or intellectual property. This stolen information may be utilized for business espionage, unfair competition, or black market sales;
- Disruption and Reputational Damage: Whaling attacks have the potential to disrupt an organization’s operations and damage its reputation seriously. Attackers can spread false information, sabotage internal communications, or create a chaotic environment that interferes with regular business by posing as executives.
Types of Spear Phishing Attack Vectors
SMS phishing attacks (Smishing)
SMS phishing (smishing) is a form of phishing carried out through carefully crafted, targeted text messages. A message might claim that the recipient has won a prize, or the attacker might pretend to be from the finance department and request that the individual provide sensitive information via a malicious website, fake login page, or embedded link.
Voice phishing (Vishing)
Voice phishing (vishing) involves phone calls in which the attacker pretends to be calling from a trusted source to trick the intended victim into revealing personal details or to use social engineering techniques to initiate a wire transfer. For example, an attacker might call from a spoofed phone number, posing as a particular person from your company with an urgent matter, to obtain trade secrets; or they might ask for help resetting their login credentials, an approach sometimes called credential phishing.
Malware phishing attacks
Malware phishing attacks are delivered via phishing emails or texts containing malicious links to a fake website or to a supposed free trial of well-known antivirus software. The capabilities of the malware vary depending on the attacker's end goal, such as installing a RAT (Remote Access Trojan), deploying a keylogger that captures keystrokes, installing additional malware that exploits security flaws, or even recording audio and video.
How to Prevent and Protect Your Organization From Spear Phishing Attacks
Hornetsecurity Advanced Threat Protection is a cybersecurity solution that uses advanced technologies and techniques to protect against targeted attacks via spear phishing. It offers a variety of features, such as:
- Sandbox Engine – If the document sent with the email is found to be malware, the email is moved directly to quarantine;
- URL scanning – Leaves the document attached to an email in its original form and only checks the target of links contained in it;
- Freezing – Emails that cannot be clearly classified immediately are held back for a short period. The emails are then subjected to a further check with updated signatures;
- Malicious document decryption – Encrypted email attachments are decrypted using appropriate text modules within an email. The decrypted document is then subjected to an in-depth virus scan;
- Secure Links – Protects users from malicious links in emails. It replaces the original link with a rewritten version that goes through Hornetsecurity’s secure web gateway.
Since email is one of the most used attack vectors for spear phishing, it is advantageous to strengthen email communication within your organization with SPF, DKIM, and DMARC.
- SPF (Sender Policy Framework) – Is responsible for verifying the sending server IP addresses;
- DKIM (DomainKeys Identified Mail) – Adds a digital signature to verify email authenticity;
- DMARC (Domain-based Message Authentication, Reporting, and Conformance) – Combines both to enforce email security policies and provide reporting mechanisms.
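To see what these records look like in practice, here is a minimal sketch (using the third-party dnspython library and a placeholder domain) that fetches a domain's published SPF and DMARC policies. DKIM records live at <selector>._domainkey.<domain> and require knowing the selector, so they are omitted here:

```python
# Sketch: inspecting a domain's email-authentication DNS records.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def get_txt_records(name: str) -> list[str]:
    """Return the TXT record strings published at a DNS name."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

domain = "example.com"  # placeholder domain

# SPF lives in a TXT record on the domain itself and starts with "v=spf1".
spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]

# The DMARC policy is published at _dmarc.<domain> and starts with "v=DMARC1".
dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:  ", spf or "none published")
print("DMARC:", dmarc or "none published")
```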
Even if all technical controls are in place, attackers’ primary objective for spear phishing attacks is to deceive targeted individuals to achieve their malicious objectives. Security Awareness is like a superpower cape that helps individuals become cyber-savvy heroes! It shields against sneaky scams, thwarts malicious tricks, and empowers us to spot cyber villains. It is essential to train your staff with simulated spear phishing exercises to stay alert, as training dramatically reduces the probability of infection.
Spear Phishing Examples and Their Psychological Triggers
Threat actors use spear phishing emails to represent themselves as authoritative figures to gain victims’ trust. By impersonating a person of authority, such as a CEO or a bank representative, the attacker can sway the victim to take action without question.
Attackers create a sense of urgency to make the victim act quickly before they have time to think, such as a suspicious email about a compromised account or a time-sensitive task that needs immediate attention.
Attackers use curiosity to lure victims into clicking on a malicious link or downloading an attachment. This could be a phishing email with a message offering a free gift or a secret that the victim must see.
Spear phishing attacks may abuse personal information to make the target feel comfortable and familiar with them. This can make it easier to convince the victim to take action, such as providing sensitive information.
A phishing attack is most powerful when fear is instilled in the target, causing them to act irrationally. The email may claim there is a legal issue or threaten to expose embarrassing information. The target is more likely to comply with the attacker's demands out of fear of the consequences.
More Spear Phishing Examples
Ransomware attacks have been increasing over the years, with spear phishing as their primary attack vector. They can have devastating consequences that abruptly interrupt a company's business and operations if no backup solution is deployed.
Conti is one of the most notorious ransomware families. It uses several attack vectors, including spear phishing via email, delivering malicious attachments and phishing links containing embedded scripts that download other malware such as TrickBot or Cobalt Strike, which are then used in later stages of the attack and to assist with deeper network infiltration. On 11 April 2022, it was reported that high-ranking Costa Rican officials were targeted; credentials obtained from malware installed on the initial device were then used to deploy Cobalt Strike, and more than 10 beacon sessions were detected and used in the later stages of the attack.
For an overall look at cybersecurity risks, based on an analysis of 25 billion emails, see our free Cyber Security Report 2023.
To properly protect your employees against spear phishing, use Hornetsecurity Security Awareness Service as we work hard perpetually to give our customers confidence in their Spam & Malware Protection and Advanced Threat Protection strategies.
To keep up to date with the latest articles and practices, pay a visit to our Hornetsecurity blog now.
Unfortunately, spear phishing attacks are becoming progressively dangerous in the world of remote work, and attackers are becoming more and more skilled at using these attacks to their advantage. Spear phishing defense is essential to safeguard confidential data, avoid financial loss, maintain reputations, stop data breaches, and guarantee ongoing operations. Protecting against this constant cyber threat requires proactive tactics, knowledge, and skepticism.
In a typical spear phishing attempt, the attacker customizes their strategy to focus on a certain person or group. A threat actor might send you an email that appears to be from a colleague in the IT department at your place of business. The email claims that your email account needs an immediate password update and contains a link to a login page. You enter your credentials under the impression that the request is legitimate, unknowingly granting the attacker access to them.
Imagine getting a handwritten letter in the mail that appears to be from your best friend. It contains the special inside jokes and intimate details only the two of you would understand. You eagerly open it, but instead of warm reminiscences, it requests your bank account information. That is spear phishing. It works well because it exploits trust, customization, and familiarity, which makes the deception harder to detect.
In 2016, Crelan Bank was left $75 million lighter when attackers compromised the business email of a high-ranking executive. They spoofed the CEO's email account, impersonating the CEO as the sender. Posing as the executive, the attacker then told the company's employees to deposit money into a bank account he controlled. Although the attack was eventually identified through an internal audit, the attackers remain unknown.
By definition, spear phishing is a targeted attack against one or more employees in a company. Attacks often target new employees who have yet to establish a foothold in their new environment, making them vulnerable, easy targets. New employees can easily be found on the company website, where firms proudly announce their new recruits, and with a little OSINT (Open Source Intelligence) on their social media, it can be rather effortless to find their real email address and send them out-of-the-ordinary requests that appear to come from the company.
During the early stages of the COVID-19 pandemic in 2020, the whole world was in a panic, which gave threat actors a plausible pretext to take advantage of the situation. In one of many real-world examples, attackers sent employees emails with malicious attachments purporting to contain records of people within their company who had recently been affected by the virus. The resulting panic and fear inevitably led targets to open the attachment, installing malware or a keylogger on their machines. | <urn:uuid:ff60dbf2-30b8-4e5f-9ffb-886e44b717f8> | CC-MAIN-2024-38 | https://www.hornetsecurity.com/en/blog/spear-phishing-examples/ | 2024-09-13T11:24:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00263.warc.gz | en | 0.946536 | 2,370 | 3.328125 | 3
Machine learning (ML) has become a cornerstone of modern technology, underpinning advancements in various fields such as healthcare, finance, marketing, and more. Understanding the fundamentals of machine learning, including its primary types—supervised and unsupervised learning—is crucial for anyone interested in leveraging this powerful technology.
This blog will delve into the essence of machine learning, and then explore and compare supervised and unsupervised learning in detail.
What is Machine Learning?
Machine learning is a subset of artificial intelligence (AI) that focuses on developing algorithms and statistical models which empower computers to perform specific tasks without being explicitly programmed. Unlike traditional programming, where developers write detailed instructions for every possible scenario, machine learning enables systems to learn and adapt from data.
By identifying patterns and making data-driven decisions, these systems can tackle complex tasks such as image recognition, natural language processing, and predictive analytics with remarkable efficiency and accuracy. This capability to learn from experience and improve over time distinguishes machine learning from other approaches in AI, making it a powerful tool for addressing a wide range of real-world problems and driving advancements across numerous industries.
Instead of being programmed to execute a task, the system learns from data, identifying patterns and making decisions with minimal human intervention. The primary goal is to enable machines to learn from past experiences (data) and improve their performance over time.
How Does Machine Learning Work?
At its core, machine learning involves feeding data into algorithms that build a model based on the data. This model can then make predictions or decisions without human intervention.
The process typically involves the following steps:
Data Collection: Gathering relevant data from various sources.
Data Preprocessing: Cleaning and organizing the data to make it suitable for analysis.
Feature Extraction: Identifying and selecting key attributes (features) that are most relevant to the task.
Model Training: Using the data to train the model, which involves adjusting parameters to minimize errors.
Model Evaluation: Assessing the model's performance using a separate set of data (validation or test data).
Model Deployment: Implementing the model in real-world applications to make predictions or decisions.
Model Monitoring and Maintenance: Continuously monitoring the model's performance and making necessary adjustments as new data becomes available.
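As a rough illustration of the training, evaluation, and prediction steps above, here is a minimal scikit-learn sketch on synthetic data (the dataset and model choice are placeholders, not recommendations):

```python
# Minimal sketch of the train/evaluate/predict steps using scikit-learn.
# Illustrative only; real projects need data collection and preprocessing.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for data collection: a synthetic labeled dataset.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold out a test set for model evaluation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)  # model training
model.fit(X_train, y_train)

preds = model.predict(X_test)              # model evaluation
print("test accuracy:", accuracy_score(y_test, preds))
```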
Machine learning can be broadly categorized into supervised learning and unsupervised learning, each with its own set of techniques and applications.
Supervised Machine Learning
Supervised learning is a type of machine learning where the algorithm is trained on a labeled dataset. This means that each training example is paired with an output label.
The goal is for the algorithm to learn the mapping from the input data to the output labels so that it can predict the labels for new, unseen data.
How Does Supervised Learning Work?
Data Collection: Obtain a dataset that includes both input features and the corresponding output labels.
Training Phase: Feed the labeled data into the machine learning algorithm. The algorithm uses this data to learn the relationship between the input features and the output labels.
Model Evaluation: Test the trained model on a separate validation dataset to evaluate its performance.
Prediction: Use the trained model to predict the labels for new, unseen data.
Types of Supervised Learning
Supervised learning can be further divided into two main types:
Regression: The output variable is a continuous value. For example, predicting house prices based on features like location, size, and number of bedrooms.
Classification: The output variable is a discrete category. For example, classifying emails as spam or not spam based on their content.
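A toy sketch contrasting the two tasks, with made-up numbers standing in for real housing and email data:

```python
# Regression vs. classification on tiny synthetic examples.
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: predict a continuous value (e.g., price) from features.
sizes = [[50], [80], [100], [120]]   # e.g., square meters
prices = [150, 240, 300, 360]        # e.g., thousands of dollars
reg = LinearRegression().fit(sizes, prices)
print(reg.predict([[90]]))           # continuous output

# Classification: predict a discrete category (e.g., spam / not spam).
lengths = [[5], [7], [40], [60]]     # e.g., a single email feature
labels = [0, 0, 1, 1]                # 0 = not spam, 1 = spam
clf = LogisticRegression().fit(lengths, labels)
print(clf.predict([[45]]))           # discrete output: 0 or 1
```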
Advantages of Supervised Learning
High Accuracy: Since the algorithm is trained on labeled data, it typically provides high accuracy in predictions.
Clear Objective: The goal is well-defined, making it easier to measure the model's performance.
Versatile: Can be applied to various domains, including finance, healthcare, and marketing.
Disadvantages of Supervised Learning
Requires Labeled Data: Obtaining a labeled dataset can be time-consuming and expensive.
Limited Generalization: The model may not perform well on unseen data if the training data is not representative of the real-world scenarios.
Prone to Overfitting: The model may become too tailored to the training data, losing its ability to generalize to new data.
Unsupervised Machine Learning
Unsupervised learning, on the other hand, deals with unlabeled data. The algorithm tries to learn the underlying structure of the data without any guidance on what the output should be. The primary goal is to identify patterns, group similar data points, and reduce dimensionality.
How Does Unsupervised Learning Work?
Data Collection: Gather a dataset without any output labels.
Training Phase: Feed the unlabeled data into the machine learning algorithm. The algorithm analyzes the data to find hidden patterns or structures.
Pattern Recognition: The algorithm groups similar data points together or reduces the dimensionality of the data for easier interpretation.
Types of Unsupervised Learning
Unsupervised learning can be categorized into two main types:
Clustering: The algorithm groups similar data points together based on their features. For example, grouping customers with similar buying habits for targeted marketing campaigns.
Dimensionality Reduction: The algorithm reduces the number of features in the dataset while retaining the most important information. This is useful for visualizing high-dimensional data or speeding up subsequent machine learning tasks.
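A brief sketch of both tasks using scikit-learn on synthetic data (the cluster count and component count are arbitrary choices for illustration):

```python
# Clustering and dimensionality reduction on unlabeled synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # 200 unlabeled points with 5 features

# Clustering: group similar points (e.g., customer segments).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster of first point:", clusters[0])

# Dimensionality reduction: compress 5 features to 2 for visualization.
X_2d = PCA(n_components=2).fit_transform(X)
print("reduced shape:", X_2d.shape)
```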
Advantages of Unsupervised Learning
No Labeled Data Required: Can work with unlabeled data, which is often more readily available.
Discover Hidden Patterns: Can uncover structures and relationships within the data that may not be apparent through manual analysis.
Scalable: Can handle large datasets more efficiently.
Disadvantages of Unsupervised Learning
Less Accurate: Since there are no labels to guide the learning process, the results may be less accurate compared to supervised learning.
Interpretability: The results can be harder to interpret and may require domain expertise to make sense of the identified patterns.
Evaluation Challenges: Without labels, it is difficult to quantitatively evaluate the model's performance.
Comparing Supervised and Unsupervised Learning
To better understand the differences between supervised and unsupervised learning, let's compare them across several dimensions:
Objective
Supervised Learning: The primary objective is to learn the mapping from input features to output labels, enabling the model to make accurate predictions on new data.
Unsupervised Learning: The main goal is to explore the underlying structure of the data, identifying patterns, groups, or significant features without any predefined labels.
Data Requirements
Supervised Learning: Requires a labeled dataset, where each example is paired with the correct output.
Unsupervised Learning: Works with unlabeled data, relying solely on the input features to identify patterns.
Algorithm Complexity
Supervised Learning: Generally involves more straightforward algorithms since the learning process is guided by the labeled data. Examples include linear regression, logistic regression, and decision trees.
Unsupervised Learning: Often involves more complex algorithms due to the lack of guidance from labels. Examples include k-means clustering, hierarchical clustering, and principal component analysis (PCA).
Accuracy and Performance
Supervised Learning: Typically offers higher accuracy and performance on prediction tasks because the model is trained with explicit labels.
Unsupervised Learning: May have lower accuracy in terms of specific predictions but excels at discovering hidden structures and patterns within the data.
Typical Applications
Supervised Learning: Commonly used in applications where the goal is to predict an outcome or classify data, such as spam detection, fraud detection, medical diagnosis, and stock price prediction.
Unsupervised Learning: Often used in exploratory data analysis, customer segmentation, anomaly detection, and reducing dimensionality for data visualization.
Examples of Supervised Learning
Spam Detection: Classifying emails as spam or not spam based on their content.
Medical Diagnosis: Predicting whether a patient has a certain disease based on their medical history and test results.
Credit Scoring: Predicting the likelihood of a loan applicant defaulting based on their financial history.
Examples of Unsupervised Learning
Customer Segmentation: Grouping customers with similar purchasing behaviors for targeted marketing.
Anomaly Detection: Identifying unusual patterns in network traffic that could indicate a security breach.
Image Compression: Reducing the number of colors in an image while preserving the essential features, using techniques like PCA.
Both supervised and unsupervised learning are essential components of the machine learning landscape, each offering unique advantages and challenges. Supervised learning is well-suited for tasks that require precise predictions and classifications based on labeled data, making it ideal for applications where accuracy is paramount.
Unsupervised learning, on the other hand, excels at uncovering hidden patterns and structures within unlabeled data, making it invaluable for exploratory data analysis and tasks where the underlying relationships are unknown.
By understanding the strengths and limitations of each approach, data scientists and machine learning practitioners can choose the most appropriate technique for their specific needs, ultimately harnessing the full potential of machine learning to drive innovation and solve complex problems.
As the field of machine learning continues to evolve, the line between supervised and unsupervised learning may blur, giving rise to hybrid approaches and semi-supervised learning techniques that leverage the strengths of both paradigms.
Hybrid models combine the precision of supervised learning with the exploratory power of unsupervised learning, enabling more robust and adaptable solutions. Semi-supervised learning, which utilizes both labeled and unlabeled data, strikes a balance by using a small amount of labeled data to guide the learning process while exploiting the vast quantities of unlabeled data to uncover hidden patterns. These innovative techniques expand the applicability of machine learning to scenarios where labeled data is scarce or expensive to obtain, enhancing model performance and generalization.
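As a rough illustration of the semi-supervised idea, scikit-learn ships a SelfTrainingClassifier that wraps a supervised model and treats labels marked -1 as unknown; the sketch below hides 90% of the labels of a synthetic dataset:

```python
# Sketch of semi-supervised learning with scikit-learn's SelfTrainingClassifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, random_state=0)

# Pretend only ~10% of the labels are known; mark the rest unlabeled (-1).
y_partial = y.copy()
unlabeled = np.random.default_rng(0).random(len(y)) > 0.1
y_partial[unlabeled] = -1

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))
model.fit(X, y_partial)  # iteratively pseudo-labels confident samples
print("accuracy on all data:", model.score(X, y))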
As these methodologies mature, they promise to push the boundaries of what machine learning can achieve, driving breakthroughs in areas like natural language processing, computer vision, and beyond.
Regardless of these advancements, the foundational concepts of supervised and unsupervised learning will remain critical for anyone looking to understand and apply machine learning effectively because they form the bedrock upon which more complex and specialized techniques are built. Mastery of these core principles allows practitioners to identify the most suitable approaches for different types of data and problem domains. Supervised learning's focus on labeled data and precise predictions is essential for applications requiring high accuracy, such as medical diagnosis and financial forecasting.
Meanwhile, unsupervised learning's ability to uncover hidden patterns and structures in unlabeled data is invaluable for exploratory analysis and tasks like customer segmentation and anomaly detection. A solid grasp of these fundamental concepts ensures that practitioners can adapt to evolving methodologies, hybrid models, and semi-supervised techniques, thereby maximizing the potential and impact of machine learning in solving real-world challenges. | <urn:uuid:5fdfc042-1fd5-4ec9-bedd-46a2f8ab6482> | CC-MAIN-2024-38 | https://www.datacenters.com/news/supervised-vs-unsupervised-machine-learning-a-guide | 2024-09-15T22:41:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00063.warc.gz | en | 0.91024 | 2,221 | 3.4375 | 3 |
The cliches are well known by now: data scientists spend the majority of their time simply preparing data for analytics, inheriting the responsibilities of IT teams that traditionally took months to process simple query results.
But not if they utilize semantics. A number of semantic technologies are directly responsible for reducing the time and effort required for basic data management staples of data preparation, data discovery, and analytics.
Today, these technologies are able to substantially accelerate data management from initial ingestion to analytic insight, enabling them to focus on building solutions to solve business problems while reducing the data backlog of IT departments.
Data preparation is the data management term for the tedious work of data cleansing, transformation, and integration that consumes the time of IT teams and data scientists. Smart data technologies expedite this onus in multiple ways. Inclusive ontologies (semantic models) quickly adjust to incorporate new data and requirements so that all data adheres to uniform standards. Data governance and data quality principles can also be modeled and mapped to business glossaries, which impacts data cleansing outcomes. These models can also generate code for transformation, a vital prerequisite for loading applications. The autonomous nature of such preparation hastens integration efforts, allowing data scientists to explore the implications of application or analytics results.
Data discovery is the means by which data sets are deemed germane for specific business problems. Semantic graphs assist ontologies in this endeavor by connecting all data in a single framework. The underlying RDF system is designed to home in on relationships between data elements, considering various attributes and metadata as they pertain to each node. In this environment, the semantic graphs are able to determine a contextualized relevancy between data that is crucial for timely, apropos data discovery. When deployed in departmental or enterprise-wide semantic data lakes, these graphs facilitate the discovery of a host of relationships and context that might otherwise be missed. This framework substantially assists the workloads of data scientists while reducing time to action.
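As a toy illustration of this idea, the sketch below (using the third-party rdflib library; the namespace, resource names, and properties are invented for the example) builds a tiny RDF graph and runs a SPARQL query that discovers a dataset through its relationships:

```python
# Toy RDF graph and SPARQL query with rdflib (pip install rdflib).
# All names and the namespace below are made up for illustration.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.dataset1, RDF.type, EX.Dataset))
g.add((EX.dataset1, EX.topic, Literal("customer churn")))
g.add((EX.dataset1, EX.derivedFrom, EX.crmExport))

# SPARQL: find datasets relevant to a topic via their relationships.
q = """
SELECT ?ds ?src WHERE {
    ?ds a ex:Dataset ;
        ex:topic "customer churn" ;
        ex:derivedFrom ?src .
}
"""
for row in g.query(q, initNs={"ex": EX}):
    print(row.ds, "comes from", row.src)
```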
Easier, Faster, Better
Semantic technologies are an enabler for data scientists. They tame and accelerate data preparation necessities, and engender the same effect for data discovery. Data-driven action - analytics or application operations - becomes easier, faster, and better with semantics, helping them do their jobs while reducing the wait for IT teams to assist.
To learn more about semantic technology, watch the on-demand webinar "Semantic Graph Databases: The Evolution of Relational Databases". | <urn:uuid:7bc1cc35-cbbf-4f88-bca4-2c13763855be> | CC-MAIN-2024-38 | https://blog.cambridgesemantics.com/enabling-data-scientists-by-reducing-the-burden-of-it | 2024-09-17T01:47:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00863.warc.gz | en | 0.90968 | 502 | 2.625 | 3 |
Most of us AS/400 professionals are already familiar with the acronym HTTP, although we seldom have to think about how Hypertext Transfer Protocol itself works. After all, connecting to the Internet is usually a matter of configuring TCP/IP on the workstation or the server. Once the connection is made, you can request and view Web pages from practically all of the Web servers available on the Internet.
However, TCP/IP provides only the connection between computers. When you enter a URL in the address bar of your browser or click on a link to a URL, your browser sends a request to a Web server, using HTTP. Along with the request, your browser can send additional information about the browser and your preferences. Using HTTP, the Web server can examine the request and the additional information and return the requested Web page and its associated files. If an error is encountered while processing your request, the Web server sends an error code and description, again using HTTP.
If you are familiar with HTML, you know that Web pages are described in terms of ASCII text and a set of tags that indicate to the browser how the page is to be rendered. It seems that it should be a simple matter to develop a protocol that can transmit ASCII text from a server to a browser, and indeed, the first version of HTTP was very simple, given that it provided only that functionality. However, with the wide availability of graphical browsers, HTTP evolved to support additional data types, such as the inclusion of binary graphics on a Web page. The current version, HTTP 1.1, provides additional support for Web-based communication with particular emphasis on performance issues.
In this article, I show how a request is transmitted from a browser to a Web server and how the response is sent using HTTP. I also describe some of HTTP's configuration options that control the HTTP response for V4R3 and later versions of IBM HTTP Server for AS/400.
Start at the Browser
You can request a Web page at your browser by entering a URL in the format http://host:port/path. With that format, http: is used to identify the protocol, host is the name or IP address of the Web server host computer, port is the TCP/IP port (or the default port 80), and path is used to optionally specify the path to the resource you are requesting.
At a minimum, you need to supply the host name. The recent versions of Microsoft Internet Explorer and Netscape Navigator assume the http:// part of the URL if you do not enter it.
After entering the URL, the browser formats a series of text strings that are sent to the host. To retrieve a Web page, the first string includes a method, which is used to indicate the type of request the browser is making to the Web server. Figure 1 lists the methods used with HTTP. The GET method is used to indicate to the Web server that a specific resource is being requested. The protocol and version are also sent along with the path and name of the resource so that the Web server will know what level of HTTP the browser supports.
Following the method request, the browser usually sends one or more request headers. The headers are used to convey additional information about browser capabilities and user preferences to the server. For example, the User-Agent request header indicates the browser name and version number. An example of an optional request header is the Accept-Language header. You can configure your browser to request Web pages in different national languages (for example, Spanish and, if that is not available, English). The Web server can examine the request headers and select the most appropriate Web page to return to the browser when it has a choice of pages to return. The following lists some of the request headers that can be sent from the browser to the server:
Accept. Specifies media types that are acceptable for a response (for example, Accept: text/html).
Accept-Charset. Indicates character sets that are acceptable for a response.
Accept-Encoding. Indicates content codings that are acceptable in the response (for example, Accept-encoding: compress, gzip).
Accept-Language. Indicates the set national languages preferred in a response (for example, Accept-Language: es, en).
From. Provides the email address of the requester to the server.
Host. Specifies the port number and host address.
User-Agent. Contains client information (for example, browser identification and version).
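Putting the pieces together, the sketch below sends a raw HTTP/1.1 GET request over a plain Python socket so you can see the method line and request headers on the wire (the host and header values are illustrative):

```python
# Sketch: a raw HTTP/1.1 GET request over a plain socket.
import socket

host = "example.com"
request = (
    "GET / HTTP/1.1\r\n"           # method, path, protocol/version
    f"Host: {host}\r\n"            # required in HTTP/1.1
    "User-Agent: demo-client/0.1\r\n"
    "Accept: text/html\r\n"
    "Accept-Language: es, en\r\n"  # preferred response languages
    "Connection: close\r\n"
    "\r\n"                         # blank line ends the headers
)

with socket.create_connection((host, 80)) as s:
    s.sendall(request.encode("ascii"))
    response = b""
    while chunk := s.recv(4096):
        response += chunk

# The first line of the response is the status line, e.g. "HTTP/1.1 200 OK".
print(response.split(b"\r\n", 1)[0].decode())
```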
After the request and headers are received at the server, the server can start processing to prepare a response. If only one Web page can be returned, the server simply sends that page to the browser. However, you can configure your Web server to work with the request headers sent from the browser to select the most appropriate Web page to return.
For example, you might create Web pages that take advantage of certain browser capabilities, such as VBScript support in Explorer. The Web server can determine the browser that sent the request by examining the content of the User-Agent header. In your Web server configuration, you can associate a specific file extension with data in the User- Agent header, as shown in Figure 2. When processing the request, the Web server will select a Web page to return if it includes the specified file extension, preferring that over a file that does not include the extension. The file extension can be specified at any point in the file name; some examples are Page1.html.ie4 and Page1.ie4.html.
The Web server configuration options for IBM HTTP Server for AS/400 can be used to specify supported request methods, languages, and encoding as well as the port number and values for persistent HTTP connections. You can use the browser-based configuration and administration forms (as shown in Figure 2) to configure the server or directly edit the server configuration file and its directives using the Work with HTTP Configuration (WRKHTTPCFG) command.
Back to the Browser
After locating the file or files to send to the browser, the Web server starts sending the files, preceded by one or more response and entity headers. Response headers indicate the status of the request, and the browser uses entity headers to determine how to render the response entity, which is the data from the file. Some of the more interesting entity headers that can be sent from the Web server to the browser include Content-Encoding, Content-Language, Content-Length, Content-Location, Content-Type, Expires, and Last-Modified.
It may happen that a Web page you request is not available on the Web server or that you are not authorized to view the page. In that case, the Web server returns a status message to the browser instead of the file. Figure 3 lists some of the status codes that can be sent from the Web server to the browser. In some cases, the browser can automatically respond to the status code and attempt the operation again. Other status codes simply appear in your browser (for example, the famous 404 Not Found status code).
An Evolving Protocol
Additional needs and requirements for HTTP become apparent as more applications are hosted on the Internet. For example, HTTP 1.1 includes many features that relate directly to performance when compared with HTTP 1.0. You can find more information about HTTP 1.1 (the current version) and discussion of future enhancements to HTTP at www.ietf.org/ids.by.wg/http.html.
As you know from your experience using the Web and possibly configuring a Web server, you don't need to know much about the details of HTTP itself to successfully use the protocol. However, the more you know about how HTTP works under the covers, the more options you have for configuring your browser or Web server and creating Web pages that are intended for specific audiences.
METHOD NAME - DESCRIPTION
CONNECT - Reserves the method name for use with a proxy that can dynamically switch to being a tunnel.
GET - Used to retrieve information identified by the Request URI (Uniform Resource Identifier).
HEAD - Identical to GET, except that the server does not return a message body as a response. Used to request metainformation and headers. Usually used for testing links for validity, accessibility, and modification.
OPTIONS - Requests information about the communication options available. Can be used by the client to determine the options and requirements of a server without actually initiating a retrieval.
POST - Requests the server to accept the entity sent with the request. Usually used when submitting a Web form to the server.
TRACE - Used to invoke a loop-back of the request message. Allows the client to see what is received by the server.
PUT - Requests the server to store the entity sent with the request. (Supported at V4R4 on IBM HTTP Server for AS/400.)
DELETE - Requests the server to delete the resource identified by the Request URI. (Supported at V4R4 on IBM HTTP Server for AS/400.)
Figure 1: The first string of any HTTP request includes a request method.
STATUS CODE - DESCRIPTION
1xx Informational
100 Continue - The initial part of a request has been accepted, and the client should continue.
2xx Success
200 OK - The request succeeded.
202 Accepted - The request was accepted for processing, but the processing is not yet completed.
204 No Content - The server fulfilled the request but does not need to return an entity body.
206 Partial Content - The server fulfilled a partial GET request for the resource.
3xx Redirection
301 Moved Permanently - The requested resource has a new URI.
302 Found - The requested resource is temporarily at a different URI.
4xx Client Error
400 Bad Request - The request could not be processed because of malformed syntax.
401 Unauthorized - The request requires user authentication.
403 Forbidden - The server understood the request but refused to fulfill it.
404 Not Found - The server could not locate the resource specified in the Request URI.
405 Method Not Allowed - The method is not allowed.
407 Proxy Authentication Required - The client must authenticate itself to the proxy.
410 Gone - The requested resource is not available on the server, and there is no known forwarding address.
5xx Server Error
500 Internal Server Error - The server encountered an unexpected condition and cannot fulfill the request.
501 Not Implemented - The server does not support the functionality required to fulfill the request.
503 Service Unavailable - The server is unable to handle the request because of a temporary condition.
Figure 3: A Web host may set up custom HTTP status messages, but more often, they simply send the standard HTTP status codes to browsers. | <urn:uuid:31cab7ee-b9b6-4827-a7de-0457b31846dd> | CC-MAIN-2024-38 | https://www.mcpressonline.com/it-infrastructure-other/general/http-undercover | 2024-09-20T20:45:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00563.warc.gz | en | 0.865267 | 2,161 | 3.84375 | 4 |
Email is a critical communication tool for businesses worldwide. Unfortunately, it is also a significant attack vector for cybercriminals. Attackers use email to deliver malware, phishing scams, and other cyber threats.
As email security becomes an increasingly pressing issue, AI is emerging as a powerful tool to help protect organizations from malicious threats. AI-based solutions are being used to provide advanced protection against cyberattacks, phishing scams, and other malicious activities that can compromise email security.
Keep reading to learn how AI is changing the email security landscape and what organizations need to know to ensure their data is secure.
How Is AI Changing Email Security?
AI is transforming email security by providing new tools and technologies to detect and prevent cyber threats. With its ability to analyze large volumes of data, identify patterns, and make predictions, AI can offer valuable solutions to problems that may otherwise be difficult to tackle.
Below are some of the ways AI is changing email security:
Email Filtering
AI-powered email filtering helps identify and filter out spam, malware, and phishing emails before they reach the inbox. These filters use machine learning algorithms to analyze incoming emails, flag suspicious emails, and block them.
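As a toy illustration of the idea (not any vendor's actual filter), a few lines of scikit-learn can train a Naive Bayes classifier on made-up messages:

```python
# Toy ML-based mail filter: Naive Bayes over bag-of-words features.
# Real filters use far larger corpora and many more signals.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Meeting moved to 3pm, see agenda attached",
    "Quarterly report draft for your review",
    "You WON a free prize, click here now!!!",
    "Verify your account immediately or lose access",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = spam/phishing

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["Click now to claim your free prize"]))  # likely [1]
```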
Email Authentication
Authenticating incoming emails by verifying sender information and domain names with the help of AI can prevent attackers from spoofing email addresses and domains to trick users into providing sensitive information.
Threat Detection and Response
AI can detect and respond to threats in real time. It can analyze email content, attachments, and links to identify malware and phishing emails. AI can also learn from past attacks and use that information to prevent future attacks.
Phishing Simulation
You can simulate phishing attacks with the help of AI programs and teach users to identify them and report suspicious emails.
Benefits of AI in Email Security
AI-powered email security systems can learn from past attacks and improve their threat-detection capabilities. This means the system becomes more effective at detecting and blocking threats as it learns. Here are a few other benefits.
More Accurate Filtering
Email filtering is critical to preventing spam, phishing, and other email-borne attacks. However, traditional email filters are limited in their accuracy, and false positives or false negatives can still occur. By using AI algorithms, email filtering can be optimized to achieve better accuracy and prevent legitimate emails from being falsely marked as spam or blocked.
Faster Incident Response
AI-powered email security can enable faster and more effective incident response times. Automated responses to incidents like malware detection or phishing can ensure a quick and effective response. Moreover, the system can instantly flag any incoming emails that could lead to security incidents, allowing for prompt remediation and a reduced response time.
Increased Productivity
Effective email security is critical for a business's productivity, as it can prevent significant data loss, reputational damage, and legal penalties. However, manual email security monitoring is time-consuming and requires significant human resources. AI can automate the monitoring and detection of threats, freeing the IT security team to focus on more complex security tasks, thereby increasing productivity.
Enhanced User Experience
AI-powered email security solutions can ensure a better user experience for employees, reducing the need for manual filtering and the risk of being misled by a malicious email. The risk of human error can be minimized by enabling a better and more secure email experience for users.
AI is changing the email security landscape, providing businesses with new tools and technologies to defend against cyber threats. If you are looking for a reliable IT services provider in Canada that can help you leverage the power of AI to secure your email communications, consider ManagePoint Technologies. We offer managed IT services, including email security, to businesses of all sizes. Contact us today to learn how we can help you secure your email communications. | <urn:uuid:029d80a3-aabf-49f9-ac3f-9dff6f022bbe> | CC-MAIN-2024-38 | https://managepoint.ca/blog/security/how-is-ai-changing-the-landscape-of-email-security/ | 2024-09-07T14:25:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00027.warc.gz | en | 0.941829 | 752 | 2.953125 | 3 |
What types of cybersecurity skills can you learn in a cyber range?
What is a cyber range?
A cyber range is an environment designed to provide hands-on learning for cybersecurity concepts. This typically involves a virtual environment designed to support a certain exercise and a set of guided instructions for completing the exercise.
A cyber range is a valuable tool because it provides experience with using cybersecurity tools and techniques. Instead of learning concepts from a book or reading a description about using a particular tool or handling a certain scenario, a cyber range allows students to do it themselves.
What skills can you learn in a cyber range?
A cyber range can teach any cybersecurity skill that can be learned through hands-on experience. This covers many crucial skill sets within the cybersecurity space.
SIEM, IDS/IPS and firewall management
Deploying certain cybersecurity solutions — such as SIEM, IDS/IPS and a firewall — is essential to network cyber defense. However, these solutions only operate at peak effectiveness if configured properly; if improperly configured, they can place the organization at risk.
A cyber range can walk through the steps of properly configuring the most common solutions. These include deployment locations, configuration settings and the rules and policies used to identify and block potentially malicious content.
Incident response
After a cybersecurity incident has occurred, incident response teams need to know how to investigate the incident, extract crucial indicators of compromise and develop and execute a strategy for remediation. Accomplishing this requires an in-depth knowledge of the target system and the tools required for effective incident response.
A cyber range can help to teach the necessary processes and skills through hands-on simulation of common types of incidents. This helps an incident responder to learn where and how to look for critical data and how to best remediate certain types of threats.
Operating system management: Linux and Windows
Each operating system has its own collection of configuration settings that need to be properly set to optimize security and efficiency. A failure to properly set these can leave a system vulnerable to exploitation.
A cyber range can walk an analyst through the configuration of each of these settings and demonstrate the benefits of configuring them correctly and the repercussions of incorrect configurations. Additionally, it can provide knowledge and experience with using the built-in management tools provided with each operating system.
Endpoint controls and protection
As cyber threats grow more sophisticated and remote work becomes more common, understanding how to effectively secure and monitor the endpoint is of increasing importance. A cyber range can help to teach the required skills by demonstrating the use of endpoint security solutions and explaining how to identify and respond to potential security incidents based upon operating system and application log files.
Penetration testing
This testing enables an organization to achieve a realistic view of its current exposure to cyber threats by undergoing an assessment that mimics the tools and techniques used by a real attacker. To become an effective penetration tester, it is necessary to have a solid understanding of the platforms under test, the techniques for evaluating their security and the tools used to do so.
A cyber range can provide the hands-on skills required to learn penetration testing. Vulnerable systems set up on virtual machines provide targets, and the cyber range exercises walk through the steps of exploiting them. This provides experience in selecting tools, configuring them properly, interpreting the results and selecting the next steps for the assessment.
Network management
Computer networks can be complex and need to be carefully designed to be both functional and secure. Additionally, these networks need to be managed by a professional to optimize their efficiency and correct any issues.
A cyber range can provide a student with experience in diagnosing network issues and correcting them. This includes demonstrating the use of tools for collecting data, analyzing it and developing and implementing strategies for fixing issues.
Malware analysis
Malware is an ever-growing threat to organizational cybersecurity. The number of new malware variants grows each year, and cybercriminals are increasingly using customized malware for each attack campaign. This makes the ability to analyze malware essential to an organization's incident response processes and the ability to ensure that the full scope of a cybersecurity incident is identified and remediated.
Malware analysis is best taught in a hands-on environment, where the student is capable of seeing the code under test and learning the steps necessary to overcome common protections. A cyber range can allow a student to walk through basic malware analysis processes (searching for strings, identifying important functions, use of a debugging tool and so on) and learn how to overcome common malware protections in a safe environment.
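As a small taste of that first step, the sketch below mimics the Unix strings utility in pure Python; needless to say, real samples should only ever be handled inside an isolated lab environment:

```python
# Sketch of a first static-analysis step: extracting printable strings
# from a binary, similar to the Unix `strings` utility.
# WARNING: only handle real malware samples inside an isolated lab VM.
import re
import sys

def extract_strings(path: str, min_len: int = 4) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    # Runs of printable ASCII bytes at least min_len long.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

if __name__ == "__main__":
    for s in extract_strings(sys.argv[1]):
        # URLs, IPs, and registry paths in strings often hint at capabilities.
        print(s)
```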
Threat hunting
Cyber threats are growing more sophisticated, and cyberattacks are increasingly able to slip past traditional cybersecurity defenses like antivirus software. Identifying and protecting against these threats requires proactive searches for overlooked threats within an organization's environment. Accomplishing this requires in-depth knowledge of potential sources of information on a system that could reveal these resident threats and how to interpret this data.
A cyber range can help an organization to build threat hunting capabilities. Demonstrations of the use of common threat hunting tools build familiarity and experience in using them.
Exploration of common sources of data for use in threat hunting and experience in interpreting this data can help future threat hunters to learn to differentiate false positives from true threats.
Computer forensics
Computer forensics expertise is a rare but widely needed skill. To be effective at incident response, an organization needs cybersecurity professionals capable of determining the scope and impacts of an attack so that it can be properly remediated. This requires expertise in computer forensics.
A cyber range can help an incident responder to gain the necessary skills in cyber forensics. This includes the use of tools like Autopsy and FTK to properly gather evidence, and the interpretation of the data collected as part of a forensic investigation.
Building cybersecurity skills through hands-on experience
Cybersecurity books can provide a great deal of useful information; however, hands-on experience is essential to fully grasping a concept and gaining the skills necessary for a cybersecurity role. Cyber ranges enable students to gain the skills that they need via hands-on, guided experiences. | <urn:uuid:746ef6ef-0755-44ff-a498-c733aedc811b> | CC-MAIN-2024-38 | https://www.infosecinstitute.com/resources/cyber-range/what-types-of-cybersecurity-skills-can-you-learn-in-a-cyber-range/ | 2024-09-11T02:52:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00627.warc.gz | en | 0.923051 | 1,218 | 3.234375 | 3 |
A data center rack is a type of storage unit. As the name suggests, data center racks are designed to meet the specific needs of data centers. This means that they are both mainstream and specialist. Here is a quick guide to what you need to know about data center racks.
The main reason for using data center racks is to get maximum value out of the space inside the data center. The main benefit of data center racks is that it encourages standardization. This helps with both physical organization and with organizing streamlined working practices.
Greater organization can go a long way to greater efficiency. This can often translate into greater cost savings. For example, using data center racks can help to encourage airflow. This reduces the workload on artificial cooling systems. It, therefore, reduces the cost of running them.
Likewise, using data center racks can make it much easier for data center staff to perform hardware-related tasks. This can save a lot of time and hence get businesses more value for their staffing costs. It can also increase job satisfaction and promote workplace safety.
Generally, when people refer to a data center rack, they mean the rack itself. When they refer to a data center cabinet, they mean the rack plus everything in it.
There are multiple types of data center racks in use. The two most popular ones are standard racks and open racks.
Standard data center racks take their name from the fact that they are a standard size. This is 19” wide by 84 inches (7 feet) in height. A data center rack can, however, still be considered standard down to 73.5”.
For completeness, the height of a data center rack is generally specified in rack units. A rack unit is about 1.75”. The term rack unit is generally shortened to U or RU. This is often prefixed by a number. The number is a multiplier for the rack units. For example, 48U refers to a standard 7-foot data center rack.
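The arithmetic is simple enough to verify in a couple of lines of Python:

```python
# Quick check of the rack-unit arithmetic: 1U = 1.75 inches.
def rack_height_inches(units: int, u_inches: float = 1.75) -> float:
    return units * u_inches

print(rack_height_inches(48))  # 84.0 inches = 7 feet
print(rack_height_inches(42))  # 73.5 inches
```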
Most open data center racks are standard data center racks. The reason they are given a separate designation is that they are left open. In other words, they are not fitted into a cabinet in the same way as most other data center racks including standard ones.
Using open data center racks can make it much easier for staff to access components. This convenience does, however, need to be set against the additional risk of leaving equipment exposed. For practical purposes, the main risk is the effect of dust. This can very easily drift (or be sucked) into equipment. If it is, it can cause a lot of damage.
Standard and open data center racks are the most commonly used types of data center racks. Other fairly common types of data center racks include wall-mounted data center racks and portable data center racks. There are also data center racks that are optimized for specific purposes such as storing especially bulky equipment.
Most data center racks hold similar components. This includes data center racks that were optimized for specific purposes. Here are the main components you can expect to see in a data center rack.
A power distribution unit (PDU) is more than just a basic power supply. It is a system for delivering the right sort of electricity supply to diverse components. PDUs come in different shapes and sizes to suit different data center rack configurations. Popular options include horizontal, vertical, and inline styles.
Cable clutter is a notorious problem in home offices. In data centers, it is a serious issue. Having some kind of cable management system is, therefore, a must in most data center racks.
Even though using a data center rack does help to promote ventilation, most people still need some kind of cooling system. Popular options include air conditioning units, fans, and liquid cooling systems.
Data center racks often hold performance monitoring systems. These essentially monitor the health of the components inside the data center rack. If they detect that anything is amiss, they alert for assistance. Using performance monitoring systems is much more efficient than waiting for technicians to spot issues.
Security systems protect against both environmental risks (such as fire) and the risk of tampering. They are considered essential in most data center racks.
Discover the DataBank Difference today:
Hybrid infrastructure solutions with boundless edge reach and a human touch. | <urn:uuid:9068d782-997c-4ee4-a862-2df9cae22cb1> | CC-MAIN-2024-38 | https://www.databank.com/resources/blogs/what-you-need-to-know-about-using-a-data-center-rack/ | 2024-09-12T09:41:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00527.warc.gz | en | 0.943235 | 871 | 2.515625 | 3 |
Nowadays, protecting our sensitive data from unauthorized and unwanted sources has become a significant challenge. There are numerous tools available that can provide various levels of security and aid in the protection of private information stored in any system. A ‘firewall’ is a network security mechanism that protects our systems and data against unauthorized access.
In this blog, we will provide an overview of what a firewall is, the various types of firewalls in network security, and their significance.
What is Firewall?
A firewall is a cybersecurity device or software application that filters network traffic, acting as a traffic cop at your computer's ports. Its fundamental purpose is to create a barrier between an internal network and incoming external traffic, blocking malicious requests and data packets, such as malware and hacking attempts, while allowing legitimate traffic to pass through. A firewall admits only the traffic it has been configured to accept, for example from approved IP addresses; it differentiates between legitimate and malicious traffic and allows or blocks specific data packets based on predefined security rules.
Why do we need a Firewall?
A firewall is a necessary component of a company’s overall cybersecurity strategy. Most computers have an in-built firewall, but it isn’t always the best option for security. What can a firewall do to keep us safe?
Types of Firewalls
Here is a list of the different types of firewalls:
1.Packet-filtering firewall : A Packet-filtering firewall filters all incoming and outgoing network packets. It tests them based on a set of rules that include IP address, IP protocol, port number, and other aspects of the packet. If the packet passes the test, the firewall allows it to proceed to its destination and rejects those that do not pass it.
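A toy model of the idea (invented rules, first match wins) might look like this in Python:

```python
# Toy packet filter: rules match on address/port/protocol fields,
# and the first matching rule decides the packet's fate.
import ipaddress

RULES = [
    # (src_ip, dst_port, protocol, action); "*" matches anything
    ("10.0.0.0/8", 22,  "tcp", "allow"),  # SSH from the internal network
    ("*",          80,  "tcp", "allow"),  # web traffic from anywhere
    ("*",          "*", "*",   "deny"),   # default deny
]

def filter_packet(src_ip: str, dst_port: int, protocol: str) -> str:
    for rule_ip, rule_port, rule_proto, action in RULES:
        ip_ok = rule_ip == "*" or ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule_ip)
        port_ok = rule_port == "*" or rule_port == dst_port
        proto_ok = rule_proto == "*" or rule_proto == protocol
        if ip_ok and port_ok and proto_ok:
            return action
    return "deny"

print(filter_packet("10.1.2.3", 22, "tcp"))  # allow
print(filter_packet("8.8.8.8", 53, "udp"))   # deny
```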
Benefits of a Packet-filtering firewall
2. Stateful Multi-Layer Inspection (SMLI): A Stateful Multi-Layer Inspection firewall employs packet inspection technology and TCP handshake verification to provide protection. These firewalls, also known as dynamic packet-filtering firewalls, examine each network packet to determine whether it belongs to an existing TCP or other network session. The SMLI firewall creates a state table to store session information such as source and destination IP addresses and port numbers.
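A minimal sketch of the state-table idea in Python (a real firewall also tracks protocol state, timeouts, and sequence numbers):

```python
# Toy state table behind stateful inspection: inbound packets are
# admitted only if they belong to a session the table already tracks.
state_table = set()  # entries: (src_ip, src_port, dst_ip, dst_port)

def outbound(src_ip, src_port, dst_ip, dst_port):
    # Record the session when an internal host initiates it.
    state_table.add((src_ip, src_port, dst_ip, dst_port))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port) -> bool:
    # A reply is valid only if it mirrors a tracked outbound session.
    return (dst_ip, dst_port, src_ip, src_port) in state_table

outbound("192.168.1.5", 51000, "93.184.216.34", 443)
print(inbound_allowed("93.184.216.34", 443, "192.168.1.5", 51000))  # True
print(inbound_allowed("203.0.113.9", 443, "192.168.1.5", 51000))    # False
```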
Benefits of Stateful inspection firewall
3. Stateless firewall : Stateless firewalls monitor network traffic and analyze each data packet's source, destination, and other details to determine whether a threat is present. Because they do not retain session state, they evaluate every packet independently against the same set of rules.
Benefits of Stateless firewall
4. Application-level gateway (Proxy firewall) : An application-level firewall, also called a proxy firewall, protects data at the application level. It shields against potential internet hackers by not disclosing our computer's identity (IP address). Proxy firewalls analyze the context and content of data packets and compare them to a set of previously defined rules using stateful and deep packet inspection; based on the outcome, they either permit or reject a packet. Because this firewall inspects the payload of received data packets, it is much slower than a packet-filtering firewall.
Benefits of Application-level firewall
5. Circuit-level gateway : Circuit-level gateway validates established Transmission Control Protocol (TCP) connections. These firewalls typically operate at the OSI model’s session level, verifying Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) connections and sessions. These firewalls are implemented as security software or as pre-installed firewalls. Like packet filtering firewalls, these firewalls do not examine the actual data packet but observe the information about the transaction.
Benefits of Circuit-level gateway
6. Next-Generation Firewall (NGFW): The most common type of firewall available today is the Next-Generation Firewall (NGFW), which provides higher security levels than packet-filtering and stateful inspection firewalls. An NGFW is a deep-packet inspection firewall with additional features such as application awareness and control, integrated intrusion prevention, advanced visibility of their network, and cloud-delivered threat intelligence. This type of firewall is typically defined as a security device that combines the features and functionalities of multiple firewalls. NGFW monitors the entire data transaction, including packet headers, contents, and sources.
Benefits of Next-Generation Firewall
7. Cloud firewall : A Cloud firewall, also known as FaaS (firewall-as-service), is a firewall that is designed using a cloud solution for network protection. Third-party vendors typically manage and operate cloud firewalls on the internet, and they are configured based on the requirements. Today, most businesses use cloud firewalls to protect their private networks or overall cloud infrastructure.
Benefits of Cloud firewall
How can InfosecTrain help you?
InfosecTrain is a globally recognized best training and consulting company focusing on various IT security training and information security services. They offer a variety of certification courses to help students gain hands-on experience and proficiency in various security domains. Their goal is to raise cyber security awareness. | <urn:uuid:87dfc439-59e8-4faa-a3e9-3e51ade36ffb> | CC-MAIN-2024-38 | https://www.infosectrain.com/blog/types-of-firewalls-in-network-security/ | 2024-09-14T20:15:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00327.warc.gz | en | 0.9067 | 1,088 | 3.375 | 3 |